IBM’s defense large language model built with Janes data

IBM and Janes have developed a large language model fine-tuned for defense that queries continuously refreshed, human-vetted Janes data and can be deployed in air-gapped, classified, and edge environments. The product is positioned as an Artificial Intelligence decision-support tool for military planners and defense industry users.

IBM is launching a large language model purpose-built for defense and national security work that was developed with data from Janes and built on IBM’s Granite foundation models. Company and Janes officials provided DefenseScoop an exclusive preview ahead of the model’s public rollout, saying the system is engineered for deployment in air-gapped, classified, and edge environments and can be connected to secure customer networks via an application programming interface.

Janes is the primary data source for the model and supplies structured, human-vetted information collected from manufacturers, public government statements, and on-the-ground reporting at events like air shows. That dataset is delivered to customers through secure feeds on a schedule, and the model is designed to query live Janes data rather than memorize every fact. Janes and IBM officials emphasized that the approach reduces reliance on inconsistent internet sources and helps the model produce more reliable outputs about equipment, terminology, standards, and mission context.
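The retrieval-first design described above can be sketched in miniature: fetch vetted records at query time, then ground the model's prompt in those records rather than in memorized training data. Everything below is hypothetical and illustrative; the actual IBM/Janes API, data schema, and function names are not public.

```python
# Hypothetical sketch of a retrieval-grounded query flow: look up vetted
# records first, then build a prompt that instructs the model to answer
# only from them. All data, records, and names here are illustrative,
# not the real product API.

from dataclasses import dataclass

@dataclass
class Record:
    entity: str   # e.g. an equipment designation
    field: str    # attribute name
    value: str    # human-vetted value
    as_of: str    # date the secure feed last refreshed this record

# Stand-in for a secure data feed; a real deployment would query the
# vendor's API over the customer's own network.
FEED = [
    Record("Sample-Jet", "role", "multirole fighter", "2024-05-01"),
    Record("Sample-Jet", "operator", "Exampleland Air Force", "2024-05-01"),
]

def retrieve(entity: str) -> list[Record]:
    """Return live, vetted records for an entity instead of relying on
    whatever the model may have memorized during training."""
    return [r for r in FEED if r.entity == entity]

def build_grounded_prompt(question: str, records: list[Record]) -> str:
    """Inline the retrieved records so the model answers from vetted data,
    with refresh dates, rather than from inconsistent internet sources."""
    context = "\n".join(
        f"- {r.field}: {r.value} (as of {r.as_of})" for r in records
    )
    return (
        "Answer using only the vetted records below.\n"
        f"{context}\n\nQ: {question}"
    )

prompt = build_grounded_prompt("What is the role of Sample-Jet?",
                               retrieve("Sample-Jet"))
print(prompt)
```

The point of the pattern is that facts live in the refreshed feed, not in the model weights, so an update to the feed changes the answer without retraining.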

Officials described early target use cases as operational planning, intelligence support, and strategy work within the defense industrial base. Integrators could embed the model into broader systems, including efforts tied to CJADC2 and the Maven Smart System. The companies plan a subscription-based pricing model that supports continuous updates and integration work. IBM and Janes stressed that the model is a decision-support tool meant to augment human analysts, not replace them, and they expect initial customer implementations in the coming months. The preview also noted the broader context: generative Artificial Intelligence tools can produce convincing but sometimes inaccurate outputs, and the Department of Defense is actively investing in advanced algorithms and large language model capabilities.

Impact Score: 68

Chrome downloads Gemini Nano model locally without clear consent

Google Chrome is reported to download a 4 GB Gemini Nano model onto some PCs automatically when certain Artificial Intelligence features are active. The process happens without clear notice in browser settings and can repeat after the model is deleted.

AMD plans specialized EPYC CPUs for Artificial Intelligence, HPC, and cloud

AMD is preparing a broader EPYC strategy with task-specific server CPUs aimed at agentic Artificial Intelligence, HPC, training and inference, and cloud deployments. The shift starts with the Zen 6 generation and adds Verano as an Artificial Intelligence-focused variant within the same EPYC family.

Nvidia expands Spectrum-X Ethernet with open MRC protocol

Nvidia is positioning Spectrum-X Ethernet as a foundation for large-scale Artificial Intelligence training, with Multipath Reliable Connection adding open, multi-path RDMA transport for higher resilience and throughput. OpenAI, Microsoft and Oracle are among the organizations using the technology in large Artificial Intelligence environments.

Anthropic explores Fractile chips to diversify supply

Anthropic is reportedly in early talks with London-based Fractile to secure high-performance Artificial Intelligence chips for inference workloads. The move would reduce reliance on Nvidia and broaden the company’s hardware supply chain.
