Qualcomm unveils data center Artificial Intelligence chips to challenge Nvidia and AMD

Qualcomm introduced Hexagon NPU-based Artificial Intelligence processors for data centers, with the AI200 slated for 2026 and the AI250 for 2027. The company touts lower power and cost, liquid-cooled racks, and 768 gigabytes of memory per board as it targets inference workloads.

Qualcomm announced new Artificial Intelligence chips for data centers built on its Hexagon neural processing units, expanding technology it previously used in smartphones. The roadmap includes the AI200 in 2026 and the AI250 in 2027, alongside off-the-shelf rack systems equipped with liquid cooling. With this launch, Qualcomm is positioning itself against incumbents in a market that has become one of the fastest growing areas in enterprise technology.

The company says the AI200 and AI250 are designed for inference, meaning they run trained models to generate responses rather than train new ones. Qualcomm claims its rack systems will be cheaper for cloud providers to operate, citing a 160 kW power draw per rack that it says is comparable to Nvidia-based setups. Pricing was not disclosed, and Qualcomm did not specify how many NPUs fit into a single rack. The company is also promoting a memory architecture it says improves efficiency.

Qualcomm argues its approach delivers advantages in power consumption, total cost of ownership, and memory capacity. According to the company, each board supports 768 gigabytes of memory, which it says exceeds comparable solutions from Nvidia and AMD. The move builds on momentum from personal computing, where Microsoft recommended Snapdragon chips for notebooks built to its Copilot+ standard as power-efficient solutions optimized for Artificial Intelligence, after they outperformed Intel processors in an earlier head-to-head evaluation.

The broader market context underscores why Qualcomm is entering now. Industry investment in data centers is projected to surge through 2030, with most spending directed to systems built around Artificial Intelligence chips. Nvidia currently dominates with more than 90 percent market share in graphics processing units used for these workloads, and rising sales have increased the company’s market capitalization. Nvidia hardware powered the training of OpenAI’s GPT models that underpin the ChatGPT chatbot.

At the same time, buyers are seeking alternatives. OpenAI announced plans this month to purchase Artificial Intelligence chips from AMD, while Google, Amazon, and Microsoft are developing their own chips for their cloud services. Intel also produces the Gaudi series of Artificial Intelligence accelerators. Investor reaction was swift: Qualcomm shares jumped 21.6 percent in Monday trading to their highest level since July 2024, while Nvidia rose 2.6 percent and Advanced Micro Devices gained 0.3 percent.


