Qualcomm tests Samsung LPDDR6X memory for future Artificial Intelligence accelerators

Qualcomm is evaluating Samsung's prototype LPDDR6X memory for its upcoming AI200 and AI250 accelerator platforms, signaling a push to adapt mobile-class low-power DRAM for data center inference workloads.

Qualcomm is exploring the use of Samsung’s next-generation LPDDR6X memory in server-grade Artificial Intelligence accelerator hardware, extending low-power DRAM beyond its traditional role in mobile devices. After unveiling the AI200 and AI250 accelerator cards and racks in late October, Qualcomm drew attention by specifying LPDDR rather than the high-bandwidth memory (HBM) used in most cutting-edge enterprise Artificial Intelligence platforms. According to a report from The Bell in South Korea, Samsung Electronics has already shipped prototype LPDDR6X samples to Qualcomm, indicating early collaboration around this yet-to-be-finalized memory technology.

Industry sources quoted by The Bell describe the move as unusual because the LPDDR6 (non-X) standard received official JEDEC certification only recently, in Q3 2025. Samsung’s LPDDR6X development project is still in progress but has been explicitly linked to Qualcomm’s 2027 launch of the AI250 inference-focused server platform. A source cited by the publication believes the tight AI200 series release roadmap drove the decision to supply LPDDR6X samples this early, allowing Qualcomm to validate and optimize its accelerators around the emerging standard well before commercial availability.

Rumors last month suggested that the AI200 and AI250 inference accelerators are being designed with SOCAMM2 in mind, aligning Qualcomm’s hardware plans with new low-power memory module formats. In December, Samsung’s memory product planning team outlined future branches of non-X technology and stated that ‘low-power DRAM technologies such as SOCAMM’ are being spun off from ‘AI-focused module solution(s) leveraging LPDDR6 architecture.’ Taken together, Qualcomm’s platform disclosures and Samsung’s memory roadmap point to a strategy in which LPDDR6 and LPDDR6X derivatives, including SOCAMM and SOCAMM2, play a central role in the next generation of power-efficient Artificial Intelligence inference servers.

Impact Score: 55

Compression and voice models reshape Artificial Intelligence efficiency

Recent releases focused on infrastructure rather than headline model breakthroughs, with gains in compression and voice systems pointing to lower inference costs and broader deployment. Google and Mistral highlighted two distinct paths for real-time audio, while TurboQuant targeted one of the most expensive bottlenecks in long-context inference.

Judge blocks Pentagon move against Anthropic

A federal judge temporarily blocked the Pentagon from labeling Anthropic a supply chain risk after finding major gaps between public threats, legal authority, and the government’s courtroom arguments. The dispute has become a test of how far the government can go in punishing an Artificial Intelligence company over political and contractual conflict.

Anumana wins FDA clearance for pulmonary hypertension ECG Artificial Intelligence tool

Anumana has received FDA 510(k) clearance for an Artificial Intelligence-enabled pulmonary hypertension algorithm designed for use with standard 12-lead electrocardiograms. The company says the software can help clinicians spot early signs of disease within existing workflows and without moving patient data outside the health system environment.
