Qualcomm tests Samsung LPDDR6X memory for future Artificial Intelligence accelerators

Qualcomm is evaluating Samsung's prototype LPDDR6X memory for its upcoming AI200 and AI250 accelerator platforms, signaling a push to adapt mobile-class low-power DRAM for data center inference workloads.

Qualcomm is exploring the use of Samsung's next-generation LPDDR6X memory in server-grade Artificial Intelligence accelerator hardware, extending low-power DRAM beyond its traditional role in mobile devices. After unveiling the AI200 and AI250 accelerator cards and racks in late October, Qualcomm drew attention by specifying LPDDR rather than the high-bandwidth memory used in most cutting-edge enterprise Artificial Intelligence platforms. According to a report from The Bell in South Korea, Samsung Electronics has already shipped prototype LPDDR6X samples to Qualcomm, indicating early collaboration around this yet-to-be-finalized memory technology.

Industry sources quoted by The Bell describe the move as unusual because the base LPDDR6 (non-X) standard only received official JEDEC certification in Q3 2025. Samsung's LPDDR6X development project is still in progress but has been explicitly linked to the 2027 launch of Qualcomm's inference-focused AI250 server platform. A source cited by the publication attributes the early supply of LPDDR6X modules to a tight AI200-series release roadmap, which lets Qualcomm validate and optimize its accelerators around the emerging standard well before commercial availability.

Rumors the previous month suggested that the AI200 and AI250 inference accelerators are being designed with SOCAMM2 in mind, aligning Qualcomm's hardware plans with new low-power memory module formats. In December, Samsung's memory product planning team outlined future branches of non-X technology, stating that "low-power DRAM technologies such as SOCAMM" are being spun off from "AI-focused module solution(s) leveraging LPDDR6 architecture." The combined signals from Qualcomm's platform disclosures and Samsung's memory roadmap point to a strategy in which LPDDR6 and LPDDR6X derivatives, including SOCAMM and SOCAMM2, play a central role in the next generation of power-efficient Artificial Intelligence inference servers.

Impact Score: 55

Nvidia DGX Spark brings desktop supercomputing to universities worldwide

Nvidia’s DGX Spark desktop supercomputer is giving universities petaflop-class Artificial Intelligence performance at the lab bench, supporting projects from neutrino astronomy at the South Pole to radiology report analysis and robotics on campus. Institutions are using the compact systems to run large models locally, protect sensitive data and prototype workflows before scaling to big clusters or cloud resources.

ByteDance’s Seedance 2.0 ignites Artificial Intelligence video race in China

ByteDance’s Seedance 2.0 video generation model has gone viral in China, drawing comparisons to a “Sputnik moment” and stoking competitive and regulatory concerns from Hollywood to Beijing. The system’s hyper-realistic output and multimodal input support are being cast as a direct challenge to leading Western Artificial Intelligence video models.
