DRAM stocks fall after Google TurboQuant debut

DRAM manufacturers came under pressure after Google introduced TurboQuant, which it says can sharply reduce the memory needs of Artificial Intelligence models while speeding up inference. The announcement coincided with notable declines in shares of Micron, SK Hynix, and Samsung Electronics.

Stock prices of DRAM manufacturers fell by as much as 19.5% over the past five days following the March 24 announcement of Google TurboQuant, a new technology that Google claims will reduce the memory footprint of Artificial Intelligence models by a factor of 6 and improve inference speeds by a factor of 8. As of this writing, Micron Technology (NASDAQ: MU) is down 19.5% over the last five days. In Korea, SK Hynix's stock dropped 6%, while Samsung Electronics dipped 5%.

TurboQuant is an advanced quantization algorithm developed by Google that delivers massive data compression for LLMs and vector search engines. It targets memory bottlenecks in the key-value cache and accelerates similarity lookups without sacrificing model accuracy. The system is presented as a way to improve efficiency in workloads where memory use and retrieval speed are critical constraints.

TurboQuant achieves this efficiency by combining two novel techniques: PolarQuant, which re-encodes vectors in polar coordinates to cut the usual quantization overhead, and Quantized Johnson-Lindenstrauss (QJL), a 1-bit random-projection sketch with provable distortion guarantees. Capable of compressing the key-value cache to just 3 bits per entry without requiring fine-tuning, TurboQuant enables up to 8x faster runtimes on GPUs, establishing a new standard for Artificial Intelligence efficiency.
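To give a sense of how a 1-bit Johnson-Lindenstrauss sketch can shrink a key-value cache while still supporting similarity lookups, here is a minimal illustration of the general QJL idea: keys are projected through a shared Gaussian matrix and only the sign bits are stored, yet inner products against full-precision queries can still be estimated. This is a sketch of the published QJL technique in general, not Google's TurboQuant implementation, and all function names are hypothetical.

```python
import numpy as np

def qjl_encode(x, S):
    """Compress a key vector to 1 bit per projected coordinate:
    keep only the signs of a Gaussian random projection S @ x."""
    return np.sign(S @ x)

def qjl_inner_product(bits_k, norm_k, q, S):
    """Estimate <k, q> from the 1-bit sketch of k and a full-precision query q.
    For Gaussian S, E[sign(<s, k>) * <s, q>] = sqrt(2/pi) * <k, q> / ||k||,
    so rescaling by sqrt(pi/2) * ||k|| / m recovers the inner product."""
    m = S.shape[0]
    return np.sqrt(np.pi / 2) * norm_k * (bits_k @ (S @ q)) / m

rng = np.random.default_rng(0)
d, m = 64, 8192
S = rng.standard_normal((m, d))        # shared random projection
k = rng.standard_normal(d)             # a "key" vector to be compressed
q = k + 0.3 * rng.standard_normal(d)   # a correlated "query"

bits = qjl_encode(k, S)                # stored sketch: 1 bit per coordinate
est = qjl_inner_product(bits, np.linalg.norm(k), q, S)
true = float(k @ q)                    # est concentrates around true
```

Only the sign bits and one scalar norm per key need to be cached; the estimate's error shrinks as the sketch length m grows, which is what makes aggressive low-bit compression of the KV cache viable.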


Chrome downloads Gemini Nano model locally without clear consent

Google Chrome is reported to download a 4 GB Gemini Nano model onto some PCs automatically when certain Artificial Intelligence features are active. The process happens without clear notice in browser settings and can repeat after the model is deleted.

AMD plans specialized EPYC CPUs for Artificial Intelligence, HPC, and cloud

AMD is preparing a broader EPYC strategy with task-specific server CPUs aimed at agentic Artificial Intelligence, HPC, training and inference, and cloud deployments. The shift starts with the Zen 6 generation and adds Verano as an Artificial Intelligence-focused variant within the same EPYC family.

Nvidia expands Spectrum-X Ethernet with open MRC protocol

Nvidia is positioning Spectrum-X Ethernet as a foundation for large-scale Artificial Intelligence training, with Multipath Reliable Connection adding open, multi-path RDMA transport for higher resilience and throughput. OpenAI, Microsoft and Oracle are among the organizations using the technology in large Artificial Intelligence environments.

Anthropic explores Fractile chips to diversify supply

Anthropic is reportedly in early talks with London-based Fractile to secure high-performance Artificial Intelligence chips for inference workloads. The move would reduce reliance on Nvidia and broaden the company’s hardware supply chain.
