Micron launches 192 GB SOCAMM2 for low-power Artificial Intelligence data centers

Micron is sampling a 192 GB SOCAMM2 module to expand adoption of low-power memory in Artificial Intelligence data centers. The module increases capacity in the same footprint and promises large reductions in time to first token for real-time inference.

Micron Technology announced customer sampling of a 192 GB SOCAMM2, a small outline compression attached memory module designed to broaden adoption of low-power DRAM in Artificial Intelligence data centers. The SOCAMM2 builds on Micron’s earlier low-power DRAM SOCAMM and delivers 50 percent more capacity in the same compact footprint. Micron positions the added capacity as a direct enabler of performance gains in real-time inference workloads.

The company says the additional capacity can cut time to first token, or TTFT, by more than 80 percent for real-time inference. The 192 GB SOCAMM2 uses Micron’s 1-gamma DRAM process technology and delivers a greater than 20 percent improvement in power efficiency, enabling tighter power design optimization across large data center clusters. Those efficiency gains become more significant at scale, where full-rack installations can include more than 40 terabytes of CPU-attached low-power DRAM main memory.
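As a rough sanity check on that rack-scale figure, the 40 terabyte number implies on the order of two hundred modules per rack. This is an illustrative back-of-envelope calculation, not a configuration stated by Micron:

```python
# Back-of-envelope estimate of SOCAMM2 modules per full rack.
# Assumes "40 terabytes" means 40 * 1024 GB; Micron does not state
# the actual module count per rack, so this is illustrative only.
MODULE_CAPACITY_GB = 192   # per 192 GB SOCAMM2 module
RACK_CAPACITY_TB = 40      # ">40 TB of CPU-attached low-power DRAM" per rack

modules_per_rack = RACK_CAPACITY_TB * 1024 / MODULE_CAPACITY_GB
print(f"~{modules_per_rack:.0f} modules per full rack")  # ~213
```

The real count depends on how many SOCAMM2 slots each server provides and how "terabyte" is defined, but it shows why per-module power efficiency compounds quickly at rack scale.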

Micron highlights the SOCAMM2’s modular design as improving serviceability and providing a pathway for future capacity expansion in data center deployments. The announcement frames customer sampling as the next step toward broader market availability and positions the product as a response to the industry shift toward more energy-efficient infrastructure supporting growth in Artificial Intelligence workloads.

Impact Score: 55

AMD plans specialized EPYC CPUs for Artificial Intelligence, HPC, and cloud

AMD is preparing a broader EPYC strategy with task-specific server CPUs aimed at agentic Artificial Intelligence, HPC, training and inference, and cloud deployments. The shift starts with the Zen 6 generation and adds Verano as an Artificial Intelligence-focused variant within the same EPYC family.

Nvidia expands Spectrum-X Ethernet with open MRC protocol

Nvidia is positioning Spectrum-X Ethernet as a foundation for large-scale Artificial Intelligence training, with Multipath Reliable Connection (MRC) adding open, multi-path RDMA transport for higher resilience and throughput. OpenAI, Microsoft, and Oracle are among the organizations using the technology in large Artificial Intelligence environments.

Anthropic explores Fractile chips to diversify supply

Anthropic is reportedly in early talks with London-based Fractile to secure high-performance Artificial Intelligence chips for inference workloads. The move would reduce reliance on Nvidia and broaden the company’s hardware supply chain.

OpenAI curbs odd creature references in chatbot responses

OpenAI has adjusted its models after users complained about overly familiar responses and strange references to goblins, gremlins, pigeons, and raccoons. The company traced the behavior to a retired “nerdy” personality whose habits spread into broader model training.
