Samsung starts sampling 3 GB GDDR7 running at 36 Gbps

Samsung has begun sampling its fastest GDDR7 memory yet, 36 Gbps modules built on 24 Gb dies that translate to 3 GB per chip, and has already put 28 Gbps 3 GB modules into mass production, reportedly aimed at a mid-cycle NVIDIA refresh.

Samsung has started sampling GDDR7 memory running at 36 Gbps on 24 Gb dies, which translates to 3 GB per chip and positions the modules for next-generation graphics cards. Samsung is not limited to the 36 Gbps parts: the company has already put 28 Gbps 3 GB modules into mass production, which the report links to what it assumes is NVIDIA's next mid-cycle SUPER refresh. The appearance of 3 GB modules in mass production is notable because such densities remain rare.

The report also describes mid-range 32 Gbps 3 GB GDDR7 modules now being sampled, but suggests manufacturers may opt for the 28 Gbps parts in mainstream designs in the near term. The faster 32 Gbps and 36 Gbps modules are more likely to be reserved for professional-level cards, where higher data rates and error correction are prioritized; the coverage specifically cites professional-grade applications as the likely early adopters of the higher-frequency parts.

Examples in the report link the 3 GB GDDR7 modules to current professional NVIDIA products. NVIDIA's RTX PRO 6000 'Blackwell' GPU already uses 3 GB modules, and NVIDIA recently updated the RTX PRO 5000 'Blackwell' GPU from 48 GB to 72 GB of GDDR7 ECC memory. That updated RTX PRO 5000 uses 24 modules, so each module must hold 3 GB to reach the 72 GB total. Taken together, the sampling of 32 Gbps and 36 Gbps parts and the mass production of 28 Gbps 3 GB modules point to growing availability of faster, denser VRAM, with the highest-speed parts expected to appear first in professional and pro-viz markets.
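As a quick sanity check, the capacity and bandwidth figures above can be reproduced with simple arithmetic. This is a sketch: the 32-bit per-module interface width is a standard GDDR7 assumption, not something stated in the report.

```python
def module_capacity_gb(die_capacity_gbit: int) -> float:
    """Capacity per module in GB, given the die capacity in gigabits."""
    return die_capacity_gbit / 8  # 8 bits per byte


def module_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int = 32) -> float:
    """Peak bandwidth per module in GB/s.

    Assumes the standard 32-bit GDDR7 module interface (not stated in the report).
    """
    return data_rate_gbps * bus_width_bits / 8


# 24 Gb dies -> 3 GB per chip
print(module_capacity_gb(24))

# RTX PRO 5000 update: 24 modules x 3 GB = 72 GB total
print(24 * module_capacity_gb(24))

# Per-module peak bandwidth at the three cited data rates
for rate in (28, 32, 36):
    print(f"{rate} Gbps -> {module_bandwidth_gbs(rate)} GB/s per module")
```

At 36 Gbps, each module peaks at 144 GB/s under that assumption, which is why the fastest parts are aimed at bandwidth-hungry professional cards.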

Impact Score: 55

Mustafa Suleyman says Artificial Intelligence compute growth is still accelerating

Mustafa Suleyman argues that Artificial Intelligence development is being propelled by simultaneous advances in chips, memory, networking, and software efficiency rather than nearing a hard limit. He contends that rising compute capacity and falling deployment costs will push systems beyond chatbots toward more capable agents.

China and the US are leading different Artificial Intelligence races

The US leads in large language models and advanced chips, while China has built a major advantage in robotics and humanoid manufacturing. That balance is shifting as Chinese developers narrow the gap in model performance and both countries push to combine software and machines.

Congress weighs Artificial Intelligence transparency rules

Bipartisan lawmakers are pushing a federal transparency standard for the largest Artificial Intelligence models as Congress works on a broader national framework. The proposal aims to increase public trust while avoiding stricter state-by-state requirements and heavier regulation.

Report finds California creative job losses are not driven by Artificial Intelligence

New research from Otis College of Art and Design finds California’s recent creative industry job losses stem from cost pressures and structural shifts, not direct worker displacement by generative Artificial Intelligence. The technology is changing workflows and expectations, but it is largely replacing tasks rather than entire jobs.

U.S. senators propose broader chip tool export ban for Chinese firms

A bipartisan proposal in the U.S. Senate would shift semiconductor equipment controls from specific fabs to targeted Chinese companies and their affiliates. The measure is aimed at cutting off access to advanced lithography and other wafer fabrication tools for firms such as Huawei, SMIC, YMTC, CXMT, and Hua Hong.
