NVIDIA Unveils RTX PRO 6000 with 96 GB GDDR7 Memory in Innovative 3 GB Modules

NVIDIA's RTX PRO 6000 introduces a groundbreaking 96 GB of GDDR7 memory built from new 3 GB modules, signaling a major shift in memory technology for professional graphics and Artificial Intelligence workloads.

NVIDIA has redefined its professional graphics lineup, rebranding it as "RTX PRO" and launching the RTX PRO 6000, the first workstation GPU built with 96 GB of GDDR7 memory, made possible by new 3 GB memory modules. The 32 modules are distributed evenly across both sides of the PCB, 16 per side, delivering unprecedented memory capacity with error-correcting code (ECC) support for high reliability under demanding workloads.
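The capacity figure follows directly from the module layout described above; a quick sketch of the arithmetic (constant names are illustrative):

```python
GB_PER_MODULE = 3       # density of the new GDDR7 modules
MODULES_PER_SIDE = 16   # per the leaked PCB layout
PCB_SIDES = 2           # modules are mounted on both faces of the board

total_modules = MODULES_PER_SIDE * PCB_SIDES
total_memory_gb = total_modules * GB_PER_MODULE

print(total_modules, total_memory_gb)  # 32 96
```

At the previous 2 GB module density, the same 32-module layout would have topped out at 64 GB, which is why the 3 GB parts are the enabling change here.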

The new PCB design, revealed in a recent Chiphell forum leak, drops the previous 12V-2×6 power connector in favor of four solder points that accommodate a cable extension. This change prepares the card for both Server and Max-Q editions, with the power inputs relocated to the rear to streamline the overall footprint. The workstation GPU retains the full GB202 Blackwell GPU and the full memory configuration, pairing the new layout with consistent high-end performance.

The RTX PRO 6000 Blackwell series is set to arrive in three primary configurations: Workstation, Server, and Max-Q. The Workstation and Server variants are equipped with 24,064 CUDA cores, 96 GB of GDDR7 ECC memory, and a 600 W power budget, enabling powerful performance in desktop towers and rack-mounted systems. The Max-Q version retains the same GPU and memory capacity but runs at lower clock speeds under a 300 W power limit, trading peak throughput for compact and noise-sensitive environments while keeping the full feature set. This design positions the RTX PRO 6000 as a versatile and forward-looking tool across professional, scientific, and Artificial Intelligence applications.
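The three variants can be summarized as structured data; this is an illustrative sketch built only from the published figures above (the `CONFIGS` mapping and `power_ratio` helper are hypothetical, not an NVIDIA API, and Max-Q clock speeds are not public, so only the power limit is modeled):

```python
# Published figures for the three RTX PRO 6000 Blackwell variants.
CONFIGS = {
    "Workstation": {"cuda_cores": 24_064, "memory_gb": 96, "power_w": 600},
    "Server":      {"cuda_cores": 24_064, "memory_gb": 96, "power_w": 600},
    "Max-Q":       {"cuda_cores": 24_064, "memory_gb": 96, "power_w": 300},
}

def power_ratio(a: str, b: str) -> float:
    """Ratio of the power budgets of two variants."""
    return CONFIGS[a]["power_w"] / CONFIGS[b]["power_w"]

# The Workstation card is allowed twice the power of the Max-Q card,
# even though core count and memory capacity are identical.
print(power_ratio("Workstation", "Max-Q"))  # 2.0
```

The identical core and memory figures across all three rows make the point of the lineup explicit: the variants differ in power envelope and form factor, not in silicon or capacity.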


