NVIDIA Unveils Grace Blackwell System at Computex 2025 Keynote

NVIDIA CEO Jensen Huang introduces the Grace Blackwell Artificial Intelligence inferencing system at Computex 2025, redefining datacenter power and capacity.

At Computex 2025, NVIDIA CEO Jensen Huang delivered a keynote address that focused on the company's advancements in Artificial Intelligence hardware, highlighted by the official introduction of the Grace Blackwell inferencing system. This new platform promises compute throughput per node that matches the performance of the 2018 Sierra supercomputer, signaling significant progress in high-performance Artificial Intelligence workloads.

The Grace Blackwell system leverages NVIDIA's next-generation NVLink Spine interconnect, connecting 72 GB300 nodes within a single rack. According to NVIDIA, a single NVLink Spine is capable of moving more traffic than the entire global Internet, showcasing the immense bandwidth achieved with this technology. Additionally, the NVL72 rack that houses the system requires 128 kVA of power, underlining its energy demands and raw computational capabilities.

NVIDIA's terminology shift from "datacenter" to "AI factory" was explained during the keynote: each installation is designed to draw hundreds of megawatts of power, enough to rival the compute capacity of hundreds of conventional datacenters. This rebranding reflects NVIDIA's vision for Artificial Intelligence infrastructure at an unprecedented scale. The keynote also shared insights into the advanced manufacturing process behind the Blackwell GPU, emphasizing engineering and design innovation aimed at supporting sustained, large-scale Artificial Intelligence operations.

Impact Score: 82

Saudi Artificial Intelligence startup launches Arabic LLM

Misraj Artificial Intelligence unveiled Kawn, an Arabic large language model, at AWS re:Invent and launched Workforces, a platform for creating and managing Artificial Intelligence agents for enterprises and public institutions.

Introducing Mistral 3: open artificial intelligence models

Mistral 3 is a family of open, multimodal and multilingual Artificial Intelligence models that includes three Ministral edge models and a sparse Mistral Large 3 trained with 41B active and 675B total parameters, released under the Apache 2.0 license.

NVIDIA and Mistral Artificial Intelligence partner to accelerate new family of open models

NVIDIA and Mistral Artificial Intelligence announced a partnership to optimize the Mistral 3 family of open-source, multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise Artificial Intelligence deployments starting Tuesday, Dec. 2.
