Nvidia and HPE to build Blue Lion supercomputer in Germany

Nvidia and Hewlett Packard Enterprise team up with the Leibniz Supercomputing Centre to create Blue Lion—a supercomputer in Germany optimized for simulation, data processing, and Artificial Intelligence.

Nvidia and Hewlett Packard Enterprise unveiled plans at a major supercomputing event in Hamburg to develop a new high-performance computing system named Blue Lion in partnership with Germany's Leibniz Supercomputing Centre. Blue Lion is set to deliver around 30 times the computational capability of the current SuperMUC-NG system. At the heart of Blue Lion will be Nvidia's forthcoming Vera Rubin architecture, which pairs the Rubin GPU with Nvidia's first custom CPU, Vera. This integrated platform aims to bring together simulation, data processing, and Artificial Intelligence under a single high-bandwidth, low-latency umbrella, specifically tailored for ambitious scientific workloads.

The system will be built on HPE's latest Cray supercomputing technologies, combining Nvidia GPUs with state-of-the-art storage and interconnect solutions. A notable aspect of the Blue Lion project is its use of HPE's entirely fanless direct liquid-cooling system: warm water circulates through pipes to draw heat away from the supercomputer's critical components. The environmental considerations extend further, as the system's waste heat will be reused to warm surrounding buildings, demonstrating a sustainable approach to energy use in large-scale computing.

Scheduled for researcher access by early 2027 at the Leibniz Supercomputing Centre, Blue Lion will serve a broad range of scientific fields, notably climate research, physics, and machine learning. The announcement follows Nvidia's disclosure of a parallel Vera Rubin-based supercomputer project at the Lawrence Berkeley National Lab in the United States, named Doudna, which will process data from sources including telescopes, genome sequencers, and fusion experiments. As these new systems come online, they promise to set new benchmarks for scientific discovery, sustainability, and the integration of Artificial Intelligence with traditional numerical simulation.

SK hynix starts mass production of 192 GB SOCAMM2

SK hynix has begun mass production of the 192 GB SOCAMM2, a next-generation memory module standard built on 1cnm LPDDR5X low-power DRAM. The module is positioned as a primary memory solution for next-generation Artificial Intelligence servers.

AMD taps GlobalFoundries for co-packaged optics in Instinct MI500

AMD is preparing a renewed manufacturing link with GlobalFoundries to bring co-packaged optics to its Instinct MI500 Artificial Intelligence accelerators. The move is aimed at improving bandwidth and power efficiency in data center systems by moving beyond copper-based interconnects.

Cerebras files for IPO with wafer-scale chip challenge to Nvidia

Cerebras has filed for a Nasdaq listing as it tries to turn its wafer-scale processor architecture into a challenger to Nvidia in Artificial Intelligence acceleration and local inference. The company is pitching extreme chip scale, high throughput, and lower system costs as demand for on-device and edge workloads grows.

Jensen Huang defends Nvidia chip sales to China

Jensen Huang argued that restricting Nvidia chip sales to China would not stop Chinese Artificial Intelligence development and could instead push developers onto a non-American technology stack. He said the better strategy is to keep global Artificial Intelligence work tied to the American ecosystem through continued innovation.
