Nvidia plans stronger FP64 performance for next gen high performance computing GPUs

Nvidia is reaffirming its commitment to 64-bit floating point performance in high performance computing, signaling that upcoming architectures will restore and enhance FP64 capabilities after recent generations prioritized lower precision throughput.

Nvidia is pushing back against the perception that it is moving away from high performance computing and 64-bit precision, clarifying that recent product choices do not signal an exit from the space. The company told HPCWire that 64-bit floating point data remains central to its roadmap, even as recent architectures such as Hopper and Blackwell have emphasized lower precision formats better suited to accelerating artificial intelligence workloads.

Dion Harris, senior director of high performance computing and artificial intelligence hyperscale infrastructure solutions at Nvidia, said the company is “definitely looking to bring some additional [FP64] capabilities in our future gen architectures” and stressed that Nvidia is “very serious about making sure that we can deliver the required performance to power those simulation workloads.” The comments are aimed at users who rely on sustained double precision throughput, particularly in scientific and engineering domains, and who have been concerned by stagnating FP64 metrics in newer flagship accelerators.

Accelerating 64-bit floating point data paths is described as crucial for the high performance computing community, with the life sciences called out as a key beneficiary. Users have noted that when a workload demands sustained high-precision throughput, Nvidia’s recent generations have fallen short of expectations. For comparison, Nvidia’s current most powerful B300 “Blackwell Ultra” accelerator achieves only 1.2 TeraFLOPS of FP64 performance, while the older H200 “Hopper” reaches 34 TeraFLOPS of FP64 compute at its peak. At low-precision FP8, the relationship reverses: the B300 delivers 9 PetaFLOPS against the H200’s 3.958 PetaFLOPS. These figures highlight how Nvidia has so far optimized its newest platforms for lower precision formats, even as it now publicly commits to improving double precision capabilities in its next generation designs.
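To see why simulation users insist on FP64 rather than the low-precision formats that dominate artificial intelligence accelerators, here is a small illustrative sketch (not specific to any Nvidia hardware; NumPy is assumed) showing how naive accumulation in a 16-bit float stalls once rounding error dominates, while double precision stays accurate:

```python
import numpy as np

# Sum 20,000 copies of 0.1 — the true total is 2000.
n = 20_000
values = np.full(n, 0.1, dtype=np.float64)

# Double precision: pairwise summation keeps the result essentially exact.
sum64 = np.sum(values, dtype=np.float64)

# Naive half-precision accumulation: once the running sum grows large,
# the spacing between representable float16 values exceeds 0.1, so
# further additions round away to nothing and the sum stalls.
sum16 = np.float16(0)
for v in values.astype(np.float16):
    sum16 = np.float16(sum16 + v)

print(f"float64 sum: {sum64:.4f}")        # close to 2000
print(f"float16 sum: {float(sum16):.4f}") # stalls far below the true total
```

The same effect, at much smaller magnitudes, is why iterative solvers and long-running simulations in science and engineering depend on sustained double precision throughput rather than FP8-class formats.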


