NVIDIA announced a record-breaking benchmark result of 410 trillion traversed edges per second (TEPS), ranking No. 1 on the 31st Graph500 breadth-first search (BFS) list. The winning run was performed on an accelerated computing cluster hosted in a CoreWeave data center in Dallas and used 8,192 NVIDIA H100 GPUs to process a graph with 2.2 trillion vertices and 35 trillion edges. According to NVIDIA, the result is more than double the performance of comparable solutions on the list, including those hosted in national labs.
The company framed the performance with a real-world analogy: if every person on Earth had 150 friends, the resulting social graph would contain about 1.2 trillion edges, and the system could search every friend relationship in roughly three milliseconds. Beyond raw speed, the run emphasized efficiency: a comparable top-10 entry used about 9,000 nodes, while NVIDIA’s submission used just over 1,000 nodes and delivered 3x better performance per dollar. NVIDIA credits the outcome to a full-stack approach that combines CUDA, Spectrum-X networking, H100 GPUs and a new active messaging library.
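A quick back-of-the-envelope check of that analogy, assuming roughly 8 billion people and counting each friendship as one traversed edge:

$$
8\times 10^{9}\ \text{people} \times 150\ \text{friendships per person} \approx 1.2\times 10^{12}\ \text{edges},
\qquad
\frac{1.2\times 10^{12}\ \text{edges}}{410\times 10^{12}\ \text{edges/s}} \approx 2.9\ \text{ms}.
$$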
The technical advance centers on reengineering graph processing for GPUs. While GPUs have accelerated dense workloads such as AI training, large-scale sparse and irregular graph workloads have traditionally run on CPUs, which must shuttle graph data between nodes and hit communication bottlenecks once graphs reach trillions of edges. NVIDIA implemented a GPU-only solution that uses InfiniBand GPUDirect Async (IBGDA) and the NVSHMEM parallel programming interface to enable GPU-to-GPU active messages. With IBGDA, the GPU communicates directly with the InfiniBand network interface card, and a message-aggregation layer allows hundreds of thousands of GPU threads to send active messages simultaneously, compared with the hundreds of concurrent senders typical on CPUs.
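To make the mechanism concrete, below is a minimal CUDA/NVSHMEM sketch of GPU-initiated communication of the kind IBGDA accelerates; it is an illustration under assumed buffer names and sizes, not NVIDIA's Graph500 code. Each GPU thread delivers a frontier vertex straight into a remote GPU's inbox with a device-side remote atomic and a one-sided put, with no CPU on the critical path.

```cuda
// Illustrative sketch only: per-thread GPU-initiated "active messages" via NVSHMEM.
// With IBGDA enabled (e.g. via the NVSHMEM_IB_ENABLE_IBGDA environment variable),
// these device-side calls go from the GPU to the InfiniBand NIC without the CPU.
#include <cstdio>
#include <nvshmem.h>
#include <nvshmemx.h>

constexpr int INBOX_CAPACITY = 1 << 20;   // per-PE inbox slots (illustrative size)

// Each thread sends one vertex ID to the PE that owns it: reserve a slot in the
// destination inbox with a remote fetch-and-add, then write the payload with a
// one-sided put. Slot wrap-around handling is omitted for brevity.
__global__ void push_frontier(long long *inbox, unsigned long long *inbox_count,
                              const long long *frontier, int frontier_size, int n_pes) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= frontier_size) return;

    long long vertex = frontier[tid];
    int dest_pe = static_cast<int>(vertex % n_pes);   // owner of this vertex

    unsigned long long slot =
        nvshmem_ulonglong_atomic_fetch_add(inbox_count, 1ULL, dest_pe);
    nvshmem_longlong_p(&inbox[slot % INBOX_CAPACITY], vertex, dest_pe);
}

int main() {
    nvshmem_init();
    int my_pe = nvshmem_my_pe();
    int n_pes = nvshmem_n_pes();

    // Symmetric (remotely accessible) inbox and counter allocated on every PE.
    long long *inbox = static_cast<long long *>(
        nvshmem_malloc(INBOX_CAPACITY * sizeof(long long)));
    unsigned long long *inbox_count = static_cast<unsigned long long *>(
        nvshmem_malloc(sizeof(unsigned long long)));
    cudaMemset(inbox_count, 0, sizeof(unsigned long long));

    // Toy local frontier of vertex IDs; real initialization omitted for brevity.
    const int frontier_size = 1024;
    long long *frontier;
    cudaMalloc(&frontier, frontier_size * sizeof(long long));
    cudaMemset(frontier, 0, frontier_size * sizeof(long long));

    nvshmem_barrier_all();
    push_frontier<<<(frontier_size + 255) / 256, 256>>>(
        inbox, inbox_count, frontier, frontier_size, n_pes);
    nvshmemx_barrier_all_on_stream(0);   // complete outstanding puts, sync all PEs
    cudaStreamSynchronize(0);

    printf("PE %d of %d finished one frontier exchange\n", my_pe, n_pes);

    cudaFree(frontier);
    nvshmem_free(inbox);
    nvshmem_free(inbox_count);
    nvshmem_finalize();
    return 0;
}
```

In NVIDIA's description, messages from hundreds of thousands of threads are also aggregated before they reach the NIC; that batching layer is omitted here, and a program like this would typically be launched with one process per GPU via a launcher such as mpirun.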
Running on CoreWeave infrastructure, the GPU-native active messaging approach bypasses the CPU, exploits H100 parallelism and memory bandwidth, and shrinks the hardware footprint and cost. NVIDIA says the result validates a path for bringing supercomputing performance to commercially available infrastructure, and suggests that other high-performance computing fields with sparse communication patterns, such as fluid dynamics and weather forecasting, can use NVSHMEM and IBGDA to scale their largest applications.
