ASUS brings NVIDIA GB300 'Blackwell Ultra' to desktop supercomputing

ASUS reimagines desktop power with the NVIDIA Grace Blackwell Ultra, targeting Artificial Intelligence researchers with unmatched compute in a single chassis.

ASUS has taken an ambitious step by integrating NVIDIA's GB300 Grace Blackwell Ultra, a superchip originally designed for server infrastructure, into a desktop form factor. The custom system, unveiled as the ExpertCenter Pro ET900N G3, harnesses the hybrid design of NVIDIA's latest CPU-GPU superchip, delivering up to 20 PetaFLOPS of FP4 compute. It carries 784 GB of unified, cache-coherent memory: 288 GB of HBM3E attached to the GPU and 496 GB of LPDDR5X serving the Grace CPU. The result moves server-grade computational density into a workstation chassis built specifically for advanced computing workloads.
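The memory figures above invite a quick back-of-envelope calculation: at FP4 precision each model weight occupies half a byte, so the unified pool sets a rough ceiling on model size. The sketch below is purely illustrative; the overhead fraction reserved for KV cache and activations is an assumed value, not a vendor figure.

```python
# Illustrative back-of-envelope: model capacity of the GB300's unified memory
# at 4-bit (FP4) weight precision. Only the memory sizes come from the article;
# the overhead fraction is an assumption for the sketch.

HBM3E_GB = 288       # GPU-attached HBM3E (from the article)
LPDDR5X_GB = 496     # Grace CPU-attached LPDDR5X (from the article)
UNIFIED_GB = HBM3E_GB + LPDDR5X_GB  # cache-coherent pool: 784 GB

BYTES_PER_FP4_PARAM = 0.5  # 4 bits per weight

def max_params_billions(memory_gb: float, overhead_frac: float = 0.2) -> float:
    """Rough parameter ceiling, reserving a fraction of memory for
    KV cache and activations. `overhead_frac` is an assumed value."""
    usable_bytes = memory_gb * 1e9 * (1 - overhead_frac)
    return usable_bytes / BYTES_PER_FP4_PARAM / 1e9

print(f"Unified memory: {UNIFIED_GB} GB")
print(f"~{max_params_billions(UNIFIED_GB):.0f}B parameters at FP4 "
      f"(reserving 20% for cache/activations)")
```

Under these assumptions, the 784 GB pool could hold weights for a model on the order of a trillion-plus parameters at FP4, which is the kind of headroom the article's "desktop supercomputing" framing points at.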

Distinct from standard desktops, ASUS’s implementation follows NVIDIA’s DGX blueprint, encouraging OEMs to craft purpose-built systems. Connectivity keeps pace with compute performance, thanks to the inclusion of an 800 Gb/s ConnectX-8 SuperNIC, designed to funnel data at hyperscale rates. The system runs NVIDIA’s DGX OS, a customized Ubuntu derivative, featuring kernel tweaks and optimizations to squeeze every drop of performance from the Blackwell architecture. These software-level enhancements underscore the platform’s intent for serious, purpose-driven research rather than general office work.

Designed for professionals and Artificial Intelligence researchers, the motherboard inside the ET900N G3 is engineered for expansion and extreme workloads. It offers three full-length PCIe x16 slots for stacking GPUs or deploying specialty accelerators, alongside three M.2 slots for ultra-fast SSDs. Power delivery is equally robust, supporting up to 1,800 W through dedicated GPU power connectors in addition to the standard ATX and EPS12V inputs, a clear signal of readiness for multi-GPU configurations in compute-intensive environments. While ASUS has yet to announce pricing or availability, the enterprise-grade hardware suggests a premium well above typical workstation costs, aimed at teams integrating desktop-scale supercomputing into Artificial Intelligence research and development pipelines.

Impact Score: 73

Tech firms commit billions to Artificial Intelligence infrastructure

Amazon, OpenAI, Nvidia, Meta, Google and others are signing increasingly large cloud, chip and data center agreements as demand for Artificial Intelligence infrastructure accelerates. The latest wave of deals spans investments, compute purchases, chip supply agreements and data center buildouts.

JEDEC outlines LPDDR6 expansion for data centers

JEDEC has previewed planned updates to LPDDR6 aimed at pushing the memory standard beyond mobile devices and into selected data center and accelerated computing use cases. The roadmap includes higher-capacity packaging options, flexible metadata support, 512 GB densities, and a new SOCAMM2 module standard.

TSMC debuts A13 process technology

TSMC has introduced its A13 process at its 2026 North America Technology Symposium as a tighter version of A14 aimed at next-generation Artificial Intelligence, high-performance computing, and mobile designs. The company positions the node as a more compact and efficient option with backward-compatible design rules for faster migration.

Google unveils eighth-generation Tensor Processing Units

Google introduced its eighth generation of custom Tensor Processing Units with separate designs for training and inference. The new TPU 8t and TPU 8i are aimed at large-scale model training, serving, and agentic workloads.
