GIGABYTE Unveils Custom AI TOP Atom DGX Spark Supercomputer

GIGABYTE debuts the AI TOP Atom, a powerful custom NVIDIA DGX Spark box for local Artificial Intelligence software development, at Computex 2025.

At Computex 2025, GIGABYTE introduced the AI TOP Atom, its custom take on the compact NVIDIA DGX Spark Artificial Intelligence supercomputer. The unveiling follows NVIDIA's announcement that the DGX Spark platform would be opened to third-party design partners, setting the stage for bespoke Artificial Intelligence hardware built to diverse specifications and workloads.

The GIGABYTE AI TOP Atom is built around the NVIDIA GB10 'Grace Blackwell' superchip, a unified CPU-and-GPU platform that represents the next generation of local Artificial Intelligence acceleration. The system carries 128 GB of unified LPDDR5X memory, enabling high-throughput data handling and efficient multitasking for demanding AI-native applications. NVIDIA's NVLink-C2C chip-to-chip interconnect keeps memory and cache fully coherent between the CPU and GPU, so the platform sustains high performance while preserving data integrity across the system.
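From a developer's perspective, that unified pool shows up through standard CUDA tooling. As a minimal sketch (assuming a CUDA-enabled PyTorch build on the device; the printed figures are illustrative, not vendor specifications), one might confirm what the framework sees:

```python
import torch

# Minimal sketch: query the GPU visible to PyTorch on a unified-memory
# system such as the GB10. On a coherent CPU+GPU platform, the reported
# device memory reflects the shared LPDDR5X pool rather than discrete VRAM.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"Device: {props.name}")
    print(f"Total memory: {props.total_memory / 1024**3:.1f} GiB")
else:
    print("No CUDA device visible to PyTorch.")
```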

To maximize scalability and enable high-speed collaboration between units, the AI TOP Atom includes a ConnectX-7 InfiniBand NIC, letting users stack and link multiple DGX Spark boxes into purpose-built Artificial Intelligence clusters for research, development, or deployment. Delivering up to 1,000 Artificial Intelligence TOPS of compute performance, the AI TOP Atom is optimized for models in the 70-billion to 200-billion parameter class. Its design targets local, AI-native software development, giving developers the tools to accelerate model training, inference, and edge deployment without relying on centralized datacenter resources.
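To illustrate the kind of on-device workflow the box targets, here is a hedged sketch of running a large open-weights model entirely locally with Hugging Face Transformers. The model identifier is a placeholder, and 4-bit quantization is one common way to fit a 70-billion-parameter model into a 128 GB unified memory pool; this is an illustrative recipe, not a GIGABYTE- or NVIDIA-documented one:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Hypothetical model id; substitute any local or hub checkpoint you have access to.
MODEL_ID = "meta-llama/Llama-3.1-70B-Instruct"

# 4-bit quantization keeps a 70B-parameter model comfortably inside a
# 128 GB unified memory pool (illustrative configuration, not vendor guidance).
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=quant_config,
    device_map="auto",  # place weights across the local GPU/CPU memory pool
)

prompt = "Summarize the benefits of on-device AI development."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```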
