Huawei Unveils 6 nm Ascend 920C Accelerator With 900 TeraFLOPS and HBM3

Huawei is advancing its Artificial Intelligence hardware with the Ascend 920C, promising 900 TeraFLOPS and faster HBM3 memory to compete more closely with NVIDIA's high-end solutions.

Huawei has announced advancements in its Artificial Intelligence hardware lineup with the upcoming Ascend 920C accelerator, aimed at closing the efficiency gap with rivals such as NVIDIA. The new accelerator is part of the Ascend 920 family and is manufactured on SMIC's 6 nm process node. Reports indicate that each Ascend 920C card will surpass 900 TeraFLOPS of BF16 half-precision compute, a significant step forward compared to the current Ascend 910C model.

The memory subsystem also sees a major upgrade, as the 920C will be equipped with next-generation HBM3 modules, increasing total bandwidth to 4,000 GB/s from the 3,200 GB/s of the 910C's HBM2E configuration. Huawei is maintaining the chiplet-based architecture but is refining the internal tensor acceleration engines to better serve demanding Transformer and Mixture-of-Experts models, which are used widely in large-scale Artificial Intelligence training. Along with this, chip-to-chip interconnect and system support will advance to PCIe 5.0 and new high-throughput interconnect protocols, further boosting the node-to-node communication crucial for dense cluster deployments.

Internal projections at Huawei estimate that training efficiency with the Ascend 920C could improve by 30 to 40 percent over the previous 910C, which peaks at 780 TeraFLOPS. This leap is expected to narrow the performance-per-watt differences versus competing solutions. While a firm release date for the new accelerator was not disclosed, sources suggest that mass production will commence in the second half of 2025, positioning Huawei to challenge rivals in the Artificial Intelligence infrastructure market in the near future.
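The reported figures can be put side by side to see how the raw hardware gains compare with Huawei's projected training-efficiency improvement. The snippet below is a simple sanity check on the numbers quoted in the article; the specification dictionary is assembled for illustration only and is not an official datasheet.

```python
# Side-by-side comparison of the reported Ascend 910C vs 920C figures.
# All raw numbers come from the article text; the percentages are
# derived ratios, not Huawei's own 30-40 percent efficiency projection.

specs = {
    "910C": {"bf16_tflops": 780, "mem_bw_gbps": 3200},  # HBM2E
    "920C": {"bf16_tflops": 900, "mem_bw_gbps": 4000},  # HBM3 (reported)
}

compute_gain = specs["920C"]["bf16_tflops"] / specs["910C"]["bf16_tflops"] - 1
bandwidth_gain = specs["920C"]["mem_bw_gbps"] / specs["910C"]["mem_bw_gbps"] - 1

print(f"Raw BF16 compute uplift:  {compute_gain:.1%}")   # ~15.4%
print(f"Memory bandwidth uplift:  {bandwidth_gain:.1%}") # 25.0%
```

Notably, the projected 30 to 40 percent training-efficiency gain exceeds both raw uplifts, suggesting the improvement is expected to come largely from the refined tensor engines and interconnect rather than peak FLOPS alone.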

