Huawei KunPeng 930 is an 80-core CPU made on TSMC N5

A teardown by creator Kurnal reveals that Huawei's KunPeng 930 combines TSMC 5 nm compute chiplets with an SMIC-made I/O die to deliver up to 80 cores, large on-die caches, and broad connectivity. The hybrid chiplet design aims to balance SRAM density and core scaling against easier I/O production.

A teardown of Huawei's KunPeng 930 by creator Kurnal, posted on YouTube and BiliBili, shows a large package measuring roughly 77.5 mm by 58.0 mm that pairs dense compute tiles manufactured on TSMC's 5 nm node with a sizable input/output die produced at SMIC on a more mature node, possibly 14 nm. This hybrid split concentrates SRAM density and core scaling on the leading-edge node while shifting I/O and routing to a foundry that can absorb volume and ease supply pressure.

Each compute tile contains forty Arm-derived cores of Huawei's TaiShan family, and platforms can be configured as dual-die SKUs to reach a maximum of 80 cores. The compute tiles include a pair of private 2 MB L2 caches per core and about 91 MB of shared L3 cache on the same die, a significant increase over the previous generation. DDR5 memory controllers sit on the compute tiles, and Huawei diagrams indicate 12 memory channels per chiplet, for a theoretical total of 24 channels per processor. The photographed board shows sixteen DIMM sockets, which implies that not every theoretical memory channel is routed in this implementation.
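As a quick sanity check of the dual-die arithmetic, the package-level totals follow directly from the per-tile figures quoted in the teardown (these are the article's numbers, not official Huawei specifications):

```python
# Per-tile figures reported in the teardown; a dual-die SKU doubles them.
cores_per_tile = 40
l3_per_tile_mb = 91
ddr5_channels_per_tile = 12
tiles = 2  # dual-die configuration

total_cores = cores_per_tile * tiles             # maximum core count
total_l3_mb = l3_per_tile_mb * tiles             # shared L3 across the package
total_channels = ddr5_channels_per_tile * tiles  # theoretical DDR5 channels

print(total_cores, total_l3_mb, total_channels)  # 80 182 24
```

The 24-channel figure is theoretical; as noted, the photographed board routes only sixteen DIMM sockets.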

The I/O die offers broad connectivity: teardown images indicate the platform was designed for up to 96 PCIe lanes, although the photographed board exposes roughly 80 lanes due to routing and cost trade-offs. An SMIC-made I/O die would ease volume production but will likely require firmware and software tuning to extract peak performance. At this stage, the teardown provides hardware and floorplan details but no server-centric benchmark results, so real-world performance and platform tuning remain to be seen.


Introducing Mistral 3: open artificial intelligence models

Mistral 3 is a family of open, multimodal, and multilingual AI models that includes three Ministral edge models and a sparse Mistral Large 3 trained with 41B active and 675B total parameters, released under the Apache 2.0 license.
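The parameter counts quoted above imply how sparse the model's mixture-of-experts routing is; a quick calculation (using only the 41B/675B figures from the announcement) gives the fraction of weights active per token:

```python
# Active-parameter fraction for Mistral Large 3's sparse design,
# based on the 41B active / 675B total counts quoted in the announcement.
active_params = 41e9
total_params = 675e9

active_fraction = active_params / total_params
print(f"{active_fraction:.1%}")  # roughly 6.1% of weights active per token
```

In other words, only a small slice of the total parameters participates in any single forward pass, which is the efficiency argument behind mixture-of-experts designs.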

NVIDIA and Mistral AI partner to accelerate new family of open models

NVIDIA and Mistral AI announced a partnership to optimize the Mistral 3 family of open-source multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise AI deployments, starting Tuesday, Dec. 2.
