AWS releases Trainium3 ASIC for Artificial Intelligence workloads

AWS introduced Trainium3, a purpose-built ASIC for Artificial Intelligence workloads, at re:Invent. The chip delivers 2.52 PetaFLOPS of FP8 compute and is available in Amazon EC2 Trn3 UltraServer instances.

AWS unveiled Trainium3 during its re:Invent conference in Las Vegas as a new ASIC for internal Artificial Intelligence workloads and select external customers. The chip delivers 2.52 PetaFLOPS of FP8 compute per chip and raises on-chip memory capacity to 144 GB of HBM3e with 4.9 TB/s of memory bandwidth. Trainium3 supports both dense and expert-parallel model topologies and introduces compact data types, MXFP8 and MXFP4, aimed at improving the balance between memory and compute for real-time, multimodal, and long-context reasoning tasks. The device is manufactured on TSMC’s 3 nm N3 node and is now available in Amazon EC2 Trn3 UltraServer instances.
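AWS has not published Trainium3's MXFP8/MXFP4 implementation details, but the "MX" naming points at the general microscaling idea: a block of values (commonly 32) shares one power-of-two scale, and each element is stored in a narrow float such as FP8. A simplified sketch of that block-scaling scheme, with a uniform rounding grid standing in for true FP8 element rounding:

```python
# Illustrative sketch only: block-scaled ("microscaling"-style) quantization.
# BLOCK size, the E4M3 max value, and the rounding grid are assumptions for
# the example, not Trainium3 specifics.
import numpy as np

BLOCK = 32          # elements sharing one scale
FP8_E4M3_MAX = 448  # largest finite value in the common FP8 E4M3 format

def mx_quantize(x):
    """Quantize a 1-D array in blocks, each block sharing a 2**k scale."""
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % BLOCK
    x = np.pad(x, (0, pad))
    blocks = x.reshape(-1, BLOCK)
    # Shared scale: power of two mapping the block's max magnitude into range.
    amax = np.abs(blocks).max(axis=1, keepdims=True)
    amax[amax == 0] = 1.0
    scale = 2.0 ** np.floor(np.log2(FP8_E4M3_MAX / amax))
    # Simplified element rounding: uniform integer grid, not real FP8 spacing.
    q = np.round(blocks * scale)
    return q, scale, pad

def mx_dequantize(q, scale, pad, n):
    return (q / scale).reshape(-1)[:n]

vals = np.linspace(-3.0, 3.0, 64)
q, scale, pad = mx_quantize(vals)
approx = mx_dequantize(q, scale, pad, len(vals))
print(float(np.max(np.abs(approx - vals))))  # small reconstruction error
```

The payoff the article alludes to is that only one scale per block is stored alongside the narrow elements, trading a little precision for much lower memory traffic per value.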

Trn3 UltraServers can scale up to 144 Trainium3 chips in a single server, achieving approximately 362 FP8 PetaFLOPS, and multiple servers can be combined into EC2 UltraClusters 3.0 for larger deployments. A fully equipped UltraServer provides about 20.7 TB of HBM3e memory and around 706 TB/s of aggregate memory bandwidth. The platform also incorporates the NeuronSwitch-v1 fabric, which doubles interchip interconnect bandwidth compared to the previous UltraServer generation, enabling higher throughput across the server footprint.
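The server-level aggregates follow directly from the per-chip figures quoted earlier; a quick back-of-the-envelope check:

```python
# Sanity check: UltraServer aggregates derived from the article's
# per-chip Trainium3 figures.
chips = 144
fp8_pflops_per_chip = 2.52   # FP8 PetaFLOPS per chip
hbm_gb_per_chip = 144        # GB of HBM3e per chip
bw_tbs_per_chip = 4.9        # TB/s memory bandwidth per chip

total_pflops = chips * fp8_pflops_per_chip     # ~362.9 FP8 PetaFLOPS
total_hbm_tb = chips * hbm_gb_per_chip / 1000  # ~20.7 TB
total_bw_tbs = chips * bw_tbs_per_chip         # ~705.6 TB/s

print(total_pflops, total_hbm_tb, total_bw_tbs)
```

All three products line up with the "approximately 362 PetaFLOPS", "about 20.7 TB", and "around 706 TB/s" figures in the announcement.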

AWS highlights generational gains versus Trainium2, citing up to 4.4x higher performance, 3.9x greater memory bandwidth, and about 4x better performance per watt. The company also reports improvements in inference and token efficiency across various Amazon services, positioning Trainium3 and Trn3 UltraServers as an internally developed option to reduce reliance on third-party accelerator hardware. Overall, the announcement emphasizes larger on-chip memory, new compact numeric formats, and expanded system-level scale as the primary vectors for performance and efficiency gains in Artificial Intelligence workloads.

Impact Score: 70

Microsoft previews Shader Model 6.10 for GPU Artificial Intelligence engines

Microsoft has introduced Shader Model 6.10 in Agility SDK 1.720-preview with a new matrix API designed to unify access to dedicated GPU Artificial Intelligence hardware from AMD, Intel, and NVIDIA. The change is aimed at making neural rendering features easier to deploy across multiple vendors with a single programming model.

Europe’s Artificial Intelligence challenge is structural dependence

Europe has talent, research strength, and rising investment in Artificial Intelligence, but startups remain reliant on American infrastructure, platforms, and late-stage capital. The argument centers on digital sovereignty, interoperability, and ownership as the conditions for building durable European champions.

Community backlash slows Artificial Intelligence data center expansion

Political resistance, regulatory scrutiny, and rising energy and water concerns are complicating the build-out of large Artificial Intelligence data centers across the United States. The pressure is increasing costs, delaying projects, and adding fresh risks to the economics behind Generative Artificial Intelligence infrastructure.

House panel advances export controls after China report

The House Foreign Affairs Committee moved export control legislation after a House Select Committee report detailed China’s use of illegal means to build its Artificial Intelligence and semiconductor sectors. The measure is aimed at chip smuggling and Artificial Intelligence model theft.
