Tesla completes Artificial Intelligence 5 chip and revives Dojo 3 with Intel packaging deal

Tesla has finished development of its Artificial Intelligence 5 (AI5) automotive processor and is restarting its Dojo 3 supercomputer project with Intel as a key packaging partner. The move spreads manufacturing across TSMC, Samsung, and Intel as Tesla chases higher performance and efficiency for both training and in-car compute.

Tesla has finalized development of its AI5 automotive processor, clearing the way for volume production at both TSMC and Samsung facilities, and is simultaneously reviving the previously halted Dojo 3 supercomputer project. Chief executive Elon Musk has positioned AI5 as offering performance on par with NVIDIA’s Hopper architecture, and Tesla states that two AI5 units will equal the power of a single Blackwell processor, setting clear performance expectations against current data center accelerators. The renewed focus on Dojo 3 is tightly linked to this silicon roadmap, aligning Tesla’s in-vehicle compute efforts with its custom training infrastructure.

The Dojo 3 restart confirms earlier rumors that Intel will join as a key packaging partner, marking a deliberate shift away from Tesla’s previous reliance on TSMC for end-to-end manufacturing. Intel will handle assembly and testing using its EMIB technology, which connects multiple dies via silicon bridges rather than a full wafer interposer, an approach described as better suited to Tesla’s large Dojo modules that combine several 654 mm² chips into a single package. According to earlier reports, Samsung will produce the D3 training chips at its Texas facility on a 2 nm process, while Intel focuses exclusively on packaging to ease capacity constraints and give Tesla more freedom to customize interconnect layouts inside each module.

For the AI5 automotive chips, Tesla is pursuing a dual-foundry strategy in which Samsung and TSMC will each fabricate a distinct version of the chip, with the company aiming for identical software performance across the two implementations. Initial projections suggest AI5 could operate at just 150 W while matching the performance of NVIDIA’s H100 accelerator, which requires 700 W, underscoring Tesla’s emphasis on efficiency for automotive deployment. This projected power profile is attributed to removing general-purpose graphics subsystems and optimizing the architecture specifically for Tesla’s neural processing workloads, tightening the integration between its custom hardware and the software stack behind its autonomous and driver assistance features.
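As a rough illustration of what those figures would imply, the short Python sketch below computes the performance-per-watt gap using only the numbers quoted in this article; the equal-performance assumption and the wattages are Tesla's projections, not measured benchmarks.

# Back-of-the-envelope efficiency comparison based on the figures quoted above.
# Assumes, per the article's projection, that one AI5 matches one H100 on
# Tesla's workloads; these are projections, not measured results.

AI5_POWER_W = 150    # projected AI5 power draw
H100_POWER_W = 700   # H100 power draw cited in the article

# With equal performance assumed, the efficiency advantage reduces to the power ratio.
efficiency_ratio = H100_POWER_W / AI5_POWER_W
print(f"Implied performance-per-watt advantage: ~{efficiency_ratio:.1f}x")  # ~4.7x

On those assumptions the projected chip would deliver roughly 4.7 times the performance per watt of an H100, which is the core of Tesla's efficiency argument for automotive deployment.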
