Tesla completes Artificial Intelligence 5 chip and revives Dojo 3 with Intel packaging deal

Tesla has finished development of its Artificial Intelligence 5 automotive processor and is restarting its Dojo 3 supercomputer project with Intel as a key packaging partner. The move spreads manufacturing across TSMC, Samsung, and Intel as Tesla chases higher performance and efficiency for both training and in-car compute.

Tesla has finalized development of its Artificial Intelligence 5 automotive processor, clearing the way for volume production at both TSMC and Samsung facilities and simultaneously reviving the previously halted Dojo 3 supercomputer project. Chief executive Elon Musk has positioned the Artificial Intelligence 5 chip as offering performance on par with NVIDIA’s Hopper architecture, and Tesla states that two Artificial Intelligence 5 units will equal the power of a single Blackwell processor, setting clear performance expectations against current data center accelerators. The renewed focus on Dojo 3 is tightly linked to this silicon roadmap, aligning Tesla’s in-vehicle compute efforts with its custom training infrastructure.

The Dojo 3 restart confirms earlier rumors that Intel will join as a key packaging partner, marking a deliberate shift away from Tesla’s previous reliance on TSMC for end-to-end manufacturing. Intel will handle assembly and testing using its EMIB technology, which connects multiple dies via silicon bridges rather than a full wafer interposer, an approach described as better suited to Tesla’s large Dojo modules that combine several 654 mm² chips into a single package. Based on previous information, Samsung will produce the D3 training chips at its Texas facility using a 2 nm process, while Intel will focus exclusively on packaging operations to address capacity constraints and give Tesla more freedom to customize interconnect layouts inside each module.

For the Artificial Intelligence 5 automotive chips, Tesla is pursuing a dual-foundry strategy in which Samsung and TSMC will each fabricate distinct versions while the company aims to deliver identical software performance across the two implementations. Initial projections suggest the Artificial Intelligence 5 could operate at just 150 W while matching the performance of NVIDIA’s H100 accelerator, which requires 700 W, highlighting Tesla’s emphasis on efficiency for automotive deployment. This projected power profile is attributed to removing general purpose graphics subsystems and optimizing the architecture specifically around Tesla’s neural processing workloads, tightening the integration between its custom hardware and the software stacks that drive its autonomous and driver assistance features.
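The efficiency claim above reduces to simple arithmetic. As an illustrative sketch only, the following compares the cited power figures under the assumption of comparable throughput; the 150 W number is Tesla's projection, not a measured result:

```python
# Illustrative arithmetic only: the power figures below are those cited
# in the article (Tesla's projection for Artificial Intelligence 5 and
# NVIDIA's H100 accelerator), not measured results.
ai5_power_w = 150    # projected Artificial Intelligence 5 power draw
h100_power_w = 700   # H100 power draw cited in the article

# If both parts deliver comparable throughput, the perf-per-watt
# advantage is simply the ratio of power draws.
efficiency_gain = h100_power_w / ai5_power_w
print(f"Projected perf-per-watt advantage: {efficiency_gain:.1f}x")  # ~4.7x
```

At the cited figures this works out to roughly a 4.7x efficiency advantage, which is the scale of gain Tesla attributes to stripping general purpose graphics subsystems from the design.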

Impact Score: 68

OpenClaw pushes autonomous Artificial Intelligence agents into enterprises

OpenClaw’s rapid growth is accelerating interest in persistent, self-hosted autonomous agents that run continuously instead of waiting for prompts. NVIDIA is positioning NemoClaw as a more secure reference implementation for organizations that want local control, auditability and hardened deployment defaults.

Indiana launches Artificial Intelligence business portal

Indiana is rolling out IN AI, a statewide portal meant to help employers adopt Artificial Intelligence with practical guidance, workshops and peer support. State leaders and business groups are positioning the effort as a way to raise productivity, wages and job growth while keeping workers at the center.

Goodfire launches model debugging tool for large language models

Goodfire has introduced Silico, a mechanistic interpretability platform designed to let developers inspect and adjust model behavior during development. The company is positioning it as a way to give smaller teams deeper control over open-source models and more trustworthy outputs.

Nvidia launches Nemotron 3 Nano Omni for enterprise agents

Nvidia has introduced Nemotron 3 Nano Omni, a multimodal open model designed to support enterprise agents that reason across vision, speech and language. The launch extends Nvidia’s push beyond hardware into models and services while targeting more efficient agentic workflows.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.
