Hyperscalers accelerate custom semiconductor and artificial intelligence infrastructure deals in early 2026

Hyperscale cloud providers are ramping up multi-gigawatt semiconductor deals across GPUs, custom accelerators, and optical interconnects, with Meta, Google, OpenAI, and Anthropic locking in long-term capacity. Broadcom, AMD, NVIDIA, Marvell, Intel, and MediaTek are reshaping their data center and networking roadmaps around custom artificial intelligence silicon and rack-scale systems.

Hyperscale and large artificial intelligence customers are driving an aggressive new wave of semiconductor deals in early 2026, centered on multi-gigawatt compute capacity and tighter co-design partnerships. AMD signed a multi-year, 6-gigawatt artificial intelligence infrastructure agreement with Meta built around Instinct GPUs, EPYC CPUs, a custom MI450 GPU, and the Helios rack architecture; through warrants tied to GPU shipment milestones, Meta can receive up to 160M AMD shares, which could convert to roughly 10% of the company. A similar 6 GW artificial intelligence processor deal is in place with OpenAI, positioning AMD as a second supplier to major artificial intelligence labs. Oracle Cloud, meanwhile, is building artificial intelligence superclusters with roughly 50,000 Instinct GPUs, and AMD has previewed the Instinct MI430X, MI440X, and MI455X as part of the MI400-series roadmap. NVIDIA is pushing its own rack-scale strategy with the Rubin platform, which combines Rubin GPUs, Vera CPUs, and NVLink 6 into Vera Rubin NVL72 systems, and is backing its networking vision with a roughly $4B total investment in Lumentum and Coherent ($2B each), plus multiyear optical purchase commitments and capacity guarantees.

Broadcom has emerged as a central custom chip partner, reporting that artificial intelligence semiconductor revenue more than doubled year-over-year to $8.4B in Q1 FY2026, helping drive 29% revenue growth to $19.3B and underpinning a projection of over $100B in artificial intelligence chip revenue by 2027. Broadcom expects to supply 1 GW of custom artificial intelligence chips to Anthropic in 2026, scaling to 3 GW by 2027, and is targeting 1M units shipped by 2027 using advanced 3D-stacked packaging for larger accelerators. The OpenAI partnership covers 10 gigawatts of custom artificial intelligence accelerators under the “Titan” program: TSMC N3-based chips ramp at the end of 2026, a second-generation “Titan 2” is planned on A16, and deployments start in 2H 2026 and complete by the end of 2029. Broadcom’s CEO has emphasized that material revenue is unlikely in 2026, as the rollout has already slipped from Q2 to at least Q3. Broadcom is also embedded across hyperscaler artificial intelligence ASIC programs for Google, Meta, Microsoft, Amazon, and ByteDance, and is balancing this work with VMware integration, a $10B share buyback, and continued leadership in Tomahawk switching while it develops 1.6T-3.2T networking silicon for next-generation artificial intelligence clusters.

Custom accelerators at the hyperscalers are scaling rapidly. Fubon projects that Google’s TPU v7 “Ironwood” will reach about 36,000 racks in 2026, with total TPU production estimated at 3.1 to 3.2 million units that year, and Google has confirmed that its custom TPUs have outshipped general-purpose GPUs in volume for the first time. Google is spreading cost and production risk across MediaTek and Broadcom, citing MediaTek’s costs as approximately 20-30% lower than alternative partners, and is preparing a TPUv7e variant. Anthropic, meanwhile, has access to as many as 1 million Google TPUs, expected to bring over a gigawatt of new artificial intelligence compute capacity online in 2026, and has committed $21 billion to custom chip orders via Broadcom. Meta is advancing its MTIA program: the third-generation Iris chip is in broad deployment on a 3nm process with eight HBM3E 12-high stacks exceeding 3.5 TB/s, and the roadmap includes MTIA-2, MTIA-3, MTIA v4 “Santa Barbara” with liquid-cooling systems exceeding 180kW per rack in 2026, and an Arke inference-only variant co-developed with Marvell. Meta also announced a multiyear agreement with AMD on February 24 worth more than $100 billion for MI450 GPUs and has committed up to $135 billion in 2026 capital expenditures to build out artificial intelligence infrastructure.

Impact Score: 68

Tech firms commit billions to artificial intelligence infrastructure

Amazon, OpenAI, NVIDIA, Meta, Google and others are signing increasingly large cloud, chip and data center agreements as demand for artificial intelligence infrastructure accelerates. The latest wave of deals spans investments, compute purchases, chip supply agreements and data center buildouts.

JEDEC outlines LPDDR6 expansion for data centers

JEDEC has previewed planned updates to LPDDR6 aimed at pushing the memory standard beyond mobile devices and into selected data center and accelerated computing use cases. The roadmap includes higher-capacity packaging options, flexible metadata support, 512 GB densities, and a new SOCAMM2 module standard.

TSMC debuts A13 process technology

TSMC has introduced its A13 process at its 2026 North America Technology Symposium as a tighter version of A14 aimed at next-generation artificial intelligence, high-performance computing, and mobile designs. The company positions the node as a more compact and efficient option with backward-compatible design rules for faster migration.

Google unveils eighth-generation tensor processing units

Google introduced its eighth generation of custom tensor processing units with separate designs for training and inference. The new TPU 8t and TPU 8i are aimed at large-scale model training, serving, and agentic workloads.
