Chip giants back Ayar Labs to push optical interconnects for Artificial Intelligence

Ayar Labs has attracted investments from Nvidia, AMD, Intel, MediaTek and major funds by promising optical interconnects that tackle bandwidth, latency and power bottlenecks in Artificial Intelligence data centers. Its TeraPHY chiplet and SuperNova light source combine silicon photonics with open chiplet standards to link accelerators across distances from millimeters to kilometers.

Ayar Labs has rapidly evolved from an academic research project into a Silicon Valley unicorn by targeting the communication bottlenecks that limit modern computing and Artificial Intelligence infrastructure. Originating in 2011 from a cross-university team at the Massachusetts Institute of Technology, the University of California, Berkeley, and the University of Colorado, the founders set out to overcome the physical limits of copper-based data transmission as Moore's law drove processors faster than interconnects could handle. Co-founders Chen Sun, Mark Wade and Vladimir Stojanovic, later joined by entrepreneur Alex Wright-Gladstein, translated their silicon photonics research into a commercial venture after winning two grand championships and a $275,000 prize in the MIT Clean Energy Entrepreneurship Competition. Despite early resistance from investors wary of deep-tech hardware, the company secured backing from Founders Fund and a $24 million Series A led by Playground Global in 2018, which funded its first silicon photonics chip designs.

Investor confidence has surged as Artificial Intelligence workloads expose the limits of traditional copper interconnects in large accelerator clusters. In December 2024, Ayar Labs announced a $155 million Series D financing round led by Advent Global Opportunities and Light Street Capital, with participation from NVIDIA, AMD, Intel, GlobalFoundries, VentureTech Alliance and 3M. It has since raised approximately $500 million in additional financing, a round that pushed the low-profile company's valuation to $3.8 billion. The company has shipped approximately 15,000 devices to early customers, plans to reach volume chip production by mid-2026, and projects that annual shipments could exceed 100 million units by 2028 and beyond. Today its team includes veterans from Intel, IBM, Micron, MIT, Berkeley and Stanford, with strategic partnerships spanning GlobalFoundries, Applied Materials, TSMC, Intel and NVIDIA.

Ayar Labs positions its optical I/O platform, built around the TeraPHY chiplet and SuperNova light source, as a solution to bandwidth, latency and power constraints in large-scale Artificial Intelligence clusters. The company cites a steep scaling penalty for copper-based networks: a single GPU can operate at up to 80% efficiency, but utilization may drop to 50% in a 64-GPU cluster and to only 30% at 256 GPUs. TeraPHY is an in-package monolithic optical I/O chiplet that combines silicon photonics with standard CMOS processes and sits alongside GPUs or CPUs inside a common package. It contains approximately 70 million transistors and more than 10,000 optical devices, with modules including grating coupler arrays, micro-ring-based optical transceivers, an advanced interface bus and glue logic. TeraPHY supports 8 optical channels, equivalent to an x8 PCIe Gen5 link; its 4 Tbps of total bidirectional bandwidth and 256 Gbps per port are paired with 5 ns latency and a standard UCIe electrical interface, so any compliant chip can treat it as a transparent optical converter.
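The TeraPHY figures above are internally consistent, as a quick back-of-the-envelope check shows. The port count and per-port rate come from the article; treating the 256 Gbps figure as a per-direction rate, so that eight ports in each direction sum to the quoted 4 Tbps, is an assumption of this sketch.

```python
# Sanity check of the TeraPHY bandwidth figures quoted above.
# Port count and per-port rate are from the article; the
# per-direction vs. bidirectional split is an assumption.

PORTS = 8            # optical channels per TeraPHY chiplet
GBPS_PER_PORT = 256  # quoted per-port rate, assumed one direction

per_direction_tbps = PORTS * GBPS_PER_PORT / 1000  # 2.048 Tbps
bidirectional_tbps = 2 * per_direction_tbps        # 4.096 Tbps, i.e. "4 Tbps"

print(f"per direction: {per_direction_tbps:.3f} Tbps")
print(f"bidirectional: {bidirectional_tbps:.3f} Tbps")
```

Under that reading, 8 x 256 Gbps gives 2.048 Tbps per direction, matching the article's rounded 4 Tbps bidirectional figure.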

SuperNova complements TeraPHY as a remote, multi-wavelength laser source designed with MACOM and manufactured by Sivers Photonics; it is the first multi-wavelength, multi-port light source to comply with the CW-WDM MSA standard. Each optical fiber carries up to 16 wavelengths, so across 16 fiber ports SuperNova can drive 256 optical carriers and provide 16 Tbps of bidirectional bandwidth, meeting the demands of Artificial Intelligence workloads, while its 64-fold wavelength advantage over CWDM4 pluggables reduces packaging complexity and cost. Ayar Labs' data indicates that, compared with pluggable optical modules and electrical SerDes, bandwidth rises 5 to 10 times, from hundreds of Gbps in traditional solutions to the 4 to 16 Tbps level; energy efficiency improves 4 to 8 times, with power per bit falling below 5 pJ/b versus 6 to 10 pJ/b for 112 Gbps electrical I/O and about 15 pJ/b for pluggables; and latency drops to one tenth, at 5 nanoseconds per chiplet plus time of flight, with no forward error correction required. These metrics are enabled by advances such as micro-ring modulators that maintain accurate wavelength output across a 15 to 100°C temperature range, solving long-standing stability issues that hindered commercialization. By aligning TeraPHY with UCIe, CXL and PCIe, and having SuperNova comply with the CW-WDM MSA and GR-468 reliability requirements, Ayar Labs focuses on open standards so chipmakers can integrate optical interconnects with minimal redesign, positioning its technology as a foundational building block for future high-density Artificial Intelligence and high performance computing systems.
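The SuperNova numbers can be checked the same way. The wavelength count, fiber-port count and the 4-wavelength CWDM4 baseline come from the article; the 32 Gbps per-carrier rate is an assumption chosen here so the totals line up with the quoted 16 Tbps bidirectional figure.

```python
# Sanity check of the SuperNova figures quoted above. Wavelength and
# fiber counts are from the article; the per-carrier data rate is an
# assumption made so the totals match the quoted 16 Tbps.

WAVELENGTHS_PER_FIBER = 16
FIBER_PORTS = 16
GBPS_PER_CARRIER = 32  # assumed per-wavelength rate, one direction

carriers = WAVELENGTHS_PER_FIBER * FIBER_PORTS           # 256 optical carriers
per_direction_tbps = carriers * GBPS_PER_CARRIER / 1000  # 8.192 Tbps
bidirectional_tbps = 2 * per_direction_tbps              # 16.384 Tbps

# Wavelength advantage over a 4-wavelength CWDM4 pluggable
advantage = carriers / 4  # 64-fold

print(f"carriers: {carriers}, bandwidth: {bidirectional_tbps:.3f} Tbps, "
      f"advantage: {advantage:.0f}x")
```

The 64-fold figure thus counts total carriers (16 wavelengths x 16 ports) against CWDM4's four wavelengths, not a per-fiber comparison, which would be only 4-fold.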

Impact Score: 70

Adobe plans outcome-based pricing for Artificial Intelligence agents

Adobe is positioning its Artificial Intelligence agents around performance-based pricing, charging only when the software completes useful work. The approach points to a more results-oriented model for selling generative Artificial Intelligence tools to business customers.

Tech firms commit billions to Artificial Intelligence infrastructure

Amazon, OpenAI, Nvidia, Meta, Google and others are signing increasingly large cloud, chip and data center agreements as demand for Artificial Intelligence infrastructure accelerates. The latest wave of deals spans investments, compute purchases, chip supply agreements and data center buildouts.

JEDEC outlines LPDDR6 expansion for data centers

JEDEC has previewed planned updates to LPDDR6 aimed at pushing the memory standard beyond mobile devices and into selected data center and accelerated computing use cases. The roadmap includes higher-capacity packaging options, flexible metadata support, 512 GB densities, and a new SOCAMM2 module standard.

TSMC debuts A13 process technology

TSMC has introduced its A13 process at its 2026 North America Technology Symposium as a tighter version of A14 aimed at next-generation Artificial Intelligence, high performance computing, and mobile designs. The company positions the node as a more compact and efficient option with backward-compatible design rules for faster migration.
