Artificial Intelligence boom drives demand for ultra-large packaging as ASICs expected to shift from CoWoS to EMIB

TrendForce reports that the Artificial Intelligence and HPC expansion is raising the priority of heterogeneous integration and advanced packaging, with TSMC's CoWoS leading today but some CSPs considering Intel's EMIB as ASIC package sizes grow.

TrendForce’s latest investigations find that the rapid expansion of Artificial Intelligence and high-performance computing is accelerating demand for heterogeneous integration and making advanced packaging a strategic priority. The research highlights that TSMC’s CoWoS platform is currently the leading solution for connecting compute logic, memory, and I/O dies, but changing customer requirements are driving a reconsideration of packaging approaches.

CoWoS connects multiple dies through an interposer and mounts them on a substrate. The platform has diversified into CoWoS-S, CoWoS-R, and CoWoS-L, and TrendForce says demand has been shifting strongly toward CoWoS-L, which embeds local silicon interconnect bridges in the package and supports larger package sizes. The report cites NVIDIA’s Blackwell platform moving toward mass production in 2025 as a factor increasing demand for larger CoWoS-L packages, and it expects that demand to continue with NVIDIA’s upcoming Rubin architecture, which is described as requiring packages that span even more reticle area.
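
For scale, ultra-large packages are commonly quoted in multiples of the lithographic reticle limit of roughly 26 mm x 33 mm (about 858 mm²). The sketch below is only a back-of-the-envelope illustration of what those multiples mean in interposer area; the specific multiples used are assumed examples, not figures from the TrendForce report.

```python
# Illustrative only: interposer area expressed as multiples of the reticle limit.
# The ~858 mm^2 figure (26 mm x 33 mm) is the standard single-exposure limit;
# the example multiples below are assumptions, not values from the report.

RETICLE_MM2 = 26 * 33  # ~858 mm^2 per exposure

def interposer_area_mm2(reticle_multiple: float) -> float:
    """Approximate interposer area for a package quoted at N reticle sizes."""
    return reticle_multiple * RETICLE_MM2

for multiple in (1.0, 3.3, 5.5):  # hypothetical package generations
    print(f"{multiple:>4}x reticle -> ~{interposer_area_mm2(multiple):,.0f} mm^2")
```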

At the same time, cloud service providers are accelerating in-house ASIC development to support more complex functions, which is increasing their packaging size requirements. As those requirements grow, some CSPs are contemplating a shift from TSMC’s CoWoS to Intel’s EMIB to accommodate ultra-large packaging needs. TrendForce frames this as a notable industry dynamic where packaging choices will evolve in response to new compute and memory integration demands driven by Artificial Intelligence workloads and next-generation accelerator architectures.

Impact Score: 55

Samsung starts sampling 3 GB GDDR7 running at 36 Gbps

Samsung has begun sampling its fastest-ever GDDR7 memory at 36 Gbps in 24 Gb dies, which translates to 3 GB per chip, and it is also mass-producing 28 Gbps 3 GB chips reportedly aimed at a mid-cycle NVIDIA refresh.
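
Those headline figures reduce to simple arithmetic: GDDR7 devices expose a 32-bit interface, so 36 Gbps per pin works out to 144 GB/s per chip, and a 24 Gb die to 3 GB. The sketch below walks through that math; the 16-chip, 512-bit board configuration is a hypothetical example rather than a reported product.

```python
# Back-of-the-envelope GDDR7 math. The 36 Gbps and 24 Gb figures come from the
# article; the 16-chip, 512-bit board configuration is a hypothetical example.

PINS_PER_CHIP = 32       # GDDR7 devices use a 32-bit (x32) interface
GBPS_PER_PIN = 36        # per-pin data rate reported in the article
DIE_DENSITY_GBIT = 24    # 24 Gb die -> 3 GB per chip

per_chip_gb_s = GBPS_PER_PIN * PINS_PER_CHIP / 8   # 144 GB/s per chip
per_chip_capacity_gb = DIE_DENSITY_GBIT / 8        # 3 GB per chip

chips = 16  # hypothetical 512-bit memory bus
print(f"Per chip : {per_chip_gb_s:.0f} GB/s, {per_chip_capacity_gb:.0f} GB")
print(f"{chips} chips: {per_chip_gb_s * chips / 1000:.2f} TB/s, "
      f"{per_chip_capacity_gb * chips:.0f} GB total")
```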

FLUX.2 image generation models now released, optimized for NVIDIA RTX GPUs

Black Forest Labs, the frontier Artificial Intelligence research lab, released the FLUX.2 family of visual generative models with new multi-reference and pose-control tools and direct ComfyUI support. A collaboration with NVIDIA brings FP8 quantizations that reduce VRAM requirements by 40% and improve performance by 40%.
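
The quoted 40% VRAM reduction is consistent with storing weights in 8-bit rather than 16-bit floating point while activations and other buffers stay roughly the same size. The sketch below illustrates that arithmetic; the parameter count and overhead values are hypothetical placeholders, not published FLUX.2 numbers.

```python
# Rough estimate of VRAM savings from FP8 weight quantization.
# The parameter count and fixed overhead are hypothetical placeholders, not
# published FLUX.2 figures; only the 2-byte vs 1-byte weight sizes are fixed.

def vram_gb(params_billions: float, bytes_per_weight: int, overhead_gb: float) -> float:
    """Model weights plus a fixed allowance for activations and work buffers."""
    return params_billions * bytes_per_weight + overhead_gb

params = 12.0    # hypothetical parameter count, in billions
overhead = 6.0   # hypothetical activations / work buffers, in GB

bf16 = vram_gb(params, 2, overhead)  # 16-bit weights: 30 GB
fp8 = vram_gb(params, 1, overhead)   #  8-bit weights: 18 GB

print(f"BF16: {bf16:.0f} GB, FP8: {fp8:.0f} GB, "
      f"reduction: {(1 - fp8 / bf16) * 100:.0f}%")
```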

Aligning VMware migration with business continuity

Business continuity planning has long focused on physical disasters, but cyber incidents, particularly ransomware, are now more common and often more damaging. In a survey of more than 500 CISOs, almost three-quarters (72%) said their organization had dealt with ransomware in the previous year.
