Intel and Amkor team up to scale EMIB packaging production

Intel has started EMIB assembly at Amkor's Incheon facility to meet surging orders from large Artificial Intelligence customers, outsourcing the work to boost output while reserving some capacity for its own products.

Intel has begun EMIB assembly at Amkor’s facility in Incheon, South Korea, as part of a move to scale packaging output quickly in response to unexpected market demand. The company is working with long-time OSAT partner Amkor to accelerate production volumes and avoid the delays that internal expansion would bring. According to the report, some EMIB capacity will be held for Intel’s own upcoming products while the rest helps meet orders from large Artificial Intelligence customers.

The arrangement underscores the current capacity crunch for advanced heterogeneous packaging, with hyperscalers and Artificial Intelligence chip companies placing large orders for CoWoS and similar services. While TSMC has been a primary choice for many high-density assemblies, growing interest in Intel’s EMIB and Foveros options is prompting partners such as MediaTek, Google, Qualcomm, and Tesla to consider alternatives. Outsourcing to Amkor reduces lead times compared with shifting production solely within the United States, and it positions EMIB as a potential external revenue source ahead of Intel’s next process node launches.

This collaboration builds on the Intel-Amkor partnership announced in late May of this year and represents a tactical step to capture near-term demand for advanced packaging. By leveraging Amkor’s capacity, Intel can respond quickly to surges from major customers while retaining the flexibility to allocate capacity to its own roadmap. The move illustrates how packaging partnerships are being used to bridge immediate supply gaps as the industry adapts to increasing demand for complex assemblies.

Impact Score: 55

Saudi Artificial Intelligence startup launches Arabic LLM

Misraj Artificial Intelligence unveiled Kawn, an Arabic large language model, at AWS re:Invent and launched Workforces, a platform for creating and managing Artificial Intelligence agents for enterprises and public institutions.

Introducing Mistral 3: open Artificial Intelligence models

Mistral 3 is a family of open, multimodal and multilingual Artificial Intelligence models that includes three Ministral edge models and a sparse Mistral Large 3 trained with 41B active and 675B total parameters, released under the Apache 2.0 license.

NVIDIA and Mistral Artificial Intelligence partner to accelerate new family of open models

NVIDIA and Mistral Artificial Intelligence announced a partnership to optimize the Mistral 3 family of open-source multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise Artificial Intelligence deployments starting Tuesday, Dec. 2.
