AMD taps GlobalFoundries for co-packaged optics in Instinct MI500

AMD is preparing to renew its manufacturing relationship with GlobalFoundries to bring co-packaged optics to its Instinct MI500 Artificial Intelligence accelerators. The move is aimed at improving bandwidth and power efficiency in data center systems by moving beyond copper-based interconnects.

AMD is reviving manufacturing ties with GlobalFoundries for Co-Packaged Optics in the next-generation Instinct MI500 series of Artificial Intelligence accelerators. The partnership marks a return to working with its former silicon manufacturing venture, which AMD spun off in 2009. AMD is pursuing the effort to strengthen its position in the Artificial Intelligence data center market, where traditional copper-based wiring limits signal bandwidth and reach.

Co-Packaged Optics is intended to improve data movement by using light-based connections instead of copper, reducing speed loss and latency across links between multiple nodes and even multiple facilities. AMD is working to secure Photonic Integrated Circuit manufacturing through GlobalFoundries, while ASE will manage the packaging needed to complete the Co-Packaged Optics design. That division of responsibilities is designed to combine photonics manufacturing with advanced packaging in the final system.

The Instinct MI500 series of Artificial Intelligence accelerators is scheduled for release in 2027, while the current focus remains on the Instinct MI400 series, which includes multiple SKUs for Artificial Intelligence and HPC workloads. AMD expects the 2027 product generation to use Co-Packaged Optics to push performance further, with lower power use and much higher overall bandwidth than conventional copper data transfers. The effort also reflects a broader industry shift, as NVIDIA is also working with semiconductor manufacturers on a Co-Packaged Optics system for ‘Vera Rubin,’ especially the ‘Rubin Ultra’ variant.


Cerebras files for IPO with wafer-scale chip challenge to Nvidia

Cerebras has filed for a Nasdaq listing as it tries to turn its wafer-scale processor architecture into a challenger to Nvidia in Artificial Intelligence acceleration and local inference. The company is pitching extreme chip scale, high throughput, and lower system costs as demand for on-device and edge workloads grows.

Jensen Huang defends Nvidia chip sales to China

Jensen Huang argued that restricting Nvidia chip sales to China would not stop Chinese Artificial Intelligence development and could instead push developers onto a non-American technology stack. He said the better strategy is to keep global Artificial Intelligence work tied to the American ecosystem through continued innovation.

Generative Artificial Intelligence shifts toward cognitive dependency

Generative Artificial Intelligence is moving beyond content creation into a phase where professionals increasingly offload thinking, judgment, and planning to machines. That shift promises efficiency, but it also raises concerns about weakened critical thinking, creativity, and independent problem-solving.
