From family to rivals: the peak artificial intelligence chip showdown between Lisa Su and Jensen Huang

Under Lisa Su, AMD is challenging NVIDIA’s dominance in artificial intelligence chips after a record third quarter and major customer deals with OpenAI and Oracle. The article traces Su’s turnaround of AMD, her engineering-led strategy and how MI300 chiplets and software efforts position the company as an alternative to NVIDIA.

Advanced Micro Devices reported a record-breaking third quarter for fiscal 2025, with revenue reaching $9.246 billion, up 36% year over year, and metrics including gross profit and free cash flow also hitting new highs. That performance, with the word “record” appearing seven times in the earnings release, underpins AMD’s bid to contest a market long dominated by NVIDIA in artificial intelligence computing. The company has publicly positioned its core strategy around artificial intelligence and data center accelerators, and it forecasts strong growth in the data center accelerator market through 2027, citing projections of a ? billion market and a compound annual growth rate near 50%.

AMD’s competitive push combines product and ecosystem moves. The MI300 series, launched in 2023, uses a modular chiplet architecture; the MI300X is described as delivering 2.4 times the HBM capacity and 1.6 times the memory bandwidth of NVIDIA’s H100, and AMD offers variants such as the MI300A, an accelerated processing unit combining CPU and GPU dies, and the GPU-only MI300X for different workloads. On customers and deployment, the article cites a multi-generation agreement with OpenAI to deploy AMD Instinct GPUs totaling 6 gigawatts, with an initial 1-gigawatt tranche slated for the second half of 2026 and an equity option tied to the partnership that could give OpenAI around 10% of AMD. Oracle has also committed to deploy 50,000 MI450 GPUs beginning in the third quarter of 2026.

The piece traces Lisa Su’s turnaround strategy and leadership style. After taking the helm in 2014 when AMD faced steep declines and restructuring, she focused on high-performance computing and the Zen architecture, launching Ryzen in 2017 and later advancing chiplet designs and a 7-nanometer data center GPU. Su emphasizes performance, power consumption and cost in product decisions, prioritizes customer relationships and has pushed to build AMD’s software stack, ROCm, to close gaps with NVIDIA’s CUDA ecosystem. The article contrasts Su’s engineering-driven, pragmatic approach with Jensen Huang’s vision-led product launches, and presents AMD as an increasingly credible “second pole” in artificial intelligence infrastructure rather than a simple follower.

Impact Score: 70

JEDEC outlines LPDDR6 expansion for data centers

JEDEC has previewed planned updates to LPDDR6 aimed at pushing the memory standard beyond mobile devices and into selected data center and accelerated computing use cases. The roadmap includes higher-capacity packaging options, flexible metadata support, 512 GB densities, and a new SOCAMM2 module standard.

TSMC debuts A13 process technology

TSMC has introduced its A13 process at its 2026 North America Technology Symposium as a tighter version of A14 aimed at next-generation artificial intelligence, high-performance computing, and mobile designs. The company positions the node as a more compact and efficient option with backward-compatible design rules for faster migration.

Google unveils eighth-generation tensor processing units

Google introduced its eighth generation of custom tensor processing units with separate designs for training and inference. The new TPU 8t and TPU 8i are aimed at large-scale model training, serving, and agentic workloads.
