AWS challenges Nvidia with Graviton4 and Trainium chips

Amazon's AWS unveils Graviton4 and Trainium chips, targeting Nvidia's dominance in artificial intelligence hardware by boosting performance and efficiency in cloud computing.

Amazon Web Services (AWS) is mounting a direct challenge to Nvidia's dominance in the artificial intelligence chip sector with the development and rollout of its custom Graviton4 CPUs and Trainium accelerator series. The chips are designed to improve AWS's margins by cutting data transfer costs in cloud workloads. AWS claims the upgraded Graviton4 will deliver industry-leading network bandwidth of up to 600 gigabits per second, which it says makes the chip the fastest currently available in the public cloud. The move not only signals a strategic shift for Amazon but also puts pressure on traditional semiconductor giants such as Intel and AMD.

With the Graviton4 upgrade and the expansion of the Trainium line, part of Project Rainier, AWS aims to control the entire artificial intelligence infrastructure stack, from training to inference and networking. Trainium chips underpin major models such as Anthropic's Claude Opus 4, with more than half a million chips already powering new cloud projects; those orders would previously have gone to Nvidia. AWS executives have highlighted their ambition to provide cheaper, more energy-efficient alternatives to Nvidia's dominant and costlier GPUs. The upcoming Trainium3 is expected to double Trainium2's performance while cutting energy consumption by 50 percent, with demand reportedly already exceeding supply.

Amazon is also emphasizing strategic collaborations with startups including Anthropic, Scale AI, and Fiddler. By backing these companies through investment and infrastructure partnerships, AWS is broadening its reach across the artificial intelligence ecosystem. The late-2025 rollout of enhanced Graviton4 and Trainium3 chips is projected to deliver a fourfold performance increase and 40 percent better energy efficiency, directly targeting Nvidia's high-margin dominance. While analysts continue to recognize Nvidia's leadership in artificial intelligence hardware, they agree that robust market demand leaves room for AWS and others to carve out a significant presence, promising more competitive pricing and greater technological diversity in next-generation cloud architectures.


