Nvidia backs OpenAI data centre buildout to scale Artificial Intelligence

Nvidia and OpenAI announced a strategic partnership that ties Nvidia’s investment to deploying at least 10 gigawatts of data centre capacity. The first gigawatt is slated to come online in the second half of 2026 on Nvidia’s Vera Rubin platform.

OpenAI has secured a new strategic partner in Nvidia, which agreed to invest alongside a large-scale rollout of data centres aimed at expanding the capacity required for the next generation of Artificial Intelligence. The arrangement designates Nvidia as OpenAI’s preferred partner for computing and networking, aligning both companies’ hardware and software roadmaps to improve efficiency. The initiative is anchored around at least 10 gigawatts of data centre capacity powered by Nvidia systems, positioning the partners to accelerate model development and deployment.

The investment will be phased, with each stage of infrastructure deployment unlocking additional funding. The first 1 gigawatt is planned to go live in the second half of 2026 on Nvidia’s Vera Rubin platform. Company leaders framed the pact as the next step in a long-running relationship: Nvidia founder and CEO Jensen Huang characterized it as a milestone advancing the next era of intelligence, while OpenAI leaders Sam Altman and Greg Brockman emphasized that compute has been central to the company’s progress from its earliest days and will underpin future breakthroughs and distribution at scale.

The partnership arrives as OpenAI scales its services to more than 700 million weekly active users, intensifying demand for computational resources. OpenAI’s broader ecosystem now includes Microsoft, Oracle, SoftBank, and other members of the Stargate consortium. Microsoft, once the exclusive compute partner, now holds a right of first refusal as OpenAI diversifies its infrastructure footprint, including a recently announced cloud capacity expansion with Oracle. Through these relationships, OpenAI is targeting the levels of compute it believes are necessary for advancing toward superintelligence, with final details of the Nvidia partnership expected to be settled in the coming weeks.

LLM-PIEval: a benchmark for indirect prompt injection attacks in large language models

Large language models have driven broad interest in Artificial Intelligence, but their integration with external tools introduces risks such as direct and indirect prompt injection. LLM-PIEval provides a framework and test set for measuring indirect prompt injection risk, and the authors release the API specifications and prompts to support wider assessment.

NVIDIA may stop bundling memory with GPU kits amid GDDR shortage

NVIDIA is reportedly considering supplying only bare silicon to its AIC partners rather than the usual GPU-and-memory kit as GDDR shortages constrain fulfillment. The move follows wider industry pressure from soaring DRAM prices and an impending price increase from AMD of about 10% across its GPU lineup.
