Bloomberg reports that NVIDIA and AMD have agreed to pay the U.S. government a flat 15 percent of their revenue from chip sales to China as part of an arrangement to secure export licenses. The payment is structured as a fixed share of revenue rather than as a tariff or an ad hoc fee. Companies that make specialized chips for data centers and cloud providers will absorb a new cost, and they are likely to pass most of it on to Chinese customers through higher prices.
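Passing the cost on implies a slightly counterintuitive bit of arithmetic: because the levy is taken from revenue, keeping net revenue per unit constant requires raising the price by levy/(1 − levy), about 17.6 percent for a 15 percent levy, more than the headline rate. The sketch below illustrates this; only the 15 percent rate comes from the report, and the list price is a hypothetical figure.

```python
# Pass-through arithmetic for a flat 15% revenue levy.
# Only the 15% rate comes from the report; the price figure is hypothetical.

LEVY_RATE = 0.15  # reported share of China revenue paid to the U.S. government

def price_for_full_passthrough(old_price: float, levy: float = LEVY_RATE) -> float:
    """Price that keeps the seller's net (post-levy) revenue per unit unchanged.

    Net revenue at the new price p is p * (1 - levy); setting that equal to
    the old price and solving gives p = old_price / (1 - levy).
    """
    return old_price / (1.0 - levy)

if __name__ == "__main__":
    old_price = 25_000.0  # hypothetical accelerator list price in USD
    new_price = price_for_full_passthrough(old_price)
    print(f"Old price:                    ${old_price:,.0f}")
    print(f"Price with full pass-through: ${new_price:,.0f} "
          f"(+{(new_price / old_price - 1) * 100:.1f}%)")
```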
The reported measure is explicitly intended to raise the operating cost of Chinese data centers built around Artificial Intelligence accelerators, making facilities in other regions more competitive. Observers say the move targets not only chip access but also the broader economics of running large-scale model training and inference. With Moore's Law offering less room for free performance improvements, scale-out acceleration has become more dependent on energy and operational efficiency, so any incremental cost imposed on chips can feed directly into the price of running industrial-scale compute.
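One way to see how a chip-level surcharge reaches operating cost is a back-of-the-envelope amortization model. Everything in the sketch below is an illustrative assumption (the depreciation window, power draw, energy price, and overhead multiplier are not from the report); it simply shows that when hardware depreciation dominates an accelerator's hourly cost, most of a chip price increase flows through to the cost of compute.

```python
# Hypothetical amortized cost model for one accelerator in a data center.
# All figures are illustrative assumptions, not reported numbers.

def hourly_cost(chip_price: float,
                lifetime_hours: float = 3 * 365 * 24,  # 3-year depreciation window
                power_kw: float = 0.7,                 # assumed draw per accelerator
                energy_price: float = 0.10,            # assumed USD per kWh
                overhead: float = 1.5) -> float:
    """Amortized USD/hour: depreciation plus energy, scaled by facility overhead."""
    depreciation = chip_price / lifetime_hours
    energy = power_kw * energy_price
    return (depreciation + energy) * overhead

base = hourly_cost(25_000.0)
# Full pass-through of the 15% levy raises the chip price by 1 / (1 - 0.15).
marked_up = hourly_cost(25_000.0 / (1 - 0.15))
print(f"Base cost/hour:      ${base:.2f}")
print(f"With levy passed on: ${marked_up:.2f} (+{(marked_up / base - 1) * 100:.1f}%)")
```

Under these assumptions the hourly cost rises by roughly 16 percent, since depreciation dominates the energy term; a facility with cheaper hardware or pricier power would see proportionally less of the levy show up in its compute bill.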
The mechanism is unusual because it functions like a revenue-sharing condition tied to export approval rather than a standard export-control listing or a simple licensing fee. That distinction could influence how both customers and suppliers react. Chinese cloud providers will face higher hardware bills, which could shift procurement, delay capacity expansion, or accelerate moves to alternative architectures and domestic suppliers. For NVIDIA and AMD, the arrangement secures market access but complicates price competitiveness and margins in a large market.
Policy analysts and market watchers will be looking for ripple effects: changes to procurement strategies, shifts in cloud pricing, and potential responses from Chinese regulators or suppliers. Bloomberg's report remains the only source so far, and detailed terms have not been made public. Until they are, the consequence most visible to end users will be higher chip prices in China and a recalculation of where and how massive Artificial Intelligence workloads are routed and run.