Massive capital expenditure on large language models (LLMs) is increasingly tied to the vision of an artificial intelligence (AI) native enterprise, in which generative AI, agentic AI, and machine learning are embedded across operations and processes. According to new analysis, agentic AI, rather than consumer chatbots, will determine whether these trillion-dollar investments ever produce sustainable profits, with enterprise deployments driving a sharp rise in application programming interface (API) calls and token consumption. As this shift unfolds, energy intensity and the economics of AI data center infrastructure are emerging as decisive variables that will shape competitive dynamics and determine which companies capture value in the next phase of the AI investment cycle.
GlobalData has developed a financial model to gauge whether consumer and enterprise adoption of generative AI can generate enough revenue and operating margin for frontier LLM owners to reach profitability. Its latest Strategic Intelligence report, titled "The AI Journey – From Generative to Agentic," argues that agentic AI is the only viable path to profitability for the AI industry. Consumer uptake and subscription fees will matter, but the analysis contends that usage fees, sold as tokens to enterprises, will be the primary driver of profits. Enterprises are expected to roll out agentic AI software that relies increasingly on reasoning LLMs to execute complex automated workflows. Within the next two to four years, enterprises will be making tens of thousands of API calls to LLMs daily, generating millions, then billions, and eventually trillions of tokens per day, and it is volume of this order that the report describes as necessary to earn a return on the trillions of dollars of capex flowing into AI data centers.
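The scale arithmetic behind that claim can be sketched as a simple calculation. All figures below, including call volumes, tokens per call, and token pricing, are hypothetical assumptions chosen for illustration, not inputs from GlobalData's model.

```python
# Illustrative back-of-envelope model of enterprise token revenue.
# Every number here is an assumption for illustration only.

def daily_token_revenue(api_calls_per_day: int,
                        tokens_per_call: int,
                        price_per_million_tokens: float) -> float:
    """Revenue an LLM owner earns per day from one enterprise customer."""
    tokens_per_day = api_calls_per_day * tokens_per_call
    return tokens_per_day / 1_000_000 * price_per_million_tokens

# Assumed: 50,000 API calls/day, 2,000 tokens/call (agentic workflows on
# reasoning models), $10 per million tokens.
tokens = 50_000 * 2_000                              # 100 million tokens/day
revenue = daily_token_revenue(50_000, 2_000, 10.0)   # $1,000/day
print(f"Tokens/day: {tokens:,}, revenue/day: ${revenue:,.0f}")
```

Under these assumptions a single enterprise generates 100 million tokens and about $1,000 of usage revenue per day, which shows why trillions of tokens per day across many customers are needed before trillion-dollar capex can pay back.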
Energy consumption sits at the center of the generative AI business model: the energy consumed per prompt is directly tied to the number of computations the large language model must perform, and hence to the number of tokens generated. Typically two FLOPs (floating-point operations) are required per model parameter for each generated token, and frontier models such as GPT-5 and DeepSeek V1 have between 1 and 2 trillion parameters, which means that, even with techniques to cut the computational load, approximately 100 to 200 billion parameters must still be computed for each token. As the industry moves toward reasoning models and larger context windows, the number of tokens per prompt is expected to increase 10-fold or more, producing what the report describes as a token explosion. Providers of hardware and facilities for AI data centers are portrayed as the near-term financial winners of the capex boom, while LLM owners are currently losing money as token-processing costs climb. Despite intense efforts by the semiconductor industry to improve the cost-performance of GPUs, high-bandwidth memory, and data center networking, the report concludes that escalating model capabilities and complexity will keep pressure on infrastructure costs and margins for the foreseeable future.
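The compute arithmetic above can be made concrete with a short sketch: roughly two FLOPs per active parameter per generated token, converted into energy under an assumed accelerator efficiency. The efficiency figure (1 PFLOP/s sustained at about 1 kW) and the tokens-per-prompt figures are illustrative assumptions, not numbers from the report.

```python
# Sketch of the per-token compute arithmetic: ~2 FLOPs per active
# parameter per generated token. Accelerator figures are assumptions.

def flops_per_token(active_params: float) -> float:
    """FLOPs to generate one token: ~2 per active parameter."""
    return 2.0 * active_params

# Midpoint of the 100-200 billion active parameters cited above.
flops = flops_per_token(150e9)  # 3e11 FLOPs per token

# Assume an accelerator sustaining 1e15 FLOPs/s (1 PFLOP/s) at ~1 kW.
ACCEL_FLOPS_PER_S = 1e15
ACCEL_WATTS = 1_000.0
energy_joules = flops / ACCEL_FLOPS_PER_S * ACCEL_WATTS  # ~0.3 J/token

# A 10-fold token explosion scales energy per prompt linearly.
tokens_now, tokens_future = 500, 5_000
print(f"~{energy_joules * tokens_now:.0f} J per prompt today, "
      f"~{energy_joules * tokens_future:.0f} J after a 10x explosion")
```

Because energy per prompt scales linearly with tokens generated, the projected 10-fold rise in tokens per prompt translates directly into a 10-fold rise in energy and compute cost per prompt, which is the mechanism squeezing LLM owners' margins.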
