Harvard Business School professor Andy Wu frames the current surge in generative artificial intelligence investment as a high-stakes, capital-intensive bet whose risk profile varies sharply across the industry. He calls generative AI perhaps the most exciting technology since the rise of the internet and agrees with the consensus about its long-term value creation potential, but stresses that realizing that potential requires massive spending on data centers, chips, and electricity for both training and inference. Yet while generative AI can do amazing things, he also calls it perhaps the most wasteful use of a computer ever devised: an operation that a simple calculator completes in a handful of steps can require on the order of a trillion calculations inside a generative AI model, consuming enormous chip capacity and electricity.
Wu explains that many companies, including hyperscalers and newer “neocloud” providers that specialize in renting out GPUs, are taking on significant debt and unprecedented equity financing to build this infrastructure ahead of proven revenue. Several of these firms are borrowing now to build data centers against hypothetical future cash flows from customers that are themselves unprofitable. He points to OpenAI, which has promised $100 billion contracts to several vendors even though it does not today generate anywhere near the revenue needed to pay for them; those vendors, in turn, have raised money to build data centers on the assumption that OpenAI will pay them $100 billion later. If OpenAI cannot grow revenue fast enough to meet those commitments, Wu warns, several of those vendors will be underwater financially. The industry, he says, faces two timing problems: a long-term need for large-scale buildout, especially in the electrical grid, and a near-term risk that artificial intelligence usage will not grow fast enough to cover fixed costs.
Concerns about an artificial intelligence bubble, Wu says, stem from the amount of debt in the system and from unusual circular financing arrangements, such as deals that make it appear Nvidia is paying its customers to buy its products. He defines a technology bubble as a significant mismatch between the vision for potential value creation and the current reality of value capture, in which companies must meet real financial obligations before sustainable business models exist. He emphasizes that generative AI carries a significant variable cost: every ChatGPT query costs OpenAI real money, so for now growth does not fix the economics and can actually deepen losses. Wu argues that big technology companies like Microsoft, Amazon, Meta, and Google have taken shrewd, conservative positions that let them profit from adjacencies such as cloud, chips, and applications rather than relying on core artificial intelligence technology as a standalone business, so they are largely insulated even if AI growth slows. The most exposed, he concludes, are the model builders and neoclouds whose fortunes depend entirely on a particular growth trajectory. Still, he notes, ambitious technological visions always require some degree of irrational faith to give markets time to let costs fall and business models mature; if the market can remain irrational long enough, the vision eventually becomes the reality.
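Wu's variable-cost argument can be made concrete with a toy unit-economics sketch. All figures below are hypothetical placeholders chosen for illustration, not actual OpenAI or industry numbers; the point is only the structure: when the margin on each query is negative, more usage deepens the loss rather than covering fixed costs.

```python
# Toy sketch of negative-margin unit economics. Every number here is
# a hypothetical placeholder, not a real OpenAI or industry figure.

def annual_profit(queries: float, revenue_per_query: float,
                  cost_per_query: float, fixed_costs: float) -> float:
    """Profit = volume * (price - variable cost) - fixed costs."""
    return queries * (revenue_per_query - cost_per_query) - fixed_costs

# Hypothetical scenario: each query earns $0.01 but costs $0.03 to serve,
# on top of $1B in annual fixed infrastructure costs.
for queries in (1e9, 1e11, 1e12):
    profit = annual_profit(queries, revenue_per_query=0.01,
                           cost_per_query=0.03, fixed_costs=1e9)
    print(f"{queries:.0e} queries/year -> profit ${profit / 1e9:+.1f}B")
```

With a positive per-query margin, volume growth eventually covers fixed costs; with a negative margin, as in this scenario, each tranche of growth makes the hole deeper, which is why Wu says growth alone does not fix the economics.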
