Should the U.S. be worried about an artificial intelligence bubble?

Harvard Business School professor Andy Wu argues that worries about an artificial intelligence bubble hinge on how much debt and risk smaller players and vendors take on, while big technology firms appear structurally insulated from a potential bust.

Harvard Business School professor Andy Wu frames the current surge in generative artificial intelligence investment as a high-stakes, capital-intensive bet whose risk profile varies sharply across the industry. He calls generative artificial intelligence perhaps the most exciting technology since the rise of the internet and says he agrees with the consensus about its long-term potential for value creation, but stresses that realizing that potential requires massive spending on data centers, chips, and electricity for both training and inference. While the technology can do amazing things, he also calls it perhaps the most wasteful use of a computer ever devised: a simple calculator performs an operation in a single step, while a generative model may need on the order of a trillion calculations to do the same thing, consuming enormous chip capacity and electricity.
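The scale gap Wu describes can be sketched with a back-of-the-envelope estimate. The parameter count below is an illustrative assumption, not a figure from the interview; the 2-FLOPs-per-parameter-per-token rule is a common rough approximation for dense transformer inference:

```python
# Back-of-the-envelope comparison of a calculator addition versus one
# token of large-model inference. The model size is a hypothetical
# illustration; a common rule of thumb is ~2 FLOPs per parameter per
# generated token for a dense transformer.

calculator_ops = 1                 # one addition: a single operation

n_params = 500e9                   # hypothetical 500-billion-parameter model
flops_per_token = 2 * n_params     # ~1 trillion calculations per token

ratio = flops_per_token / calculator_ops
print(f"FLOPs per generated token: {flops_per_token:.0e}")
print(f"Roughly {ratio:.0e}x the work of one calculator operation")
```

Under these assumptions a single generated token costs about a trillion floating-point operations, which is the order of magnitude Wu's "most wasteful use of a computer" contrast relies on.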

Wu explains that many companies, including hyperscalers and newer “neocloud” providers that specialize in renting GPUs, are taking on significant debt and unprecedented equity financing to build out this infrastructure ahead of proven revenue. Several of these firms are borrowing now to build data centers against hypothetical future cash flows from customers that are themselves unprofitable. He points to OpenAI, which has promised contracts worth $100 billion to several vendors even though its current revenue is nowhere near enough to pay for them, and notes that those vendors have in turn raised money to build data centers on the assumption that OpenAI will pay. If OpenAI cannot grow revenue fast enough to meet those commitments, Wu warns, several of those vendors will be underwater financially. He says the industry faces two timing problems: a long-term need for large-scale buildout, especially in the electrical grid, and a near-term risk that artificial intelligence usage will not grow fast enough to cover fixed costs.

Concerns about an artificial intelligence bubble, Wu says, stem from the amount of debt in the system and from unusual circular financing arrangements, such as deals that make it appear Nvidia is paying its customers to buy its products. He defines a technology bubble as a significant mismatch between the vision for potential value creation and the current reality of value capture, in which companies must meet real financial obligations before sustainable business models exist. He emphasizes that generative artificial intelligence carries a significant variable cost: every ChatGPT query costs OpenAI real money, so for now growth does not fix the economics and can actually deepen losses.

Wu argues that big technology companies such as Microsoft, Amazon, Meta, and Google have taken shrewd, conservative strategies that position them to profit from adjacencies like cloud, chips, and applications rather than relying on core artificial intelligence technology as a standalone business, so they are largely insulated even if artificial intelligence growth slows. The most exposed, he concludes, are the model builders and neoclouds that depend entirely on a particular growth trajectory. Ambitious technological visions, he notes, always require some degree of irrational faith to give markets time to let costs fall and business models mature; if the market can remain irrational long enough, the vision eventually becomes the reality.
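Wu's point that growth can deepen losses when each query costs more than it earns can be sketched in a few lines. The revenue, cost, and volume figures are entirely hypothetical, chosen only to show the mechanism:

```python
# Toy unit-economics model of an AI service with real variable costs.
# All numbers are hypothetical. The mechanism: when cost per query
# exceeds revenue per query (negative contribution margin), scaling
# usage widens the loss instead of closing it.

def monthly_profit(queries, revenue_per_query, cost_per_query, fixed_costs):
    return queries * (revenue_per_query - cost_per_query) - fixed_costs

# Each query earns $0.002 but costs $0.005 in compute: a loss per query.
small = monthly_profit(1e6,  revenue_per_query=0.002,
                       cost_per_query=0.005, fixed_costs=50_000)
large = monthly_profit(10e6, revenue_per_query=0.002,
                       cost_per_query=0.005, fixed_costs=50_000)

print(f"1M queries/month:  ${small:,.0f}")
print(f"10M queries/month: ${large:,.0f}")  # 10x the usage, a bigger loss
```

Under these assumptions, ten times the usage produces a larger monthly loss, which is why Wu says growth alone cannot fix the economics until variable costs fall or prices rise.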

Impact Score: 52

What businesses need to know about the EU Cyber Resilience Act

The EU Cyber Resilience Act is turning product cybersecurity into a legal requirement for companies that sell digital products into the European Union. A key compliance milestone arrives in September 2026, well before the full regulation takes effect in 2027.

Claude Mythos and cyber insurance’s next inflection point

Claude Mythos is being treated by governments and regulators as a potential systemic cyber risk with implications for financial stability and insurance markets. Its emergence is intensifying pressure on insurers to clarify whether Artificial Intelligence-enabled cyber losses are covered, excluded, or require new stand-alone products.

OpenAI expands ChatGPT ads with self-serve manager

OpenAI is widening its ChatGPT ads pilot with a beta self-serve Ads Manager, new bidding options and broader measurement tools. The push signals a deeper move into advertising as the company expands the program into several international markets.

OpenAI launches Artificial Intelligence deployment consulting unit

OpenAI has created a new consulting and deployment business aimed at helping enterprises build and roll out Artificial Intelligence systems. The move mirrors a similar push by Anthropic and signals a broader effort by model providers to capture more of the enterprise services market.
