OpenAI faces criticism over scattershot strategy and mounting costs

A critical essay argues OpenAI is drifting from a coherent plan, leaning on leaks about new products while subsisting on ChatGPT subscriptions and spending heavily. It portrays the company as a conventional Artificial Intelligence startup wrestling with losses, a weak API business and underwhelming upgrades.

The article argues that OpenAI is presenting itself as many companies at once without a coherent plan, citing a flurry of reported initiatives that range from a new social feed of generative video called Sora 2 to a potential productivity suite aimed at Microsoft’s turf. Other purported efforts include an Artificial Intelligence-powered hiring platform targeted for mid-2026, advertising inside ChatGPT by 2026, a later move into selling infrastructure services, in-house Artificial Intelligence chips with Broadcom slated for 2026 though intended for internal use, consumer hardware by late 2026 or early 2027, and even a browser. The author frames many of these as strategic leaks designed to bolster valuation and facilitate massive future fundraising on a trillion-dollar scale.

At the core, the piece contends that OpenAI lacks focus and that its flagship model update, GPT-5, was underwhelming and more expensive to operate than its predecessor due to how it processes prompts. Citing projections reported by The Information, the author says ChatGPT is expected to remain the dominant revenue driver until at least 2027, when new “agents” and monetization for free users are supposed to contribute meaningfully. The article questions whether OpenAI is a hardware company, software vendor, ads platform or cloud provider, noting that even ideas like certifying Artificial Intelligence experts are floated while the company’s identity remains unclear.

Financially, the essay characterizes OpenAI as a standard software business that makes most of its money from ChatGPT subscriptions. It references 20 million paid subscribers as of April and 5 million business subscribers as of August, including 500,000 seats from the Cal State University system. The author says the company loses large amounts of money and that API revenue appears to be a very small share in 2025, with the company’s “Operator” agent described as barely functional. That dynamic, the piece argues, makes OpenAI resemble any other Artificial Intelligence startup trying to bolt large language models onto products while struggling to monetize.

Beyond business execution, the article points to foundational limits in large language models, noting that “hallucinations” are described as mathematically inevitable by OpenAI’s own research. It further claims that OpenAI’s growth is slowing, its models are increasingly commoditized, and the broader generative Artificial Intelligence narrative has cooled. According to The Information, OpenAI spent roughly 150 percent of its first-half 2025 revenue on research and development, producing the muted GPT-5 release and Sora 2. The author estimates that Sora 2 carries high per-video generation costs based on published cloud rates for the earlier Sora model and questions whether those economics are sustainable.

Impact Score: 55

Nvidia to sell fully integrated Artificial Intelligence servers

A report picked up on Tom’s Hardware and discussed on Hacker News says Nvidia is preparing to sell fully built rack and tray assemblies that include Vera CPUs, Rubin GPUs and integrated cooling, moving beyond supplying only GPUs and components for Artificial Intelligence workloads.

Navigating new age verification laws for game developers

Governments in the UK, European Union, the United States of America and elsewhere are imposing stricter age verification rules that affect game content, social features and personalization systems. Developers must adopt proportionate age-assurance measures such as ID checks, credit card verification or Artificial Intelligence age estimation to avoid fines, bans and reputational harm.

Large language models require a new form of oversight: capability-based monitoring

The paper proposes capability-based monitoring for large language models in healthcare, organizing oversight around shared capabilities such as summarization, reasoning, translation, and safety guardrails. The authors argue this approach is more scalable than task-based monitoring inherited from traditional machine learning and can reveal systemic weaknesses and emergent behaviors across tasks.
