In August 2019, senior reporter Karen Hao began in-depth reporting on OpenAI, a company still largely under the radar outside the artificial intelligence research community. Granted unprecedented access to the company's leadership, including then-CTO Greg Brockman and chief scientist Ilya Sutskever, Hao probed OpenAI's evolving mission and the significant internal and external changes under way at the company. These included the simultaneous withholding and promotion of GPT-2, the appointment of Sam Altman as CEO, the adoption of a "capped-profit" structure, and a pivotal exclusive commercialization agreement with Microsoft. Together, these moves signaled OpenAI's rapid transition from a nonprofit playground for ambitious ideas to a key player shaping both the technical and policy landscape of artificial intelligence.
Throughout her embedded interviews, Hao pressed the leadership on the practical justification and ethics of pursuing artificial general intelligence (AGI) rather than more modest, attainable AI goals. Brockman and Sutskever defended their vision, asserting that AGI could tackle complex global problems, such as climate change and healthcare, in ways human institutions could not. However, they struggled to provide concrete details about implementation, risk mitigation, or assurance of public benefit. Hao's investigation also highlighted internal contradictions: OpenAI presented itself as open and transparent while fiercely guarding its internal operations, employees, and data. After Hao's visit, OpenAI ramped up its security and explicitly warned staff not to communicate with her outside supervised channels, a notable tension with its stated ethos of openness.
The eventual publication of Hao's profile in 2020 identified a stark misalignment between OpenAI's public principles and its operational reality, fueling public debate. Elon Musk, a co-founder, publicly criticized the company, calling for greater transparency and regulation. Sam Altman, OpenAI's CEO, acknowledged the article internally, characterizing its criticism as "fair" but addressing it primarily through planned messaging changes rather than substantive policy shifts. Altman also revealed concerns about leaked internal documents and a desire to contain public disputes. The fallout led OpenAI to sever communications with Hao for several years, underscoring the profound sensitivity surrounding its identity, ambition, and influence in the accelerating AI arms race. This insider reporting forms the basis for Hao's forthcoming book, "Empire of AI," which promises an expansive behind-the-scenes look at OpenAI's formative years and the societal stakes of its mission.