OpenAI’s new app Sora delivers an endless stream of exclusively AI-generated videos up to 10 seconds long, built around user-created “cameos” that mimic a person’s appearance and voice. Despite critics who dismiss it as a disposable novelty, it quickly reached the top spot on Apple’s US App Store. Early popular content trends toward the surreal and provocative, from bodycam-style footage of pets being pulled over and videos featuring trademarked characters to deepfake memes of historical and religious figures. The result is a platform that invites users to suspend questions of authenticity in favor of pure, synthetic entertainment.
The first big question is durability. OpenAI is betting that there is an appetite for an environment where everything is known to be fabricated by AI, removing the ambiguity that often plagues other social feeds. That appeal could fade once the novelty wears off, or it could signal a shift in what viewers consider engaging, given the app’s capacity for fantastical creativity. Key choices ahead will influence retention, including how ads are introduced, how copyrighted content is restricted, and how ranking algorithms shape what people see.
The second question is cost. OpenAI is not profitable, and video generation is the most energy-intensive use of AI the company offers, far exceeding images or text responses in computational demand. Sora currently lets users generate videos for free and without limits, even as OpenAI invests in massive data centers and new power infrastructure. The company has begun moving toward monetization elsewhere, and its CEO, Sam Altman, has said OpenAI must find a way to make money on video generation. Still, the emissions profile remains unclear. Altman has characterized emissions per ChatGPT query as extremely small, but the footprint of a 10-second Sora video has not been disclosed, inviting scrutiny from climate and technology researchers.
The third question is legal risk. Sora is already saturated with copyrighted and trademarked characters, makes it easy to deepfake deceased celebrities, and pairs videos with copyrighted music. According to the Wall Street Journal, OpenAI told copyright holders they must opt out if they do not want their material included, a posture likely to be tested given unsettled law around AI training and outputs. OpenAI says it will provide rightsholders with granular control over character use, while acknowledging that some improper generations may slip through. The app’s cameo system also raises consent and misuse issues, prompting new controls that let users restrict political contexts or certain words. Whether those safeguards will prevent nefarious, explicit, or illegal uses, and who bears responsibility when they fail, remain unresolved questions.
With access still gated by invite codes, Sora has not yet been tested at full scale. Its trajectory will show whether AI videos tuned for constant engagement can outcompete real footage for attention. In that sense, Sora will test not only OpenAI’s technology and business model but also users’ willingness to trade more of their reality for an infinite scroll of simulation.