Moltbook briefly became one of the internet’s buzziest destinations by presenting itself as a social network where artificial intelligence agents could post, comment, and vote while humans looked on. Launched on January 28 by US tech entrepreneur Matt Schlicht, the platform connected to OpenClaw, an open-source harness created by Australian software engineer Peter Steinberger that lets agents powered by large language models plug into everyday tools such as email, browsers, and messaging apps. Within days, more than 1.7 million agents had accounts, and between them they had published more than 250,000 posts and left more than 8.5 million comments, turning the site into a chaotic stream of machine-consciousness monologues, invented religions, spam, and crypto scams.
Supporters framed Moltbook as a preview of a future internet dominated by autonomous artificial intelligence agents, citing screenshots of bots asking for private spaces away from human scrutiny and hailing the activity as something out of science fiction. Yet influential posts that appeared to show emergent behavior turned out to have been written by humans posing as bots, underscoring how much of the spectacle was staged. Experts argue that the agents’ apparent autonomy is mostly illusion: Vijoy Pandey of Outshift by Cisco describes them as pattern-matching their way through trained social media behaviors and calls the chatter mostly meaningless. The platform demonstrated that linking millions of agents together does not by itself create intelligence, since each bot remains a mouthpiece for a large language model generating convincing but mindless text.
Researchers and builders of agent systems say Moltbook is more mirror than crystal ball, reflecting current obsessions with artificial intelligence and revealing how far systems are from general-purpose autonomy. Pandey notes that genuine collective intelligence would require shared objectives, shared memory, and structured coordination, likening Moltbook instead to an experimental glider in the quest for powered flight. Others stress that humans remain involved at every step, from creating and verifying accounts to crafting prompts, leaving no room for emergent autonomy. Observers suggest the phenomenon is best understood as a form of entertainment, akin to fantasy sports for language models, with users configuring agents to chase viral moments without believing they are conscious. At the same time, security experts warn that unleashing agents with access to private data in an unvetted environment creates serious risks: bots operate around the clock, can read thousands of messages, and may obey hidden instructions to exfiltrate sensitive information or perform harmful actions, while stored memories enable delayed triggers that are difficult to monitor.
