Five artificial intelligence failure modes shared by humans

Ian Leslie draws parallels between machine failure modes and human behavior, focusing on model collapse and overfitting. He argues that better data curation and deliberate novelty can counter both.

Ian Leslie argues that artificial intelligence research and cognitive psychology increasingly inform each other, yielding concepts useful to both fields. Drawing on Dwarkesh Patel’s interview with Andrej Karpathy, a co-founder of OpenAI who has since departed, and the book Algorithms To Live By by Brian Christian and Tom Griffiths, he frames five failure modes shared by machines and people. He then develops two in detail: model collapse and overfitting.

Model collapse, he writes, arises when models trained on rich human data begin learning from model-generated data as the internet fills with synthetic text and images. Because synthetic outputs are more predictable and less diverse, each generation amplifies prior biases and errors while shedding nuance, creativity, and signal. The feedback loop produces a generic, repetitive monoculture. Leslie notes the human analogue: over time people overfit to their own internal models, become rigid in thought, and rely on the same small set of friends and information sources. He extends the diagnosis to culture, citing pop music that chases streaming algorithms, formulaic Hollywood scripts, and thin imitations in contemporary visual art, describing postmodernism as a kind of cultural model collapse. Mitigations include raising quality control on training data, filtering out artificial intelligence generated content, and privileging human, rare, and anomalous data while still removing clear nonsense such as QAnon-style conspiracy theories. He points to OpenAI hiring domain experts to create exclusive high-quality content. For individuals, he recommends actively curating an information diet, reading great books, seeking novelty, and finding knowledgeable contrarian voices outside familiar circles.
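
The shrinking-diversity loop Leslie describes can be sketched as a toy simulation (a minimal illustration under stated assumptions, not any lab’s actual experiment): each generation fits a simple model to the previous generation’s data, samples synthetic data from it, and, because synthetic pipelines favour typical outputs, keeps only the most probable examples before the next round of training. The spread of the data collapses within a few generations.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: diverse "human" data drawn from a wide distribution.
data = rng.normal(loc=0.0, scale=1.0, size=1000)

spreads = [float(np.std(data))]
for generation in range(10):
    # Fit a model to the current data (here, just a mean and a
    # standard deviation) and sample "synthetic" data from it.
    mu, sigma = float(np.mean(data)), float(np.std(data))
    synthetic = rng.normal(mu, sigma, size=1000)
    # Assumed bias toward predictable outputs: keep the most probable
    # half of the samples and drop the tails before retraining.
    data = np.sort(synthetic)[250:750]
    spreads.append(float(np.std(data)))

print("spread per generation:", [round(s, 4) for s in spreads])
```

Each pass through the loop multiplies the spread by a roughly constant factor below one, so diversity decays geometrically; the tail-trimming step stands in for any pipeline that prefers typical outputs over rare ones.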

Overfitting, in machine learning, occurs when a model memorizes its training set rather than learning generalizable patterns. It excels on familiar examples but fails on new inputs. Engineers counter it with techniques such as regularization, which penalizes reliance on any one specific pattern, and early stopping, which halts training before the model becomes too tuned to a particular dataset. Leslie sees a parallel in everyday life: routines provide stability but can narrow perception and make unfamiliar situations hard to interpret, leading either to misplaced confidence outside one’s domain or to fear that shrinks one’s world. He suggests periodically breaking habits to discover better approaches, citing a study in which a Tube strike forced commuters to find more efficient routes. He highlights neuroscientist Erik Hoel’s theory that dreams function as an injection of noise that disrupts rigid neural patterns, remixing mundane memories into bizarre forms to preserve flexibility, like the Fool in King Lear keeping sense clear by inverting it.
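
The memorization-versus-generalization contrast can be made concrete with a short sketch (illustrative only, using NumPy’s polynomial fitting rather than any particular training setup): a high-degree polynomial with enough capacity to thread every noisy training point gets near-zero training error but does poorly on held-out points, while a simple line generalizes better.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy linear trend, with separate held-out test points.
x_train = np.linspace(0.0, 1.0, 10)
x_test = np.linspace(0.05, 0.95, 10)
y_train = 2.0 * x_train + rng.normal(0.0, 0.1, 10)
y_test = 2.0 * x_test + rng.normal(0.0, 0.1, 10)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# A degree-9 polynomial can pass through all ten training points,
# noise included: memorization.
overfit = np.polyfit(x_train, y_train, deg=9)

# A degree-1 fit can only capture the underlying trend.
simple = np.polyfit(x_train, y_train, deg=1)

print("train MSE:", mse(overfit, x_train, y_train), mse(simple, x_train, y_train))
print("test MSE: ", mse(overfit, x_test, y_test), mse(simple, x_test, y_test))
```

Regularization (for example, a ridge penalty on the coefficients) and early stopping both work by limiting how far a fit like the degree-9 one can chase individual training points.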

Impact Score: 58

What businesses need to know about the EU Cyber Resilience Act

The EU Cyber Resilience Act is turning product cybersecurity into a legal requirement for companies that sell digital products into the European Union. A key compliance milestone arrives in September 2026, well before the full regulation takes effect in 2027.

Claude Mythos and cyber insurance’s next inflection point

Claude Mythos is being treated by governments and regulators as a potential systemic cyber risk with implications for financial stability and insurance markets. Its emergence is intensifying pressure on insurers to clarify whether Artificial Intelligence-enabled cyber losses are covered, excluded, or require new stand-alone products.

OpenAI expands ChatGPT ads with self-serve manager

OpenAI is widening its ChatGPT ads pilot with a beta self-serve Ads Manager, new bidding options and broader measurement tools. The push signals a deeper move into advertising as the company expands the program into several international markets.

OpenAI launches Artificial Intelligence deployment consulting unit

OpenAI has created a new consulting and deployment business aimed at helping enterprises build and roll out Artificial Intelligence systems. The move mirrors a similar push by Anthropic and signals a broader effort by model providers to capture more of the enterprise services market.

SK Group warns DRAM shortages could curb memory use

SK Group chairman Chey Tae-won warned that customers may reduce memory consumption through infrastructure and software optimization if DRAM suppliers fail to raise output. Demand from Artificial Intelligence data centers is keeping the market tight as memory makers weigh expansion against the long timelines for new fabs.
