Five artificial intelligence failure modes shared by humans

Ian Leslie draws parallels between machine failure modes and human behavior, focusing on model collapse and overfitting. He argues that better data curation and deliberate novelty can counter both.

Ian Leslie argues that the fields of artificial intelligence and cognitive psychology increasingly inform each other, yielding concepts useful to both. Drawing on Dwarkesh Patel’s interview with Andrej Karpathy, an OpenAI co-founder who has since departed, and the book Algorithms to Live By by Brian Christian and Tom Griffiths, he frames five failure modes shared by machines and people. He then develops two in detail: model collapse and overfitting.

Model collapse, he writes, arises when models trained on rich human data begin learning from model-generated data as the internet fills with synthetic text and images. Because synthetic outputs are more predictable and less diverse, each generation amplifies prior biases and errors while shedding nuance, creativity, and signal. The feedback loop produces a generic, repetitive monoculture. Leslie notes the human analogue: over time people overfit to their own internal models, become rigid in thought, and rely on the same small set of friends and information sources. He extends the diagnosis to culture, citing pop music that chases streaming algorithms, formulaic Hollywood scripts, and thin imitations in contemporary visual art, describing postmodernism as a kind of cultural model collapse. Mitigations include raising quality control on training data, filtering out artificial-intelligence-generated content, and privileging human, rare, and anomalous data while still removing clear nonsense such as QAnon-style conspiracy theories. He points to OpenAI hiring domain experts to create exclusive high-quality content. For individuals, he recommends actively curating an information diet, reading great books, seeking novelty, and finding knowledgeable contrarian voices outside familiar circles.
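
The feedback loop described above can be sketched in a toy simulation (not from the article, and a deliberate simplification): repeatedly fit a Gaussian to its own samples and resample from the fit. Sampling noise makes the estimated spread drift, and the population estimator is biased slightly low, so diversity tends to decay across "generations" in rough analogy to training on synthetic data.

```python
import random
import statistics

def resample_generations(n_samples=200, n_generations=10, seed=0):
    """Fit a Gaussian to its own output repeatedly; return sigma per generation.

    A crude analogue of model collapse: each generation trains only on the
    previous generation's samples, never on fresh "human" data.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the original "human data" distribution
    history = []
    for _ in range(n_generations):
        sample = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(sample)
        sigma = statistics.pstdev(sample)  # population sd: biased slightly low
        history.append(sigma)
    return history

sigmas = resample_generations()
print([round(s, 3) for s in sigmas])
```

Mixing even a small fraction of samples from the original distribution back into each generation dampens the drift, which is the intuition behind the data-curation mitigations Leslie describes.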

Overfitting, in machine learning, occurs when a model memorizes its training set rather than learning generalizable patterns. It excels on familiar examples but fails on new inputs. Engineers counter it by penalizing reliance on specific patterns or stopping training before the model becomes too tuned to a particular dataset. Leslie sees a parallel in everyday life: routines provide stability but can narrow perception and make unfamiliar situations hard to interpret, leading either to misplaced confidence outside one’s domain or to fear that shrinks one’s world. He suggests periodically breaking habits to discover better approaches, citing a study in which a Tube strike forced commuters to find more efficient routes. He highlights neuroscientist Erik Hoel’s theory that dreams function as an injection of noise that disrupts rigid neural patterns, remixing mundane memories into bizarre forms to preserve flexibility, like the Fool in King Lear keeping sense clear by inverting it.
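
The memorization-versus-generalization contrast can be shown in a minimal, hypothetical sketch (illustrative only, not Leslie's example): a lookup-table "model" that memorizes noisy training pairs scores perfectly on inputs it has seen and fails on unseen ones, while a one-parameter least-squares line learns the underlying pattern and transfers.

```python
import random

rng = random.Random(1)
# Underlying rule: y = 2x, observed with noise during training.
train = [(x, 2 * x + rng.gauss(0, 0.5)) for x in range(10)]
test = [(x + 0.5, 2 * (x + 0.5)) for x in range(10)]  # unseen inputs

# Memorizer: exact recall of training pairs, no interpolation.
table = dict(train)
def memorizer(x):
    return table.get(x, 0.0)  # off the training set it can only guess a default

# Generalizer: least-squares line through the origin, slope = sum(xy) / sum(x^2).
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
def line(x):
    return slope * x

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print("memorizer  train:", mse(memorizer, train), " test:", round(mse(memorizer, test), 1))
print("line       train:", round(mse(line, train), 3), " test:", round(mse(line, test), 3))
```

The memorizer's training error is exactly zero while its test error is large; the line accepts some training error in exchange for far better performance on new inputs, which is the trade-off that regularization and early stopping aim to manage.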

Impact Score: 52

CSEM France pushes responsible Artificial Intelligence

CSEM France is positioning itself as a key force in France’s push for responsible Artificial Intelligence, combining technical research with ethics, policy engagement, and industry partnerships. Its work centers on trustworthy systems designed for transparency, fairness, and public accountability.

EU Parliament backs ban on Artificial Intelligence nudifier apps

European Parliament committees have endorsed changes to the Artificial Intelligence Act that would ban apps used to create non-consensual nude or sexually explicit images of real people. Lawmakers also backed delays and targeted adjustments to compliance rules for high-risk systems and watermarking requirements.

Chancellor sets principles for UK-EU alignment

Rachel Reeves has outlined a growth plan built around closer UK-EU ties, faster Artificial Intelligence adoption, and stronger regional development. The strategy sets new principles for regulatory alignment, expands support for innovation, and shifts more investment power to city regions.

Nvidia denies report on Groq chip plans for China

Nvidia says a report that it is preparing Groq inferencing chips for shipment to China is “totally false,” even as interest in H200 sales to the country remains strong. The dispute highlights how closely watched Nvidia’s China strategy has become across training and inferencing hardware.

AMD targets desktop Artificial Intelligence PCs with Copilot+ chips

AMD has introduced the first desktop processors certified for Microsoft Copilot+, aiming to challenge Intel in x86 PCs as demand for on-device Artificial Intelligence computing rises. The company is also balancing that push with export limits that could constrain advanced chip sales in China.
