Model autophagy disorder and the risk of self-consuming Artificial Intelligence models

Glow New Media director Phil Blything warns that as Artificial Intelligence systems generate more online text, future language models risk training on their own synthetic output and degrading in quality. He draws a parallel with the early, human-driven web, arguing that machine-generated content could undermine the foundations that made resources like Wikipedia possible.

Glow New Media’s Phil Blything introduces the term Model Autophagy Disorder (M.A.D.) as a growing concern within the Artificial Intelligence community, arguing that it could become a serious problem for the large language models that dominate today’s tools. He explains that the phrase, which might sound like a biomedical diagnosis, actually describes what happens when language models start to consume and learn from their own machine-generated content. In his view, this self-consumption threatens the reliability and usefulness of future Artificial Intelligence systems.

Blything outlines how large language models are trained on huge corpora of text, typically drawn from the public web. Because Artificial Intelligence systems are increasingly generating web content themselves, that synthetic content ends up in the training data for subsequent models. He points to experimental work, referenced as [1.] and [2.], showing that when successive models are trained on content produced by previous models, the resulting systems deteriorate rapidly, with each new model worse than the last and the output collapsing into gibberish within only a few iterations.
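The degradation loop described above can be illustrated with a toy simulation. This is a hypothetical sketch, not the experiments the article cites: here the "model" is just a Gaussian distribution, and each generation is fitted only to synthetic samples drawn from its predecessor. Over many generations the learned distribution's spread collapses, a simple analogue of the loss of diversity seen when models train on their own output.

```python
# Toy illustration of model collapse ("Model Autophagy Disorder"):
# each generation's "model" is a Gaussian fitted only to synthetic
# samples drawn from the previous generation's model.
# Hypothetical sketch for intuition only, not the cited experiments.
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

mu, sigma = 0.0, 1.0              # generation 0: a standard Gaussian
n_samples, n_generations = 20, 1000

sigma_history = [sigma]
for gen in range(n_generations):
    # train exclusively on synthetic data from the predecessor model
    synthetic = [random.gauss(mu, sigma) for _ in range(n_samples)]
    mu = statistics.mean(synthetic)
    sigma = statistics.stdev(synthetic)
    sigma_history.append(sigma)

print(f"initial spread: {sigma_history[0]:.4f}")
print(f"spread after {n_generations} self-consuming generations: {sigma:.6f}")
```

Refitting to a finite synthetic sample slightly shrinks the estimated spread on average, and with no fresh human-generated data entering the loop those small losses compound until the distribution has almost no variance left, mirroring the drift toward repetitive, low-diversity output that the cited experiments report for language models.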

Drawing on his own experience “working the web” since 1997, Blything contrasts the emerging problem of Model Autophagy Disorder with the human-created web of the late 1990s and early 2000s. He recalls how Wikipedia and early web content were built and curated largely by people, with few bots, limited automation and relatively high barriers to publishing that encouraged deliberation and care. Looking ahead, he predicts that the next few years will see exponential growth in web content generated by large language models, and stresses that this synthetic material is already quietly feeding into the training data of tomorrow’s systems. He closes with a stark metaphor, defining autophagy as self-consumption and concluding that Artificial Intelligence will increasingly eat itself.

Impact Score: 68

Indiana launches Artificial Intelligence business portal

Indiana is rolling out IN AI, a statewide portal meant to help employers adopt Artificial Intelligence with practical guidance, workshops and peer support. State leaders and business groups are positioning the effort as a way to raise productivity, wages and job growth while keeping workers at the center.

Goodfire launches model debugging tool for large language models

Goodfire has introduced Silico, a mechanistic interpretability platform designed to let developers inspect and adjust model behavior during development. The company is positioning it as a way to give smaller teams deeper control over open-source models and more trustworthy outputs.

Nvidia launches Nemotron 3 Nano Omni for enterprise agents

Nvidia has introduced Nemotron 3 Nano Omni, a multimodal open model designed to support enterprise agents that reason across vision, speech and language. The launch extends Nvidia’s push beyond hardware into models and services while targeting more efficient agentic workflows.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.

EA CEO defends broader Artificial Intelligence use in game development

EA CEO Andrew Wilson defended the company’s internal use of Artificial Intelligence after employee claims that the tools were slowing work rather than helping. He framed the technology as an aid for repetitive quality assurance tasks, even as concerns persist over its broader impact on development.
