A new study from SEO firm Graphite reports that artificial intelligence now generates the majority of written articles online. Analyzing more than 65,000 English-language web articles published since January 2020, the firm found that machine-authored pieces surpassed human-authored ones in November 2024. By May 2025, 52 percent of written content was identified as created by artificial intelligence, up from 39 percent in the year following the November 2022 launch of ChatGPT. The article notes that growth has plateaued since May 2024, signaling a shift from pure volume to strategic deployment.
Graphite’s methodology relied on an artificial intelligence detector called Surfer, classifying an article as generated by artificial intelligence if more than half of its text was machine-produced; the dataset was drawn from Common Crawl. The findings align with broader industry forecasts, including analyst Nina Schick’s January 2025 projection that 90 percent of all online content across formats could be generated by artificial intelligence by the end of 2025. Unlike earlier template-driven automation, today’s large language models produce contextually nuanced, stylistically varied text that can pass as human to many readers, intensifying concerns about misinformation and eroding trust.
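For readers who want the classification rule made concrete, the sketch below shows the "more than half of the text" threshold in Python. Surfer's actual detection interface is not public, so the detector callable here is a hypothetical stand-in; only the thresholding logic reflects the rule described in the study.

```python
"""Minimal sketch of the more-than-half classification rule described above.

The `detector` callable is a hypothetical placeholder for an AI-detection
model such as Surfer's; its real interface is not public. Only the
thresholding logic is taken from the article's description.
"""

from typing import Callable


def classify_article(
    paragraphs: list[str],
    detector: Callable[[str], bool],
    threshold: float = 0.5,
) -> bool:
    """Return True when more than `threshold` of the article's text
    (measured by character count) is flagged as machine-produced."""
    total_chars = sum(len(p) for p in paragraphs)
    if total_chars == 0:
        return False
    flagged_chars = sum(len(p) for p in paragraphs if detector(p))
    return flagged_chars / total_chars > threshold


if __name__ == "__main__":
    # Dummy detector for demonstration only: flags paragraphs carrying a marker.
    dummy_detector = lambda p: "[synthetic]" in p
    sample = [
        "[synthetic] Intro paragraph.",
        "Human-written analysis.",
        "[synthetic] Closing boilerplate.",
    ]
    # Roughly 71% of the characters are flagged, so this prints True.
    print(classify_article(sample, dummy_detector))
```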
The rapid adoption is reshaping competitive dynamics. Developers of generative systems, including OpenAI, Google, and Anthropic, stand to benefit as their models power content across industries. At the same time, low-cost, high-velocity production pressures traditional content operations and affects adjacent fields like design, video, and audio. Demand is rising for verification tools and standards, such as the Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity. Search quality remains a critical battleground. The article highlights that only 14 percent of top-ranking search results are classified as generated by artificial intelligence, suggesting search systems continue to reward higher-quality, human-centric material and may dampen incentives for low-quality machine output.
The societal stakes are significant. As artificial intelligence generated media proliferates, distinguishing original work from synthetic output becomes harder for users, deepening a crisis of content authenticity. The economics of the web are also shifting as discovery tilts toward artificial intelligence summaries and answers rather than clicks to source pages, threatening visibility and revenue for human creators. Regulatory responses are emerging. The EU Artificial Intelligence Act introduces transparency requirements for synthetic media, and other jurisdictions are weighing measures, including penalties for unlabeled artificial intelligence content.
Looking ahead, the article anticipates more sophisticated and multimodal generation, from personalized education materials to automated journalism and customer communications. With volume growth plateauing, the focus is likely to move toward quality, provenance, and editorial oversight, including routine artificial intelligence co-authorship. Challenges remain, including an arms race between generation and detection, unresolved questions around watermarking and provenance, intellectual property concerns, bias, and job displacement. The piece concludes that online media may bifurcate into a vast layer of utility-driven artificial intelligence content and a smaller, premium tier of human-authored work valued for originality, perspective, and trust.