Publishers are mounting a more coordinated effort to limit unauthorized artificial intelligence (AI) scraping and to push for compensation when their content is used by AI systems. The shift comes as chat-based tools become a meaningful discovery layer between content creators and audiences, increasing pressure on publishers, brands, and agencies to reckon with how their work is surfaced and monetized through AI services.
OpenAI recently revealed that ChatGPT has 900 million users, up from 800 million in the fall. That growth has sharpened a concern across the media industry: AI platforms may not drive traffic the way Google does, but they now occupy an influential position between publishers and readers. Publishers are increasingly focused not only on getting cited by generative search tools but also on the foundational question of how their content entered these systems and whether compensation should follow when journalism is ingested by AI engines.
In late February, a group of U.K. media companies, including the BBC, the Financial Times, and The Guardian, launched SPUR, or Standards for Publisher Usage Rights. The coalition is designed to give publishers a collective voice in negotiations with AI companies and to establish shared technical standards and licensing frameworks for access to high-quality journalism. The central bet is that coordinated action can succeed where isolated publishers have struggled, where the alternatives have been costly litigation, one-off licensing deals, or defensive measures such as paywalls and bot blocking.
Technical enforcement is a key part of that strategy. Cloudflare has aligned itself with publishers and introduced Pay Per Crawl, a tool that lets publishers charge bots for content access. SPUR is not endorsing a single product, but supports the broader idea of making scraping harder and more expensive. That matters because existing defenses such as robots.txt are limited and easy to ignore, while unauthorized crawling has expanded through headless browsers that mimic human visitors at scale.
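The weakness of robots.txt described above is easy to see in code: the file is a voluntary convention, not an access control, so only crawlers that choose to check it are constrained by it. A minimal Python sketch using the standard library's robots.txt parser (the file contents and bot names here are hypothetical):

```python
from urllib import robotparser

# A hypothetical publisher robots.txt that blocks one AI crawler
# while allowing everyone else.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler asks permission before fetching a page...
print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True

# ...but nothing enforces that check: a non-compliant crawler simply
# skips it (or sends a browser-like User-Agent) and fetches anyway.
```

This is why the article distinguishes robots.txt from server-side measures such as paywalls or per-request charging, which act on the request itself rather than relying on the crawler's cooperation.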
A major unresolved issue is how to treat AI agents that act on behalf of users. AI companies crawl content for training data, for search results, and to answer individual user queries, and that last category has become especially contentious. Publishers argue that these agents may stand in for people but do not behave like them, particularly because they never see ads. SPUR is betting that a broad coalition backed by technical allies can still shape the rules of a fast-changing market before those rules become entrenched.
