Web-scraping bots overwhelm scientific publishers amid generative artificial intelligence boom

Automated bots gathering training data for artificial intelligence models are straining scientific databases and academic publishers, posing operational and financial risks.

In early 2025, the image repository DiscoverLife found its website bombarded with millions of daily requests, drastically slowing site performance. The surge was attributed to a flood of automated web-scraping bots, designed to harvest large volumes of digital content. Researchers and publishers operating journals, databases, and open-access repositories are increasingly facing similar crises, as bot traffic now routinely exceeds that from human users. These bots, often masked behind anonymized IP addresses, are widely believed to be collecting data to train the latest generation of artificial intelligence tools, such as chatbots and image generators.

Industry leaders highlight the unprecedented scale of disruption. Andrew Pitts, CEO of PSI in Oxford, describes the situation as a 'wild west', noting that the overwhelming volume of requests costs money and disrupts access for genuine users. Organizations with limited technical or financial resources are especially vulnerable; some even risk shutting down entirely if the trend continues. Ian Mulvany of BMJ journals and Jes Kainth of the publication platform Highwire Press both report that bot traffic now routinely surpasses legitimate access, repeatedly crashing servers and interrupting service for researchers and professionals who rely on timely access to scholarly materials.

The Confederation of Open Access Repositories (COAR) found that more than 90% of repositories in a recent survey had experienced scraping by artificial intelligence bots, resulting in service outages and significant operational headaches. Executive director Kathleen Shearer notes that while open access is central to these platforms' missions, the sheer aggressiveness of the bots is causing major technical and financial strain. The spike in scraper activity is traced, in part, to breakthroughs such as the DeepSeek language model, which demonstrated that powerful artificial intelligence can be developed from publicly scraped data at lower computational cost. As the race for training data accelerates, scientific publishers and communities are scrambling to develop mitigation strategies, but viable solutions remain elusive for many operators caught in the crawl crossfire.
