Web-scraping bots overwhelm scientific publishers amid generative artificial intelligence boom

Automated bots gathering training data for artificial intelligence models are straining scientific databases and academic publishers, posing operational and financial risks.

In early 2025, the image repository DiscoverLife found its website bombarded with millions of daily requests, drastically slowing site performance. The surge was attributed to a flood of automated web-scraping bots, designed to harvest large volumes of digital content. Researchers and publishers operating journals, databases, and open-access repositories are increasingly facing similar crises, as bot traffic now routinely exceeds that from human users. These bots, often masked behind anonymized IP addresses, are widely believed to be collecting data to train the latest generation of artificial intelligence tools, such as chatbots and image generators.

Industry leaders highlight the unprecedented scale of disruption. Andrew Pitts, CEO of PSI in Oxford, describes the situation as a 'wild west,' noting that the overwhelming volume of requests costs money and disrupts access for genuine users. Organizations with limited technical or financial resources are especially vulnerable—some even risk shutting down entirely if the trend continues. Ian Mulvany from BMJ journals and Jes Kainth from the publication platform Highwire Press both report that bot traffic now routinely surpasses legitimate access, repeatedly crashing servers and interrupting services for researchers and professionals who rely on timely access to scholarly materials.

The Confederation of Open Access Repositories (COAR) observed that over 90% of repositories in a recent survey experienced scraping from artificial intelligence bots, with service outages and significant operational headaches as a result. Executive director Kathleen Shearer notes that while open access is central to these platforms' missions, the sheer aggressiveness of the bots is causing major technical and financial stress. The spike in scraper activity is traced, in part, to breakthroughs such as the DeepSeek language model, which demonstrated that powerful artificial intelligence can be developed from publicly scraped data at lower computational cost. As the arms race for training data accelerates, scientific publishers and communities are scrambling to develop mitigation strategies, but viable solutions remain elusive for many operators caught in the data-crawl crossfire.


