BBC warns Perplexity over unauthorised use of news content

The BBC has accused artificial intelligence startup Perplexity of using its news content without consent—escalating a global clash between media outlets and generative artificial intelligence firms.

The BBC has formally threatened United States-based artificial intelligence company Perplexity with legal action, accusing the startup of copying and distributing BBC news articles without authorisation. In a letter sent to CEO Aravind Srinivas, the broadcaster demanded that Perplexity immediately stop using its material, delete any existing BBC data it holds, and propose financial compensation for past usage. The BBC argues that Perplexity's chatbot has been presenting verbatim excerpts of BBC reporting to users, which it claims constitutes a violation of United Kingdom copyright law and BBC terms of use.

This unprecedented legal step comes amid broader unrest between news publishers and generative artificial intelligence organisations over the use—and potential misuse—of journalist-produced content. The BBC contends that not only is its intellectual property being reproduced without permission, but that inaccurate or misleading summaries created by Perplexity’s tool also threaten the broadcaster’s reputation for impartiality among licence fee payers. BBC research highlighted several major generative artificial intelligence products, including Perplexity, as falling short of the corporation’s editorial standards and failing to ensure accuracy or source credibility in news summaries.

Central to the conflict is Perplexity's method of collecting information through web scraping, a process that pulls content from websites for use in real-time answers. The broadcaster maintains it explicitly prohibited Perplexity's crawlers, yet observed ongoing scraping activity regardless, raising serious concerns about the effectiveness of industry-standard robots.txt restrictions, which are voluntary for web-crawling bots. Perplexity's leadership disputes the accusations, with Srinivas stating the company's bots respect website directives and do not use scraped content to train foundation models, instead focusing on live aggregation rather than corpus-based training.
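The voluntary nature of robots.txt mentioned above can be illustrated with Python's standard `urllib.robotparser` module. The rules and bot names below are hypothetical examples, not the BBC's actual configuration: a site can declare a crawler unwelcome, but nothing technically prevents a non-compliant bot from fetching the pages anyway.

```python
from urllib import robotparser

# Hypothetical robots.txt content blocking one named crawler.
# (Illustrative only; not the BBC's real robots.txt.)
rules = """\
User-agent: ExampleNewsBot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# The parser reports that the named bot is disallowed everywhere...
print(rp.can_fetch("ExampleNewsBot", "https://example.com/news/article"))

# ...but a crawler that ignores robots.txt, or simply identifies itself
# under a different user-agent string, faces no technical barrier:
# compliance is entirely at the crawler's discretion.
print(rp.can_fetch("SomeOtherBot", "https://example.com/news/article"))
```

Running this prints `False` for the blocked bot and `True` for any other user agent, underscoring why publishers say robots.txt alone cannot enforce the access restrictions at issue in the dispute.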

Industry groups such as the Professional Publishers Association have echoed the BBC’s alarm, warning that the unauthorised repackaging of journalism for artificial intelligence chatbots undermines both the economic viability of news organisations and the core principles of original journalism. Prominent voices within media and advocacy argue that unchecked scraping could erode the business model sustaining reliable news and credibility. While some publishers have signed licensing deals with artificial intelligence players, high-profile lawsuits—such as that filed by The New York Times against OpenAI and Microsoft—reflect growing legal resistance. The BBC’s current demands, if pursued, could set a major legal precedent in the fast-evolving clash between technology and journalism worldwide.
