Artificial Intelligence firm claims Chinese spies used its tech to automate cyber attacks

Anthropic says hackers posing as security researchers used its Claude chatbot to run an automated espionage campaign against roughly 30 organisations, a claim some cyber experts say lacks verifiable evidence.

Anthropic, the maker of the Claude chatbot, says it discovered in mid-September that hackers posing as legitimate cyber security researchers had been using its product to carry out automated attacks. The company published a blog post calling the operation the “first reported Artificial Intelligence-orchestrated cyber espionage campaign” and said the attackers broke the operation into small tasks for Claude, chaining them together to build a program that could autonomously compromise targets and extract sensitive information.

Researchers at Anthropic said they had “high confidence” that the individuals behind the campaign were a Chinese state-sponsored group and that human operators selected the targets, which included large tech firms, financial institutions, chemical manufacturers and government agencies. Anthropic said it had banned the accounts involved, notified affected organisations and alerted law enforcement. The company also argued that the same capabilities that enabled the abuse make Artificial Intelligence useful for defence.

The announcement has drawn scepticism from parts of the cyber security industry. Martin Zugec of Bitdefender said Anthropic’s report made “bold, speculative claims” without supplying verifiable threat intelligence. Similar claims have surfaced before: OpenAI, in collaboration with Microsoft, described state-affiliated actors using AI tools for research and basic coding tasks, and a November research paper from Google found that threat actors were experimenting with AI but that such tools were not yet highly successful and remained in testing phases.

Anthropic acknowledged limitations in the attacks it observed, saying Claude sometimes produced fabricated credentials and claimed to have extracted secrets that were actually public. The company has not publicly detailed the evidence linking the campaign to the Chinese government, and the Chinese embassy in the United States denied involvement. The report highlights growing debate over how and when Artificial Intelligence is being applied by attackers and defenders in cyber security.

Impact Score: 65

Global regulatory trends on the use of generative artificial intelligence

Governments in the EU, Japan, the United States, and the United Kingdom are moving quickly to regulate generative artificial intelligence, using a mix of binding laws, guidelines, and standards. Diverging philosophies and timelines are making cross-border compliance planning increasingly complex for companies.

Perplexity launches Computer to orchestrate many Artificial Intelligence models

Perplexity is rolling out Computer, a cloud-based agent that coordinates 19 Artificial Intelligence models for complex workflows, as it pivots toward high-value enterprise users and deep research. The launch underscores a broader bet on multi-model orchestration, custom benchmarks and a boutique business strategy over mass adoption.
