Artificial Intelligence firm claims Chinese spies used its tech to automate cyber attacks

Anthropic says hackers posing as security researchers used its Claude chatbot to run an automated espionage campaign against roughly 30 organisations, a claim some cyber experts say lacks verifiable evidence.

Anthropic, the maker of the Claude chatbot, says it discovered in mid-September that hackers posing as legitimate cyber security researchers had been using its product to carry out automated attacks. The company published a blog post calling the operation the “first reported Artificial Intelligence-orchestrated cyber espionage campaign” and said the attackers used small, chained tasks given to Claude to build a program that could autonomously compromise targets and extract sensitive information.

Researchers at Anthropic said they had “high confidence” that the individuals behind the campaign were a Chinese state-sponsored group and that human operators selected the targets, which included large tech firms, financial institutions, chemical manufacturers and government agencies. Anthropic said it had banned the accounts involved, notified affected organisations and alerted law enforcement. The company also argued that the same capabilities that enabled the abuse make Artificial Intelligence useful for defence.

The announcement has drawn scepticism from parts of the cyber security industry. Martin Zugec of Bitdefender said Anthropic’s report made “bold, speculative claims” without supplying verifiable threat intelligence. Similar claims have surfaced before: OpenAI, in collaboration with Microsoft, previously described state-affiliated actors using AI tools for research and basic coding tasks. A November research paper from Google likewise found that threat actors were experimenting with AI, but that such tools were not yet highly successful and remained in testing phases.

Anthropic acknowledged limitations in the attacks it observed, saying Claude sometimes produced fabricated credentials and claimed to have extracted secrets that were actually public. The company has not publicly detailed the evidence linking the campaign to the Chinese government, and the Chinese embassy in the United States denied involvement. The report highlights growing debate over how and when Artificial Intelligence is being applied by attackers and defenders in cyber security.

Impact Score: 65

Anthropic launches Claude Mythos for Project Glasswing

Anthropic has introduced Claude Mythos Preview, a new frontier Artificial Intelligence model positioned as a major advance in cybersecurity capability. The model is being used to power Project Glasswing, a coalition effort to secure critical software before similar capabilities spread more widely.

Artificial Intelligence speeds quantum encryption threat timeline

Research from Google and Oratomic suggests quantum computers capable of breaking core internet encryption may arrive sooner than expected. Artificial Intelligence played a key role in improving one of the new algorithms, raising fresh urgency around post-quantum security.

New methods aim to improve Large Language Model reasoning

A new study on arXiv outlines algorithmic techniques designed to strengthen Large Language Model reasoning and reduce hallucinations. The work reports better logical consistency and stronger performance on mathematical and coding benchmarks.

Nvidia acquisition of SchedMD raises Slurm neutrality concerns

Nvidia’s purchase of SchedMD has given it control of Slurm, an open-source scheduler that sits at the centre of many supercomputing and large-model training systems. Researchers and engineers are watching for signs that support could tilt toward Nvidia hardware over AMD and Intel alternatives.
