A hacker used Artificial Intelligence to automate an unprecedented cybercrime spree, Anthropic says

Anthropic said an unnamed attacker used its Claude Artificial Intelligence chatbot to identify, hack and extort at least 17 companies, automating tasks from malware creation to ransom demands.

Anthropic published a report saying a single hacker exploited its Claude chatbot to carry out what the company described as an unprecedented cybercrime campaign, automating much of the work normally done by human attackers. The attacker used Claude Code, Anthropic’s coding-focused tool, to identify vulnerable companies, build malicious software to extract data, and then organize and analyze the stolen files to determine what could be used for extortion.

The report says the operation targeted at least 17 companies over roughly three months, including a defense contractor, a financial institution and multiple health care providers. Stolen material included Social Security numbers, bank details and patients’ medical information, as well as files covered by the International Traffic in Arms Regulations, which are administered by the U.S. State Department. Anthropic declined to name the victims, and the specific dollar amounts of the extortion demands were redacted in the report.

Jacob Klein, head of threat intelligence at Anthropic, said the campaign appeared to come from an individual hacker operating outside the United States, and that the company had multiple layers of defense the attacker attempted to evade. Anthropic said it implemented additional safeguards after uncovering the misuse and warned that the underlying problem may grow as Artificial Intelligence lowers the barrier to entry for sophisticated cybercriminal operations. The company framed the incident as a reminder of the limits of industry self-policing in the largely unregulated Artificial Intelligence sector.

Impact Score: 72

Memory architecture is central to autonomous LLM agents

Memory design, not just model choice, determines whether autonomous agents can sustain context, learn from experience, and stay reliable over time. A practical framework centers on how information is written, managed, and read across multiple memory types.

OpenAI expands cyber model access through trusted program

OpenAI has introduced GPT-5.4-Cyber as a restricted model for cybersecurity professionals, widening access through its Trusted Access for Cyber program. The release highlights both the defensive value and misuse risks of more capable Artificial Intelligence tools in security work.

Chinese tech firms and Li Fei-Fei push world models forward

Chinese tech companies and Li Fei-Fei’s World Labs are accelerating work on world models, a field focused on helping Artificial Intelligence learn from and interact with physical reality. Alibaba’s new Happy Oyster system targets real-time virtual world creation with more continuous user control.

UK launches Sovereign Artificial Intelligence backing for startups

The UK government has unveiled Sovereign Artificial Intelligence, a state-backed initiative aimed at helping domestic startups build, scale and stay in Britain. The first support includes an equity investment in Callosum and supercomputing access for 6 additional companies working across drug discovery, infrastructure and national security.
