Anthropic has launched Project Glasswing, a cybersecurity initiative aimed at protecting critical software from the risks created by increasingly capable artificial intelligence systems. The effort brings together major companies including AWS, Apple, Nvidia, JPMorgan Chase and Palo Alto Networks. Anthropic positioned the project as a response to the growing ability of advanced models to perform tasks once reserved for humans, such as code generation, a capability that is raising concern across the security industry.
Anthropic said it formed the project after observing Claude Mythos, an unreleased model now available in preview. The company said Claude Mythos demonstrates that AI models have reached a level of coding capability that surpasses humans' ability to find and exploit software vulnerabilities. Anthropic said the model has already found thousands of vulnerabilities in every major operating system and web browser, and warned that if these capabilities spread to bad actors, the consequences could be severe for economies, public safety and national security.
The partners in Project Glasswing will use Claude Mythos Preview as a defensive tool, with Anthropic planning to share lessons from the initiative and extend access to 40 other organizations that build software. The company is also in discussions with U.S. government officials about Claude Mythos Preview and how it could contribute to offensive and defensive cyber capabilities. The move also appears intended to reinforce Anthropic's claim to responsible AI development after it downgraded its Responsible Scaling Policy earlier this year.
Industry observers described the initiative as a mitigation effort rather than a complete solution. Kashyap Kompella, CEO of RPA2AI Research, said restricted release is more responsible than public release for a dual-use capability that could support both offensive and defensive hacking. He said giving defenders early access could help harden foundational systems and establish new norms for model release, vulnerability triage, patch-cycle compression and security benchmarking before cyber-capable models become widespread.
Risks tied to advanced AI models remain unresolved because newer and more capable systems continue to arrive at a rapid pace. As code generation and model autonomy improve, the possibility of misuse persists even as defensive applications expand. For cybersecurity firms, that creates a growing role in validation, prioritization, patch orchestration and compliance translation, even as automated vulnerability discovery increases the number of flaws that must be managed.