Anthropic launches Project Glasswing for cyber defense

Anthropic has introduced Project Glasswing to address mounting cybersecurity risks tied to increasingly capable Artificial Intelligence models. The initiative brings major technology and finance companies together to use Claude Mythos Preview as a defensive tool for critical software.

Anthropic has launched Project Glasswing as a cybersecurity initiative aimed at protecting critical software from the risks created by increasingly capable Artificial Intelligence systems. The effort brings together major companies including AWS, Apple, Nvidia, JPMorgan Chase, and Palo Alto Networks. Anthropic positioned the project as a response to the growing ability of advanced models to perform human tasks such as code generation, a capability that is raising concern across the security industry.

Anthropic said it formed the project after observing Claude Mythos, a previously unreleased model that is now in preview. The company said Claude Mythos demonstrates that Artificial Intelligence models have reached a level of coding capability at which they can find and exploit software vulnerabilities more effectively than human researchers. Anthropic said Claude Mythos has already found thousands of vulnerabilities in every major operating system and web browser, and warned that if these capabilities spread to bad actors, the consequences could be severe for economies, public safety and national security.

The partners in Project Glasswing will use Claude Mythos Preview as a defense mechanism, with Anthropic planning to share lessons from the initiative and extend access to 40 other organizations that build software. The company is also in discussions with U.S. government officials about Claude Mythos Preview and how it could contribute to offensive and defensive cyber capabilities. The move also appears intended to reinforce Anthropic's claim to responsible Artificial Intelligence development after it downgraded its Responsible Scaling Policy earlier this year.

Industry observers described the initiative as a mitigation effort rather than a complete solution. Kashyap Kompella, CEO of RPA2AI Research, said restricted release is more responsible than public release for a dual-use capability that could support both offensive and defensive hacking. He said giving defenders early access could help harden foundational systems and establish new norms for model release, vulnerability triage, patch-cycle compression and security benchmarking before cyber-capable models become widespread.

Risks tied to advanced Artificial Intelligence models remain unresolved because newer and more capable systems continue to arrive at a rapid pace. As code generation and model autonomy improve, the possibility of misuse persists even as defensive applications expand. For cybersecurity firms, that creates a growing role in validation, prioritization, patch orchestration and compliance translation, even if automated vulnerability discovery increases the number of flaws that must be managed.

Impact Score: 74

Intel and SambaNova pitch modular inference architecture

Intel and SambaNova are positioning a mixed-hardware inference design as an alternative to GPU-only deployments. The approach splits prefill, decode, and orchestration across different processors for demanding Artificial Intelligence agent workloads.

Global Artificial Intelligence governance pulls back

A broad pullback in Artificial Intelligence regulation is taking shape across Colorado, the European Union, Canada, the United Kingdom, and the United States. The shift reflects implementation gaps, competitive pressure, and resistance to heavy compliance burdens rather than the end of governance efforts.

Anthropic launches Claude Mythos for Project Glasswing

Anthropic has introduced Claude Mythos Preview, a new frontier Artificial Intelligence model positioned as a major advance in cybersecurity capability. The model is being used to power Project Glasswing, a coalition effort to secure critical software before similar capabilities spread more widely.
