The Trump administration moved from an aggressively deregulatory posture on artificial intelligence (AI) to studying pre-release government vetting of frontier models. In January 2025, Executive Order 14179 revoked the prior AI safety order, and by July 2025, America’s AI Action Plan framed deregulation as the fastest route to leadership. Less than a year later, the White House was examining a model review process likened to FDA approval, while NIST’s Center for AI Standards and Innovation (CAISI) signed pre-deployment testing agreements with Google DeepMind, Microsoft, and xAI. CAISI has already completed more than 40 evaluations, including assessments of unreleased models.
The shift followed mounting evidence that advanced models can carry out offensive cyber tasks with growing autonomy. UK testing found Anthropic’s Claude Mythos Preview and OpenAI’s GPT-5.5 capable of executing complex attack activity: Mythos completed a 32-step corporate network attack simulation in 3 of 10 attempts, and GPT-5.5 scored 90.5% on expert-level cyber tasks. The concern extends beyond the largest systems. In AISLE’s research, eight out of eight tested models identified the same FreeBSD vulnerability, including models as small as 3.6 billion parameters running at ?.11 per million tokens. That has sharpened a core policy problem: frontier testing may address the most advanced systems, but offensive capability is spreading across smaller and cheaper models as well.
The case for intervention is tied to broader weaknesses in software security. GitHub is on pace for 14 billion commits in 2026, a 14x year-over-year increase driven by AI coding agents. FIRST projects approximately 59,000 new CVEs this year, while NIST has reclassified roughly 29,000 backlogged CVEs into a “Not Scheduled” category. The underlying argument is that software security remains a market failure: incentives favor speed and feature delivery over resilience. In that environment, frontier models that can accelerate vulnerability discovery and exploitation make voluntary self-regulation look increasingly inadequate.
At the same time, the European Union is cited as a warning against expansive compliance regimes. The EU’s stack of digital rules includes fines of up to 7% of global revenue. The AI Act alone carries compliance costs exceeding €50,000 per high-risk system, with organizations reporting a 40% increase in overall compliance burden. EU AI-startup venture capital funding fell by roughly 15% in 2024, and small and mid-size European tech companies face annual revenue-at-risk of ?,000 to ?,000 per firm from regulatory-driven delays. By 2026, the IMF projects U.S. GDP at ?.8 trillion and EU GDP at ?.5 trillion, a gap that has widened to 41%.
China further complicates the calculus. An April 2026 OSTP memo said Chinese firms including DeepSeek, Moonshot AI, and MiniMax were conducting large-scale distillation efforts against American models. DeepSeek’s R1 reportedly reached capabilities comparable to OpenAI’s o1 at a training cost of ? million, and the memo cited over 24,000 fraudulent accounts generating more than 16 million interactions with Claude. The emerging U.S. approach through CAISI is narrower than the EU model, centered on cyber, biological, and chemical national security risks. The unresolved question is whether that targeted regime can remain limited, or whether it will expand into a broader compliance system that slows releases without delivering proportional security gains.
