United States weighs frontier Artificial Intelligence testing after deregulatory push

The White House is reconsidering its deregulatory stance on frontier Artificial Intelligence as cyber and national security risks become harder to ignore. A more targeted testing regime is emerging, but the balance between security oversight and innovation remains unsettled.

The Trump administration moved from an aggressively deregulatory posture on Artificial Intelligence to studying pre-release government vetting of frontier models. In January 2025, Executive Order 14179 revoked the prior Artificial Intelligence safety order, and by July 2025, America’s Artificial Intelligence Action Plan framed deregulation as the fastest route to leadership. Less than a year later, the White House was examining a model review process likened to FDA approval, while NIST’s Center for Artificial Intelligence Standards and Innovation (CAISI) signed pre-deployment testing agreements with Google DeepMind, Microsoft, and xAI. CAISI has already completed more than 40 evaluations, including assessments of unreleased models.

The shift followed mounting evidence that advanced models can carry out offensive cyber tasks with growing autonomy. UK testing found Anthropic’s Claude Mythos Preview and OpenAI’s GPT-5.5 capable of executing complex attack activity: Mythos completed a 32-step corporate network attack simulation in 3 out of 10 attempts, and GPT-5.5 scored 90.5% on expert-level cyber tasks. The concern extends beyond the largest systems. AISLE’s research found that eight out of eight models tested identified the same FreeBSD vulnerability, including models as small as 3.6 billion parameters running at $0.11 per million tokens. That has sharpened a core policy problem: frontier testing may address the most advanced systems, but offensive capability is spreading across smaller and cheaper models as well.

The case for intervention is tied to broader weaknesses in software security. GitHub is on pace for 14 billion commits in 2026, a 14x year-over-year increase driven by Artificial Intelligence coding agents. FIRST projects approximately 59,000 new CVEs this year, while NIST reclassified approximately 29,000 backlogged CVEs into a “Not Scheduled” category. The argument is that software security remains a market failure, with incentives favoring speed and feature delivery over resilience. In that environment, frontier models that can accelerate vulnerability discovery and exploitation make voluntary self-regulation look increasingly inadequate.

At the same time, skeptics of stricter oversight point to the European Union as a warning against expansive compliance regimes. The EU’s stack of digital rules includes fines of up to 7% of global revenue, and the Artificial Intelligence Act alone carries compliance costs that exceed €50,000 per high-risk system, with organizations reporting a 40% increase in overall compliance burden. Venture capital funding for EU Artificial Intelligence startups fell by roughly 15% in 2024, and small and mid-size European tech companies face annual revenue-at-risk of ?,000 to ?,000 per firm from regulatory-driven delays. By 2026, the IMF projects U.S. GDP at ?.8 trillion and EU GDP at ?.5 trillion, a gap that has widened to 41%.

China further complicates the calculus. An April 2026 OSTP memo said Chinese firms including DeepSeek, Moonshot AI, and MiniMax were conducting large-scale distillation efforts against American models. DeepSeek’s R1 reportedly reached capabilities comparable to OpenAI’s o1 at a training cost of ? million, and the memo cited over 24,000 fraudulent accounts generating more than 16 million interactions with Claude. The emerging U.S. approach through CAISI is narrower than the EU model, centered on cyber, biological, and chemical national security risks. The unresolved question is whether that targeted regime can remain limited, or whether it will expand into a broader compliance system that slows releases without delivering proportional security gains.

Impact Score: 74

Musk and Altman trial could reshape Artificial Intelligence governance

Elon Musk’s lawsuit against OpenAI, Sam Altman, Greg Brockman and Microsoft has opened a high-stakes fight over whether a mission-driven Artificial Intelligence lab can lawfully turn into a commercial powerhouse. The case could affect OpenAI’s structure, leadership and future fundraising while setting a precedent for peers such as Anthropic.

Anthropic turns to SpaceX for Artificial Intelligence compute

Anthropic is renting compute from a giant SpaceX data center as demand for Claude and related services strains existing capacity. The move underscores how competition in Artificial Intelligence is increasingly shaped by access to infrastructure, not just model quality.

Intel reportedly reaches preliminary chip deal with Apple

Apple and Intel have reportedly reached a preliminary agreement for Intel to manufacture some chips for Apple devices. The deal follows more than a year of talks and comes as Intel pushes to revive its foundry business with support from the Trump administration.

White House tempers talk of stricter Artificial Intelligence vetting

The White House is trying to calm industry concerns after comments suggested advanced Artificial Intelligence models could face government review before public release. Officials are now emphasizing partnership with companies over formal regulation, even as internal discussions continue.
