Senior White House officials are trying to calm industry concerns that the administration could require tech companies to submit their advanced artificial intelligence models for federal vetting before releasing them to the public. A day after National Economic Council Director Kevin Hassett publicly confirmed that such a review was under discussion, comparing it to the Food and Drug Administration’s testing of prescription drugs, other aides signaled that the administration has not settled on a hard regulatory approach.
Internal messaging points to a divide over how far the government should go. One senior White House official said “there’s one or two people who are very intent on government regulations,” but described them as a minority. That same official said Hassett’s remarks were “taken out of context a little bit” and said the White House is seeking “partnership” with companies rather than pursuing “government regulation.” White House Chief of Staff Susie Wiles reinforced that position publicly, saying the government is “not in the business of picking winners and losers” and that the administration wants innovators, not bureaucracy, to drive the safe deployment of powerful technologies.
The debate is unfolding as the administration prepares an executive order meant to address how powerful new models could be misused for cyberattacks or bioweapons development. According to three people familiar with the plans, the White House is also discussing using the intelligence community to pre-assess models and help secure systems before widespread release. One U.S. government official said part of the goal of any pre-release coordination is to ensure that the intelligence community can study and exploit the tools before adversaries such as Russia and China learn of the new capabilities. Defense Undersecretary Emil Michael appeared to back that idea, framing the issue as part of a broader cybersecurity response.
Industry resistance remains strong, especially to any system that could delay or block market access. Daniel Castro of the Information Technology and Innovation Foundation warned that a regime in which approval can be withheld before launch could have major competitive consequences. Voluntary federal safety-testing arrangements have existed for several years, including reviews through the Commerce Department’s Center for AI Standards and Innovation, which recently signed additional agreements with Google DeepMind, xAI, and Microsoft.
The push for tougher safeguards has been sharpened by the emergence of highly capable cyber models. Anthropic recently limited access to Mythos after saying the system was so effective at hacking that it could not be released to the general public, while OpenAI announced limited previews of GPT-5.5-Cyber. The administration’s response also reflects a broader shift for President Donald Trump, who entered office promising to reduce artificial intelligence regulation but is now confronting pressure to act quickly as more powerful systems emerge.
