White House tempers talk of stricter Artificial Intelligence vetting

The White House is trying to calm industry concerns after comments suggested advanced Artificial Intelligence models could face government review before public release. Officials are now emphasizing partnership with companies over formal regulation, even as internal discussions continue.

Senior White House officials are trying to calm industry concerns that the administration could require tech companies to submit their advanced Artificial Intelligence models for federal vetting before releasing them to the public. A day after Kevin Hassett publicly confirmed that such a review was under discussion and compared it to the Food and Drug Administration’s testing of prescription drugs, other aides signaled that the administration has not settled on a hard regulatory approach.

Internal messaging points to a divide over how far the government should go. One senior White House official said “there’s one or two people who are very intent on government regulations,” but described them as a minority. That same official said Hassett’s remarks were “taken out of context a little bit” and said the White House is looking for “partnership” with companies rather than pursuing “government regulation.” Susie Wiles reinforced that position publicly, saying the government is “not in the business of picking winners and losers” and that the administration wants innovators, not bureaucracy, to drive the safe deployment of powerful technologies.

The debate is unfolding as the administration prepares an executive order meant to address how powerful new models could be misused for cyberattacks or bioweapons development. According to three people familiar with the plans, the White House is also discussing using the intelligence community to pre-assess models and help secure systems before widespread release. One U.S. government official said part of the goal of any government pre-release coordination is to ensure that the intelligence community can study and exploit the tools before adversaries such as Russia and China know of the new capabilities. Defense Undersecretary Emil Michael appeared to back that idea, framing the issue as part of a broader cybersecurity response.

Industry resistance remains strong, especially around any system that could delay or block market access. Daniel Castro of the Information Technology and Innovation Foundation warned that if approval can be withheld before launch, it could have major competitive consequences. Existing voluntary federal safety-testing arrangements have been in place for several years, including reviews through the Commerce Department’s Center for AI Standards and Innovation, which recently signed additional agreements with Google DeepMind, xAI, and Microsoft.

The push for tougher safeguards has been sharpened by the emergence of highly capable cyber models. Anthropic recently limited access to Mythos after saying the system was so effective at hacking that it could not be released to the general public, while OpenAI announced limited previews of GPT-5.5-Cyber. The administration’s response also reflects a broader shift for President Donald Trump, who entered office promising to reduce Artificial Intelligence regulation but is now confronting pressure to act quickly as more powerful systems emerge.

Impact Score: 74

Intel reportedly reaches preliminary chip deal with Apple

Apple and Intel have reportedly reached a preliminary agreement for Intel to manufacture some chips for Apple devices. The deal follows more than a year of talks and comes as Intel pushes to revive its foundry business with support from the Trump administration.

Marine Corps mandates basic Artificial Intelligence course

The Marine Corps has ordered all Active Duty and Reserve Marines to complete a foundational Artificial Intelligence course as part of a broader effort to build force-wide literacy in emerging technologies. The training is designed to improve awareness, support ethical use, and prepare Marines for an increasingly Artificial Intelligence-enabled operating environment.

Artificial Intelligence reshapes the UK entry-level jobs market

The spread of Artificial Intelligence is reducing demand for some junior roles while increasing pressure on employers to build digital skills. Business groups warn that rising costs and automation could deepen youth unemployment and skills shortages across the United Kingdom.

How Google made Gemma faster with speculative decoding

Google introduced Multi-Token Prediction drafters for Gemma 4 to accelerate inference through speculative decoding. The approach speeds token generation by pairing the main model with a smaller drafter that shares context and verifies multiple guesses in parallel.
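The core speculative-decoding loop described above can be illustrated with a toy sketch: a cheap drafter proposes several tokens ahead, and the main model verifies the whole run, keeping the longest agreeing prefix. This is a generic illustration of the technique, not Google's Multi-Token Prediction implementation; the `main_model`/`draft_model` callables here are trivial stand-ins for real networks.

```python
def speculative_decode(main_model, draft_model, prompt, num_draft=4, max_new=12):
    """Generate tokens, accepting drafter guesses the main model agrees with.

    `main_model` and `draft_model` are next-token functions (toy stand-ins
    for neural models): they take a token list and return the next token.
    """
    tokens = list(prompt)
    while len(tokens) < len(prompt) + max_new:
        # 1. Drafter proposes num_draft tokens autoregressively (cheap).
        draft = []
        ctx = tokens[:]
        for _ in range(num_draft):
            t = draft_model(ctx)
            draft.append(t)
            ctx.append(t)
        # 2. Main model checks each drafted position (a single parallel
        #    forward pass in a real system; a loop with toy functions).
        accepted = []
        ctx = tokens[:]
        for guess in draft:
            target = main_model(ctx)
            if target != guess:
                # First mismatch: keep the main model's token and stop.
                accepted.append(target)
                break
            accepted.append(guess)
            ctx.append(guess)
        else:
            # All guesses verified: the main model adds one bonus token,
            # so a full round yields num_draft + 1 tokens per main pass.
            accepted.append(main_model(ctx))
        tokens.extend(accepted)
    return tokens[:len(prompt) + max_new]


# When the drafter agrees with the main model, each main-model pass
# yields several tokens; when it always disagrees, progress falls back
# to one token per pass — the worst case matches ordinary decoding.
count_up = lambda ctx: ctx[-1] + 1
bad_draft = lambda ctx: 0
fast = speculative_decode(count_up, count_up, [0], num_draft=4, max_new=8)
slow = speculative_decode(count_up, bad_draft, [0], num_draft=4, max_new=8)
```

The speedup comes from the verification step: checking `num_draft` drafted positions costs one main-model forward pass, so agreement between drafter and main model converts many sequential steps into one.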
