Newsom orders California to weigh AI harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential AI harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration over AI oversight.

Gov. Gavin Newsom signed an executive order directing California to review whether companies flagged by the federal government as supply-chain risks should remain eligible for state business. The order came after the Department of Defense last month labeled San Francisco-based AI company Anthropic a supply chain risk during a dispute over contract terms that barred the military from using Anthropic's systems for domestic mass surveillance and fully autonomous weaponry. A judge recently issued a temporary injunction blocking that designation.

The order is designed to place guardrails on state use of AI while also pushing agencies to adopt the technology more quickly. It requires agencies to develop recommendations for contract standards covering AI systems that could generate child sexual abuse material, violate civil liberties and civil rights laws, or infringe legal protections against unlawful discrimination, detention, and surveillance. Agencies must also help employees gain access to vetted generative AI tools, update the State Digital Strategy, develop generative AI tools that help Californians access government services, and issue guidance on watermarking AI-generated images and video.

Those steps arrive as more than 20 California departments and agencies work to develop or deploy Poppy, a generative AI assistant for state employees, and as half a dozen state agencies test AI for tasks that include supporting state workers and assisting homeless people and businesses. The order also comes as state courts and city governments expand their own use of the technology.

Newsom’s office framed the measure as a contrast with President Donald Trump and Republicans in Washington, D.C., saying federal officials have rolled back protections or ignored ways AI can harm people. At the federal level, Trump has signed executive orders discouraging states from regulating AI and has urged agencies to use it to reduce federal regulation and speed Medicare-related decisions. The White House released an AI policy framework last month that takes a light-touch approach and does not address bias, discrimination, or civil rights issues.

This is Newsom’s second executive order focused on artificial intelligence. A 2023 order centered on generative AI and similarly told state agencies to increase adoption while putting safeguards in place. Newsom’s approach is drawing attention from labor leaders, who in February said they would not back a presidential run without stronger worker protections, and from major tech donors spending heavily on California politics ahead of this fall’s midterm elections.

Impact Score: 64

Google launches Gemma 4 open model family

Google has introduced Gemma 4, a new family of open-weight AI models focused on advanced reasoning and multimodal capabilities. The release expands the Gemma line with broader deployment options, stronger performance claims, and a more permissive open source license.

PrismML launches 1-bit large language model family

PrismML has emerged from stealth with a $16.25 million seed round and an open source release of its 1-bit Bonsai large language models. The startup says the models sharply cut memory use and energy consumption while aiming to preserve performance on standard benchmarks.

FDA shifts its breakthrough standard for clinical AI

The Food and Drug Administration appears to be raising the bar for what qualifies as a breakthrough clinical AI device. Priority is increasingly going to systems that address broad, complex medical problems rather than tools that simply improve physicians’ existing capabilities.

Mercor links cyberattack to LiteLLM compromise

Mercor said a cyberattack was tied to the compromise of LiteLLM, prompting wider discussion about supply chain risk and the limits of compliance programs. The incident also led LiteLLM to change its compliance processes and move from Delve to Vanta for compliance certifications.
