States advance Artificial Intelligence laws as federal policy stays limited

State governments are moving ahead with a broad range of Artificial Intelligence rules while Washington remains focused on a narrow set of harms. A new executive order favors a lighter federal approach, but state laws still define most near-term compliance obligations.

Artificial Intelligence adoption has moved quickly from experimentation into day-to-day business operations, including marketing, pricing, customer service, employment, health care and creative work. That rapid deployment has outpaced federal rulemaking, leaving companies that build or use Artificial Intelligence systems to navigate an unsettled legal environment. No comprehensive federal law governs Artificial Intelligence, and state legislatures have stepped in to regulate disclosures, automated decisions, digital replicas, pricing practices and other uses.

State activity has accelerated even as federal policymakers signal opposition to a fragmented approach. Hundreds of Artificial Intelligence-related bills were proposed at the state level in 2025, and nearly every state, 44 at last count, has at least one Artificial Intelligence law on the books. At the federal level, the main enacted law is the TAKE IT DOWN Act, signed into law May 19, 2025, which targets nonconsensual intimate imagery and requires platforms to remove covered content within 48 hours of notification. A separate proposal for a 10-year moratorium on state Artificial Intelligence laws failed, while the Senate passed the DEFIANCE Act, which would let victims sue providers directly and is now under House consideration.

States are using a mix of broad and targeted measures. Colorado, Utah and Texas enacted overarching Artificial Intelligence laws, with Colorado creating a framework for high-risk systems used in areas such as employment, health care, insurance and housing. The Colorado AI Act was set to take effect Feb. 1 but has been pushed back to June 30. Other states have focused on transparency rules for chatbots and synthetic content, opt-out rights and risk assessments for profiling, limits on political deepfakes and nonconsensual explicit content, protections for voice and likeness rights, and disclosures tied to algorithmic pricing and other data-driven decisions. A smaller group is also pursuing rules for frontier models trained with massive computing power.

The Trump administration’s December executive order, titled “Ensuring a National Policy Framework for Artificial Intelligence,” argues for a centralized and minimally burdensome approach and warns that state laws may interfere with interstate commerce or embed ideological bias. The order instructs the attorney general and federal agencies to examine preemption challenges, consider limits on discretionary funding to states whose laws conflict with federal policy, and propose a uniform legislative framework. Still, it does not erase state statutes, and state law remains the main source of compliance obligations for businesses.

Regulatory uncertainty also extends to Europe. The EU AI Act launched in August 2024 with a target for final implementation in August 2026, but on Nov. 19, 2025, the European Commission introduced a Digital Omnibus package to reduce costs and support innovation. Proposed changes include easing some high-risk requirements, delaying certain rules due in August, simplifying obligations for smaller businesses and adding a six-month grace period for some transparency and marking duties. As 2026 unfolds, policymakers are expected to focus on disclosures, automated decision-making, frontier models, chatbot rules involving minors, mental health uses in health care and sector-specific regulation, while businesses are advised to keep state compliance programs in place rather than wait for a stable federal framework.

68

Impact Score

Self-adaptive framework extracts earthquake data from web pages

A self-adaptive large language model framework is designed to extract and structure earthquake information from heterogeneous web sources by generating, validating, and reusing extraction schemas. In controlled tests, GPT_OSS delivered the strongest extraction quality, while selector errors were concentrated in wrong element selection and missing content.

Study finds widespread weaknesses in autonomous agents

A multi-institution study found that autonomous agents across several sectors are highly exposed to tool-chaining, goal drift, and memory poisoning attacks. The findings suggest agentic systems face broader and deeper security risks than stateless large language models.

Federal safety net unprepared for Artificial Intelligence job losses

Economists are warning that the federal system designed to support displaced workers is not equipped for a wave of job losses tied to Artificial Intelligence. Existing unemployment benefits and retraining programs are widely seen as too limited to manage broad disruption.

Contact Us

Got questions? Use the form to contact us.

Contact Form

Clicking next sends a verification code to your email. After verifying, you can enter your message.