States advance Artificial Intelligence laws as federal policy stays limited

State governments are moving ahead with a broad range of Artificial Intelligence rules while Washington remains focused on a narrow set of harms. A new executive order favors a lighter federal approach, but state laws still define most near-term compliance obligations.

Artificial Intelligence adoption has moved quickly from experimentation into day-to-day business operations, including marketing, pricing, customer service, employment, health care and creative work. That rapid deployment has outpaced federal rulemaking, leaving companies that build or use Artificial Intelligence systems to navigate an unsettled legal environment. No comprehensive federal law governs Artificial Intelligence, and state legislatures have stepped in to regulate disclosures, automated decisions, digital replicas, pricing practices and other uses.

State activity has accelerated even as federal policymakers signal opposition to a fragmented approach. Hundreds of Artificial Intelligence-related bills were proposed at the state level in 2025, and 44 states at last count have at least one Artificial Intelligence law on the books. At the federal level, the main enacted law is the TAKE IT DOWN Act, signed into law May 19, 2025, which targets nonconsensual intimate imagery and requires platforms to remove covered content within 48 hours of notification. A separate proposal for a 10-year moratorium on state Artificial Intelligence laws failed, while the Senate passed the DEFIANCE Act, which would let victims sue providers directly and is now under House consideration.

States are using a mix of broad and targeted measures. Colorado, Utah and Texas enacted overarching Artificial Intelligence laws, with Colorado creating a framework for high-risk systems used in areas such as employment, health care, insurance and housing. The Colorado AI Act was set to take effect Feb. 1, 2026, but has been pushed back to June 30, 2026. Other states have focused on transparency rules for chatbots and synthetic content, opt-out rights and risk assessments for profiling, limits on political deepfakes and nonconsensual explicit content, protections for voice and likeness rights, and disclosures tied to algorithmic pricing and other data-driven decisions. A smaller group is also pursuing rules for frontier models trained with massive computing power.

The Trump administration’s December executive order, titled “Ensuring a National Policy Framework for Artificial Intelligence,” argues for a centralized and minimally burdensome approach and warns that state laws may interfere with interstate commerce or embed ideological bias. The order instructs the attorney general and federal agencies to examine preemption challenges, consider limits on discretionary funding to states whose laws conflict with federal policy, and propose a uniform legislative framework. Still, it does not erase state statutes, and state law remains the main source of compliance obligations for businesses.

Regulatory uncertainty also extends to Europe. The EU AI Act entered into force in August 2024, with most obligations set to apply by August 2026, but on Nov. 19, 2025, the European Commission introduced a Digital Omnibus package to reduce costs and support innovation. Proposed changes include easing some high-risk requirements, delaying certain rules due in August 2026, simplifying obligations for smaller businesses and adding a six-month grace period for some transparency and marking duties. As 2026 unfolds, policymakers are expected to focus on disclosures, automated decision-making, frontier models, chatbot rules involving minors, mental health uses in health care and sector-specific regulation, while businesses are advised to keep state compliance programs in place rather than wait for a stable federal framework.

EU Artificial Intelligence Act prohibited practices overview

A LexisNexis practice note examines Article 5 of the EU Artificial Intelligence Act and the practices banned for posing unacceptable risks to EU values and fundamental rights. It also addresses enforcement, liability, and contractual considerations.

Artificial Intelligence adoption outpaces governance, Gallagher says

Gallagher says businesses are expanding Artificial Intelligence training and hiring as the technology moves into everyday operations, but many still lack formal risk controls. The gap is creating new concerns for insurers, brokers and risk consultants as regulation and liability exposures evolve.

Arm moves into chip production with new data center CPU

Arm is moving beyond licensing and into chip production with a new data center processor aimed at Artificial Intelligence workloads. Meta Platforms will be the lead partner as Arm targets a much larger revenue opportunity in data center infrastructure.

Artificial Intelligence could restore competition in the US economy

Artificial Intelligence is emerging as a threat to entrenched business models, but it may also revive competition in an economy that has grown increasingly concentrated. Lower barriers to entry and heavier capital investment could boost productivity, wages, and long-term growth if policymakers resist consolidation.