Artificial intelligence (AI) adoption has moved quickly from experimentation into day-to-day business operations, including marketing, pricing, customer service, employment, health care and creative work. That rapid deployment has outpaced federal rulemaking, leaving companies that build or use AI systems to navigate an unsettled legal environment. No comprehensive federal law governs AI, and state legislatures have stepped in to regulate disclosures, automated decisions, digital replicas, pricing practices and other uses.
State activity has accelerated even as federal policymakers signal opposition to a fragmented approach. Hundreds of AI-related bills were introduced at the state level in 2025, and nearly every state, 44 at last count, has at least one AI law on the books. At the federal level, the main enacted law is the TAKE IT DOWN Act, signed into law May 19, 2025, which targets nonconsensual intimate imagery and requires platforms to remove covered content within 48 hours of notification. A separate proposal for a 10-year moratorium on state AI laws failed, while the Senate passed the DEFIANCE Act, which would let victims sue those who create or distribute nonconsensual intimate deepfakes directly and is now under House consideration.
States are using a mix of broad and targeted measures. Colorado, Utah and Texas have enacted overarching AI laws, with Colorado creating a framework for high-risk systems used in areas such as employment, health care, insurance and housing. The Colorado AI Act was set to take effect Feb. 1, 2026, but has been pushed back to June 30, 2026. Other states have focused on transparency rules for chatbots and synthetic content, opt-out rights and risk assessments for profiling, limits on political deepfakes and nonconsensual explicit content, protections for voice and likeness rights, and disclosures tied to algorithmic pricing and other data-driven decisions. A smaller group is also pursuing rules for frontier models trained with massive computing power.
The Trump administration’s December executive order, titled “Ensuring a National Policy Framework for Artificial Intelligence,” argues for a centralized and minimally burdensome approach and warns that state laws may interfere with interstate commerce or embed ideological bias. The order instructs the attorney general and federal agencies to examine preemption challenges, consider limits on discretionary funding to states whose laws conflict with federal policy, and propose a uniform legislative framework. Still, it does not erase state statutes, and state law remains the main source of compliance obligations for businesses.
Regulatory uncertainty also extends to Europe. The EU AI Act entered into force in August 2024, with its remaining obligations scheduled to apply by August 2026, but on Nov. 19, 2025, the European Commission introduced a Digital Omnibus package intended to reduce compliance costs and support innovation. Proposed changes include easing some high-risk requirements, delaying certain rules due in August 2026, simplifying obligations for smaller businesses and adding a six-month grace period for some transparency and marking duties. As 2026 unfolds, policymakers are expected to focus on disclosures, automated decision-making, frontier models, chatbot rules involving minors, mental health uses in health care and sector-specific regulation. Businesses, meanwhile, are advised to keep state compliance programs in place rather than wait for a stable federal framework.
