Trump executive order targets state Artificial Intelligence laws

Executive Order 14365 lays out a federal strategy to discourage, challenge, and potentially preempt state Artificial Intelligence laws viewed as burdensome. Employers are advised to keep complying with current state and local rules while preparing for regulatory uncertainty in 2026.

On December 11, 2025, President Trump signed Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence.” The order identifies “excessive state regulation” as an obstacle to the Administration’s policy of “sustain[ing] and enhanc[ing] the United States’ global Artificial Intelligence dominance through a minimally burdensome national policy framework for Artificial Intelligence.” The measure does not immediately change existing state laws, but it creates a process intended to discourage, challenge, and potentially preempt state-level regulation.

The order directs multiple federal agencies to take specific actions aimed at state Artificial Intelligence laws. A legislative recommendation for a uniform federal framework was required and was published as the “National Policy Framework for Artificial Intelligence” on March 20, 2026. That seven-part framework includes recommendations on data infrastructure buildout and intellectual property, and it proposes that Congress should preempt state laws that impose undue burdens. On January 9, 2026, the Attorney General formally established an Artificial Intelligence Litigation Task Force through a memorandum, but it does not appear that the task force has initiated litigation regarding any state Artificial Intelligence law.

The Secretary of Commerce was instructed to publish an evaluation of existing state Artificial Intelligence laws no later than March 11, 2026, including laws that require models to alter truthful outputs or compel disclosures that could violate the Constitution. As of this writing, that evaluation has not been made public. The Secretary of Commerce also was to issue a Policy Notice no later than March 11, 2026, linking eligibility for remaining BEAD Program funding to whether states maintain onerous Artificial Intelligence laws, but that notice has not been published. The order also told federal agencies to assess whether discretionary grants can be conditioned on states not enacting conflicting laws or agreeing not to enforce them during grant performance periods.

Additional directives have also not yet been carried out. The Federal Communications Commission chairman was to initiate a proceeding no later than March 11, 2026, on a federal reporting and disclosure standard for Artificial Intelligence models that would preempt conflicting state laws, but no proceeding has been started. The Federal Trade Commission chairman was also to issue a policy statement no later than March 11, 2026, explaining when state laws requiring changes to truthful Artificial Intelligence outputs are preempted by federal prohibitions on deceptive practices, but no statement has been issued.

For employers, the immediate message is continuity mixed with caution. State and local Artificial Intelligence regulation has been expanding, including rules covering hiring, bias, and transparency, and those laws remain in effect for now. If the executive order’s initiatives move forward, employers may face a period of flux and confusion as legal challenges unfold. At the same time, the federal push could eventually produce a single uniform framework that simplifies compliance.

Impact Score: 68

UK delays Artificial Intelligence copyright reform

The UK government has postponed immediate copyright reform for Artificial Intelligence, leaving developers, creatives, and rightsholders to operate under existing law. Licensing, transparency, digital replicas, and future litigation are now set to shape the next phase of policy.

Memory architecture is central to autonomous LLM agents

Memory design, not just model choice, determines whether autonomous agents can sustain context, learn from experience, and stay reliable over time. A practical framework centers on how information is written, managed, and read across multiple memory types.

OpenAI expands cyber model access through trusted program

OpenAI has introduced GPT-5.4-Cyber as a restricted model for cybersecurity professionals, widening access through its Trusted Access for Cyber program. The release highlights both the defensive value and misuse risks of more capable Artificial Intelligence tools in security work.

Chinese tech firms and Fei-Fei Li push world models forward

Chinese tech companies and Fei-Fei Li’s World Labs are accelerating work on world models, a field focused on helping Artificial Intelligence learn from and interact with physical reality. Alibaba’s new Happy Oyster system targets real-time virtual world creation with more continuous user control.
