White House sets out federal Artificial Intelligence framework for employers

The White House’s National Artificial Intelligence Legislative Framework outlines a federal policy agenda but does not create immediate legal obligations for employers. For now, businesses still need to comply with the growing patchwork of state and local Artificial Intelligence rules.

On March 20, 2026, the White House published a four-page set of “Legislative Recommendations” as part of its National Policy Framework for Artificial Intelligence. The framework does not include specific draft legislation or an executive order, and it is not legally binding on Congress or private sector companies. Instead, it lays out the administration’s vision for a comprehensive federal Artificial Intelligence legislative package and signals support for a unified national approach.

For employers, the immediate practical point is that there is no change in the law. Unless and until Congress enacts federal legislation with preemptive effect, state and local Artificial Intelligence laws remain fully in force. That leaves employers operating in a multi-jurisdictional compliance environment as jurisdictions including California, Colorado, Illinois, and New York City continue regulating the use of Artificial Intelligence in hiring, promotion, performance management, and other employment decisions.

The framework builds on earlier Trump administration actions, including Executive Order 14179, the July 2025 Artificial Intelligence Action Plan, and Executive Order 14365. Across those steps, a central theme has been resistance to what the administration views as burdensome state-level regulation. The framework advances eight policy areas for federal legislation designed to support innovation, preserve U.S. leadership in Artificial Intelligence, and preempt restrictive state laws while still allowing carve-outs for traditional state authority in areas such as consumer protection, fraud, child protection, zoning, and government use of Artificial Intelligence.

Key recommendations cover children’s safety, infrastructure, intellectual property, free speech, innovation policy, and workforce development. The framework calls for stronger protections for children using Artificial Intelligence services, support for law enforcement efforts against Artificial Intelligence-enabled fraud and impersonation scams, and measures to ensure that the construction and operation of Artificial Intelligence data centers do not drive up residential electricity costs. It also backs regulatory sandboxes, broader access to federal datasets in Artificial Intelligence-ready formats, and reliance on existing sector-specific regulators and industry-led standards rather than a new federal Artificial Intelligence regulator.

On intellectual property, the framework states that training Artificial Intelligence models on copyrighted material does not constitute copyright infringement, while recommending that courts resolve the fair use question and advising against legislative intervention for now. It also suggests exploring a collective licensing or rights-management framework and calls for federal protections against unauthorized Artificial Intelligence-generated digital replicas that are consistent with the First Amendment.

For businesses that develop, contract for, or deploy Artificial Intelligence tools, the message is to keep building structured yet flexible governance programs. The legislative outlook remains uncertain in an election year, so companies should monitor both congressional developments and court decisions while preparing for a legal environment that continues to shift across jurisdictions.

Impact Score: 55

Chrome downloads Gemini Nano model locally without clear consent

Google Chrome is reported to download a 4 GB Gemini Nano model onto some PCs automatically when certain Artificial Intelligence features are active. The process happens without clear notice in browser settings and can repeat after the model is deleted.

AMD plans specialized EPYC CPUs for Artificial Intelligence, HPC, and cloud

AMD is preparing a broader EPYC strategy with task-specific server CPUs aimed at agentic Artificial Intelligence, HPC, training and inference, and cloud deployments. The shift starts with the Zen 6 generation and adds Verano as an Artificial Intelligence-focused variant within the same EPYC family.

Nvidia expands Spectrum-X Ethernet with open MRC protocol

Nvidia is positioning Spectrum-X Ethernet as a foundation for large-scale Artificial Intelligence training, with Multipath Reliable Connection adding open, multi-path RDMA transport for higher resilience and throughput. OpenAI, Microsoft, and Oracle are among the organizations using the technology in large Artificial Intelligence environments.

Anthropic explores Fractile chips to diversify supply

Anthropic is reportedly in early talks with London-based Fractile to secure high-performance Artificial Intelligence chips for inference workloads. The move would reduce reliance on Nvidia and broaden the company’s hardware supply chain.

OpenAI curbs odd creature references in chatbot responses

OpenAI has adjusted its models after users complained about overly familiar responses and strange references to goblins, gremlins, pigeons, and raccoons. The company traced the behavior to a retired “nerdy” personality whose habits spread into broader model training.
