March roundup of global artificial intelligence policy moves

Governments in the United States, Europe, and India accelerated efforts in March to regulate artificial intelligence, targeting issues from startup compliance and deepfakes to child safety and workplace discrimination.

United States federal officials advanced several elements of the White House executive order on “Ensuring a National Policy Framework for Artificial Intelligence” issued in December 2025. The Justice Department created an Artificial Intelligence litigation task force chaired by Attorney General Pam Bondi, with a mandate to challenge state laws that impose “cumbersome regulation” or make “compliance more challenging, particularly for start-ups.” The administration also used the India Artificial Intelligence Impact Summit to unveil new initiatives, including the Artificial Intelligence Exports Program’s National Champions Initiative, the U.S. Tech Corps, a World Bank fund to lower Artificial Intelligence adoption barriers, and an Artificial Intelligence Agent Standards Initiative. In Congress, Sen. Marsha Blackburn introduced the “Trump America Artificial Intelligence Act,” which would create a “duty of care” for minors, enable creators to sue over unauthorized use of copyrighted works in training, require “political neutrality” audits, mandate reporting of major job displacement caused by Artificial Intelligence, and partially preempt some state rules while preserving generally applicable state laws.

States remained the most active Artificial Intelligence policymakers, with four new laws taking effect on January 1. California’s Training Data Transparency Act (AB 2013) mandates that Artificial Intelligence providers publish documentation on training data for any model made available to Californians. California’s Transparency in Frontier Artificial Intelligence Act (SB 53) requires developers training models with computing power above 10^26 FLOPs, and with more than $500 million in annual revenue, to meet heightened disclosure obligations, along with critical-incident reporting and whistleblower protections. Texas’s Responsible Artificial Intelligence Governance Act (HB 149) prohibits systems intentionally designed to help people “commit physical self-harm; harm another person; or engage in criminal activity,” while requiring government agencies to disclose when consumers interact with Artificial Intelligence tools. In Illinois, amendments to the Illinois Human Rights Act (HB 337) bar employers from using Artificial Intelligence that discriminates against protected classes in hiring, promotion, and discipline, and require notice to employees when Artificial Intelligence is used in employment decisions.

Beyond enacted laws, several state proposals targeted children’s use of Artificial Intelligence and transparency obligations. In California, OpenAI and Common Sense Media combined two ballot initiatives into the Parents & Kids Safe Artificial Intelligence Act, which would have imposed strict age limits on minors’ access to Artificial Intelligence, banned certain advertising and data use without parental consent, and required safeguards against harmful content, before being shelved in favor of legislative negotiations. A proposed Florida Artificial Intelligence Bill of Rights would give parents power to limit their children’s use of Artificial Intelligence, require parental approval for minors’ usage, mandate chatbot self-identification, and create a private right of action for some harms. Utah’s Artificial Intelligence Transparency Amendments, which would have imposed disclosure rules similar to California’s SB 53 and required covered chatbots to publish a “child protection plan,” were defeated in committee on March 5. At the national level, the National Governors Association launched a “Working Group on Artificial Intelligence & the Future of Work,” which will meet bimonthly and is expected to publish a Roadmap for Governors on Artificial Intelligence & the Future of Work in November 2026.

Internationally, India hosted the Artificial Intelligence Impact Summit, described as the first major global Artificial Intelligence gathering in the Global South after earlier events in the United Kingdom and France. Alongside U.S. government commitments, major Artificial Intelligence platforms including Anthropic, Google, Microsoft, Meta, and OpenAI agreed to a package of voluntary New Delhi Frontier Artificial Intelligence Impact Commitments. European regulators moved to ease implementation of the Artificial Intelligence Act: the European Data Protection Board and European Data Protection Supervisor issued an advisory opinion backing a European Commission proposal to address compliance challenges, especially for startups. Governments also stepped up efforts against deepfakes. Spain’s cabinet approved draft legislation on January 13 to criminalize the creation of deepfakes and set a minimum age of 16 for image consent, pending parliamentary approval. In the United Kingdom, the Information Commissioner’s Office opened a February 3 probe into whether Grok’s image generation features violate data protection law, following a January Ofcom investigation under the Online Safety Act. Separately, the European Commission launched a Digital Services Act investigation on January 26 into whether the platform failed to mitigate “systemic risks” in its recommendation system, including nonconsensual sexualized images of children.

Impact Score: 68

Nvidia halts China H200 shipments and shifts capacity to Vera Rubin GPUs

Nvidia has stopped producing certain Artificial Intelligence accelerators for China and is reallocating foundry capacity at TSMC to its next-generation Vera Rubin platform. The move highlights shifting priorities in Nvidia’s data center roadmap under changing market and regulatory conditions.

UK pharma sector navigates 2025 trade, regulatory and Artificial Intelligence shifts

The UK pharmaceutical sector in 2025 faced a reshaped legal and regulatory environment spanning trade policy, life sciences strategy, clinical trials reform, competition enforcement, investment trends and emerging Artificial Intelligence regulation. New frameworks and enforcement tools are set to influence pricing, market access, corporate liability and the deployment of Artificial Intelligence in healthcare and drug discovery.

OpenRouter highlights expanding roster of free artificial intelligence models

OpenRouter is expanding free access to high-end artificial intelligence models, aggregating open-weight and frontier systems from multiple providers under a single routing layer. The lineup targets agentic, long-context, multimodal, and code-centric workloads while keeping listed models free at $0 per million input and output tokens.
