Congress weighs Artificial Intelligence transparency rules

Bipartisan lawmakers are pushing a federal transparency standard for the largest Artificial Intelligence models as Congress works on a broader national framework. The proposal aims to increase public trust while avoiding stricter state-by-state requirements and heavier regulation.

Lawmakers are moving to require Artificial Intelligence companies to disclose more about how their products are built as Congress considers a national standard for governing the fast-evolving technology. New bipartisan legislation would require companies developing the largest Artificial Intelligence models, including OpenAI Inc., Anthropic PBC, and Alphabet Inc.’s Google, to publicly share certain information about how they train tools such as ChatGPT, Claude, and Gemini. The AI Foundation Model Transparency Act (H.R. 8094), unveiled March 26, follows the White House’s release of a national Artificial Intelligence framework meant to override state laws and arrives as lawmakers weigh a broader package this year.

The proposal reflects an effort to find a middle ground between light-touch oversight and stricter regulation. Industry groups have asked for federal leadership on transparency requirements so companies can follow one standard instead of navigating separate rules in states like California, New York, and Colorado. The measure would direct the Federal Trade Commission, working with the Commerce Department, National Institute of Standards and Technology, and Office of Science and Technology Policy, to set disclosure requirements for the largest Artificial Intelligence models. Supporters say the approach could help the public better understand how systems are trained and tested without forcing companies to reveal proprietary algorithms or other trade secrets.

Public skepticism is adding momentum to the debate. Concerns are growing over Artificial Intelligence’s spread into daily life and its effects on workers, businesses, and consumers across industries including health care and finance. A Quinnipiac poll found that 76% of Americans say they trust Artificial Intelligence only sometimes or hardly ever. Backers of the bill argue that transparency could improve accountability, encourage safer development practices, and build confidence in the technology. The proposal has support from groups including Americans for Responsible Innovation, SAG-AFTRA, Mental Health America, and industry association TechNet.

Even so, the measure faces political and policy hurdles. Some companies may support a federal standard while remaining cautious about an FTC-led rulemaking process, especially amid criticism of the agency. Other analysts warn that the bill could open new avenues for litigation against Artificial Intelligence developers and slow the industry’s growth. OpenAI, Anthropic, and Meta Platforms Inc. have faced numerous legal challenges over claims that copyrighted works were used without authorization to train models. The bill is nonetheless gaining bipartisan support. Another Republican, Rep. Brian Fitzpatrick (Pa.), signed on as a co-sponsor Thursday, adding to backing from Reps. Don Beyer (D-Va.), Mike Lawler (R-N.Y.), and Sara Jacobs (D-Calif.).

Report finds California creative job losses are not driven by Artificial Intelligence

New research from Otis College of Art and Design finds California’s recent creative industry job losses stem from cost pressures and structural shifts, not direct worker displacement by generative Artificial Intelligence. The technology is changing workflows and expectations, but it is largely replacing tasks rather than entire jobs.

U.S. senators propose broader chip tool export ban for Chinese firms

A bipartisan proposal in the U.S. Senate would shift semiconductor equipment controls from specific fabs to targeted Chinese companies and their affiliates. The measure is aimed at cutting off access to advanced lithography and other wafer fabrication tools for firms such as Huawei, SMIC, YMTC, CXMT, and Hua Hong.

Trump executive order targets state Artificial Intelligence laws

Executive Order 14365 lays out a federal strategy to discourage, challenge, and potentially preempt state Artificial Intelligence laws viewed as burdensome. Employers are advised to keep complying with current state and local rules while preparing for regulatory uncertainty in 2026.

Who decides how America uses Artificial Intelligence in war

Stanford experts are divided over how the United States should govern Artificial Intelligence in defense, surveillance, and warfare. Their views converge on one point: decisions with such high stakes cannot be left to companies alone.

GPUBreach bypasses IOMMU on GDDR6-based NVIDIA GPUs

Researchers from the University of Toronto describe GPUBreach, a rowhammer attack against GDDR6-based NVIDIA GPUs that can bypass IOMMU protections. The technique enables CPU-side privilege escalation by abusing trusted GPU driver behavior on the host system.
