A surveillance mandate disguised as child safety: why the GUARD Act won’t keep us safe

The GUARD Act would force many companies offering artificial intelligence (AI) chatbots to verify users’ ages, bar minors, and impose criminal penalties, but the bill’s age-gating and data rules risk mass surveillance, censorship, and lost access to everyday tools.

A new bill introduced by Sens. Hawley (R-MO), Blumenthal (D-CT), Britt (R-AL), Warner (D-VA), and Murphy (D-CT) would require AI chatbots to verify every user’s age, prohibit minors from using those tools, and create criminal penalties for systems that promote or solicit certain harms. While presented as child-safety legislation, the GUARD Act functions primarily as an age-gating and data-collection mandate that could be applied to a broad range of public-facing services, from customer-service bots to search assistants.

The bill allows no parental consent or appeal: if an age-verification process determines a user is under 18, that user must be locked out entirely. Its ambiguous definition of an AI “companion” could be read to include general-purpose large language models like ChatGPT and many other adaptive systems, prompting companies to shut teenagers out of tools they use for homework, customer service, and creative or educational work. By treating all young people the same, whether seven or seventeen, the law would limit confidential access to information and reduce opportunities for autonomy and learning.

To enforce the ban, platforms would have to deploy a “commercially reasonable” age-verification system that gathers identifying information such as government ID, credit records, or biometric data and periodically re-verifies users. That requirement forces the retention or repeated collection of sensitive data. The Electronic Frontier Foundation and other advocates warn that such centralized identity systems become attractive targets for hackers, destroy online anonymity, disproportionately harm vulnerable groups, and favor large incumbents that can afford compliance costs.

Definitions in the bill are broad: an “AI chatbot” could cover any service producing adaptive or context-responsive outputs, and an “AI companion” could include systems that simulate interpersonal interaction. Combined with steep fines of up to $100,000 per violation, enforceable by federal and state attorneys general, the likely result is widespread censorship, blanket bans on users under 18, or the construction of mass-surveillance systems as a condition of access. Lawmakers should reject the GUARD Act and pursue privacy-first policies that improve transparency, accountability, and safer options without building invasive identity infrastructure.

Saudi AI startup launches Arabic LLM

Misraj AI unveiled Kawn, an Arabic large language model, at AWS re:Invent and launched Workforces, a platform for creating and managing AI agents for enterprises and public institutions.

Introducing Mistral 3: open AI models

Mistral 3 is a family of open, multimodal, and multilingual AI models that includes three Ministral edge models and the sparse Mistral Large 3, with 41B active parameters out of 675B total, released under the Apache 2.0 license.

NVIDIA and Mistral AI partner to accelerate new family of open models

NVIDIA and Mistral AI announced a partnership to optimize the Mistral 3 family of open-source multilingual, multimodal models across NVIDIA supercomputing and edge platforms. The collaboration highlights Mistral Large 3, a mixture-of-experts model designed to improve efficiency and accuracy for enterprise AI deployments, available starting Tuesday, Dec. 2.
