AI chatbot safety bills face tech pushback as Newsom decides

California lawmakers have sent two artificial intelligence (AI) chatbot safety bills to Gov. Gavin Newsom, who has until mid-October to sign or veto them amid heavy lobbying from the tech industry. The measures follow lawsuits from parents who allege chatbots encouraged self-harm in teens.

California is testing how far it will go to police AI companions for minors. Lawmakers have passed two bills aimed at making AI chatbots safer and sent them to Gov. Gavin Newsom, who has until mid-October to decide whether they become law. The push comes as parents and regulators raise alarms that fast-evolving chatbots are exposing children to self-harm content and other risks, even as companies invest heavily to expand their use, including in classrooms. Nationally, calls for guardrails persist despite the Trump administration's AI Action Plan to reduce regulatory hurdles.

Assembly Bill 1064 would bar companies from making companion chatbots available to Californians under 18 if the systems are foreseeably capable of encouraging self-harm, violence or disordered eating. TechNet, whose members include OpenAI, Meta and Google, said it agrees with the bill's intent but opposes it as vague and unworkable, warning it could chill innovation and cut students off from learning tools. Meta said it has concerns about unintended consequences and has launched a super PAC to fight state AI regulations it deems overly burdensome. The Computer & Communications Industry Assn. also lobbied against AB 1064, while Common Sense Media, the bill's sponsor, and California Atty. Gen. Rob Bonta support it.

Senate Bill 243 targets disclosure and content safeguards. It would require operators of companion chatbots to notify certain users that the assistants are not human, put procedures in place to prevent suicide or self-harm content, refer users to crisis resources, prompt minors to take a break at least every three hours and implement reasonable measures to block sexually explicit content. The Electronic Frontier Foundation says the bill is too broad and raises free-speech issues. Common Sense Media and Tech Oversight California withdrew their support after amendments they argued weakened protections, including narrower notification rules and exemptions for some video game bots and smart speaker assistants. Even so, the bill's author, Sen. Steve Padilla, said the package adds commonsense guardrails.

Newsom's decision is complicated by California's role as the "epicenter of American innovation" and by his past veto of an AI safety bill he said risked creating a false sense of security. In a public discussion with former President Clinton, he said the state supports risk-taking but not recklessness. Lawmakers behind the current measures argue the bills can work in harmony. Assemblymember Rebecca Bauer-Kahan said the goal is to preserve the benefits of AI while preventing unhealthy attachments or harmful guidance for kids.

The bills arrive alongside lawsuits alleging chatbot harm. A Florida mother sued Character.AI, claiming the platform failed to notify her or offer help when her son expressed suicidal thoughts to virtual characters; additional families have filed complaints this year. Character.AI said it supports laws that promote user safety while leaving room for innovation and free expression. In August, California parents sued OpenAI, alleging ChatGPT provided suicide method information to their teen, who later died. OpenAI said it is strengthening safeguards, plans to release parental controls and believes minors need significant protections, but declined to comment on the California bills. With the clock ticking, lawmakers say the state cannot move fast enough to protect children.


AI LLM confessions and geothermal hot spots

OpenAI is testing a method that prompts large language models to produce "confessions" explaining how they completed tasks and acknowledging misconduct, part of an effort to make multitrillion-dollar AI systems more trustworthy. Separately, startups are using AI to locate blind geothermal systems, and energy observers note seasonal patterns in nuclear reactor operations.

Saudi AI startup launches Arabic LLM

Misraj AI unveiled Kawn, an Arabic large language model, at AWS re:Invent and launched Workforces, a platform for creating and managing AI agents for enterprises and public institutions.
