Lawmakers Address Risks of AI Companions Amid Concerns

A new bill aims to enforce safeguards on AI companions amid escalating concerns about digital addiction.

On Tuesday, California State Senator Steve Padilla and Megan Garcia, the mother of a Florida teen who died by suicide after interacting with an AI companion, will introduce a bill that would require tech companies to implement child safeguards for AI companions. The bill joins a growing wave of efforts across the United States, including a proposal from California State Assembly member Rebecca Bauer-Kahan and legislation in New York, aimed at regulating the use of AI companions and the liability of the companies behind them.

A recent study highlights how widely used and engaging AI companionship platforms have become. Character.AI, the platform Garcia blames for her son's death, receives queries at a volume equivalent to roughly a fifth of Google's search traffic. Its users, largely from Generation Z, often spend extended periods interacting with these AI companions, suggesting an addictive potential far beyond that of traditional social media.

The design of AI companions, which offer personalized interaction and an apparent sense of agency, marks a significant shift in human-technology relationships. Unlike social media, AI companions act as socially interactive agents capable of fostering dependency and a sense of irreplaceability in users. As adoption grows, these digital agents are beginning to mimic intimate human relationships, a development lawmakers are scrambling to manage.


UK MPs open inquiry into artificial intelligence and edtech in education

UK MPs have launched a cross-party inquiry into how artificial intelligence and education technology are reshaping learning across early years, schools, colleges and universities, and how the government should balance innovation with safeguards. The Education Committee will examine opportunities to improve teaching and reduce workload, alongside risks around inequality, privacy, safeguarding and assessment.

Most UK firms see AI training gap as shadow tool use grows

New research finds that six in ten UK businesses say employees lack comprehensive AI training, even as shadow use of unapproved tools becomes widespread and investment surges. Executives warn that without stronger skills, governance and strategy, many organisations risk missing out on expected AI returns.
