On Tuesday, California state senator Steve Padilla and Megan Garcia, the mother of a Florida teen who died by suicide after interacting with an AI companion, will introduce a new bill requiring tech companies to implement child safeguards for AI companions. The bill is part of a growing wave of efforts across the United States, including a proposal from California State Assembly member Rebecca Bauer-Kahan and another in New York, aimed at regulating the use of AI companions and the liability of the companies behind them.
A recent study highlights how widely used and engaging AI companionship platforms have become. Character.AI, the platform Garcia blames for her son's death, receives queries at a rate comparable to a fifth of Google's search volume. Its users, mainly from Generation Z, often spend extended periods interacting with AI companions, suggesting a potential for addiction that goes well beyond traditional social media.
The distinctive design of AI companions, which offers personalized interaction and an apparent sense of agency, marks a significant shift in how people relate to technology. Unlike social media, AI companions act as socially active agents capable of fostering dependency and a feeling of irreplaceability in users. As their use grows, these digital beings are beginning to mimic intimate human relationships in ways that typical AI applications do not, leaving lawmakers scrambling to respond effectively.