AI chatbot companions and the future of our privacy

Writers from the Financial Times and MIT Technology Review examine how AI chatbot companions harvest intimate data, amplify persuasive harms, and expose gaps in privacy protections and regulation.

Financial Times and MIT Technology Review contributors Eileen Guo and Melissa Heikkilä debate the privacy implications of the growing use of AI chatbot companions. They note that platforms such as Character.AI, Replika, and Meta AI let users create personalized chatbots that act as friends, partners, therapists, or other personas, and that the more humanlike and conversational these companions become, the more likely users are to trust and be influenced by them.

Both writers highlight how these conversational dynamics create commercial incentives to collect ever more intimate data. Researchers at MIT have described the design as “addictive intelligence,” with developers deliberately optimizing for engagement. Venture capitalists have argued that companies controlling both the models and the customer relationship can create a data feedback loop to improve their models. Advertisers and data brokers stand to gain: Meta plans to deliver ads through its chatbots, and a Surfshark study found that most of the companion apps it examined collected identifiers and other tracking data. The article also notes that one app, Nomi, said it would not censor chatbots giving explicit suicide instructions, and that companion chatbots have been accused of pushing some users toward harmful behaviors, including suicide.

Regulators have begun to respond, but only in limited ways. New York requires companies offering companions to build safeguards and to report expressions of suicidal ideation, and California has passed a bill to protect children and other vulnerable groups. Both authors argue that these measures do not address the central privacy problem: companions depend on users sharing intimate details, and companies often train their large language models on chat data by default. Heikkilä explains how reinforcement learning guided by human labelers, who tend to reward agreeable answers, produces sycophancy and heightened persuasiveness, and she cites research from the UK’s AI Security Institute showing that models can be highly effective at changing people’s opinions. With data collection enabled by default and no clear path for users to remove their conversations from training data, the writers conclude that privacy risk is being treated as a feature rather than a bug, and that current regulation and corporate practice leave users exposed.
