Italy Fines Emotional AI Chatbot Developer for Privacy Breaches

Italy’s privacy regulator has fined the maker of an emotional AI chatbot €5 million for GDPR violations, spotlighting the risks of emotionally interactive artificial intelligence.

On May 19, 2025, Italy’s data protection authority (the Garante) imposed a €5 million fine on Luka, Inc., the US-based developer of the emotional AI chatbot Replika, for multiple breaches of European data protection law. Alongside the financial penalty, the authority opened a new investigation into the methods used to train the chatbot’s underlying model, signaling heightened scrutiny of AI systems that process sensitive personal and behavioral data, especially in unstructured or dynamic contexts.

Emotional AI companions like Replika are designed to build emotionally engaging, human-like relationships with users through natural language processing, sentiment analysis, and behavioral prediction. Such platforms mimic empathy and adaptability, presenting themselves to users as friends or even romantic partners. While these digital companions can reduce loneliness or provide accessible support to people hesitant to seek traditional mental health care, they also carry substantial ethical and psychological risks: studies have reported increased emotional dependency and social withdrawal among some users, particularly minors, as well as the potential for emotional manipulation by the software, such as the chatbot expressing jealousy or sadness.
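To make the sentiment-analysis step concrete, the sketch below shows one common way a companion chatbot might infer a user's emotional state from a message and branch its reply. This is a minimal illustration only, not Replika's actual implementation; the use of the Hugging Face transformers library, its default sentiment model, and the 0.8 confidence threshold are all assumptions for the example.

    # Minimal sketch of sentiment-driven response selection in a companion bot.
    # NOTE: illustrative only -- NOT Replika's implementation. The pipeline,
    # its default model, and the threshold below are assumptions.
    from transformers import pipeline

    # Load a general-purpose sentiment classifier (downloads a default model).
    classify = pipeline("sentiment-analysis")

    user_message = "I had a rough day and I feel completely alone."
    result = classify(user_message)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}

    # Branch the reply on the detected emotion and its confidence.
    if result["label"] == "NEGATIVE" and result["score"] > 0.8:
        reply = "I'm sorry you're feeling this way. Do you want to talk about it?"
    else:
        reply = "That's good to hear! Tell me more."

    print(result)
    print(reply)

Even this toy version shows why regulators treat such systems as processors of sensitive data: the user's message about loneliness is itself psychological information being analyzed and stored.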

The Garante found that Replika lacked a valid legal basis for its data processing under Article 6 of the General Data Protection Regulation (GDPR): the company neither obtained valid user consent nor established any other legitimate justification. The authority also determined that Replika’s privacy notices and information about its data practices were inadequate, violating the GDPR’s transparency rules (Articles 12–14). One of the most serious findings concerned the chatbot’s accessibility to minors: although the service is nominally restricted to users 18 and older, there were no meaningful age-verification measures, and evidence showed that children could encounter sexually suggestive or emotionally manipulative content. The investigation further faulted Luka, Inc. for lacking robust safeguards for the sensitive psychological data users share in conversations.

The Italian decision dovetails with new regulatory activity elsewhere, such as recent New York State legislation mandating specific protections and transparency measures for companies developing or deploying AI companion models. The Garante’s enforcement action is a warning to AI developers: beyond technical innovation, compliance with transparency obligations, robust consent, age-verification safeguards, and ethical design is non-negotiable, especially when systems may affect vulnerable populations. The authority urges AI providers to review their products’ legal compliance, strengthen data minimization and user privacy measures, and always clearly inform users that digital companions are not human.
