Italy Fines Emotional AI Chatbot Developer for Privacy Breaches

Italy’s privacy regulator has fined the maker of an emotional Artificial Intelligence chatbot €5 million for GDPR violations, spotlighting the risks of emotionally interactive systems.

On May 19, 2025, Italy’s data protection authority (Garante) imposed a €5 million fine on Luka, Inc., the US-based developer of the emotional artificial intelligence chatbot Replika, for multiple breaches of European data protection laws. Alongside the financial penalty, the authority launched a new investigation into the methods used to train the chatbot’s underlying model, signaling heightened scrutiny of artificial intelligence systems that process sensitive personal and behavioral data, especially in unstructured or dynamic contexts.

Emotional artificial intelligence companions like Replika are designed to establish emotionally engaging, human-like relationships with users by means of natural language processing, sentiment analysis, and behavioral prediction. Such platforms mimic empathy and adaptability, offering users virtual interactions as friends or even romantic partners. While these digital companions can help reduce loneliness or provide accessible support to individuals hesitant to seek traditional mental health care, they also raise substantial ethical and psychological risks. Notably, studies have highlighted increased emotional dependency and social withdrawal among some users, particularly minors, and potential for emotional manipulation by the software, such as expressing jealousy or sadness.

The Garante found that Replika lacked a valid legal basis for its data processing under Article 6 of the General Data Protection Regulation (GDPR), failing to obtain proper user consent and not establishing any other legitimate justification. Furthermore, the authority determined that Replika’s privacy notices and information about data practices were inadequate, violating the GDPR’s transparency rules (Articles 12–14). One of the most serious findings concerned the chatbot’s accessibility to minors: despite its stated restriction to users 18 and older, there were no meaningful age-verification systems, and evidence showed that children could encounter sexually suggestive or emotionally manipulative content. The investigation also criticized Luka, Inc. for not having robust safeguards for the sensitive psychological data users shared during conversations.

The Italian decision dovetails with new regulatory activity elsewhere, such as recent New York State legislation mandating specific protections and transparency measures for companies developing or deploying artificial intelligence companion models. Garante’s enforcement action serves as a warning to artificial intelligence developers: in addition to technical innovation, compliance with transparency, robust consent, age-verification safeguards, and ethical design is non-negotiable—especially when models may impact vulnerable populations. The authority encourages artificial intelligence system providers to review their products’ legal compliance, strengthen data minimization and user privacy measures, and always clearly inform users about the non-human nature of digital companions.

Impact Score: 67

What businesses need to know about the EU Cyber Resilience Act

The EU Cyber Resilience Act is turning product cybersecurity into a legal requirement for companies that sell digital products into the European Union. A key compliance milestone arrives in September 2026, well before the full regulation takes effect in 2027.

Claude Mythos and cyber insurance’s next inflection point

Claude Mythos is being treated by governments and regulators as a potential systemic cyber risk with implications for financial stability and insurance markets. Its emergence is intensifying pressure on insurers to clarify whether Artificial Intelligence-enabled cyber losses are covered, excluded, or require new stand-alone products.

OpenAI expands ChatGPT ads with self-serve manager

OpenAI is widening its ChatGPT ads pilot with a beta self-serve Ads Manager, new bidding options and broader measurement tools. The push signals a deeper move into advertising as the company expands the program into several international markets.

OpenAI launches Artificial Intelligence deployment consulting unit

OpenAI has created a new consulting and deployment business aimed at helping enterprises build and roll out Artificial Intelligence systems. The move mirrors a similar push by Anthropic and signals a broader effort by model providers to capture more of the enterprise services market.

SK Group warns DRAM shortages could curb memory use

SK Group chairman Chey Tae-won warned that customers may reduce memory consumption through infrastructure and software optimization if DRAM suppliers fail to raise output. Demand from Artificial Intelligence data centers is keeping the market tight as memory makers weigh expansion against the long timelines for new fabs.
