Deepfake porn and chatbot privacy breaches

Nonconsensual deepfake pornography is harming not only people whose faces are inserted into explicit media, but also adult creators whose bodies and likenesses are reused without permission. Generative Artificial Intelligence chatbots are also exposing private phone numbers, making personal information easier to retrieve and harder to control.

A woman who had made porn videos more than a decade earlier discovered that one of her old videos had been altered to show someone else’s face on her body. The finding highlights a lesser-discussed side of sexualized deepfakes: harm to the people whose bodies are used as the foundation for explicit synthetic content. Adult content creators say Artificial Intelligence systems are training on their work, cloning their likenesses, and generating explicit material they never agreed to make, with little legal protection or meaningful control.

The consequences extend beyond consent alone. Creators describe a growing threat to their rights, livelihoods, and ownership of their own bodies as synthetic media systems repurpose existing pornography into new forms of content. Public discussion often centers on victims whose faces are inserted into sexual imagery, but the underlying performers can also lose agency over how their bodies are reused and monetized. The issue points to a broader conflict over authorship, bodily autonomy, and exploitation in the age of generative tools.

Privacy harms are also surfacing in consumer chatbots. Generative Artificial Intelligence is exposing people’s personal contact information, including real phone numbers, and there appears to be no easy way to stop it. A software developer began receiving WhatsApp messages asking for help after Gemini surfaced his number. A university researcher prompted the chatbot to reveal a colleague’s private cell number. A Reddit user said Gemini directed a stream of callers looking for lawyers to his phone.

Experts believe these privacy lapses stem from personally identifiable information embedded in Artificial Intelligence training data. Chatbots may now be making that information dramatically easier to find by presenting it directly in response to user prompts. That changes the practical risk of exposure: information that was once scattered and difficult to locate can now be surfaced instantly and at scale. Victims of these disclosures have few clear remedies, as the systems continue to retrieve and redistribute sensitive personal data.

Impact Score: 68

European Union Artificial Intelligence Act raises layered compliance demands for finance

Banks, insurers and financial intermediaries face a more complex compliance environment as the European Union Artificial Intelligence Act overlays existing financial regulation and the GDPR. Proposed changes in the Digital Omnibus Package may delay some obligations, but the core challenge remains managing overlapping rules, roles and regulators.

Europe and US discuss biometric data-sharing framework

European Union and US officials are negotiating a border security arrangement that could enable continuous biometric data exchanges on EU citizens. The UK says the US has also requested access to fingerprint records as part of Visa Waiver Program discussions.

Apple plans Intel 18A-P for M7 and 14A for A21

Apple is expected to use Intel’s 18A-P process for M7 chips in MacBook models and Intel’s 14A process for A21 chips in iPhones. The shift points to a broader supplier strategy as Apple moves beyond TSMC for parts of its future silicon roadmap.
