YouTube expands deepfake detection to Hollywood talent

YouTube is opening its likeness protection system to actors, athletes, musicians and creators beyond its own platform. The move gives public figures a way to flag and request removal of damaging Artificial Intelligence-generated replicas while YouTube weighs broader rules and possible future monetization.

YouTube has opened its proprietary deepfake detection tool to actors, athletes, creators and musicians who face a high risk of having their likeness misused, whether they have a YouTube channel or not. Public figures or their representatives can opt in by uploading their likeness to the system, which scans the platform for potential replicas and flags them for review. Their teams can then decide whether to leave the content up or request removal, giving talent and managers a new mechanism to monitor synthetic videos before reputational damage spreads.

YouTube began testing the tool in late 2024 through a pilot program with CAA, expanded it a few months later to some of the most prominent creators on its platform, and earlier this year extended it to selected politicians and public officials. The wider rollout comes as deepfakes have become a growing concern in entertainment, especially after the past six months alone delivered what one source described as two major wake-up calls for Hollywood. Last fall, OpenAI launched the Sora app, and users quickly generated videos featuring recognizable characters, intellectual property and historic figures such as Martin Luther King Jr. Then in February, videos made with Seedance 2.0 showing Brad Pitt fighting Tom Cruise spread rapidly online.

YouTube says the system is modeled in part on the logic behind Content ID, but applied to identity rather than copyright. A takedown request is not automatic, and the company says parody and satire may still be allowed under its community guidelines. Content involving realistic and consequential disparagement or content replacement is more likely to be removed, especially if a deepfake closely imitates the type of work a celebrity, actor or creator is known for and could interfere with their livelihood. The boundaries remain less clear for fan-made trailers and other celebratory uses, highlighting how difficult it is to distinguish harmful deception from fandom.

Talent agencies and managers described the tool as a practical early safeguard because most harmful synthetic content is often discovered by chance, after damage is already done. At the same time, studios, agencies and public figures are not uniformly hostile to the technology. Some see creative and fan-engagement potential in synthetic media, and YouTube says many creators in the pilot requested removal of only a small percentage of flagged content. The platform is not yet offering a way for talent to monetize deepfakes of themselves, though executives say they are considering rightsholder and monetization questions after establishing what they describe as a foundational layer of responsibility and protection.

Impact Score: 64

Adobe plans outcome-based pricing for Artificial Intelligence agents

Adobe is positioning its Artificial Intelligence agents around performance-based pricing, charging only when the software completes useful work. The approach points to a more results-oriented model for selling generative Artificial Intelligence tools to business customers.

Tech firms commit billions to Artificial Intelligence infrastructure

Amazon, OpenAI, Nvidia, Meta, Google and others are signing increasingly large cloud, chip and data center agreements as demand for Artificial Intelligence infrastructure accelerates. The latest wave of deals spans investments, compute purchases, chip supply agreements and data center buildouts.

JEDEC outlines LPDDR6 expansion for data centers

JEDEC has previewed planned updates to LPDDR6 aimed at pushing the memory standard beyond mobile devices and into selected data center and accelerated computing use cases. The roadmap includes higher-capacity packaging options, flexible metadata support, 512 GB densities, and a new SOCAMM2 module standard.

TSMC debuts A13 process technology

TSMC has introduced its A13 process at its 2026 North America Technology Symposium as a tighter version of A14 aimed at next-generation Artificial Intelligence, high performance computing, and mobile designs. The company positions the node as a more compact and efficient option with backward-compatible design rules for faster migration.
