Cyber safety company Gen, the parent of Norton, has unveiled an early prototype of an artificial intelligence powered deepfake detection system that runs directly on consumer devices. Built in partnership with Intel and first demonstrated at CES 2026 in Las Vegas, the technology analyses audio and video in real time on the device, removing the need to send data to cloud servers and aiming to provide faster, more private protection against artificial intelligence enabled fraud. The system is designed to identify simultaneous manipulation of both sound and imagery as content is played back, rather than scanning only files, links or attachments before viewing.
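Gen has not published implementation details, but the on device, in playback approach it describes can be pictured as a lightweight screening loop attached to the media pipeline: decoded frames and audio chunks are scored locally, and nothing leaves the device. The sketch below is purely illustrative; the class, model interfaces and thresholds are assumptions for the sake of the example, not Gen's or Intel's actual software.

```python
# Illustrative sketch only: names, model interfaces and thresholds are
# hypothetical, not Gen's or Intel's implementation. It shows the general
# idea of screening decoded audio and video locally during playback,
# with no data sent to cloud servers.

from dataclasses import dataclass


@dataclass
class FrameScore:
    timestamp: float     # position in the stream, in seconds
    video_score: float   # 0..1 likelihood the frame is manipulated
    audio_score: float   # 0..1 likelihood the audio chunk is synthetic


class PlaybackScreener:
    """Runs small local models against decoded media as it is played back."""

    def __init__(self, video_model, audio_model, alert_threshold=0.8):
        self.video_model = video_model    # e.g. an on-device image classifier
        self.audio_model = audio_model    # e.g. a local voice-clone detector
        self.alert_threshold = alert_threshold
        self.history = []                 # list of FrameScore for this session

    def on_media_chunk(self, timestamp, video_frame, audio_chunk) -> bool:
        """Called by the player per decoded chunk; returns True to warn the user."""
        score = FrameScore(
            timestamp=timestamp,
            video_score=self.video_model.predict(video_frame),
            audio_score=self.audio_model.predict(audio_chunk),
        )
        self.history.append(score)

        # Flag simultaneous manipulation of sound and imagery, which is what
        # Gen says the prototype is designed to identify during playback.
        return (
            score.video_score >= self.alert_threshold
            and score.audio_score >= self.alert_threshold
        )
```

Because the loop works on content the player has already decoded, the check happens at viewing time rather than at download time, which is the latency and privacy argument Gen makes for keeping detection on the device.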
Alongside the prototype, Gen released research that challenges assumptions about how deepfake scams typically spread. The company reports that most of the deepfake scam activity it intercepts occurs within long form, recommendation driven video sessions, especially on TVs and PCs, with YouTube accounting for the largest share, followed by Facebook and X. According to the company, most deepfake scam videos are detected during playback rather than as downloads, links or attachments, which marks a shift away from traditional phishing methods that rely on suspicious emails or text messages. Vincent Pilette, CEO of Gen, stresses that the presence of a deepfake alone is not the risk: risk emerges when deepfake capabilities are paired with intent, such as urgent financial requests or pressure to move conversations and payments off platform.
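Pilette's framing, that a deepfake only becomes a risk when paired with intent, maps naturally onto a simple decision rule: a synthetic media score on its own is informational, while the same score combined with pressure cues triggers a warning. The snippet below is a hypothetical illustration of that logic, not Gen's detection pipeline; the cue phrases and thresholds are invented for the example.

```python
# Hypothetical illustration of "deepfake plus intent" risk assessment; the
# cue phrases and thresholds are invented for this example, not Gen's.

URGENCY_CUES = (
    "act now", "limited time", "send payment", "wire transfer",
    "crypto wallet", "continue on whatsapp", "move to telegram",
)


def assess_risk(synthetic_score: float, transcript: str) -> str:
    """Combine a synthetic-media score with intent cues from the spoken text."""
    text = transcript.lower()
    intent_hits = [cue for cue in URGENCY_CUES if cue in text]

    if synthetic_score >= 0.8 and intent_hits:
        # Manipulated media *and* pressure to pay or move off platform.
        return f"high risk: synthetic media with intent cues {intent_hits}"
    if synthetic_score >= 0.8:
        # Synthetic content alone, e.g. satire or entertainment.
        return "informational: likely synthetic, no scam intent detected"
    return "low risk"


print(assess_risk(0.92, "Act now and send payment to this crypto wallet"))
```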
Gen highlights that audio led deception, powered by artificial intelligence voice cloning and synthetic narration tools, is now a dominant scam tactic, often combined with only slightly altered visuals lifted from legitimate videos. The company notes that widely available creative software has made voice cloning and automated editing standard capabilities, citing an Adobe survey which found that 86% of creators use generative artificial intelligence somewhere in their process. Financial lures remain the primary scam category, including investment advice, trading schemes, cryptocurrency offers and giveaways, but Gen emphasises that deepfake technology itself is neutral and becomes dangerous only when weaponised.

The new protection builds on Gen's earlier on device detection of artificial intelligence generated audio, introduced in Norton in 2025, which started on artificial intelligence PCs with Intel and Qualcomm processors and has since expanded to standard PCs. Using Intel's upcoming Panther Lake processor and Gen's image analysis tool, the system can now detect manipulated videos of public figures directly on the device, which Pilette calls a new benchmark for the industry. The company plans to extend detection from celebrities to family member impersonation scams by focusing on long form video analysis and detection during playback.
