Gen and Intel push on-device artificial intelligence deepfake detection

Cyber safety company Gen is partnering with Intel to bring on-device artificial intelligence deepfake detection to consumer hardware, targeting scams that hide inside long-form video and synthetic audio. New research from Gen suggests most deepfake-enabled fraud now emerges during extended viewing sessions rather than through obvious phishing links.

Cyber safety company Gen, the parent of Norton, has unveiled an early prototype of an artificial intelligence-powered deepfake detection system that runs directly on consumer devices. Built in partnership with Intel and first demonstrated at CES 2026 in Las Vegas, the technology analyses audio and video in real time on the device, removing the need to send data to cloud servers and aiming to provide faster, more private protection against artificial intelligence-enabled fraud. The system is designed to identify simultaneous manipulation of both sound and imagery as content is played back, rather than scanning only files, links or attachments before viewing.

Alongside the prototype, Gen released research that challenges assumptions about how deepfake scams typically spread. The company reports that most intercepted deepfake scam activity occurs within long-form, recommendation-driven video sessions on platforms such as YouTube, Facebook and X, especially on TVs and PCs. Gen’s data indicates that YouTube accounts for the largest share of intercepted deepfake-enabled scam activity, followed by Facebook and X. According to the company, most deepfake scam videos are detected during playback rather than as downloads, links or attachments, a shift away from traditional phishing methods that rely on suspicious emails or text messages. Vincent Pilette, CEO of Gen, stresses that the presence of a deepfake alone is not the risk; risk emerges when deepfake capabilities are paired with intent, such as urgent financial requests or pressure to move conversations and payments off platform.

Gen highlights that audio-led deception, powered by artificial intelligence voice cloning and synthetic narration tools, is now a dominant scam tactic, often combined with only slightly altered visuals from legitimate videos. The company notes that widely available creative software has made voice cloning and automated editing standard capabilities, citing Adobe’s survey finding that 86% of creators use generative artificial intelligence somewhere in their process. Financial lures remain the primary scam category, including investment advice, trading schemes, cryptocurrency offers and giveaways, but Gen emphasizes that deepfake technology itself is neutral and becomes dangerous when weaponised. The new protection builds on Gen’s earlier on-device detection of artificial intelligence-generated audio, introduced in Norton in 2025, which started on artificial intelligence PCs with Intel and Qualcomm chips and has since expanded to standard PCs. Using Intel’s upcoming Panther Lake processor and Gen’s image analysis tool, the system can now detect manipulated videos of public figures directly on the device, which Pilette calls a new benchmark for the industry. The company plans to extend detection from celebrities to family member impersonation scams by focusing on long-form video analysis and in-playback detection.
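The decision logic Gen describes, where a deepfake alone triggers at most a warning and an alert requires manipulation signals paired with scam intent, can be sketched in a simplified form. Everything below is a hypothetical illustration: the function names, thresholds, intent cues and the way the audio and video scores are combined are assumptions for the sketch, not Gen’s or Intel’s actual product logic.

```python
# Hypothetical sketch of in-playback screening: score each chunk's audio and
# video tracks separately, then alert only when a manipulation signal coincides
# with scam "intent" cues in the transcript. All names and thresholds here are
# illustrative assumptions, not real product values.

AUDIO_THRESHOLD = 0.8
VIDEO_THRESHOLD = 0.8
INTENT_CUES = {"send payment", "act now", "move to telegram", "crypto giveaway"}

def chunk_verdict(audio_score: float, video_score: float, transcript: str) -> str:
    """Classify one playback chunk from per-track model scores and its transcript."""
    # How the real system fuses the two tracks is not public; this sketch flags
    # a chunk if either track looks synthetic, which also covers the audio-led
    # scams with only slightly altered visuals that Gen describes.
    manipulated = audio_score >= AUDIO_THRESHOLD or video_score >= VIDEO_THRESHOLD
    intent = any(cue in transcript.lower() for cue in INTENT_CUES)
    if manipulated and intent:
        return "block"   # synthetic media paired with a financial lure
    if manipulated:
        return "warn"    # synthetic media, but no scam intent detected
    return "allow"

# Toy playback stream: (audio_score, video_score, transcript) per chunk.
stream = [
    (0.2, 0.1, "welcome back to the channel"),
    (0.9, 0.85, "a special crypto giveaway, act now"),
    (0.9, 0.3, "slightly edited but real footage"),
]
verdicts = [chunk_verdict(a, v, t) for a, v, t in stream]
# → ["allow", "block", "warn"]
```

The separation between `warn` and `block` mirrors Pilette’s point that the deepfake itself is not the risk: the toy stream only blocks the chunk that combines synthetic media with an urgent financial lure.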

Impact Score: 52

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative artificial intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh artificial intelligence harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential artificial intelligence harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on artificial intelligence oversight.
