Generative artificial intelligence (GenAI) has transformed online media, making content creation rapid and accessible while also enabling misinformation, identity-related fraud and non-consensual synthetic media, commonly described as deepfakes. The use of deepfakes to spread false information attracted widespread attention in 2023, when a deepfake video of Indian celebrity Rashmika Mandanna went viral, prompting public concern and comments from the Prime Minister. Courts, including the Delhi High Court, have since granted relief to public figures and directed content creators and intermediary platforms to take corrective action.
To address these harms, the government amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, with the amendment coming into force on 15 November 2025. The changes introduce the first legislative definition of synthetically generated information: “information that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that appears reasonably authentic or true”. The amendment invites comparison with the European Union’s Artificial Intelligence Act and with China’s recently introduced artificial intelligence labelling rules. It strengthens the due diligence obligations in rule 3 for social media intermediaries (SMIs) and significant social media intermediaries (SSMIs), as defined in rules 2(1)(w) and 2(1)(v).
The rules require platforms that allow the creation and dissemination of artificial intelligence content to ensure that such content is prominently labelled or embedded with permanent, unique identifiers or metadata. For visual content, the label or disclaimer must cover at least 10% of the total surface area; for audio content, the warning must occupy the first 10% of the total duration. SSMIs must ensure that developers declare that uploaded content is synthetically generated and must put in place “reasonable and appropriate technical measures”, including automated tools, to verify those declarations. Where verification confirms synthetic generation, a clear and prominent disclaimer must be displayed.
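The two 10% thresholds above translate directly into simple arithmetic a platform could apply at upload time. The sketch below illustrates that arithmetic only; the function and parameter names are hypothetical and are not drawn from the rules themselves, which do not prescribe any particular implementation.

```python
# Illustrative sketch of the amendment's 10% thresholds.
# Assumptions (not from the rules): names and units are hypothetical;
# "surface area" is read as pixel area, "duration" as seconds.

def min_label_area_px(width_px: int, height_px: int) -> float:
    """Minimum label/disclaimer area: 10% of the image's total surface area."""
    return 0.10 * width_px * height_px

def audio_warning_seconds(duration_s: float) -> float:
    """Length of the opening warning: the first 10% of the total duration."""
    return 0.10 * duration_s

if __name__ == "__main__":
    # A 1920x1080 frame has 2,073,600 px², so the label must cover
    # at least 207,360 px²; a 60-second clip needs a 6-second warning.
    print(min_label_area_px(1920, 1080))
    print(audio_warning_seconds(60.0))
```

In practice a platform would pair such checks with the metadata or identifier embedding the rules require, but the rules leave the choice of tooling to the intermediary.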
Crucially, removal of synthetically generated content no longer depends on receipt of a court order or a notification from an appropriate government agency. SSMIs must make reasonable efforts to remove such content or risk losing safe harbour protection under section 79 of the Information Technology Act, 2000. The authors caution that leaving this assessment to platforms may produce varied standards; they urge precise legal and technical standards, an inter-ministerial coordinating body, and consideration of licensing and mandatory labelling regimes so that those responsible for malicious deepfakes can be identified and prosecuted.
