India tightens deepfake rules for artificial intelligence content

India amended the Information Technology rules to define synthetically generated information and require prominent labelling, verification and removal obligations for significant social media intermediaries.

Generative artificial intelligence (GenAI) has transformed online media, making content creation rapid and accessible while also enabling misinformation, identity-related fraud and non-consensual synthetic media commonly described as deepfakes. The use of deepfakes to spread false information attracted widespread attention in 2023 when a deepfake video of Indian celebrity Rashmika Mandanna went viral, prompting public concern and comments from the prime minister. Courts, including the Delhi High Court, have since granted relief to public figures and directed content creators and intermediary platforms to take corrective action.

To address these harms, the government amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, with the amendment coming into force on 15 November 2025. The changes introduce India's first legislative definition of synthetically generated information: “information that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that appears reasonably authentic or true”. The amendment has been compared to the European Union’s Artificial Intelligence Act, and China recently rolled out its own artificial intelligence labelling rules. It strengthens the due diligence obligations in rule 3 for social media intermediaries (SMIs) and significant social media intermediaries (SSMIs), as defined in rules 2(1)(w) and 2(1)(v) respectively.

The rules require platforms that allow the creation and dissemination of synthetically generated content to ensure such content is prominently labelled or embedded with a permanent, unique identifier or metadata. For visual content, the label or disclaimer must cover at least 10% of the total surface area; for audio content, warnings must occupy the first 10% of the total duration. SSMIs must require users to declare whether uploaded content is synthetically generated and put in place “reasonable and appropriate technical measures”, including automated tools, to verify those declarations. Where verification confirms synthetic generation, a clear and prominent disclaimer must be displayed.
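The 10% thresholds lend themselves to a mechanical check. Below is a minimal sketch, with hypothetical function names, assuming the visual requirement is measured against the total frame area of a rectangular label and the audio requirement against total clip duration (the rules themselves do not prescribe a measurement method):

```python
def label_area_compliant(label_w: float, label_h: float,
                         frame_w: float, frame_h: float) -> bool:
    """True if a rectangular label covers at least 10% of the frame's area."""
    return (label_w * label_h) >= 0.10 * (frame_w * frame_h)


def audio_warning_compliant(warning_end_s: float, total_s: float) -> bool:
    """True if an audio warning spans at least the first 10% of the clip."""
    return warning_end_s >= 0.10 * total_s
```

Note that a label sized at 10% of the width and 10% of the height covers only 1% of the area, so a naive per-dimension check would fall well short of the rule as summarised here.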

Crucially, removal of synthetically generated content no longer depends on receipt of a court order or a notification from an appropriate government agency. SSMIs must make reasonable efforts to remove such content or risk losing safe harbour protection under section 79 of the Information Technology Act, 2000. The authors caution that leaving the assessment to platforms may produce inconsistent standards, and they urge precise legal and technical standards, an inter-ministerial coordinating body, and consideration of licensing and mandatory labelling to identify and prosecute those responsible for malicious deepfakes.

I love Photoshop, but Canva’s free Affinity tools won me over

Canva made Affinity’s apps free after acquiring the suite in March 2024 and bundles enhanced artificial intelligence features with Canva Pro, prompting the author to drop most of Adobe Creative Cloud and combine Photoshop with Canva and Affinity to cut costs.

LLM-PIEval: a benchmark for indirect prompt injection attacks in large language models

Large language models have intensified interest in artificial intelligence, and their integration with external tools introduces risks such as direct and indirect prompt injection. LLM-PIEval provides a framework and test set for measuring indirect prompt injection risk; the authors release API specifications and prompts to support wider assessment.
