India tightens deepfake rules for artificial intelligence content

India amended the Information Technology rules to define synthetically generated information and require prominent labelling, verification and removal obligations for significant social media intermediaries.

Generative artificial intelligence (GenAI) has transformed online media, making content creation rapid and accessible while also enabling misinformation, identity-related fraud and non-consensual synthetic media commonly described as deepfakes. The use of deepfakes to spread false information attracted widespread attention in 2023 when a deepfake video of Indian celebrity Rashmika Mandanna went viral, prompting public concern and comments from the prime minister. Courts, including the Delhi High Court, have since granted relief to public figures and directed content creators and intermediary platforms to take corrective action.

To address these harms, the government amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, with the amendment coming into force on 15 November 2025. The changes introduce India's first legislative definition of synthetically generated information: “information that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that appears reasonably authentic or true”. Commentators have compared the amendment to the European Union’s Artificial Intelligence Act and noted that China recently rolled out its own Artificial Intelligence labelling rules. The amendment strengthens the due diligence obligations in rule 3 for social media intermediaries (SMI) and significant social media intermediaries (SSMI), as defined in rules 2(1)(w) and 2(1)(v) respectively.

The rules require platforms that allow the creation and dissemination of artificial intelligence content to ensure such content is prominently labelled or embedded with permanent, unique identifiers or metadata. For visual content, the label or disclaimer must cover at least 10% of the total surface area; for audio content, warnings must occupy the first 10% of the total duration. SSMIs must require users to declare whether uploaded content is synthetically generated and must deploy “reasonable and appropriate technical measures”, including automated tools, to verify those declarations. Where verification confirms synthetic generation, a clear and prominent disclaimer must be displayed.

Crucially, removal of synthetically generated content no longer depends on the receipt of a court order or notification from an appropriate governmental agency. SSMIs must use reasonable efforts to remove such content or risk losing safe harbour protection under section 79 of the Information Technology Act, 2000. The authors caution that leaving assessment to platforms may produce varied standards, and they urge precise legal and technical standards, an inter-ministerial coordinating body and consideration of licensing and mandatory labelling to identify and prosecute those responsible for malicious deepfakes.

Impact Score: 68

Congress weighs Artificial Intelligence transparency rules

Bipartisan lawmakers are pushing a federal transparency standard for the largest Artificial Intelligence models as Congress works on a broader national framework. The proposal aims to increase public trust while avoiding stricter state-by-state requirements and heavier regulation.

Report finds California creative job losses are not driven by Artificial Intelligence

New research from Otis College of Art and Design finds California’s recent creative industry job losses stem from cost pressures and structural shifts, not direct worker displacement by generative Artificial Intelligence. The technology is changing workflows and expectations, but it is largely replacing tasks rather than entire jobs.

U.S. senators propose broader chip tool export ban for Chinese firms

A bipartisan proposal in the U.S. Senate would shift semiconductor equipment controls from specific fabs to targeted Chinese companies and their affiliates. The measure is aimed at cutting off access to advanced lithography and other wafer fabrication tools for firms such as Huawei, SMIC, YMTC, CXMT, and Hua Hong.

Trump executive order targets state Artificial Intelligence laws

Executive Order 14365 lays out a federal strategy to discourage, challenge, and potentially preempt state Artificial Intelligence laws viewed as burdensome. Employers are advised to keep complying with current state and local rules while preparing for regulatory uncertainty in 2026.

Who decides how America uses Artificial Intelligence in war

Stanford experts are divided over how the United States should govern Artificial Intelligence in defense, surveillance, and warfare. Their views converge on one point: decisions with such high stakes cannot be left to companies alone.
