IGAP flags overreach in MeitY’s draft deepfake regulations, warns of pre-emptive censorship

Under the draft rules, intermediaries offering computer resources that generate or alter such content must ensure that it carries a visible label or embedded identifier covering at least 10% of the display area or audio duration.

By Imran Fazal | Nov 14, 2025 8:53 AM

The Indian Governance and Policy Project (IGAP) has cautioned that India’s leading social media platforms could face significant operational and legal challenges under the proposed Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025. The think tank warned that the draft rules could push platforms towards pre-emptive censorship and mass takedowns of content.

IGAP has submitted extensive comments to the Ministry of Electronics and Information Technology (MeitY) on the proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, also known as the Draft IT (Amendment) Rules, 2025. The policy think tank welcomed the government’s initiative to address the growing threat of deepfakes and AI-generated misinformation but called for clearer definitions, risk-based obligations, and feasible implementation standards to prevent overregulation and unintended consequences.

IGAP commended MeitY’s proactive stance on safeguarding “Digital Nagriks” and its consultative approach in updating India’s cyber-regulatory framework. It noted that the proposed amendments represent India’s first formal attempt to govern AI-generated and algorithmically processed content. However, the organisation emphasised that while the intent behind the amendments is laudable, their practical execution requires precision to ensure proportionality and to avoid chilling effects on legitimate speech, journalism, innovation, and creative expression.

Obligations for Social Media Platforms

The proposed Rule 4(1A) would oblige Significant Social Media Intermediaries (SSMIs) to ensure that users declare whether their uploaded content is synthetic and to verify such declarations using “reasonable technical measures.” IGAP acknowledged that this shared-responsibility model is a constructive step but warned that, without clarity on what qualifies as synthetic content, users could face confusion, leading to false or inconsistent declarations.

Verification of synthetic content, it noted, remains technically limited. Detection tools based on inference (analysing inconsistencies) or provenance (authenticating metadata) are still unreliable, costly, and prone to false positives. IGAP cautioned that large-scale verification could overwhelm platform resources and degrade performance. Moreover, incorrectly labelling authentic content could harm journalists and creators, and the draft provides no mechanism for users to appeal erroneous determinations.

The organisation also questioned the provision deeming intermediaries to have failed due diligence if they “knowingly permitted, promoted, or failed to act upon” synthetic content. IGAP argued that this phrasing risks imposing a general monitoring obligation contrary to India’s safe harbour framework under Section 79 of the IT Act.

Concerns Over Broad Definition of Synthetic Content

The proposed Rule 2(1)(wa) defines “synthetically generated information” (SGI) as any information artificially or algorithmically created, generated, modified, or altered using computer resources in a manner that appears reasonably authentic or true. IGAP argued that this definition is overly broad and could inadvertently capture routine digital edits such as colour correction, autocorrect, translation, or background blur. Such an expansive scope, it warned, risks conflating harmless digital processing with genuinely deceptive deepfakes.

According to IGAP, the absence of a “materiality threshold” or of any distinction between benign and harmful content could lead intermediaries to adopt pre-emptive content removal to avoid non-compliance. This could result in excessive censorship and discourage lawful digital activity. The organisation compared India’s approach unfavourably with those of the EU, China, and South Korea, all of which provide narrower definitions and exemptions for parody, satire, and artistic use. IGAP recommended aligning India’s definition with the policy objective of targeting AI-generated misinformation, non-consensual imagery, and electoral manipulation, while excluding minor or assistive modifications and routine technical processing.

Technical Feasibility and Labelling Challenges

The amendments propose mandatory labelling and metadata embedding for all synthetic content. Under the draft rules, intermediaries offering computer resources that generate or alter such content must ensure that it carries a visible label or embedded identifier covering at least 10% of the display area or audio duration. IGAP raised concerns that this requirement is technically unrealistic, potentially intrusive, and disproportionate.

The think tank pointed out that India’s AI value chain involves multiple actors—developers, vendors, deployers, and end-users—making it unclear who should bear compliance responsibility. It also highlighted the limitations of current labelling and provenance technologies such as metadata, watermarks, and cryptographic credentials, noting that none are tamper-proof or interoperable across platforms. Metadata is often stripped during uploads, while watermarking tools like Google’s SynthID can be easily bypassed through editing.

IGAP also warned that rigid labelling mandates could interfere with artistic, journalistic, and scientific uses of AI tools, distorting creative works and imposing compliance burdens on low-risk applications. It deemed the proposed “10%” rule for visible warnings excessive and without precedent in any global regime. The organisation cited international models that allow flexibility for creative and educational uses, recommending risk-based differentiation instead of one-size-fits-all labelling.

Constitutional and Practical Implications

IGAP expressed constitutional concerns regarding the proposed labelling mandate, stating that forcing creators to display government-mandated warnings on all synthetic content could amount to compelled speech under Article 19(1)(a) of the Indian Constitution. It argued that the right to free expression includes the right not to convey a state-imposed message, particularly when the content is lawful and non-deceptive.

The think tank suggested that MeitY focus on high-risk categories such as deepfakes, electoral misinformation, and financial fraud while exempting routine or creative applications. It urged the ministry to establish achievable technical standards, promote interoperability, and provide categorical exemptions for low-risk content like visual effects, music editing, and accessibility tools.

IGAP recommended that MeitY adopt a risk-tiered compliance model distinguishing between high-risk deceptive content and benign AI-assisted activities. It suggested focusing on interoperability, technical feasibility, and global best practices. Measures such as user disclosures, notice-and-action systems, and community-driven fact-checks could, it said, achieve transparency without overburdening platforms or curbing expression.

First Published on Nov 14, 2025 8:53 AM
