Civil society groups, industry bodies, and digital rights advocates have urged the Ministry of Electronics and Information Technology (MeitY) to extend the public consultation window for its draft 2025 amendments to the IT Rules, 2021, warning that the proposed framework for AI-generated content is “censorship-prone and surveillance-heavy.” Stakeholders said the November 6 deadline leaves too little time for meaningful input on rules that could reshape India’s digital governance landscape.
The Ministry of Electronics and Information Technology (MeitY) has released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, proposing new obligations on digital intermediaries to identify, label, and verify AI-generated and synthetic content.
While the government has framed the move as a response to the growing menace of deepfakes and misinformation, digital rights advocates and policy experts warn that the draft rules risk creating a censorship-prone, high-surveillance ecosystem that could chill lawful expression online and impose unrealistic compliance burdens on platforms.
The draft Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 defines “synthetically generated information” as any content “created, modified, or altered using computer resources in a manner that appears authentic or true.”
Under the proposal, all intermediaries—particularly large platforms such as Meta, Google, and X—will be required to label such content prominently, embed permanent metadata identifiers, and ensure these cannot be removed or tampered with. In addition, visible disclaimers covering at least 10% of the screen area (or equivalent audible disclaimers for audio content) must accompany such content, irrespective of its context or purpose.
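To make the obligation concrete, here is a minimal sketch (the author's illustration, not text from the draft rules) of what a "visible label plus embedded identifier" workflow could look like for a still image. It assumes the Pillow library; the banner text and the metadata field name "synthetic-content-id" are hypothetical placeholders.

```python
# Hypothetical sketch of labelling a synthetic image: a visible banner
# sized to cover 10% of the display area, plus an embedded identifier.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str, content_id: str) -> None:
    img = Image.open(src_path).convert("RGB")
    w, h = img.size

    # Visible disclaimer: a banner whose height is 10% of the image,
    # so the labelled region covers at least 10% of the screen area.
    banner_h = max(1, h // 10)
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, 0), (w, banner_h)], fill=(0, 0, 0))
    draw.text((10, banner_h // 4), "AI-GENERATED CONTENT", fill=(255, 255, 255))

    # Embedded identifier: stored as ordinary PNG metadata. Plain metadata
    # is trivially strippable, which is one practical objection to the
    # "permanent, tamper-proof" requirement described above.
    meta = PngInfo()
    meta.add_text("synthetic-content-id", content_id)
    img.save(dst_path, format="PNG", pnginfo=meta)

# Example: label_synthetic_image("render.png", "render_labelled.png", "gen-2025-0001")
```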
Digital rights organisation Internet Freedom Foundation (IFF) said the amendments may end up being counterproductive. "While we recognise the real harms of deepfakes—especially non-consensual intimate imagery and election manipulation—these proposals, as framed, risk overbroad censorship, compelled speech, and intrusive monitoring that chill lawful expression online," the organisation said in a statement.
“The definition of synthetic media is so wide that it could easily include parody, satire, or remix culture. Even a lightly edited image or algorithmically enhanced video could require a label. That’s a regulatory overreach that conflates harmful content with creative or harmless modification,” a policy expert said.
Compliance versus creativity
Stakeholders have flagged that Rule 3(3) of the draft would compel developers of editing or creation tools to embed identifiers and display labels—irrespective of the content’s intent or artistic use. This, they argue, amounts to mandated disclaimers reminiscent of the certification regime applied to films or OTT content.
“This is compelled speech and a high-risk case of collateral censorship,” said a senior policy researcher. “Bad actors who want to mislead will simply not comply, while legitimate creators and smaller companies will face compliance paralysis.”
The proposed Rule 4(1A)—which requires significant social media intermediaries to seek user declarations and deploy automated tools to verify them—has also drawn sharp criticism. Experts say the provision could effectively introduce general monitoring, a practice that India’s Supreme Court has previously viewed with caution.
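As a rough sketch of why experts read this as general monitoring, the following is the author's illustration (not text from the draft) of how an upload pipeline might pair a user's self-declaration with an automated check. The function detect_synthetic_probability and the threshold are hypothetical; real detectors are imperfect, which is the basis of the over-removal concern raised below.

```python
# Hypothetical upload-time check pairing a user declaration with an
# automated synthetic-media detector.
from dataclasses import dataclass

SYNTHETIC_THRESHOLD = 0.8  # hypothetical confidence cut-off

@dataclass
class UploadDecision:
    label_as_synthetic: bool
    needs_human_review: bool

def detect_synthetic_probability(media_bytes: bytes) -> float:
    """Placeholder for a platform's synthetic-media detector (not a real API)."""
    raise NotImplementedError

def handle_upload(media_bytes: bytes, user_declared_synthetic: bool) -> UploadDecision:
    score = detect_synthetic_probability(media_bytes)
    detector_says_synthetic = score >= SYNTHETIC_THRESHOLD

    if user_declared_synthetic or detector_says_synthetic:
        # Label whenever either signal fires; escalate contradictions, since an
        # undeclared-but-detected item is where liability would bite hardest.
        return UploadDecision(
            label_as_synthetic=True,
            needs_human_review=detector_says_synthetic and not user_declared_synthetic,
        )
    return UploadDecision(label_as_synthetic=False, needs_human_review=False)
```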
“The ‘deemed failure’ clause pushes platforms toward over-removal of content to avoid liability,” said the researcher. “This creates a perverse incentive to take down anything that might trigger scrutiny.”
Extension of consultation period
MeitY has invited public comments on the draft rules until November 6, 2025, leaving less than three weeks for stakeholders to respond. Several experts have described this as insufficient for a reform of such magnitude.
“The consultation window is simply too short for meaningful participation,” said a Delhi-based digital policy lawyer. “Given the technical complexity of AI regulation, the government should extend it by at least two weeks.”
Critics also point to a lack of alignment with MeitY’s broader AI governance vision. Earlier this year, the ministry had sought public feedback on a Report on AI Governance Guidelines Development, which noted that India’s existing legal framework was largely adequate to address malicious synthetic media. That report has not translated into a clear regulatory roadmap.
“There’s a policy disconnect,” said another policy expert. “While MeitY is pushing for mandatory content labelling, it is simultaneously promoting large-scale facial recognition initiatives like the IndiaAI Face Authentication Challenge. Together, these signal growing state reliance on AI without parallel safeguards.”
Tech and media lawyer Jay Sayta welcomed the government’s intent to curb AI misuse but cautioned against one-size-fits-all regulation.
“We’ve seen deepfake videos of prominent politicians, industrialists, and cricketers being used for fraudulent endorsements and scams,” Sayta said. “Regulatory intervention was long overdue. But the challenge lies in balancing this with creative freedom and technical feasibility.”
Dhruv Garg, Founding Partner at the Indian Governance and Policy Project (IGAP), said the amendments mark a significant step in India’s evolving AI policy landscape.
“The government’s intent to promote transparency in AI-generated content is commendable,” Garg said. “However, effective implementation will depend on clear guidance, practical timelines, and tiered obligations that consider the capacity of smaller platforms.”
Garg noted that the move appears to be a response to recent deepfake controversies, ranging from political misinformation to AI-generated celebrity endorsements. But he warned that excessive regulatory complexity could stifle innovation.
“Compliance must not become a bureaucratic exercise,” he said. “Without clarity and proportionate enforcement, the rules could burden intermediaries without meaningfully curbing the problem.”
Global parallels, local challenges
According to IGAP’s recent study, Global Legal Responses to Deepfakes: A Regulatory Primer, countries such as Australia, France, and Singapore have introduced targeted measures to curb malicious synthetic media—typically focusing on false or harmful content rather than blanket content labelling.
“Regulation must balance innovation with responsibility,” Garg added. “A broad-brush approach risks penalising legitimate use of generative tools.”
Observers note that while India is not alone in grappling with deepfakes, its regulatory style tends to rely on compliance mandates rather than rights-based frameworks.
“The risk is that these rules could become yet another layer of digital control,” said a senior industry executive. “Once platforms are required to police and label every form of modified content, the line between accountability and surveillance becomes dangerously thin.”
For MeitY, the draft rules represent an attempt to pre-empt the social and political fallout of AI-generated misinformation. For critics, they underscore a pattern of expanding state control over online speech in the name of safety and integrity.
Experts suggest that in the absence of a comprehensive AI law or data protection framework, piecemeal amendments like these risk turning intermediaries into compliance departments of the state. MeitY’s intent to ensure transparency is valid, but its methods echo an older instinct—to control rather than to govern. If enacted in its current form, the rules could mark not the dawn of responsible AI oversight, but the slow bureaucratisation of India’s digital space.