Experts cite feasibility, free speech challenges in MeitY’s Draft Rules on synthetic media

As MeitY proposes mandatory labelling, metadata tagging, and user verification for AI-generated and synthetic content, industry experts and legal commentators warn of steep compliance burdens, technical hurdles, and a potential chilling effect on satire and creativity.

By Imran Fazal | Oct 22, 2025 2:15 PM

The Ministry of Electronics and Information Technology (MeitY) has floated draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, in a bid to rein in the growing menace of AI-generated misinformation, deepfakes, and synthetic media.

But while the government pitches the move as a step towards ensuring an “open, safe, trusted, and accountable internet,” legal experts and digital policy stakeholders warn that the proposed framework may pose steep compliance and free speech challenges for social media platforms.

The draft Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 introduce a set of sweeping obligations for online intermediaries—particularly large social media platforms such as Facebook, YouTube, and Snapchat—to identify, label, and verify synthetic or AI-generated content. Once finalised, the rules are expected to take effect later this year.

Under the proposal, “synthetically generated information” is defined as any content created, modified, or altered using computer resources to appear authentic or true. Platforms will be required to prominently label such content, embed permanent metadata identifiers, and ensure these cannot be removed or tampered with.
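The draft does not prescribe a metadata format, though the obligation echoes emerging provenance standards such as C2PA. Purely as an illustrative sketch, the snippet below (Python with the Pillow library; the tag names and file names are hypothetical, not drawn from the draft) shows how a provenance identifier might be written into a PNG file's metadata. Plain text chunks like these are trivially strippable, so a genuinely tamper-resistant implementation would need cryptographically signed manifests rather than bare tags.

```python
# Illustrative sketch only: embedding a provenance tag in PNG metadata.
# Tag names and file names are hypothetical; the draft rules do not
# specify a format, and bare text chunks are NOT tamper-proof --
# provenance standards such as C2PA use cryptographically signed manifests.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("synthetic_image.png")

meta = PngInfo()
meta.add_text("ai-generated", "true")           # hypothetical tag
meta.add_text("generator", "example-model-v1")  # hypothetical identifier

img.save("synthetic_image_tagged.png", pnginfo=meta)
```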

The rules go further in prescribing how the labels must appear: for visuals, a visible marker must cover at least 10% of the screen area, while for audio content, the first 10% of the duration must carry an audible or textual disclaimer.
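To make the 10% figure concrete: one straightforward reading for still images is a full-width banner whose height is 10% of the frame, which covers exactly 10% of the area. The sketch below (again Python with Pillow; the label wording, colour, and placement are our own assumptions, since the draft prescribes only the proportion) overlays such a banner.

```python
# Illustrative sketch only: a full-width banner occupying 10% of the
# image area, one possible reading of the draft's "10% of screen area"
# requirement. Label text, colour, and placement are assumptions.
from PIL import Image, ImageDraw

img = Image.open("synthetic_image.png").convert("RGB")
w, h = img.size

banner_h = int(h * 0.10)  # full width x 10% of height = 10% of area
draw = ImageDraw.Draw(img)
draw.rectangle([(0, h - banner_h), (w, h)], fill="black")
draw.text((10, h - banner_h + banner_h // 4),
          "AI-GENERATED / SYNTHETIC CONTENT", fill="white")

img.save("synthetic_image_labelled.png")
```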

In the case of Significant Social Media Intermediaries (SSMIs)—those with more than five million registered users in India—the verification responsibility deepens. These platforms must authenticate user declarations about synthetic content using automated or technical measures. Failure to do so could amount to a lapse in “due diligence” under the IT Act, potentially costing intermediaries their “safe harbour” immunity from liability for user-generated content.
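The draft leaves these “automated or technical measures” unspecified. One conceivable building block, sketched below under the same hypothetical tagging scheme as above, is simply checking an upload for the provenance tag to corroborate a user's declaration; real deployments would likely combine such provenance checks with detection models and other signals.

```python
# Illustrative sketch only: corroborating a user's declaration by checking
# for the hypothetical provenance tag embedded earlier. The draft does not
# prescribe any specific verification mechanism.
from PIL import Image

def declared_as_synthetic(path: str) -> bool:
    """Return True if the PNG carries the hypothetical 'ai-generated' tag."""
    return Image.open(path).info.get("ai-generated") == "true"

if declared_as_synthetic("upload.png"):
    print("Provenance tag found; apply labelling obligations.")
else:
    print("No tag found; declaration cannot be corroborated this way.")
```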

Platforms that knowingly host unlabelled or falsely declared AI-generated content would be deemed non-compliant. However, the draft clarifies that removing or disabling access to synthetic content following a grievance redressal complaint would not violate existing intermediary liability protections.

Policy Intent vs Practical Hurdles

While MeitY argues that the proposed framework is essential to address the mounting risks of misinformation, impersonation, and election manipulation, several experts have flagged concerns over the implementation feasibility and potential overreach of the rules.

Dhruv Garg, Founding Partner at the Indian Governance and Policy Project (IGAP), noted that the government’s intent to promote transparency in AI-generated content is commendable but warned of the challenges in execution.

“The proposed amendments are a critical development in India’s evolving approach to AI governance,” Garg said. “The rules define synthetically generated information and mandate labelling norms for AI content. However, their success will depend on clear implementation guidance and technological feasibility, particularly for smaller platforms.”

He added that these measures appear to be a response to the recent surge in deepfake controversies, ranging from political misinformation to celebrity impersonations and AI-generated advertisements.

Garg cited IGAP’s recent research paper, Global Legal Responses to Deepfakes: A Regulatory Primer, which compares how countries worldwide are grappling with synthetic media. “Regulation must balance innovation with responsibility,” he said. “Without adequate clarity, compliance could turn into a bureaucratic burden rather than a meaningful safeguard.”

IGAP’s study underlines that India is not alone in confronting the challenges of deepfakes. Countries from Australia to France are introducing laws that attempt to curb the creation and dissemination of manipulated AI content—though through very different methods.

Australia, for instance, criminalised the non-consensual transmission of sexually explicit AI-generated material under its Criminal Code Amendment (Deepfake Sexual Material) Act, 2024, while China has imposed stringent metadata tagging requirements under its Measures for Labeling of AI-Generated Synthetic Content (2025).

The European Union’s AI Act and Digital Services Act together require clear disclosure of deepfakes, except in cases of satire or law enforcement. Meanwhile, France’s SREN Law criminalises algorithmically generated depictions without consent, carrying penalties of up to two years in prison and €45,000 in fines.

Within Asia, Singapore’s Elections (Integrity of Online Advertising) Act, 2024 bans manipulated campaign material, and South Korea’s election laws prohibit deepfake videos during the 90-day pre-election period.

Yet, some countries have opted for restraint. Switzerland, for instance, has declined to pass a dedicated deepfake law, instead relying on existing privacy and data protection frameworks.

Against this backdrop, India’s move to regulate synthetic content under intermediary guidelines places it among jurisdictions pursuing platform accountability rather than criminalisation. But experts caution that the success of such a model will hinge on precise definitions, scalable verification tools, and industry collaboration.

Call for Broader Consultations

Technology and media lawyer Jay Sayta said that while the dangers of AI-generated misinformation are undeniable, the draft’s obligations could risk overregulation if not carefully calibrated.

“Although the perils of AI-generated fake and misleading content are real, MeitY should engage in detailed consultations with social media platforms before finalising the rules,” Sayta told Storyboard18. “Platforms must be given adequate time to build technical capabilities for compliance.”

Sayta also raised concerns about over-censorship, pointing out that in previous regulatory cycles, intermediaries often erred on the side of caution by removing legitimate content.

“The obligation to take down fake or misleading content should not end up stifling creative freedom,” he said. “We’ve seen platforms take down satire, humour, and even journalistic work to avoid legal risk. A balance must be struck between combating misinformation and safeguarding free expression.”

His comments echo a wider sentiment among digital rights advocates who argue that India’s regulatory trend toward pre-emptive compliance risks shrinking online space for dissent and creativity.

MeitY has invited public comments on the draft rules until November 6, 2025, via email. Following stakeholder consultation, the ministry is expected to finalise the text later this year.

The 2025 draft marks the third major amendment to the IT Rules since 2021—after earlier revisions in October 2022 and April 2023—each expanding the regulatory perimeter for digital intermediaries. With this latest iteration, the focus shifts from content moderation to content provenance, underscoring India’s growing concern over AI-enabled misinformation.

However, as experts point out, the real test will lie not in the drafting, but in the execution—whether India can design a framework that deters manipulation without strangling innovation.

“Ultimately, it’s about trust in the digital ecosystem,” Garg summed up. “Regulation must not just punish misuse but also empower responsible innovation.”

