The Ministry of Electronics and Information Technology (MeitY) has released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, aimed at increasing accountability for AI-generated and synthetically modified content on social media. The proposed Information Technology Amendment Rules, 2025, seek to establish a transparent and traceable digital ecosystem where platforms must label, verify, and tag AI-created material across formats.
The draft defines “synthetically generated information” as any content created, modified, or altered using computer resources to appear authentic or true. It mandates that such content be prominently labelled — covering at least 10% of the screen for visuals or the first 10% of the duration for audio — and tagged with permanent metadata identifiers. Platforms with over five million registered users in India, such as Facebook, YouTube, and Snapchat, will be required to verify user declarations and ensure authenticity.
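To put the draft's thresholds in concrete terms, the sketch below (in Python) shows how the proposed minimums might be checked programmatically. It is illustrative only: the constants mirror the figures in the draft, but every function and variable name here is hypothetical, since the rules prescribe outcomes, not an API.

```python
# Illustrative sketch of the draft's proposed thresholds. The figures come
# from the draft amendments; all names and signatures are hypothetical.

VISUAL_LABEL_MIN_FRACTION = 0.10   # label must cover at least 10% of the visual
AUDIO_LABEL_MIN_FRACTION = 0.10    # disclosure must span the first 10% of playback
USER_THRESHOLD_INDIA = 5_000_000   # platforms above this must verify declarations

def visual_label_compliant(label_area_px: int, frame_area_px: int) -> bool:
    """A visual label passes if it covers at least 10% of the frame area."""
    return label_area_px >= VISUAL_LABEL_MIN_FRACTION * frame_area_px

def audio_label_compliant(disclosure_end_s: float, duration_s: float) -> bool:
    """An audio disclosure passes if, starting at 0:00, it runs through
    at least the first 10% of the clip's total duration."""
    return disclosure_end_s >= AUDIO_LABEL_MIN_FRACTION * duration_s

def platform_in_scope(registered_users_india: int) -> bool:
    """Platforms with more than five million registered users in India
    fall under the verification and labelling obligations."""
    return registered_users_india > USER_THRESHOLD_INDIA
```

By this arithmetic, a full-frame 1920×1080 video (2,073,600 pixels) would need a label of at least 207,360 pixels, and a 60-second clip would need its disclosure to run through at least the first six seconds.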
If finalized, these rules could come into force later this year, signalling a major policy turn in India’s digital governance playbook — from reactive takedowns to proactive provenance tracking.
Gowthaman Ragothaman, Founding CEO of Saptharushi, believes this is a long-overdue move toward structural integrity in digital advertising. “This is a must-have requirement,” he says. “Print ads used to carry agent codes; TV ads carry watermarks. Digital must also have some form of provenance tracking. Even if AI-aided campaigns are harmless, they should declare that they’re artificial — not real endorsements or real situations. Disclaimers are a must.”
For the advertising ecosystem, that means creative provenance becomes as critical as creative quality. Agencies that rely on AI for visual enhancement, copy variation, or synthetic voiceovers may need to re-engineer their processes for transparency. In the long run, provenance-first branding could emerge as a new trust metric — much as verified badges once were on social media.
Big Tech’s Neutrality Myth
For Gopa Menon, COO & Co-founder at Theblur, the new framework merely formalizes what’s already true: Big Tech is no longer a neutral pipeline.
“Big Tech hasn’t been purely neutral for some time,” he says. “They already moderate content, enforce community standards, and review ads. What changes is the specificity of the obligation. Now they’re explicitly responsible for verifying and labelling AI content, which makes them more accountable for what flows through their networks.”
The draft rules make it clear: platforms that knowingly allow unlabelled or falsely declared synthetic content will be seen as failing in due diligence under the IT Act. But this also opens a grey area.
“At what point does a platform know something is AI-generated if a user doesn’t declare it?” Menon asks. “This uncertainty might push platforms toward aggressive verification rather than risk penalties, effectively making them gatekeepers of AI authenticity rather than passive conduits.”
The risk of over-compliance, he adds, is real. “When regulations have teeth but lack clarity, companies often default to caution. Platforms might implement blanket restrictions — flagging anything that looks AI-assisted, even if it’s just a harmless logo design or colour-corrected product photo.”
For brands, this could mean that even AI-assisted creative work faces scrutiny.
“The 10% screen area requirement for labelling visuals is quite prominent and might discourage advertisers from using any AI tools, even when transparency isn’t a real issue,” Menon warns. “Legitimate innovation could get stifled because the compliance burden makes it easier to say ‘no AI at all.’ Much depends on how MeitY finalizes these rules after the consultation period and whether they clarify what’s harmful versus acceptable AI use.”
A New Phase of Digital Accountability
Akshay Mathur, Founder & CEO, Unpromptd, sees this as part of India’s emerging provenance-first digital framework. “Platforms aren’t just responsible for taking down harmful content anymore — they’re being asked to prove where it originated,” he says. “The idea of labelling AI-generated visuals and audio at the source is a clear signal that the government wants transparency built into the system. It’s the first step toward a broader authenticity regime, one that could eventually apply to all forms of digital content, not just AI.”
Mathur notes that India’s regulatory ambition now exceeds that of several developed markets. “The compliance bar in India is getting higher than in many other regions,” he explains. “The new rules lay out specific thresholds, clear labelling standards, and stronger liability clauses. Platforms can no longer rely on their global playbooks. They’ll need dedicated compliance setups, local engineering support, and proactive engagement with policymakers. In some ways, India might become the test market for the next generation of AI accountability frameworks — influencing how global platforms approach regulation elsewhere.”
That positioning — of India as both a regulatory innovator and a sandbox for global governance — could reshape how Big Tech prioritizes its regional compliance operations.
Liability, Editorial Control, and Legal Grey Zones
Sonam Chandwani, Managing Partner at KS Legal & Associates, cautions that the amendments could reshape the very definition of an intermediary.
“A failure to label AI-generated content, even inadvertently, could reasonably be construed as a lapse in due diligence,” she says. “Since safe harbour protection under Section 79 is contingent upon strict adherence to such obligations, non-compliance may expose platforms to direct liability.”
She argues that this blurs the long-standing distinction between platform and publisher.
“Once an intermediary is required to verify the authenticity or origin of content, it moves closer to exercising a form of editorial control, which historically negates the protection intended for neutral facilitators.”
Operationally, the challenge could be overwhelming. “If enforcement remains grievance-driven, the compliance burden will be immense,” Chandwani adds. “Considering the exponential volume of AI-generated material, platforms may struggle to maintain consistent labelling and verification standards. The result could be an uneven regulatory landscape where the intent of accountability translates into disproportionate operational and legal exposure.”
Ultimately, she says, India’s attempt to promote transparency might collide with concerns over feasibility and freedom of expression.
“While the amendments aim to promote responsible digital governance, they significantly recalibrate the intermediary liability framework and create potential tension between feasibility, compliance, and free expression.”
From Compliance to Creative Integrity
Rajiv Dingra, Founder and CEO of ReBid, believes these regulations will rewire how digital advertising operates.
“We’ll see the rise of a two-tier monetisation model — verified human-created content and verified AI-assisted content,” he predicts. “Platforms will need to build transparent pipelines that show provenance — not to penalize AI creators, but to assure advertisers that ad adjacency is authentic, ethical, and compliant. Creators using AI responsibly could actually command premium placements if disclosure drives trust.”
That shift, Dingra says, transforms platforms into custodians of creative integrity. “Once labelling becomes mandatory, Big Tech will evolve from neutral marketplaces into AI authenticity regulators. Platforms will be expected to audit creative origins and enforce disclosure norms. This adds a governance layer — effectively turning ad platforms into custodians of creative integrity, not just distributors of impressions.”
But he also anticipates an adjustment period marked by over-compliance. “In the early phase, over-compliance will likely be the default, as algorithms can’t yet judge nuance or intent,” Dingra says. “Platforms may flag or suppress even compliant AI-aided ads to avoid liability. Over time, human-in-the-loop oversight and clearer frameworks will be critical — ensuring that innovation isn’t stifled under the weight of compliance.”
Toward a Trust-by-Design Internet
MeitY’s draft rules are part of a broader push to maintain an “open, safe, trusted, and accountable Internet”, addressing risks from misinformation, impersonation, and election manipulation driven by generative AI. Stakeholders have until November 6, 2025, to submit feedback at itrules.consultation@meity.gov.in.
If implemented thoughtfully, the 2025 amendments could position India at the forefront of AI provenance governance — balancing innovation with integrity. But the industry consensus is clear: intent isn’t enough. Execution, clarity, and collaboration will decide whether these rules build a transparent ecosystem or merely burden it.