MeitY’s draft rules promise safer digital space for brands in the age of deepfakes

The government’s proposed amendments to the IT Rules seek to make AI-generated content traceable and transparent, offering brands a chance to rebuild trust, curb misinformation, and protect reputations amid India’s synthetic media surge.

By Akanksha Nagar | Oct 23, 2025 8:35 AM

When the Ministry of Electronics and Information Technology (MeitY) released its draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, it marked the clearest signal yet that India intends to take on synthetic media and deepfakes head-on.

Under the new proposals, any content created, altered, or modified by computer resources to appear real will fall under the definition of “synthetically generated information.” Social media intermediaries, particularly the larger ones with over 5 million registered Indian users, must now label such content, embed metadata identifiers that cannot be tampered with, and verify user declarations before publication.

In practical terms, platforms like Facebook, YouTube, Instagram, and X would need to ensure that labels on AI-generated visuals cover at least 10% of the screen area, or, for audio, appear within the initial 10% of the clip’s duration, clearly signalling to users that the content is synthetic. They must also deploy automated tools to detect undeclared AI content, failing which they risk losing “safe harbour” protection under the IT Act for due diligence lapses.
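
To make the threshold concrete, here is a minimal, hypothetical sketch of how a platform might check whether a disclosure label satisfies a 10% visual-coverage rule. The function, dimensions, and example values are illustrative assumptions for this article, not anything specified in the draft.

```python
# Minimal sketch (hypothetical): does a disclosure label cover at least
# 10% of a video frame, under one reading of the draft's labelling rule?
from dataclasses import dataclass

MIN_COVERAGE = 0.10  # the 10% visual-coverage threshold described in the draft


@dataclass
class Region:
    width_px: int
    height_px: int

    @property
    def area(self) -> int:
        return self.width_px * self.height_px


def label_meets_threshold(frame: Region, label: Region) -> bool:
    """Return True if the label occupies at least 10% of the frame area."""
    return frame.area > 0 and (label.area / frame.area) >= MIN_COVERAGE


# Example: a full-width banner 120 px tall on a 1080p frame covers ~11%.
print(label_meets_threshold(Region(1920, 1080), Region(1920, 120)))  # True
```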

For the Indian digital ecosystem, home to the world’s largest social media user base and rapidly growing AI adoption, the draft rules could be a turning point.

For brands and advertisers, they might represent something even more vital: the restoration of trust in what consumers see, hear, and believe online.

A New Trust Framework for Digital India

“Synthetic media and deepfakes are going to percolate to every content stack,” warns Shubhranshu Singh, business leader, cultural strategist, and board member at the Effie LIONS Foundation. “For brands, risk is the main issue. ‘Who said it’ will become more important than ‘what is said.’”

Singh believes the amendments are “a timely and necessary step toward restoring trust in India’s fast-digitising information ecosystem.” With the lines between real and fabricated content blurring fast, a clear regulatory perimeter is crucial, not to stifle innovation but to safeguard authenticity.

“India’s reality is far more heterogeneous,” he notes. “Balancing consumer protection, national security, and the right to innovate will require calibrated policy. If implemented with clarity and industry collaboration, these rules could set a new benchmark for brand and citizen safety in the world’s largest open internet market.”

Brands See a Shield Against Deepfakes

In an era when viral misinformation can obliterate brand reputation overnight, companies welcome the move as a shield against misuse.

“The rise of AI-generated misinformation and deepfakes poses an undeniable challenge,” says Abhishek Chakraborty, Head of Brand, Oriflame India. “For consumer-centric brands like ours, where trust and authenticity are foundational, the potential misuse of synthetic media threatens more than reputation; it impacts consumer confidence itself.”

Chakraborty sees MeitY’s initiative as a necessary step to “strengthen digital safeguards and create clearer accountability,” adding that it will help foster a more reliable online space where creativity, technology, and ethics coexist responsibly.

Chandan Sharma, General Manager, Digital Media, Adani Group, calls the move timely and necessary, pointing out that the flood of misleading content is not only harmful to consumers but also undermines the credibility of brands, individuals, and even countries.

"Requiring verification and clear labeling of synthetic media helps restore consumer confidence in a world overwhelmed by algorithmic creativity. Responsible brands that already disclose their use of AI in content creation will now find themselves better positioned by new compliance norms and public expectations, since they already have an ecosystem working for them, unlike others."

The move, according to him, will encourage ethical storytelling, allowing brands to embrace authenticity while using AI responsibly, and will lower misinformation risks, especially around deepfakes, impersonation, and altered brand narratives, a problem serious enough that several celebrities have already taken it to court.

However, there are also operational and creative challenges.

The 10% labelling requirement for visuals and audio, if strictly enforced, could disrupt campaign aesthetics and the user experience. Smaller brands or creators with limited design resources may struggle to comply without losing engagement. Global campaigns will also require regional adjustments, which come at a cost, as compliance standards differ across regions. The requirement, in short, cuts both ways.

"Marketers, platforms, and policymakers need to work closely together to ensure that the rules do not limit creativity. The real chance lies in building an environment where AI is recognized instead of concealed," Sharma suggests.

The Era of AI Accountability

For Divye Agarwal, Co-founder of Bingelabs, the amendments are a watershed moment: “This signals the beginning of an AI accountability era in Indian digital media.”

He argues that for too long, the internet rewarded “virality over veracity.” With these rules, traceability and responsibility finally enter the conversation. But he cautions that “regulation must not outpace understanding.”

“AI now touches almost every piece of content, from filters to captions. Drawing the line between ‘synthetic’ and ‘AI-aided’ will be tricky,” Agarwal says. Instead of focusing only on labels, he argues, India should use this opportunity to build a national framework for watermarking infrastructure, verification APIs, and interoperable metadata systems.
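
As a purely illustrative sketch of what an interoperable metadata record inside such a framework might look like, consider the following; the field names and values are assumptions for this example, not drawn from the draft rules or any existing standard.

```python
# Hypothetical shape of an interoperable provenance record that a
# verification API could exchange; field names are illustrative only.
import json
from dataclasses import dataclass, asdict


@dataclass
class ProvenanceRecord:
    content_id: str        # platform-assigned identifier for the post or asset
    generator: str         # tool or model declared to have produced the content
    is_synthetic: bool     # the uploader's declaration
    watermark_scheme: str  # identifier of the watermarking method, if any
    declared_at: str       # ISO-8601 timestamp of the declaration


record = ProvenanceRecord(
    content_id="asset-001",
    generator="example-image-model",
    is_synthetic=True,
    watermark_scheme="vendor-watermark-v1",
    declared_at="2025-10-23T08:35:00+05:30",
)

# Serialised as JSON, so different platforms could read the same record.
print(json.dumps(asdict(record), indent=2))
```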

“If done right,” he says, “India will define the world’s first scalable model for authenticity in the age of algorithms.”

Legal Experts: A Three-Layer Shield

From a legal perspective, the amendments create a “three-layered shield” against misinformation, says Siddharth Chandrashekhar, Advocate at the Bombay High Court.

“First, AI tools themselves must watermark at least 10% of the image or audio,” he explains. “Second, platforms must demand user declarations about synthetic content. Third, they must deploy automated verification tools to catch evasions.”

This cascading responsibility, from AI generators to platforms to users, creates multiple checkpoints for synthetic content detection. “When everything synthetic has an ‘AI-made’ label, genuine brand content automatically stands out,” Chandrashekhar says. “The playing field has finally levelled.”
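
A rough sketch of how those three checkpoints might compose in code is shown below. Every function here is a stub invented for illustration; none of it comes from the draft or any platform’s actual moderation pipeline.

```python
# Hypothetical cascade of the three layers Chandrashekhar describes:
# watermark check, user declaration, then automated detection as a fallback.

def has_valid_watermark(content: bytes) -> bool:
    # Layer 1: look for an embedded "AI-made" marker (stubbed for illustration).
    return b"AI-WATERMARK" in content


def user_declared_synthetic(declaration: dict) -> bool:
    # Layer 2: honour the uploader's own declaration.
    return bool(declaration.get("is_synthetic", False))


def detector_score(content: bytes) -> float:
    # Layer 3: automated detector; a real system would call an ML model here.
    return 0.0


def should_label_as_synthetic(content: bytes, declaration: dict,
                              threshold: float = 0.9) -> bool:
    return (
        has_valid_watermark(content)
        or user_declared_synthetic(declaration)
        or detector_score(content) >= threshold
    )


print(should_label_as_synthetic(b"...AI-WATERMARK...", {}))              # True: caught at layer 1
print(should_label_as_synthetic(b"plain clip", {"is_synthetic": True}))  # True: caught at layer 2
```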

However, he raises critical questions: Who decides what’s synthetic enough? Can watermarks be gamed? Do global platforms need India-specific compliance systems? And perhaps most importantly: Can MeitY police the entire internet, especially encrypted platforms like WhatsApp or Telegram where most fake content spreads?

A Step Toward Responsible AI Governance

For Dinesh Jotwani, Co-Managing Partner at Jotwani Associates, the draft amendments represent a proactive shift from enforcement to prevention.

“The proliferation of generative AI has created tools capable of manipulating truth at scale,” he says. “These amendments recognise the need for traceability and responsible AI deployment.”

Jotwani, however, urges caution. While endorsing the amendments’ intent, he warns that definitions must remain clear and procedural safeguards strong to prevent overreach or arbitrary takedowns. “Regulation should enhance accountability without chilling legitimate expression, satire, or investigative journalism,” he adds.

He calls for ongoing stakeholder consultation to ensure the framework remains “technically sound, rights-respecting, and practically enforceable.”

Balancing Innovation and Accountability

Sonam Chandwani, Advocate at KS Legal, believes the amendments mark a “critical shift” in how India regulates digital intermediaries.

“From a legal standpoint, these changes underscore a move from reactive compliance to proactive due diligence,” she says. “For brands, this fosters a safer digital environment, curbing impersonation, deceptive advertising, and reputational risks from deepfakes.”

Yet she points out that the expanded intermediary obligations blur the line between platform and publisher, potentially jeopardising the safe harbour protections under Section 79 of the IT Act.

“The challenge,” she adds, “is ensuring that platforms act responsibly without over-regulating or stifling creativity. India’s digital governance framework is moving toward authenticity-driven accountability, but it must do so carefully.”

Jaspreet Bindra, Co-founder, AI&Beyond, notes that while the draft could create a safer, more authentic online environment, one where engagement rests on verified content rather than manipulated narratives, implementation will be key: the rules must curb harmful misuse without stifling innovation.

Rohit Kumar, Founding Partner at The Quantum Hub (TQH), believes that while watermarking and labelling are good first steps, India’s response must go further.

“The definition of ‘synthetic information’ is extremely broad; it could even cover benign AI-assisted edits,” he notes. “Over-labelling could dilute the very effectiveness of the warnings.”

He argues that AI detection tools remain unreliable, and that policy must address “the spread and speed” of misinformation. “A ‘review-before-amplification’ mechanism, where potentially harmful content is temporarily slowed until verified, could be vital,” he says.
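
What such a gate might look like, in the simplest possible terms, is sketched below. This is an assumption-laden illustration of Kumar’s idea; the draft rules do not prescribe any mechanism of this kind, and all names here are hypothetical.

```python
# Illustrative "review-before-amplification" gate: content flagged as
# potentially synthetic is held out of recommendation feeds until cleared.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Post:
    post_id: str
    flagged_synthetic: bool
    verified: bool = False


@dataclass
class AmplificationGate:
    held: List[Post] = field(default_factory=list)

    def submit(self, post: Post) -> bool:
        """Return True if the post may be amplified immediately."""
        if post.flagged_synthetic and not post.verified:
            self.held.append(post)  # slow it down until reviewed
            return False
        return True

    def clear(self, post_id: str) -> None:
        """Mark a held post as verified and release it from the queue."""
        for post in self.held:
            if post.post_id == post_id:
                post.verified = True
        self.held = [p for p in self.held if not p.verified]


gate = AmplificationGate()
print(gate.submit(Post("p1", flagged_synthetic=True)))  # False: held for review
gate.clear("p1")
print(len(gate.held))                                   # 0: released after verification
```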

As the EU’s AI Act and the White House’s voluntary safety commitments set precedents globally, India’s move stands out for its scale and ambition. With over 800 million internet users and near-saturation smartphone coverage, India could soon become the test bed for the world’s largest synthetic media governance model.

But implementation will be everything. If these amendments are enforced transparently and collaboratively, with clear definitions, stakeholder participation, and scalable infrastructure, India could redefine brand safety, digital trust, and authenticity in the AI age.

As Shubhranshu Singh aptly puts it: “If verified truth can travel faster than viral falsehood, we’ll finally have a digital space worthy of its promise.”
