As tensions escalated between India and Pakistan following the April 2025 Pahalgam terror attack, a torrent of misinformation flooded social media platforms, outpacing both automated moderation systems and human verification efforts. The digital deluge, experts warn, is testing the limits of platform governance during geopolitical flashpoints.
Platforms including Meta’s WhatsApp, Facebook, and Instagram, as well as Elon Musk’s X (formerly Twitter) and Google’s YouTube, are once again under scrutiny for their handling of sensitive content during times of national crisis. Misinformation, particularly in regional languages and culturally coded formats, is spreading faster than moderators can flag or users can fact-check.
Amid growing concern, the Indian Ministry of Defence issued a rare advisory via its official WhatsApp channel, warning citizens about fake videos circulating online. The Ministry urged users to report such content to #PIBFactCheck, a government-led verification initiative. At the same time, India restricted access to X’s Global Affairs account after ordering the platform to block over 8,000 accounts accused of spreading false or provocative material.
A fragile framework under pressure
India’s current legal and regulatory framework, experts say, is ill-equipped to handle the evolving nature of digital misinformation, especially during periods of heightened security sensitivity. The country’s reliance on a patchwork of outdated laws, sporadic enforcement, and algorithmic moderation has raised alarms among legal and cybersecurity professionals.
“The legal tools we have today are blunt instruments,” said Sonam Chandwani, Managing Partner at KS Legal & Associates. “They’re good at blocking content after the fact, but they do little to prevent or deter the spread of misinformation in real time.”
Central to the government’s authority is the Information Technology Act, 2000, particularly Section 69A, which allows for the blocking of content deemed threatening to national security. Authorities recently invoked the provision to ban 16 Pakistani YouTube channels in the wake of the Pahalgam attack. Meanwhile, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 require platforms to remove government-flagged content within 36 hours and to appoint grievance officers. Yet critics argue these measures lack transparency and may suppress legitimate discourse.
Cybersecurity experts also point to Section 505 of the Indian Penal Code, which criminalizes statements inciting public mischief, as a tool increasingly deployed to counter digital falsehoods. However, its application is often viewed as reactive and politically selective.
A system struggling to keep pace
For misinformation mitigation consultant Sagar Kaul, the larger issue lies in the system's overreliance on automation and its inability to apply local context at scale.
“We’re seeing misinformation spread faster than it can be flagged,” Kaul said. “AI tools misfire. Community flagging gets abused. And harmful content often slips through because no one has the local expertise, or the time, to intervene.”
Kaul emphasized that platforms must reinvest in regional moderation teams and establish escalation protocols tailored for crisis situations. He also called for real-time transparency tools and stronger integration with civil society organizations, journalists, and fact-checkers.
“This chaos is exactly what bad actors count on,” he said. “They know the platforms are slow to respond. They know how to manipulate cultural nuance, humor, and recycled imagery to push propaganda.”
Cross-border threats and domestic gaps
Chandwani also highlighted the cross-border nature of today’s digital threats, particularly misinformation originating from hostile actors in Pakistan. Recent cyberattacks by groups such as the Pakistan Cyber Force targeted Indian defense websites, spreading doctored visuals and fabricated casualty figures.
Yet India’s legal toolkit remains domestically focused. “There’s no clear provision for holding foreign entities accountable,” Chandwani said. “And without international cooperation and robust attribution mechanisms, these campaigns continue unchecked.”
Proposed amendments to India’s IT Rules aim to tighten accountability, but critics argue they fall short of addressing state-sponsored misinformation and AI-driven disinformation warfare.
Debating the role of automation
Not all experts agree that human moderation is the answer. Gowthaman Ragothaman, CEO of Aqilliz, contends that AI holds promise, if deployed responsibly.
“Big tech firms should embed fact-checking algorithms to clearly flag verified content,” he said. “At this scale, human moderation alone is not feasible.”
Others, like educator Rohit Haldankar, advocate for community-driven moderation tools, such as the “Community Notes” feature on X. “This empowers users to collaboratively add context to misleading posts,” he said. “It’s a scalable approach when paired with public education.”
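The mechanism Haldankar points to rewards cross-viewpoint agreement rather than raw vote counts, which is what makes it more resistant to the coordinated flagging abuse Kaul described earlier. The minimal Python sketch below illustrates that "bridging" idea; it is an illustration only, not X's open-sourced production algorithm (which factorizes full rating histories), and the rater clusters, thresholds, and sample data are hypothetical.

```python
# Minimal sketch of "bridging-based" note scoring, in the spirit of
# Community Notes on X. Illustrative only: a note is surfaced when
# raters who usually disagree with each other both find it helpful.

from collections import defaultdict

# Hypothetical raters, each assigned to a coarse "viewpoint cluster"
# inferred elsewhere (e.g., from past rating history) -- assumed given.
RATER_CLUSTER = {"r1": "A", "r2": "A", "r3": "B", "r4": "B"}

def score_notes(ratings, min_per_cluster=1, threshold=0.6):
    """Surface a note only if every rating cluster independently
    rates it helpful on average -- cross-viewpoint agreement."""
    per_note = defaultdict(lambda: defaultdict(list))
    for rater, note, helpful in ratings:
        cluster = RATER_CLUSTER[rater]
        per_note[note][cluster].append(1.0 if helpful else 0.0)

    surfaced = []
    for note, clusters in per_note.items():
        # Support must span more than one cluster, so a single
        # coordinated group ("brigading") cannot push a note through.
        if len(clusters) < 2:
            continue
        if all(len(v) >= min_per_cluster and sum(v) / len(v) >= threshold
               for v in clusters.values()):
            surfaced.append(note)
    return surfaced

ratings = [
    ("r1", "note_1", True), ("r3", "note_1", True),  # cross-cluster agreement
    ("r1", "note_2", True), ("r2", "note_2", True),  # one cluster only
]
print(score_notes(ratings))  # -> ['note_1']
```

The key design choice is that unanimity within a single group is never sufficient: a note surfaces only when support spans clusters that typically disagree, which is what lets the approach scale without a central moderator.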
But deeper concerns persist about the business models underpinning these platforms. “The volume of user-generated content today is staggering,” said Rahul Vengalil, co-founder of tgthr. “If platforms are serious about curbing misinformation, they need to reconsider how they monetize content, perhaps even halt advertising on sensitive posts entirely.”
Toward a more resilient ecosystem
Industry leaders are calling for a multi-layered approach, one that blends technology with human judgment and contextual awareness.
“Platforms must deploy regionally trained rapid-response teams,” said Gopa Menon, Chief Growth Officer at Successive Digital. “They need to balance speed with accuracy and develop playbooks tailored to crises like the India-Pakistan conflict.”
Some suggest a greater emphasis on verified sourcing and attribution. Marketing professor Ashish Kaul proposes platforms display verification status not only on original posts but also on reposts, using trusted third-party services.
And brand strategist Ambi Parameswaran believes platforms should actively suppress content lacking source attribution, even if that means losing engagement. “Without verified sourcing, posts should not be allowed to go viral,” he said. “It’s the only way to restore credibility.”
A call for reform
With digital information warfare becoming an enduring feature of geopolitical conflict, the current response of legal bans and platform takedowns risks being more symbolic than strategic.
“The government’s interventions may offer short-term relief,” Chandwani said. “But they’re band-aids on a festering wound. What’s needed is a cyber-misinformation law that balances speech with security and fosters collaboration between government, industry, and civil society.”
Until such reforms take shape, observers warn that digital platforms will remain vulnerable to manipulation, and social media will continue to serve as both battlefield and weapon.