In July 2024, influencer Aanvi Kamdar died after slipping into a gorge near Maharashtra’s Kumbhe waterfall while filming an Instagram reel. Just months later, in December 2024, a 20-year-old man in Kerala was struck and killed by a speeding car while recording reels on a coastal road.
In 2025, three teenagers drowned in Madhya Pradesh’s Kopra River while attempting to shoot videos for social media. That same year, Navi Mumbai police detained a group of youths after a viral clip showed them performing dangerous stunts on a moving auto-rickshaw. Every day, new videos surface of youths courting danger - from bike stunts and standing on moving SUVs to lying on active railway tracks and ‘gliding’ from moving trains - each a split second away from tragedy.
In August 2025, 22-year-old YouTuber Sagar Tudu was swept away at Duduma waterfall while attempting to film near the rapids. His desperate final moments, stranded on a rock before being pulled under by the current, were caught on video and circulated widely across social media.
These tragedies echo global concerns: in 2022, TikTok was forced to block searches around the deadly “blackout challenge” after reports linked the trend to multiple child deaths, highlighting how platforms often react only after risks escalate.
These incidents spotlight a troubling pattern: platforms benefit from the relentless engagement such risky content generates, but their interventions often come late - after lives are lost or law enforcement steps in. The absence of proactive guardrails raises critical questions about accountability, safety mechanisms and whether policies are tailored to local realities.
Storyboard18 reached out to two of the world’s most influential social media platforms - Instagram and YouTube - with 40 detailed questions on how they approach social media responsibility in India. Neither responded.
The questions covered a wide range of issues. They included how algorithms are designed to detect, deprioritize and moderate risky or harmful content such as dangerous stunts, challenges and self-harm videos. Other questions asked whether platforms balance engagement with user well-being, and how quickly flagged harmful content is removed in India. Storyboard18 also sought clarity on whether under-18 users are algorithmically shielded from high-risk content, and whether content moderators overseeing Indian material are fluent in regional languages and cultural nuances.
Additional queries focused on whether companies conduct India-specific risk assessments, create differentiated design protocols for teenage accounts, or consider mechanisms like delaying virality of teen-generated content to allow for review. Storyboard18 also asked if parental control dashboards are available for Indian families, whether authorities are notified when high-risk content is found on a minor’s feed, and if platforms initiate reviews after incidents that involve injury or loss of life.
Further questions looked at compliance with Indian laws and IT Rules 2021, as well as whether platforms accept independent audits, publish India-specific transparency reports, or establish advisory boards comprising psychologists, educators and parents. Storyboard18 also asked about partnerships with schools and NGOs for digital literacy campaigns and about whether creator codes of conduct penalize influencers who promote dangerous behavior.
The legal framework around such risks is evolving. Under the Bharatiya Nyaya Sanhita (BNS), rash and negligent acts that endanger human life, including dangerous stunts for content, fall under multiple sections: S281 (rash driving), S282 (causing hurt by rash or negligent act), and S283 (causing grievous hurt). If such stunts cause public nuisance or obstruction, S286 applies. Moreover, if influencers encourage followers to imitate them, BNS S111 on abetment extends liability to them.
Prashant Mali, advocate, explains that legal risk is tied to the very psychology of virality. “The likes add to the noradrenaline high, and a failure to get that forces them to walk on the feather-thin edge again and again till they get the huge highs. The feedback loops reinforce it,” he says.
"If they can monetize attention, they can also moderate danger. With great reach comes great responsibility.”
On the platform side, the IT Act, 2000 provides intermediaries with safe harbour, shielding them from liability unless they knowingly host unlawful content or ignore takedown notices. The IT Rules, 2021 go further: intermediaries are obligated not to promote content endangering life, while Significant Social Media Intermediaries like YouTube and Instagram are required to ensure algorithmic transparency and risk mitigation. Courts too have stepped in, as in the Madras High Court’s interim order banning TikTok downloads after a spate of dangerous content cases.
Educators and psychologists say this silence matters because algorithms thrive on engagement, and risky content often drives it. In the words of cyber psychologist Nirali Bhatia, “If risky content is getting engagement, platforms should not amplify it further. That kind of accountability is essential.”
Dr Ted Mockrish, Head of School, Canadian International School, Bangalore, recalls how early warnings were ignored. “The destructive influence of social media is absolutely real and companies absolutely bear responsibility for what is on their platforms,” he says, pointing out that the industry has long known the risks.
Kanak Gupta, Group Director, Seth M.R. Jaipuria Schools, underlines how risk-taking has entered everyday student life. “The danger is not just in doing the stunt, but in feeding the culture where risk is rewarded with views,” he observes.
Experts also argue that platforms already have the tools to intervene. Gupta puts it bluntly: “Platforms already run on algorithms that can predict what we will click next. If they can monetize attention, they can also moderate danger. With great reach comes great responsibility.”
India is one of the largest and youngest markets for these platforms. With adolescents relying heavily on digital spaces for validation and connection, the influence of algorithms, recommendations and virality is significant.
While the questions remain unanswered, the issues they raise are central to the wider debate on platform accountability and user safety in India.
What You Can Do: Practical Steps for Safer Social Media
For Young Users
Pause before posting: Ask yourself - would I still do this if nobody was watching?
Choose safer ways to stand out: Showcase creativity, humor, or talent instead of risky stunts.
Curate your feed: Follow accounts that inspire, educate, or entertain positively. Unfollow or mute pages that glorify dangerous trends.
Check the source: If a challenge looks risky, it probably is. Don’t trust trends just because they’re viral.
For Parents and Families
Open conversations early: Talk about self-worth, validation, and peer pressure before children start using social media.
Model digital responsibility: Kids copy what they see - show them mindful sharing in your own habits.
Set healthy boundaries: Delay access to platforms where possible, and encourage offline hobbies that build confidence.
Be approachable, not just watchful: Teens are more likely to share concerns if they don’t fear punishment.
For Schools and Educators
Integrate digital literacy: Teach students how algorithms work and how online behavior shapes mental health.
Create alternative stages: Give students offline platforms—debates, art exhibitions, performances—where they can shine.
Collaborate with experts: Invite mental health professionals and cyber psychologists for awareness workshops.
For Platforms and Policymakers
Algorithmic guardrails: Stop amplifying content that glorifies danger.
Age-gating enforcement: Ensure stronger checks on underage users.
Transparent reporting: Regularly disclose how much harmful content is removed and how algorithms are being adjusted.
Support cultural shifts: Highlight and reward safe, creative challenges to reset what “going viral” looks like.
Social media doesn’t have to be a stage for reckless validation. By combining personal responsibility, family guidance, institutional education and platform accountability, we can build a culture where being seen online doesn’t come at the cost of being safe offline.
Note to readers:
In an always-on world, the way we use social media shapes not just our lives but the safety and wellbeing of those around us. 'Social Media, Responsibly' is our commitment to raising awareness about the risks of online recklessness, from dangerous viral trends to the unseen mental and behavioural impacts on young people. Through stories, conversations and expert insights, we aim to empower individuals to think before they post, to pause before they share, and to remember that no moment of online validation is worth risking safety. But the responsibility does not lie with individuals alone. Social media platforms must also be accountable for the environments they create, ensuring their tools and algorithms prioritise user safety over virality and profit. It’s time to build a culture where being social means being responsible - together.