The Student Cyber Resilience Education and Empowerment Nationwide (SCREEN) Survey report on youth digital wellbeing in India has found that over one-quarter of respondents (27.9%) encountered disturbing, violent, or sexual content online without actively searching for it, underscoring the policy challenge posed by algorithm-driven feeds and unsolicited content pathways. The report emphasises that these encounters are not the result of deliberate seeking; they occur instead through algorithmic recommendations, content shared by contacts, material that appears while browsing, or unsolicited messages.
The finding places platform design and content delivery systems at the centre of the youth safety debate. When harmful material reaches young users passively, risk management shifts away from individual choice and toward the architecture of recommendation systems, default settings, and moderation controls. The report’s focus on passive exposure suggests a need for policy frameworks that evaluate not just what content exists online, but how it is distributed to young people who did not request it.
The report identifies a clear age pattern. Exposure to disturbing content peaks in mid-adolescence, rising from 10.4% among 11–13-year-olds to 24.6% among 14–16-year-olds, before declining to 17.7% among 17–18-year-olds. Vicarious exposure, meaning seeing friends or family encounter such content, follows a similar curve: 11% at ages 11–13, 20.5% at ages 14–16, and 15% at ages 17–18. The report interprets the mid-adolescent spike as potentially driven by heightened exploratory browsing, with older adolescents developing stronger filtering habits or experiencing normalisation effects that change how they label disturbing material.
Geography intensifies the picture. The report finds a substantial urban-rural gap in passive exposure: 36.7% in metros, 30.8% in towns, and 14.3% in rural areas, meaning metro residents report more than 2.5 times the exposure rate of rural respondents. The report links this disparity to higher internet engagement in metros and to algorithmic dynamics that surface more content to highly active users, suggesting that risk is structurally shaped by time spent online and by levels of platform interaction.
For policymakers, the findings point to the limits of narrow platform-by-platform approaches. The report argues for regulatory frameworks that address the full range of online risks rather than focusing only on content moderation. In the report’s framing, content exposure is one risk category within a broader ecosystem where contact, conduct, and commerce risks interact.