One in three pre-teens in India already uses ChatGPT: Report flags urgent AI literacy policy gap

While generative AI tools are now deeply integrated into youth behaviour, the report explicitly warns that existing digital citizenship and safety frameworks do not adequately address the distinct challenges created by AI systems.

By Indrani Bose | Jan 15, 2026 12:37 PM

A new research report, the Student Cyber Resilience Education and Empowerment Nationwide (SCREEN) Survey 2026, has revealed that generative artificial intelligence is rapidly becoming embedded in the daily lives of young Indians, with 38.8% of all respondents reporting use of ChatGPT. The highest uptake was recorded among children aged 11–13 at 33%, followed closely by young adults aged 25–30 at 31%.

The findings expose a widening gap between technology adoption and public policy preparedness. While generative AI tools are now deeply integrated into youth behaviour, the report explicitly warns that existing digital citizenship and safety frameworks do not adequately address the distinct challenges created by AI systems.

The report states that 849 respondents already maintain ChatGPT accounts, demonstrating how quickly generative AI has moved from experimental technology to mainstream utility. Yet adoption does not follow a simple age gradient. Mid-adolescent groups show lower reported use, with only 9.8% of 17–18-year-olds using the tool, while uptake among 19–21 and 22–25-year-olds is moderate, at 20.6% and 22.2% respectively. The pattern suggests that exposure, access, educational context and curiosity, rather than age alone, shape AI adoption.

Crucially, the report identifies this phenomenon as a policy challenge. It argues that AI-specific digital literacy must become a policy priority, because the risks, privacy implications, and trust dynamics of interacting with AI systems differ fundamentally from traditional social media use. Young users are already engaging with generative AI long before regulatory and educational institutions have developed comprehensive frameworks for how such tools should be understood, evaluated, and safely used.

The report highlights the core policy problem: current digital citizenship curricula may not address whether AI outputs can be trusted, how AI training data influences results, and the privacy implications of AI queries. These are not theoretical risks. They are immediate realities for children and young adults already using generative systems as learning aids, information sources, and creative tools.

This policy gap becomes more concerning when placed within the broader platform ecosystem described by the report. Youth digital life is increasingly complex, fragmented across platforms with different risk profiles, safety tools, and behavioural norms. Introducing AI into this environment without corresponding regulatory and educational safeguards compounds existing vulnerabilities.

The report’s findings point toward the need for a distinct category of AI literacy within national digital policy, rather than treating AI merely as another software tool. Unlike conventional platforms, generative AI systems actively produce content, interpret prompts, and simulate authority, creating unique trust and dependency dynamics, particularly for young users still developing cognitive and social judgment.

By documenting the scale and speed of AI adoption among minors, the report frames the issue as one of policy urgency rather than future planning. The challenge now facing Indian policymakers is not whether generative AI will influence youth behaviour, but whether governance structures can evolve quickly enough to provide meaningful guardrails around technology that is already deeply embedded in young people’s lives.

First Published on Jan 15, 2026 12:33 PM