Note to readers:
In an always-on world, the way we use social media shapes not just our lives but the safety and wellbeing of those around us. 'Social Media, Responsibly' is our commitment to raising awareness about the risks of online recklessness, from dangerous viral trends to the unseen mental and behavioural impacts on young people. Through stories, conversations and expert insights, we aim to empower individuals to think before they post, to pause before they share, and to remember that no moment of online validation is worth risking safety. But the responsibility does not lie with individuals alone. Social media platforms must also be accountable for the environments they create, ensuring their tools and algorithms prioritise user safety over virality and profit. It’s time to build a culture where being social means being responsible, together.
As AI companions become common in India—from homework helpers to emotional confidants—mental health professionals are beginning to ask whether these tools are helping children cope or quietly reshaping how they relate to the real world.
Dr. Harish Shetty, a psychiatrist, says the growing intimacy people feel toward AI is often misunderstood. “AI and ChatGPT have no emotions,” he explains, “but many develop intimacy towards them. The boxes don’t fight, are not harsh, and provide some answer for the queries asked.” The contrast with human interaction, he adds, is stark. “Humans respond with feelings—and also anger. Those using these platforms are unaware that they do not operate via feelings. Many opt for them because it’s free, easily available, and always responsive.”
Dr. Shetty notes that shy, reticent, and introverted individuals are especially drawn to AI’s constant availability. “AI always responds back and that is a big positive. Many claim benefit and solace, but the seriously ill can get trapped with disastrous consequences,” he warns. Once users return to in-person support, they often realize how different it feels—“the tonality, the decibels change; AI then appears like a robot, not human.” For him, the concern ties into a larger reality: “The loneliness epidemic is huge, and AI attracts a lot of loners.”
Divya David, MPhil, a mental health coach, adds that the emotional relationship children are building with AI goes far beyond convenience. “Children today are growing up in a world where artificial intelligence listens, responds, and even comforts,” she says. “This raises an important question: Are these tools filling emotional voids—or deepening them?”
David explains that for many children—especially those struggling with loneliness, anxiety, or feeling misunderstood—AI can appear as a comforting presence. “It responds patiently, without criticism, creating the illusion of being heard and cared for. The child begins to feel a sense of belonging—but what they receive is mirroring, not genuine empathy.”
She notes that while AI can teach emotional vocabulary and reflection, it lacks sensory awareness. “It can simulate understanding but cannot embody it,” she says. The predictability of AI may soothe, but it can also anchor children in an imaginary world of perfect relationships—ones that don’t test boundaries, don’t demand patience, and never reject. “Used safely, AI can offer temporary relief,” David emphasizes. “But for a child’s holistic growth, real human connection remains essential.”
The danger, she cautions, lies in dependency. “What begins as comfort can evolve into emotional withdrawal. The predictability and constant affirmation of AI can feel addictive. Over time, children may start believing that comfort without connection is real, and when real life doesn’t match that ideal, they may face confusion or emotional breakdowns.”
David believes the solution lies in responsible design and supervision. “AI can play a valuable role in emotional wellness education—but it must supplement, not replace, human relationships.”
Nirali Bhatia, a cyber psychologist, points out that AI companions have already become “the new invisible friends of the new generation.” Their allure, she says, lies in being “always available, never judgmental, and highly adaptive.” Yet this perfection is deceptive. “For growing minds still learning emotional regulation and social understanding, constant validation from AI can make real connections feel unnecessary,” she explains. “The entire bond becomes one-sided and emotionally skewed.”
Bhatia warns that while AI can boost confidence or ease loneliness, it may also deepen emotional voids in the long run. “A child who learns to confide in a bot instead of a person will struggle later to navigate real-world emotions,” she says. Instead, she believes AI should serve as a learning aid or therapeutic bridge. “If a child can’t express themselves, AI can help them name their emotions before they take it to real therapy or to a professional who can help them navigate.”
For parents and educators, she urges deeper engagement. “We need to go beyond asking what children are doing online and instead ask how they are navigating the world, who they open up to, and why they might be seeking validation.”
Her parting question reframes the entire debate: “The key question is not whether AI is good or bad,” Bhatia says. “It’s whether it’s becoming your child’s primary emotional companion or simply a tool to help them connect better with the real world.”
What You Can Do: Practical Steps for Safer AI Use
For Parents and Families
Open conversations early: Talk about self-worth, validation, and peer pressure before children start using social media, AI tools and chatbots.
Model digital responsibility: Kids copy what they see—show them mindful sharing in your own habits.
Set healthy boundaries: Delay access to platforms where possible, and encourage offline hobbies that build confidence.
Be approachable, not just watchful: Kids are more likely to share concerns if they don’t fear punishment.
For Schools and Educators
Integrate digital literacy: Teach students how algorithms work and how online behaviour shapes mental health.
Create alternative stages: Give students offline platforms—debates, art exhibitions, performances—where they can shine.
Collaborate with experts: Invite mental health professionals and cyber psychologists for awareness workshops.
For Platforms and Policymakers
Enforce age gating: Strengthen age-verification checks so underage users cannot slip through.
Raise awareness and report transparently: Run regular campaigns about harmful online behaviours, disclose the impact and removal of harmful content, and explain how AI systems are improving and how algorithms are being adjusted.
Support cultural shifts: Highlight and reward safe, creative challenges to reset expectations of what healthy online behaviour and AI use look like.
Social media doesn’t have to be a stage for reckless validation. By combining personal responsibility, family guidance, institutional education and platform accountability, we can build a culture where being seen online doesn’t come at the cost of being safe offline.