Unsafe code, unsafe kids: Why Instagram’s discovery engine and architecture are liabilities for Indian minors

If safety is enforced only through content moderation or reactive policy enforcement, it will always lose to engagement incentives. For children and teens, safety has to be a first-order design constraint, according to an expert.

By Indrani Bose | Dec 24, 2025 8:55 AM

Note to readers: In an always-on world, the way we use social media shapes not just our lives but the safety and wellbeing of those around us. 'Social Media, Responsibly' is our commitment to raising awareness about the risks of online recklessness, from dangerous viral trends to the unseen mental and behavioural impacts on young people. Through stories, conversations and expert insights, we aim to empower individuals to think before they post, to pause before they share, and to remember that no moment of online validation is worth risking safety. But the responsibility does not lie with individuals alone. Social media platforms must also be accountable for the environments they create, ensuring their tools and algorithms prioritise user safety over virality and profit. It’s time to build a culture where being social means being responsible - together.

In the Netflix series Adolescence, a police officer misreads Instagram comments between two teenagers as benign interaction. His son corrects him. Emojis that appear harmless to adults are, in fact, coded harassment rooted in online subcultures and amplified through recommendation systems.

The scene is brief, but instructive. Harm does not originate from a single post. It emerges from how platforms like Instagram structure discovery, visibility and reach, particularly for minors.

In a widely shared Reddit post from a year ago, one user who had lived in multiple countries described being “genuinely disgusted” by how their Instagram feed changed after moving back to India. Their algorithm, they wrote, “shifted almost instantly” into an “overwhelming amount of content filled with hatred towards women, extreme objectification, and outright bullying.” Even memes, the user said, had become “constantly degrading,” while comment sections were “not funny, not clever, just toxic.”

The user stressed they did not want to leave the platform. They enjoyed recipes, fashion, travel, lighthearted reels and humour that did not harm anyone. “But why does the platform have to be filled with so much hate?” they asked. “Why does it always have to be so vile in this side of Instagram?”

Having experienced Instagram feeds in other countries, the user argued that the Indian version felt uniquely aggressive, raising uncomfortable questions about whether local recommendation systems were amplifying the worst behavioural impulses at scale.

That distinction matters because what feels like a cultural or behavioural problem to users is increasingly being treated by Indian law as a design and governance problem: whether platforms are architected in ways that foreseeably expose children and teenagers to harm.

Akshay Mathur, founder and CEO of Unpromptd, argues that at a systems level, recommendation engines are built to answer one core question: what keeps a user engaged longer. Discovery, velocity and feedback loops are the primary objectives. Child safety, by contrast, requires friction, slower amplification, conservative defaults and sometimes reduced engagement. “These two goals are not naturally aligned,” he says.

Children are not seeking risk. Instagram is surfacing it.

“Indian children are entering high-risk digital environments far earlier than most adults realise,” says Gurdit Singh Chhabda, an education professional and digital well-being educator.

Based on his interactions with over 2,000 students from Class 6 onwards, Chhabda says 83% accessed online gaming platforms before the age of 10, while 86% had their own Instagram or Snapchat accounts by age 13. Among more than 3,500 parents of primary-school children, 79% admitted to using screens as a babysitter.

“The problem is not curiosity. The problem is architecture,” he says. “Platforms like Instagram are discovery-driven systems.”

That view closely mirrors Mathur’s assessment that safety cannot be treated as a layer added on top of engagement systems. “If safety is enforced only through content moderation or reactive policy enforcement, it will always lose to engagement incentives,” he says. “For children and teens, safety has to be a first-order design constraint.”

That view is echoed by Sajal Gupta, CEO of Kiaos Marketing, who frames the phenomenon more bluntly.

“This is a narrative war,” he says. “Many of these fake or synthetic accounts are not driven by revenue at all. They don’t run ads. There is no direct financial benefit. Their purpose is influence — shaping mindsets and winning opinions.”

Because these accounts are not monetising through advertising, traditional enforcement incentives fail. “They are optimising for reach and persuasion, not money,” Gupta adds.

Children who do not post publicly, Chhabda notes, remain reachable through direct messages, friend recommendations, gaming chats and voice lobbies. “Exposure is high, while adult visibility is fragmented.”

Exploitation rarely begins explicitly. “It starts with friendliness and secrecy,” he says. “By the time something is reported, psychological manipulation has already occurred.”

Harm happens before platforms act

The reactive nature of platform enforcement is a recurring concern. “Platforms can flag content, but flagging is reactive,” Gupta says. “Most of the time, when action is taken, the harm has already occurred.”

He points to lawsuits in the US where families sued platforms after their children died by suicide. “If you look at those cases, a pattern emerges. By the time technology flags harmful behaviour, the damage is already done,” he says.

AI-led moderation, in this framing, is structurally late. “Detection happens after exposure,” Gupta adds.

Engagement incentives collide with child safety

Vikas Chawla, co-founder of Social Beat, argues that platforms have focused on the easiest safety levers while leaving core exposure risks intact. “Time spent is the easy part,” he says. “Screen-time limits and parental controls can manage endless scrolling.”

What remains unresolved is content exposure and peer pressure. “Teen accounts may reduce some risks, but they don’t dismantle the social hierarchies Instagram creates around appearance, popularity and validation.”

Gupta explains why this persists. “From a marketer’s point of view, this is one of the most valuable audiences,” he says. “If I can’t reach them through ads, influence shifts to content.”

Under Indian ad rules, behavioural targeting of minors is prohibited. “Below 18, you’re limited to gender and location. There’s no behavioural targeting,” he explains.

But creator content operates in a grey zone. “It’s not classified as advertising, but it has enormous influence over children and teenagers.”

When moderation becomes a design liability

Cyber law expert Prashant Mali argues that this is no longer a content issue. “If an algorithm systemically exposes minors to harm or predatory interactions, it becomes a design flaw,” he says. “Unsafe code can be as illegal as unsafe content.”

Gupta reinforces the inevitability of regulatory escalation. “Platforms operate with partial moderation, knowing it will never be complete,” he says. “If they don’t self-govern, governments will step in with extreme regulation.”

Under India’s IT Rules, due diligence is a proactive obligation. Knowledge of harm combined with inaction weakens safe harbour protections under Section 79 of the IT Act.

Sonam Chandwani at KS Legal and Associates says Indian courts are increasingly receptive to this framing. “Where platforms are aware, through internal assessments, that certain features pose foreseeable risks to children and delay safeguards, this looks less like neutrality and more like negligence,” she says.

Shreya Suri, Partner at CMS INDUSLAW, describes a structural shift. “Algorithmic architecture is no longer legally invisible,” she says. “Failure to conduct algorithmic due diligence can itself constitute breach, especially under the DPDP Act and the 2025 Rules.”

Shared devices and India-specific blind spots

India’s shared-device reality further weakens safeguards. “Platforms receive signals based on the account, not the individual using the device,” Gupta says. “In many households, children consume content through a parent’s profile.”

This shifts responsibility toward parents, but awareness is low. “In most Indian homes, children are more technologically aware than parents,” he adds.

Meaningful protection, in such cases, becomes structurally difficult.

AI escalates risk faster than safeguards

Chawla warns that AI-generated content deepens the asymmetry between what children see and what they can recognise as real. “Children may see content that appears real but isn’t labelled as AI-generated,” he says. “A child may think a peer is bullying them when it’s synthetic.”

Gupta frames the risk in governance terms. “Engagement is still the primary measurement,” he says. “Content optimisation follows engagement, not child well-being.”

The questions Instagram has not answered

Storyboard18 reached out to Instagram with detailed questions on child safety in India, including whether under-18 accounts are private by default, how unknown-user DMs are restricted, how sextortion involving minors is detected, and how fake or AI-generated accounts are prevented from reaching teenagers.

The questions also sought clarity on violent content amplification, mental-health risk mitigation, India-specific risk assessments, and whether Meta tracks aggregate data on harm involving minors.

At the time of publication, Instagram had not responded.

“If platforms don’t build credible self-governance mechanisms,” Gupta warns, “regulation will not be subtle.”

Indian law is moving away from asking whether platforms removed harmful content after it appeared, and toward asking whether their systems made that harm foreseeable in the first place.

At that point, accountability no longer sits with moderation teams. It sits with algorithms, incentives and architecture.

First Published on Dec 24, 2025 8:55 AM
