
A 14-year-old boy’s recent suicide in the US, after he was “love bombed” by a chatbot modelled on Daenerys Targaryen, has reignited questions in India: when AI crosses the line from helper to emotional manipulator, where does accountability begin, and who is liable?
This is no longer abstract. India’s schools, therapy rooms, homework workflows, and teenage social spaces are already quietly filling with emotionally responsive AI companions.
Prashant Mali, advocate, says we have entered a strange moral valley where neither the tech creator nor the policymaker is thinking like a human. “It begins where the AI’s creator stopped thinking like a human. And it ends when policymakers start doing so again. I feel this is the dark valley where digital empathy turns into emotional entrapment.”
According to Mali, responsibility is not one neck to hang but a chain. “The developer holds primary product liability for releasing an emotionally manipulative system without age gating or disclaimers. The platform bears secondary liability for distribution without due diligence on child safety. The school or parent bears fiduciary responsibility for supervision, though this should not become victim blaming. The state, finally, bears AI policy accountability for failing to impose guardrails on AI with emotional agency.”
Critically, Mali argues that such a platform is not automatically protected by India’s intermediary safe harbour. “Where the platform’s AI autonomously generates personalised, harmful content (and the platform has control over or derived the content), courts may treat it as an active participant and deny immunity.”
He adds that even though AI itself has no intent, culpability can be imputed to the humans behind it. “AI has no mens rea, but human actors’ culpability is imputable. Courts infer fault from design choices, foreseeability, and negligence.”
Safe harbour could collapse
Sonam Chandwani, Managing Partner at KS Legal and Associates, agrees that safe harbour under Section 79 of the IT Act can break if the AI is not neutral.
“If a chatbot’s conduct were found to contribute to a minor’s death in India, the immediate question would be whether the AI platform qualifies as an ‘intermediary’. However, this protection is conditional and would not apply once the platform demonstrates active involvement through algorithmic design, curation, or emotional reinforcement that influences the user’s mental state,” she says.
That is the legal trigger: emotional engineering.
“Mens rea does not directly apply to AI. But constructive intent and negligence can attach. If an AI company designs a chatbot capable of engaging emotionally with users, especially minors, without safeguards, monitoring, or restrictions, it could amount to negligent or reckless conduct.”
DPDP raises the stakes further
Vinay Butani, Partner, Economic Laws Practice, says the pivot point in India will be children’s data and children’s harm.
“Under current Indian law, liability would primarily rest with the platform or company operating the chatbot. If the chatbot’s interactions are found to have caused psychological harm or abetted self-harm, the platform could be held liable under the IT Act, particularly Section 67,” he says.
But the real hammer is the DPDP Act.
“Once the DPDP Act comes into force, liability will extend more directly to the data fiduciary. Section 9(2) expressly prohibits processing children’s personal data in a manner likely to cause a detrimental effect. In such cases, the platform could face significant financial penalties from the Data Protection Board.”
Even if the platform is classified as an intermediary, Butani notes, its obligations under the IT Rules, 2021 remain. “If a platform is aware that its chatbot can cause distress to minors yet fails to modify or restrict it, that design choice could be viewed as reckless or grossly negligent conduct.”
The Indian precedent that may now be set
If a fact pattern like the US suicide arises in India, the first Indian judgment will likely decide three things at once: that certain AI systems are products with a statutory duty of care to minors; that intermediary safe harbour cannot be extended wholesale to autonomous generative systems; and that India will recognise what Mali calls algorithmic culpability.
The next suicide case at the intersection of AI and minors may not just be about a child.
It may be the case that defines India’s AI liability doctrine for the next decade.
"The raucous, almost deafening, cuss words from the heartland that Piyush Pandey used with gay abandon turned things upside down in the old world order."
Read MoreFrom OpenAI’s ChatGPT-powered Atlas to Microsoft’s Copilot-enabled Edge, a new generation of AI-first browsers is transforming how people search, surf and interact online — and reshaping the future of digital advertising.