Can AI be sued? India says no, but the law is catching up

By Indrani Bose | Jul 29, 2025 1:18 PM

The rise of generative AI has introduced an unsettling question to Indian courts: when an AI system produces defamatory, misleading, or harmful content, who is legally responsible? Is it the developer who built the model, the organization that deployed it, the platform that hosted it, or as some futurists suggest, the AI itself?

As these technologies increasingly influence how news is consumed, opinions are formed, and reputations are shaped, the legal system is under pressure to define boundaries that were never built for non-human actors. The stakes are particularly high in India, where the rapid digitalization of media and commerce collides with regulatory frameworks still catching up to the platform dynamics of an earlier internet era.

According to Sonam Chandwani, Managing Partner at KS Legal & Associates, current Indian law places liability squarely on human or legal persons, not machines. “The law does not recognize AI as a legal person, so the question of the AI system being a ‘defendant’ is legally untenable,” she says. Instead, liability falls on developers, deployers, or intermediary platforms, depending on their degree of control and involvement.

The primary legal scaffold is still the Information Technology Act, 2000, specifically Section 79, which offers safe harbour to intermediaries if they function merely as passive conduits of information. However, this protection is conditional. “If the intermediary initiates the transmission, selects the receiver, or modifies the content, it loses that immunity,” Chandwani adds. Criminal provisions like defamation under Sections 499 and 500 of the Indian Penal Code also come into play.

The definition of “intermediary” and the scope of its protection have become central to how courts assess platform responsibility. The 2015 Supreme Court judgment in Shreya Singhal v. Union of India clarified that safe harbour applies only to platforms that act neutrally. Vinay Butani, Partner at Economic Laws Practice, points out that this ruling is particularly relevant when evaluating the role of AI-powered content systems. “If platforms like Google use algorithms to generate or amplify content, especially through AI-generated summaries or responses, they risk being seen as active participants, weakening their immunity.”

This shift in perception from passive host to active content generator marks a fundamental reclassification of platform behavior. As Butani explains, “The core question is whether the intermediary is neutral or whether it actively optimizes, selects, or contributes to the content in question. If it does, courts may treat the platform as more than an intermediary, thereby jeopardizing safe harbour protection.”

The implications are not just theoretical. In an age where AI tools can summarize news, generate reviews, or even simulate public figures’ speech, the line between transmission and publication blurs. Yajas Setlur, Partner at JSA, agrees. “Indian law draws a clear line between passive hosting and active curation by distinguishing intermediaries from publishers. If a platform merely transmits third-party or AI-generated content, it may be considered an intermediary. But once it actively curates or promotes that content, it could be treated as a publisher and held to a higher standard of accountability.”

In the absence of AI-specific liability statutes, legal actors fall back on more traditional frameworks such as criminal law, torts, and contracts. But as Dr. Anindita Jaiswal Jaishiv, Associate Professor of Law at BITS Law School, notes, this patchwork approach has limits. “Fixing liability on AI is difficult because AI lacks mens rea, the legal intent, and actus reus, the criminal action, both of which are foundational to criminal and civil liability,” she says. Even where contributory negligence can be argued, it must be tied to human actors, such as developers who failed to comply with safety protocols or regulatory design standards.

Dr. Jaiswal also points to existing digital media regulations. Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, platforms are required to inform users not to post false or misleading content, and to remove deepfakes or harmful materials within 36 hours of receiving a takedown notice. “Failure to comply with these timelines could invite liability for the platform itself,” she explains. However, these provisions still rely on human reporting and after-the-fact responses, a reactive model ill-suited to the scale and speed of generative AI.

The upcoming Digital India Act may offer a more comprehensive framework. Butani notes that the proposed law is expected to address AI-generated misinformation and reputational harm more explicitly. “Likely reforms include mandatory traceability mechanisms, watermarking or labeling of AI-generated content, and enhanced algorithmic transparency obligations,” he says. He adds that safe harbour protections may be redefined to exclude platforms that curate, recommend, promote, or monetize harmful AI-generated material. New civil and criminal penalties may also be introduced for negligent or malicious deployment of generative AI systems.

Globally too, the issue of AI personhood remains hotly debated. No major jurisdiction has granted full legal personhood to AI systems. The European Union, for example, considered but ultimately rejected the concept of electronic personhood. Instead, it finalized the EU AI Act in 2024, which classifies AI systems by levels of risk and insists that humans remain legally accountable.

“These global discussions are also likely to be relied upon should the issue be legislated upon in India in the future, as they provide comparative perspectives and highlight the complexities involved. Indian policymakers and legal scholars closely observe international trends, particularly in the EU and the USA, to inform domestic debates, while adoption of such positions would be subject to being contextualized from an Indian standpoint,” says Apoorva Murali, Partner at Shardul Amarchand Mangaldas and Co.

Beyond legality, some scholars are beginning to question whether AI can ever be considered a legal person in its own right. Shradhanjali Sarma, Legal Counsel at YVLC, explains that Indian law has recognized non-human legal persons before, such as corporations, deities, or even rivers, but always through a human interface. “AI lacks both moral agency and representative structures. Without a human interface, rights and responsibilities cannot be meaningfully assigned or enforced,” she says. According to her, the core of the debate is not about whether AI has autonomy, but whether that autonomy can be made legally accountable without a clear line of human oversight.

This legal ambiguity presents a challenge not only to courts but to platforms and policymakers seeking clarity in a rapidly transforming digital environment. Until a new legal standard emerges, developers and platforms will likely remain on the hook, bound by how actively they shape and spread AI-generated information.

In other words, the Indian legal system does not yet see AI as a person. But it is beginning to recognize that its consequences are deeply personal.

First Published on Jul 29, 2025 8:48 AM
