Chatting with AI? Your words might not be protected under Indian law

In India, where digital tools are being adopted faster than regulations can keep up, experts say the privacy risks of AI chats are growing. And the law is not catching up fast enough.

By Indrani Bose | Aug 1, 2025 11:23 AM

As artificial intelligence tools become everyday companions, helping users draft emails, answer legal questions, or work through emotional stress, a key question continues to go unanswered: What happens to the data users share with AI?


“There’s no clear law that says your chats with an AI are private the way they are with some email providers or encrypted messaging apps,” says Siddharth Chandrashekhar, advocate and counsel at the Bombay High Court. “So, technically, you can assume that what you say can be stored, shared, or used unless explicitly specified otherwise.”

Chandrashekhar emphasizes the need for platforms to seek consent that is “explicit, informed, specific, free and unambiguous” through “clear affirmative action.” He adds that such consent “must be itemised, purpose‑limited, easily withdrawn, and provided in accessible languages.”

According to him, consent language also matters. “Consent shouldn’t be hidden in long legal verbiage. It must be non-ambiguous, contextual, simple and easy to read, especially when data includes legal confessions or trauma disclosures.”

The legal consequences of vague or missing disclosures can be serious. “Under the Consumer Protection Act, omission of a material fact is an Unfair Trade Practice,” Chandrashekhar explains. “If users believe an AI chat is private or privileged but the platform fails to clarify this, it could be actionable.”

He further notes that emotional framing used in marketing can mislead users about the level of protection they have. “Marketing terms like ‘your emotional assistant’ without disclaimers could reinforce consumer misperception. The law must prioritize protective disclosures and penalise deceptive omissions.”

Vinay Butani, Partner at Economic Laws Practice, says the platform is usually the one responsible if data is mishandled. “It’s the platform (OpenAI) that is generally responsible if personal data is misused, leaked, or exposed.”

Under existing Indian law, that responsibility falls under Section 43A of the Information Technology Act. “Liability currently falls on the body corporate if it fails to implement reasonable security practices,” Butani explains.

Butani points out that enterprise users are treated differently. “For enterprise or Zero Data Retention users, OpenAI expressly positions itself as a data processor, with the enterprise customer being the fiduciary.”

This structure introduces legal uncertainty. “While contractual terms make the enterprise primarily liable, OpenAI could still face exposure if a breach results from its own security failure.”

He also warns that the belief in private conversations with AI tools may not hold up legally. “Under Indian law, even attorney-client privilege does not extend to communications made for an illegal purpose, and since AI platforms are not ‘advocates,’ no privilege applies here at all.”

And if law enforcement comes knocking, AI platforms may have no choice but to comply. “If law enforcement or the government seeks access, provisions under the IT Act and, in future, the DPDP Act allow them to compel OpenAI or, in the case of enterprise accounts, the enterprise fiduciary, to share the data.”

OpenAI, like any intermediary, may need to cooperate to maintain its protections. “If OpenAI wants to rely on the safe-harbor protection available to intermediaries, it will then need to comply with the Intermediary Guidelines, which include monitoring obligations for certain types of content and prompt cooperation with law enforcement.”

Ashwini Kumar, founder of My Legal Expert, says that free online chat platforms are not required under Indian law to inform users about the privacy status of their messages. “Indian law does not require that free online chat platforms, including AI chatbots, warn their users whether their messages are protected or otherwise.”

That changes only when money enters the equation. “Once there is a payment made by the user for any such platform, then the user can expect added security and privacy.”

He believes transparency is critical to building public trust. “It will simply not be possible to hide behind some fine print. So, smart companies tell users what is private and what is not. And they build some trust for themselves in this way, thereby fortifying their future position when tighter controls are enforced in the name of digital privacy.”

However, privacy in digital spaces remains largely performative unless enforced. “A user has an expected right to privacy; however, they will have to show that there was a breach of their privacy resulting in damages.”

Kumar challenges the widespread assumption that digital conversations are safe. “If people think their words are safe in a digital world, then they are being deceived. Despite the tall claims of WhatsApp that even ‘they cannot read the messages of their users,’ the Indian Government requires every intermediary to disclose all information, messages, photos, etc., to the government as and when demanded.”

He goes a step further by imagining how AI may interact with law enforcement in the future. “Imagine a youth trying to get information on how to make a bomb and destroy a sensitive government building. As AI develops in the future, such messages may be automatically reported to the concerned government agency along with IP addresses, locations, etc.”

The issue is not just legal but ethical. “One must understand the pros and cons of data collection by AI tools, and more importantly, how it's used,” Kumar says. “In the future, digital networks will create a bubble of privacy, where every user feels secure within their online bubble, when at the same time all their data is being logged, analyzed, processed, and used for commercial and other purposes.”

Sudarshan Sirsat, Associate Professor at K J Somaiya Institute of Management, says India urgently needs a legislative framework to guide AI-user interactions. “Yes, India strongly needs to develop a framework for legislative intervention to govern AI-user conversations on policy, data privacy and good governance grounds.”

He cautions that AI’s ability to infer intent, even from seemingly anonymous interactions, makes the current lack of protections dangerous. “Even anonymous sharing of intent, information and identity creates a strong and immediate need to set limits on what should be considered a safe and secure AI-user conversation.”

Sirsat calls for a strong culture of informed consent, echoing global best practices. “There should be strong algorithmic accountability, as implemented by the US government, and AI explainability and consent requirements, as imposed by Singapore and the UK.”

He believes India needs to shape these standards to fit its own unique digital context. “Alongside the shared responsibility of increased user awareness, it is the need of the hour, from a governance perspective, to define and enforce responsible AI adoption for the Indian context.”

He adds that doing so will foster long-term trust. “This will sensitize the diverse and mixed user base of Indian citizens towards safe, secure, responsible and ethical, value-based transformative learning.”

AI is no longer just a backend tool. It is now a conversation partner, advisor, and sometimes even a confidant. But while the technology evolves at breakneck speed, India’s legal framework has yet to define how private or protected these AI-user interactions truly are.

Without clear, enforceable rules, consent remains murky, protections are inconsistent, and user assumptions could lead to real harm. Until the law catches up, every conversation with AI should be treated with caution, because privacy in its truest sense is still pending.
