Govt explains how personal images and data given to AI apps are protected

The Ministry of Electronics and Information Technology said that the Digital Personal Data Protection Act, 2023 covers all forms of digital personal data, including images.

By Storyboard18 | Dec 3, 2025, 2:21 PM

The government today clarified how personal images and data shared with AI applications are protected under current laws, while also outlining steps taken to address deepfakes and other synthetic content online.


The Rules under the Act were notified on 13 November 2025.

The Act also sets up the Data Protection Board of India, which will have a Chairperson and four Members. The Board will oversee issues related to personal data processing and compliance.

According to the ministry, the Act gives people certain rights over their digital personal data.

Companies that collect or use such data, called data fiduciaries, must follow specific obligations on how they process it.

The government said it has been in regular communication with social media platforms regarding deepfakes and manipulated images.

Advisories were issued on three occasions: 26 December 2023, 15 March 2024, and 21 November 2025, reminding platforms of their duties under the IT Rules, 2021.

These advisories asked platforms to improve detection and removal of false, unlawful, or synthetic content.

Under the IT Rules (Amendment) 2025, online platforms must remove or block access to certain unlawful content within 36 hours of receiving a court order or a government direction.

Proposed changes for AI-generated content

Draft amendments to the IT Rules, 2021 have been released for public consultation.

These proposals include:

  • Mandatory labelling and watermarking of AI-generated or manipulated content
  • Traceability requirements to identify the source
  • Stricter due diligence for social media platforms and services that allow creation of synthetic content

Role of states in action against misuse

The ministry said that maintaining public order and investigating crimes fall to State governments.

Law enforcement agencies can take action against individuals misusing social media or creating harmful synthetic content.

Deepfake detection projects under the IndiaAI Mission

Under the IndiaAI Mission, approved in March 2024, three projects have been selected to work on deepfake detection tools:

  • Saakshya: A detection framework by IIT Jodhpur and IIT Madras
  • AI Vishleshak: A project by IIT Mandi and Himachal Pradesh’s Directorate of Forensic Services for audiovisual deepfake and signature forgery detection
  • Real-Time Voice Deepfake Detection System: Developed by IIT Kharagpur

