The Ministry of Electronics and Information Technology (MeitY), under the IndiaAI Mission, has unveiled the India AI Governance Guidelines, a national-level framework to enable safe, inclusive, transparent and responsible AI adoption. The guidelines were formally released by Prof Ajay Kumar Sood, Principal Scientific Adviser (PSA) to the Government of India, in the presence of senior leadership, including MeitY Secretary S Krishnan; Abhishek Singh, Additional Secretary, MeitY, and CEO, IndiaAI Mission; and the core scientific leadership of MeitY and the Office of the PSA. The launch is a strategic milestone as India builds momentum towards hosting the India AI Impact Summit in 2026, and signals a decisive shift in India’s positioning as a serious global stakeholder on AI safety, accountability and trustworthy deployment.
The guidelines set out a governance model that encourages cutting-edge innovation while protecting individuals and society from harm. They lay out seven guiding principles, key recommendations across the full AI value chain, and a timeline-linked action plan, along with practical guidance for industry, developers and regulators to ensure transparency and accountability.
As MeitY Secretary S Krishnan said at the launch, India’s focus remains on human-centricity and on using existing legislation wherever possible. Prof Sood added that the spirit of the framework is simple: Do No Harm.
India’s AI governance framework aims to encourage innovation, adoption and technological progress while ensuring that actors across the AI value chain mitigate risks to individuals and society. The current assessment is that many emerging risks can be addressed by applying existing laws: deepfakes, for example, can be tackled under the Information Technology Act and the Bharatiya Nyaya Sanhita, while unauthorised use of personal data is regulated by the Digital Personal Data Protection Act. However, a comprehensive review is required to identify gaps; the Pre-Conception and Pre-Natal Diagnostic Techniques (PC-PNDT) Act, for instance, must now contend with AI models that analyse radiology images. In priority sectors such as finance, regulatory gaps should be identified and plugged quickly.
On classification and liability, the Information Technology Act needs updating for AI systems. Definitions of intermediaries, publishers, developers and deployers need clarity; Section 79 immunity will not automatically apply to systems that generate or modify content, and the liability of developers and deployers needs explicit articulation. The IT Act should therefore be amended.
On data protection, critical issues include the scope of training exemptions for publicly available data, purpose limitation, the role of consent managers, and the scope of legitimate-use exemptions for AI. These may require legislative amendments and should be examined by the proposed AI Governance Group (AIGG).
On content authentication, deepfakes and non-consensual imagery are serious risks. Watermarks and identifiers can help establish whether a piece of content was AI-generated, and international industry standards such as C2PA already embed this principle. Attribution tools have value, but they also have limitations. The guidelines recommend setting up a committee of experts drawn from government, industry, academia and standards bodies to develop global standards on content authentication and provenance. In parallel, the proposed AIGG, together with the Technology and Policy Expert Committee (TPEC), must review India’s regulatory framework on authentication and recommend techno-legal measures to tackle AI deepfakes.
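To see what such a provenance mechanism looks like in practice, here is a minimal, purely illustrative sketch of the underlying idea: a manifest that binds an "AI-generated" assertion to a piece of content via its cryptographic hash, so any later alteration breaks verification. This is not the C2PA specification itself (real C2PA manifests use signed containers and X.509 certificate chains); the HMAC below is a stand-in for a proper digital signature, and all names are hypothetical.

```python
# Illustrative sketch of a content-provenance manifest, in the spirit of
# C2PA-style content credentials. Uses only the Python standard library;
# the HMAC is a stand-in for a real certificate-based signature.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical key; real systems rely on PKI


def create_manifest(content: bytes, generator: str) -> dict:
    """Bind an 'AI-generated' assertion to the content via its hash."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "assertion": {"ai_generated": True, "generator": generator},
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the content matches the manifest and the claim is intact."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claim["content_sha256"]:
        return False  # content was altered after the claim was made
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])


image = b"...synthetic image bytes..."
manifest = create_manifest(image, generator="example-model-v1")
assert verify_manifest(image, manifest)             # untouched content verifies
assert not verify_manifest(image + b"x", manifest)  # tampering is detected
```

The design point this illustrates is why the guidelines pair watermarks with identifiers: the hash ties the assertion to the exact bytes, while the signature ties it to an accountable issuer.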
Copyright is under active deliberation by the Department for Promotion of Industry and Internal Trade (DPIIT). The exceptions under Section 52 of the Copyright Act are limited and do not extend to many modern AI training processes, and several countries have adopted text-and-data-mining (TDM) exceptions. The DPIIT committee is examining the legality of using copyrighted works in training, the copyrightability of AI outputs, and international practice, with a view to proposing a balanced framework.
AI governance is also now a foreign policy issue. India should use its balanced approach to benefit the Global South, integrate AI governance into its strategic engagements at the G20, the UN and the OECD, and deliver concrete outcomes as host of the India AI Impact Summit in 2026.
Governance must also anticipate the emergence of autonomous agents, AI-to-AI coordination, covert protocols, and loss-of-control scenarios. Standards, audit trails and mandatory human-in-the-loop checks at critical decision points will be necessary, alongside foresight research, policy planning and simulation exercises.
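As a purely hypothetical sketch of the human-in-the-loop pattern described above, the snippet below gates an agent’s high-risk actions behind explicit human approval and records every step in an audit trail. The action names, risk tiers and logging target are illustrative assumptions, not part of the guidelines.

```python
# Hypothetical human-in-the-loop gate for an autonomous agent. All action
# names and risk tiers are illustrative; a real deployment would define them
# per sector and append the audit trail to tamper-evident storage.
import datetime
import json

CRITICAL_ACTIONS = {"transfer_funds", "delete_records"}  # assumed risk tier


def audit_log(entry: dict) -> None:
    """Record a timestamped audit-trail entry (stdout stands in for storage)."""
    entry["timestamp"] = datetime.datetime.now(datetime.timezone.utc).isoformat()
    print(json.dumps(entry))


def execute_with_oversight(action: str, params: dict) -> str:
    """Run an agent action, pausing for human approval at critical points."""
    audit_log({"event": "requested", "action": action, "params": params})
    if action in CRITICAL_ACTIONS:
        approved = input(f"Approve '{action}' with {params}? [y/N] ").lower() == "y"
        audit_log({"event": "human_review", "action": action, "approved": approved})
        if not approved:
            return "blocked by human reviewer"
    audit_log({"event": "executed", "action": action})
    return "done"


print(execute_with_oversight("transfer_funds", {"amount": 10000}))
```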
Recommendations
- Adopt balanced, agile, principle-based frameworks that enable recalibration.
- Review current laws to identify gaps; consider targeted amendments to copyright and data protection law, and clarify classification and liability.
- Develop standards for content authentication, data integrity, cybersecurity and fairness, and work towards global standards on authentication; the proposed AIGG and TPEC should examine content authentication in detail.
- Allow regulatory sandboxes with reasonable immunities.
- Support strategic foreign diplomacy on AI.
- Conduct horizon scanning and scenario planning to anticipate future developments.