India is intensifying efforts to build a responsible and innovation-driven artificial intelligence (AI) ecosystem, backed by policy, regulation, and industry engagement. The government’s AI strategy, first articulated in NITI Aayog’s 2018 National Strategy for AI, aims to harness the opportunities of AI systems while managing their risks through principles of transparency, fairness, privacy, and security.
The 2021 Responsible AI Approach Document by NITI Aayog laid out principles such as safety, reliability, inclusivity, and accountability, and these have since been carried forward in recent policy moves. In 2025, the Ministry of Electronics and Information Technology (MeitY) published a governance framework detailing eight key principles for AI: transparency, accountability, safety and reliability, privacy and security, fairness, human oversight, inclusive innovation, and technology-led governance, as per a report by Koan Advisory Group.
To scale AI responsibly, the government allocated ₹10,371 crore (approx. $1.24 billion) in the 2024 Union Budget under the IndiaAI mission. The initiative seeks to democratize access to computing infrastructure, improve data quality, and support indigenous AI development through public–private partnerships.
India is also revising its digital regulatory landscape, with the Digital Personal Data Protection Act (DPDPA) 2023, the Telecommunications Act 2023, and pending legislation such as the Digital India Bill and the draft digital competition bill. These laws, though in various stages of implementation, are expected to influence how AI systems are developed and deployed.
Existing laws are already being applied to AI. Section 69A of the IT Act 2000 empowers the government to block content in the interest of national security. The IT Rules 2021 impose obligations on intermediaries to take down unlawful content, including AI-generated material. Rule 4(4) specifically mandates large platforms to audit their algorithmic tools for fairness, bias, and privacy risks. Additionally, the Consumer Protection Act 2019 and E-Commerce Rules 2020 apply to AI-enabled services, while current copyright and patent laws restrict AI systems from being recognized as legal authors or inventors.
Internationally, India has been active in AI governance discussions, participating in the G7’s Hiroshima AI Process, the G20’s New Delhi Leaders Declaration, and the 2023 AI Safety Summit in the UK, where it emphasized safe, secure, and accountable digital ecosystems.
The Bureau of Indian Standards (BIS), through its LITD 30 technical committee, is developing national AI standards aligned with global benchmarks. The upcoming IS 17802 series draws on international norms like ISO/IEC 22989, 24028, and 23894 and covers areas such as trust, data quality, and risk management.
India’s AI sector is also seeing rapid industry adoption. With the second-largest AI talent pool globally, India’s AI market is projected to grow from $7–9 billion in 2023 to $17–22 billion by 2027, at a 25–35 percent CAGR. Companies such as TCS, Infosys, Wipro, HCL, and LTIMindtree have integrated AI across business functions and trained over 700,000 employees in generative AI. Programs by tech majors like Microsoft and Nvidia, in collaboration with IIT Madras, are also upskilling Indian professionals.
Industry bodies such as NASSCOM have launched platforms like NASSCOM AI to foster dialogue among startups, enterprises, academia, and policymakers.
However, the government continues to address potential harms. In March 2024, MeitY issued an advisory to intermediaries using generative AI, requiring them to submit status reports within 15 days. The advisory asked platforms to prevent AI tools from disseminating content prohibited under Rule 3(1)(b) of the IT Rules 2021 and to ensure such systems do not propagate bias or jeopardize election integrity.
As India pushes forward on AI adoption, the balancing act between innovation and regulation remains a central concern in shaping the country's digital future.