The Government of India has released a landmark working paper proposing a new policy framework to regulate the use of copyright-protected works for training Generative AI systems. The draft, prepared by a committee formed by the Department for Promotion of Industry and Internal Trade (DPIIT), examines whether existing copyright laws adequately address challenges posed by AI technologies and recommends a hybrid statutory licensing model to balance innovation needs with creator rights.
The Working Paper – Part 1, released on December 8, 2025, invites feedback from industry stakeholders and the public within 30 days.
It outlines the legal, economic, and ethical complexities surrounding AI developers’ use of protected content for training models, noting that existing approaches (blanket exceptions, opt-out regimes, voluntary licensing, and extended collective licensing) fail to adequately address concerns around fairness, transparency, and operational feasibility.
‘No Absolutes’: Committee rejects blanket exceptions and opt-out models
The committee has emphasized that a “zero-price license” or legal exception allowing unrestricted use of copyrighted materials would harm the creative ecosystem by reducing incentives for human creators. Allowing AI-generated works to compete with original work without compensating artists would “create an imbalance” and risk long-term underproduction of creative content, the report states.
While the EU-style opt-out Text and Data Mining (TDM) exception was considered, the committee observed that without mandatory transparency disclosures from AI developers, rightsholders would struggle to enforce their rights. Conversely, full disclosure obligations would impose heavy compliance burdens and risk exposing trade secrets.
Voluntary licensing models were deemed impractical due to massive transaction costs, fragmented rights ownership and risks of bias, particularly affecting startups dependent on diverse datasets. Extended Collective Licensing, recommended by the U.S. Copyright Office, was also found to potentially disadvantage smaller AI companies while enabling large tech companies to dominate access to data.
Statutory licensing evaluated but requires adaptation
The paper acknowledges statutory licensing as a promising mechanism because it ensures:
- Fair compensation for creators
- Guaranteed access to lawfully obtained copyrighted content
- Reduced transaction costs
- Wider, more equitable access for startups
However, the paper notes that statutory licensing in its traditional form, as used in broadcasting, would be unworkable for large-scale AI training, given the difficulty of identifying and compensating millions of rightsholders, especially in sectors with incomplete collective management structures.
Hybrid licensing model proposed
To resolve these issues, the committee recommends a Hybrid Model designed to provide blanket access for AI model training with royalties payable only upon commercialisation.
Key features include:
- Mandatory blanket license for AI developers, eliminating upfront negotiation
- Royalty payments based on government-set rates, open to judicial review
- Equal compensation for creators, regardless of CMO membership
- Centralised royalty collection and distribution
- Legal certainty and reduced litigation risk
- Stronger safeguards against AI bias and hallucinations
The Ministry of Electronics & IT (MeitY) has supported the Hybrid Model, while Nasscom has lodged a dissent recommending full TDM allowance with opt-out mechanisms.
Striking a balance between innovation and creator rights
The committee concludes that neither unrestricted AI training nor rigid rights-holder control would serve long-term public interest. AI systems rely on human creativity for training, and without sustainable compensation structures, content supply will diminish.
“It would be most appropriate to craft a framework that ensures fair compensation to copyright holders, while enabling comprehensive data access for AI developers,” the paper states.
The Hybrid Model, the paper argues, offers a sustainable legal foundation that protects cultural development, supports responsible AI, and ensures equitable access to data for developers of all scales.
DPIIT has requested stakeholder comments to be emailed to ipr7-dipp@gov.in within 30 days of publication.