Meta Platforms has scored a legal victory in a high-stakes copyright case after a U.S. federal judge ruled against a group of authors who alleged that the company unlawfully used their books to train its artificial intelligence models. The ruling, issued Wednesday by Judge Vince Chhabria of the U.S. District Court in San Francisco, dismissed the authors' claims, citing insufficient evidence that Meta’s AI system, Llama, infringed their copyrights under existing U.S. law.
The lawsuit, filed in 2023, accused Meta of using pirated versions of the authors’ books—without permission or compensation—as training data for Llama. The case is one of several brought by writers, news publishers, and copyright holders against major tech companies such as OpenAI, Microsoft, and Anthropic, challenging whether the widespread practice of using copyrighted material for AI training constitutes "fair use" under U.S. law.
Not a Clean Slate for Meta
While the ruling favored Meta, Judge Chhabria was quick to clarify that it was not an endorsement of the company’s methods. “This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful,” Chhabria wrote. “It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one.”
Chhabria noted that the authors did not adequately show how Meta’s use of their books would harm the market for their original works—a key factor in the fair use analysis. Nevertheless, he expressed concern over the potential economic and creative implications of allowing generative AI tools to be trained on copyrighted content without consent.
Judges Split on AI Fair Use
Chhabria’s opinion diverges from a separate ruling issued earlier this week by U.S. District Judge William Alsup, also in San Francisco, who found in a case involving Anthropic that the AI company’s use of copyrighted materials amounted to “fair use.”
This growing split in judicial interpretation highlights the legal uncertainty surrounding AI training practices. It also raises the stakes for ongoing and future cases, including several involving OpenAI and its GPT models, which rely heavily on vast corpora of internet data, including copyrighted content.
A Broader Battle Over Fair Use
Fair use—a legal doctrine that allows limited use of copyrighted material without permission in certain contexts such as criticism, education, or transformation—has become the central defense for AI companies. Meta, like others, argues that its models use copyrighted texts not to replicate them, but to learn language patterns and generate original content.
In a statement, a Meta spokesperson welcomed the decision and defended the fair use framework as “vital for building transformative AI technology.” The authors’ legal team, however, sharply criticized the ruling. A spokesperson from Boies Schiller Flexner, the firm representing the plaintiffs, said they disagreed with the judge’s conclusion, pointing to what they called an “undisputed record” of Meta’s “historically unprecedented pirating of copyrighted works.”
Judicial Concern About AI’s Creative Disruption
Though Meta prevailed, Judge Chhabria signaled unease with the broader implications of AI training. During a hearing in May, and again in Wednesday’s ruling, he acknowledged the authors' concerns that generative AI could severely undercut traditional creative industries.
“By training generative AI models with copyrighted works, companies are creating something that often will dramatically undermine the market for those works, and thus dramatically undermine the incentive for human beings to create things the old-fashioned way,” Chhabria wrote.
His remarks suggest that while Meta escaped this particular challenge, legal scrutiny of AI training practices is far from over.