Meta achieved a major legal victory when a federal judge in San Francisco ruled that training its AI model, Llama, on books by a group of well-known authors, including Sarah Silverman, did not breach copyright law.
This decision in the Kadrey v. Meta Platforms case may shape the way courts handle similar copyright lawsuits involving AI. Earlier in the week, a separate court decision in Bartz v. Anthropic addressed fair use in AI training, but left open questions about the use of copyrighted content. Together, these rulings highlight a turning point for AI companies, creative professionals, and copyright law.
Meta’s Win in the Kadrey Case
The case began in July 2023 when authors Richard Kadrey, Christopher Golden, and Sarah Silverman accused Meta of using their books—sourced from piracy sites like LibGen—to train Llama. They claimed Meta stripped out copyright information to conceal what it had done, and asked the court to halt Meta’s AI training and award damages.
Judge Vince Chhabria sided with Meta, relying on the fair use principle, which allows some use of copyrighted material for things such as research or parody. He pointed out that Meta’s AI did not copy the books word-for-word but used them to build a model that generates new language. The judge also noted that the authors had not shown that Meta’s use of their books would harm sales of the original works, which is key in fair use cases.
Chhabria wrote that Meta’s approach does not compete with or replace the original books, so the market for the authors’ work would not suffer. Meta’s legal team welcomed the decision, saying fair use is central to developing its AI technology. Meta has argued that its use of publicly available material, even if sourced from shadow libraries, falls under fair use.
This wasn’t without pushback. The plaintiffs’ lawyers, led by David Boies, argued that Meta’s use of pirated books disrespects creators’ rights, saying Meta took entire copies of their works instead of licensing them. Judge Chhabria acknowledged the ethical concerns but focused on whether Meta broke the law, and found that it had not.
Anthropic’s Partial Result in Copyright Lawsuit
Just before the Meta decision, the Bartz v. Anthropic case reached an important milestone. Authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson accused Anthropic of using their copyrighted books to train its Claude AI model, with millions of books sourced from piracy sites.
Judge William Alsup decided that Anthropic’s use of legally bought books to train its AI counted as fair use, describing it as very transformative. He compared the process to someone reading books to become a better writer, stressing that the AI model creates something new rather than copying the original works. This is the first time a court has directly ruled on fair use in AI training.
However, Alsup’s ruling came with a warning. While training on legally purchased books was protected, keeping pirated books in Anthropic’s database was not. The judge set a separate trial for December 2025 to decide how much Anthropic might owe for retaining roughly 7 million pirated books. With statutory damages starting at $750 per work, Anthropic could face a huge bill. Alsup made clear that buying a book later does not erase responsibility for first downloading it illegally.
Anthropic supported the court’s view on fair use but disagreed with the decision to continue the case over the pirated library. The company is considering its next steps.
Impact of the Meta and Anthropic Rulings
These two rulings have mixed results for the AI industry. They strengthen the idea that training AI models on copyrighted works can be fair use if the use is transformative and does not threaten original sales. This is good news for companies like Meta, OpenAI, and Google, all facing similar lawsuits. Judge Alsup’s decision could be used as a reference in future cases.
At the same time, the Anthropic decision highlights the risk of using pirated material. Both Meta and Anthropic built their training sets with content from sites like LibGen. Courts are less willing to ignore this practice now. While Meta avoided liability, the Anthropic case suggests future lawsuits may focus more on how companies gather their training data.
This may push AI developers to work out licensing deals with publishers or find other legal ways to get training content, which could slow progress or raise costs.
For authors and other creators, these outcomes are mixed. Some feel the courts’ support of fair use makes it harder to claim payment or licensing fees from AI companies. The next phase of the Anthropic trial could still give authors hope for damages when their works are used without permission.
More broadly, these cases show that the debate over copyright and AI will only get more heated as AI tools become more common in areas such as entertainment and education. Legal experts expect these issues to reach higher courts, including potentially the Supreme Court, to set clearer rules. For now, the Meta and Anthropic cases show that copyright law is being tested in new ways by AI, with the need to find the right balance between innovation and creator rights.
Looking Ahead
The AI industry is taking stock after these rulings, but more legal challenges are coming. Dozens of lawsuits continue against companies like OpenAI, Midjourney, and Stability AI, covering different aspects of fair use and copyright. Some AI firms are now making licensing deals with publishers to avoid legal trouble, a trend that may grow after the Anthropic case.
Meta’s recent win is a boost, but the company still faces criticism for admitting to using pirated sources. Many in the writing community remain upset, highlighting the ethical questions that go beyond what’s legal. As AI technology advances, control over the data that powers it—and decisions about who benefits—will keep shaping the discussion.
Sources: WIRED, Reuters