Two recent court rulings in the United States have added new twists to the ongoing debate about AI training and copyright. Both cases, one involving Anthropic and the other Meta, were brought by groups of authors whose books had been used to train large language models (LLMs). While both rulings largely went in favor of the AI companies, the details matter, and they leave plenty of open questions for the music industry, which is itself locked in legal battles with AI firms.
So, do these decisions mean AI companies now have free rein to train on copyrighted content? Is it “fair use” to train a music-generating AI on thousands of copyrighted songs without permission? The rulings offer some guidance but also underline just how complex, and unresolved, this debate remains.
Key points both rulings agreed on
1. AI training can be “transformative”
A central question in US fair use law is whether the new use of a copyrighted work is transformative—meaning it adds something new or changes its purpose. Both courts agreed that training LLMs on books can qualify as transformative, because the models are not just reproducing books but learning general language skills that enable a range of outputs.
But here’s the nuance: the rulings are about language models trained on text, not music AIs trained specifically to generate new songs. Training a model to make entirely new kinds of outputs (like chat responses) can be seen as transformative. But training a model to produce more of the same thing (like new pop songs based on existing ones) might be harder to justify.
2. AI companies have commercial motives
Both Anthropic and Meta were recognized as commercial enterprises seeking to profit from their AI models. This doesn’t automatically kill a fair use defense, but it can weigh against it, because fair use is less likely to apply when the new use is commercial.
3. Books and music are highly expressive works
The rulings agreed that books, and by extension music, are highly creative and expressive works that merit strong copyright protection. This matters because fair use cases always look at how “creative” the original works are.
4. Full copying is necessary for AI training
Both courts accepted that to train an AI effectively, it’s reasonably necessary to copy entire works rather than snippets. In music, this would mean AI firms arguing they must ingest entire songs to model their patterns. However, accepting that full copies are needed does not mean AI firms are excused from paying for that use.
5. Outputs were not directly infringing
The lawsuits focused on how the AI models were trained, rather than accusing them of spitting out large passages of the books. That distinction is important: even if the training uses copyrighted works, if the AI doesn’t copy them directly in its output, it can strengthen the fair use claim.
6. No harm to an emerging licensing market
One factor in fair use is whether the new use harms the market for the original work or a market the rightsholder might reasonably enter. The courts recognized a “circularity” problem here: if every new, transformative use is assumed to harm a potential licensing market, then nothing could ever be fair use. Both rulings found that merely losing out on potential licensing fees is not, by itself, market harm.
Where the rulings diverged
The biggest disagreement came over whether AI-generated outputs might compete with human-created works, and whether that constitutes market harm.
The Anthropic ruling played down this concern, arguing that just as authors can’t stop people from reading books and writing new ones, they also can’t stop AI from learning from books and generating new text. The judge said the complaint was similar to arguing that teaching children to write could create too much competition.
The Meta ruling strongly disagreed, saying AI-generated content can flood the market with competing works, created at a fraction of the time and effort, which could hurt authors’ livelihoods. The court warned that copying, even for a transformative purpose, can still cause real market harm.
Implications for music
These differences could be very relevant for ongoing lawsuits in the music industry, such as those brought by major labels and publishers against AI music companies like Suno and Udio.
If the logic from the Meta ruling prevails, music rightsholders will need to do more than show AI models are transformative. They’ll also need to show evidence that AI-generated music is competing with human-created music and damaging the market, for example through lower royalties, lost licensing deals, or declining demand.
However, that evidence so far seems mixed. AI-generated tracks make up a large share of new uploads on platforms like Deezer (about 18%, or over 20,000 tracks a day) but account for only around 0.5% of total streams. Spotify has reported that fully AI-generated tracks have “infinitely small consumption.” So far, AI music seems more like background noise than a true market threat, but this could change.
Transformative use vs. market harm
A key takeaway from the Meta ruling is that being transformative alone isn’t enough to guarantee fair use. Courts must balance how transformative the new use is with how much harm it causes to the market for the original works.
If a music AI is shown to meaningfully reduce the demand for, or value of, human-created music, the fair use defense could weaken even if the AI is technically “transformative.” This makes evidence critically important.
Questions about pirated data
Another difference between the rulings was about where the AI companies got their training materials.
Anthropic admitted downloading millions of ebooks from pirate sites early on. The judge ruled this was not fair use: “Anthropic had no entitlement to use pirated copies.”
Meta, meanwhile, also used books from “shadow libraries.” But the judge there argued that the transformative purpose of the use mattered more than whether the copies were authorized.
For music AI companies, sourcing training data from unauthorized or pirated sites could become a major vulnerability, especially if courts see this as bad faith.
The regulation debate
The Meta ruling also challenged the idea that strict copyright enforcement will stifle AI innovation. The judge noted that these AI products are expected to generate billions or even trillions of dollars. If training requires copyrighted works, the companies will simply figure out how to pay for them. In other words, copyright shouldn’t be sacrificed just because AI companies say it’s inconvenient.
What music rightsholders should do next
From these rulings, some practical guidance emerges for the music industry:
- Prove market harm: Labels and publishers should gather detailed data on how AI-generated music could hurt sales, streams, sync licensing, and royalties.
- Show attempts to license: Courts may want to see that rightsholders have tried to license their works to AI firms, showing there is a real, functioning licensing market.
- Watch for direct copying: If AI models start generating recognizable lyrics or melodies copied from copyrighted works, that could strengthen infringement claims.
- Question data sources: If AI companies used pirated or unauthorized music to train their models, that could undermine their defense even if the models themselves are transformative.
Looking ahead
The recent rulings aren’t final answers; they’re part of an ongoing legal and cultural conversation. Fair use itself is a US-specific concept, and other countries may apply different standards.
For now, the takeaway is that the courts recognize AI training can be transformative, but that doesn’t mean AI companies get a free pass. The balance between innovation and protecting creators remains delicate, and the fight over where to draw that line is far from over.
In music, where the stakes are cultural as well as financial, this debate will likely play out in courtrooms, legislatures, and boardrooms for years to come. And with AI evolving faster than the law, these early rulings are just the beginning.