Huge AI copyright ruling raises more questions than answers
Anthropic won the court battle, but the AI copyright war is far from over.

While sci-fi movies from the 1980s and '90s warned us about the potential for artificial intelligence to destroy society, the reality has been much less dramatic so far.
Skynet was supposed to be responsible for the rise of killer machines called Terminators that could only be stopped by time travel and plot holes.
The AI from "The Matrix" movies also waged a war on its human creators, enslaving the majority of them in virtual reality while driving the rebellion underground.
To be fair, the chatbots from OpenAI, Google Gemini, Microsoft Copilot, and others do occasionally threaten to destroy humanity, but so far the technology looks mostly harmless to our chances of survival.
But that doesn't mean this transformative tech isn't causing other very real problems.
The biggest issue humans currently have with AI is how the companies controlling it train their models.
Large language models like the one behind OpenAI's ChatGPT need to feast on enormous amounts of text to beat the Voight-Kampff test from "Blade Runner," and much of that text is copyrighted.
So at the moment, the viability of AI rests in the hands of the courts, not software engineers.
This week, a federal court handed down a monumental ruling that could have a wide-ranging ripple effect.
Judge issues ruling in AI case that leaves many questions unresolved
Judge William Alsup of the U.S. District Court for the Northern District of California ruled that AI company Anthropic can train its AI models on published books without the authors' consent.
The ruling could set an important legal precedent for the dozens of other ongoing AI copyright lawsuits.
A lawsuit filed by three authors accused Anthropic of ignoring copyright law when it pirated millions of books to train its LLM, but Alsup largely sided with Anthropic.
“The copies used to train specific LLMs were justified as a fair use,” Alsup, who has also presided over Oracle America v. Google Inc. and other notable tech trials, wrote in the ruling. “Every factor but the nature of the copyrighted work favors this result. The technology at issue was among the most transformative many of us will see in our lifetimes.”
Anthropic ruling is a 'mixed bag' on fair use of copyrighted material
Ed Newton-Rex, CEO of Fairly Trained, a nonprofit that advocates for ethically compensating the creators of the data LLMs are trained on, had a more measured take on the decision after many headlines declared it a broad win for AI companies.
"Today's ruling in the authors vs. Anthropic copyright lawsuit is a mixed bag. It's not the win for AI companies some headlines suggest — there are good and bad parts," he said in a lengthy X post this week.
"In short, the judge said Anthropic's use of pirated books was infringing, but its training on non-pirated work was fair use."
So Anthropic is on the hook for pirating the material, but the judge ruled that it doesn't need the authors' permission to train its models on legitimately acquired books.
This means Anthropic's fair use argument stood up in court, but the ruling may not be as wide-ranging as it seems.
"This is not a blanket ruling that all generative AI training is fair use. Other cases may go the other way, as the facts are different," Newton Rex said.
"The Copyright Office has already pointed out that some AI models are more transformative than others — for instance, they singled out AI music models as less transformative. Lobbyists will say this decision confirms that generative AI training is fair use — that's not true."