What happens when AI wins a copyright dispute?
15 July 2025
Courts are increasingly testing how copyright applies to artificial intelligence. Excel V. Dyquiangco looks at recent rulings, global lawsuits and licensing models, underscoring the need for clearer legal frameworks to balance innovation with creator protection.
A U.S. judge recently sided with Mark Zuckerberg’s Meta in a copyright lawsuit filed by comedian Sarah Silverman and author Ta-Nehisi Coates, who argued that the Facebook parent company broke copyright law by using their books without permission to train its Llama artificial intelligence model. In his San Francisco court ruling, U.S. District Judge Vince Chhabria determined Meta’s actions constituted fair use, a legal doctrine allowing limited use of copyrighted material. He reasoned that the authors failed to provide enough evidence that Meta’s AI would dilute the market by creating works like their own.
This decision marks the second major legal victory for the AI industry, following a similar ruling in which a judge found that AI developer Anthropic’s use of authors’ books to train its models qualified as fair use.
“In a sense, these decisions are a significant setback for the creative industry as they limit the legal recourse available to creators to vindicate their copyright entitlements,” said Timothy Webb, a partner at Clayton Utz in Sydney. “If fair use were to apply to most, if not all, uses of copyrighted material to train AI models, creators would have to rely on establishing copyright infringement solely in the output generated by the AI model. Though this may be easier to establish for some copyrighted works, others, such as books, present greater difficulty in establishing reproduction.”
He added: “Considering the lack of transparency across the AI industry as to how AI models operate, the virtually infinite prompts and outputs available to users and the limited resources available to most artists in comparison to their corporate AI counterparts, it is a David versus Goliath scenario.”
Moreover, Webb said the “wins” were not as clear as they first appeared. The judge in the Anthropic case held that its storage of over 7 million books in an online repository constituted copyright infringement in the absence of a fair use justification, and the Meta decision is even more overtly sympathetic to creatives in its reasoning: Chhabria remarked that he had little choice but to grant summary judgment to Meta because of deficiencies in the plaintiffs’ evidence of market harm. The judge observed: “No matter how transformative LLM (large language model) training may be, it’s hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books.”
In India, an ongoing lawsuit raises similar questions under the Indian Copyright Act, 1957. Shivam Vikram Singh, a partner at Remfry & Sagar in Gurugram, also pointed to a U.S. case in which the court recently granted summary judgment that such copying was not fair use, emphasizing that the purpose of the copying was to build a directly competing product.
“The law – any law – leans towards balancing equities,” he said. “The current AI litigations represent uncharted waters wherein fresh judicial interpretation is required to deal with new technology. This is not the first technological leap that copyright regimes have faced, and ultimately, an equilibrium will be found between the rights and obligations of all parties – though it may take time. A lot of AI-based cases are ongoing across the globe, and a few judgments have trickled in from the U.S., but it cannot be said that the judgments are tilting in favour of copyright owners or generative AI (GenAI) tools.”
Each dispute is multi-faceted, and it is difficult to give a black and white answer, he noted. “Copyright claims are being made and challenged at the stage of storage of training data, its use, processing and the resultant output. The floor is open for creative arguments and decision-making, and it will be interesting to see how the law evolves.”
Danny Kobrata, a partner at K&K Advocates in Jakarta, added: “The recent court rulings definitely raise the bar for creators. The mere use of a work by AI without consent does not always mean copyright infringement. Unless a creator can prove that the AI has directly substituted their market, they can no longer stop their work from being used as AI training data, even if it was used without the author’s consent.”
Collaboration as key
In response to this ruling, some lawyers in the IP field see collaboration as the key. “We have already seen a number of companies enter into licensing arrangements permitting AI models to use their portfolio of copyrighted works in exchange for compensation (for example, OpenAI’s agreement with the Financial Times in 2024). However, in the absence of clear legal guidance and given the cost of pursuing litigation, creators might struggle to get AI companies to the negotiating table,” said Webb.
For Singh, the implications may not be positive if AI developers are given a free hand to train LLMs: creators and authors may be discouraged if a flood of AI-generated competing products reduces the monetization potential of the works produced through their skill, labour and judgment.
However, he said that a balanced approach would be one where the creative industry can reap the benefit of its creations and AI models can also develop optimally; few would argue that the transformational leap promised by AI ought to be stymied.
“A balance could be achieved through licensing models that take into account factors such as the amount of copyrighted work used; slabs for progressive licence fees; collective licensing schemes; better metadata standards; opt-out models; efficient bookkeeping and record maintenance by creators; among others,” he said. “Such models are likely to emerge in the aftermath of the ongoing AI litigations.”
Kobrata, meanwhile, also noted that U.S. court decisions often carry considerable influence in the intellectual property field. “U.S. rulings could still influence international conversations about the scope of copyright and AI,” he said. “For the creative industry, allowing AI to train on copyrighted works without consent may lead to less incentive for creators to produce new works, as their originality feels replaceable by machines trained on their own content.”
“The key challenge is balancing protection for creators with room for AI to grow,” he added. “Requiring permission for every use may slow progress, but allowing free use risks exploiting creators. What’s needed are clear, balanced rules, defining when AI outputs cross the line and how much reuse is fair. Without this clarity, both innovation and creativity could suffer.”
Manh Hung Tran, managing partner at BMVN International, the Baker McKenzie affiliate in Hanoi, had a different take on the issue. “We doubt the implication of this case will be felt profoundly in the creative industry as a whole, especially for artists and authors seeking to enforce their rights in countries outside the United States – or outside the Northern District of California even – given that it is a ruling of first instance based on U.S. law for a limited jurisdiction,” he said.
“However, the ruling does offer a lot of food for thought, because it touches one of the most fundamental questions in the context of both copyright laws and technological innovation – whether a new technology can substitute a more-traditional-but-not-yet-obsolete way of doing things in the market,” he continued. “Since this is also the key consideration of the Berne Convention’s three-step copyright limitation test, the other jurisdictions’ laws are likely to reflect this factor to at least a certain extent. The district court judge’s assessment of whether and how the use of copyrighted materials to train an LLM might harm the market for those original works can be a part of the thought process of any local court that seeks to analyse whether such training can interfere with the rightsholder’s ability to exploit their work commercially and/or can unreasonably prejudice the rightsholder’s legitimate rights and interests.”
That said, the overall impact of the ruling may still be offset by local nuances. For example, Vietnamese law currently does not provide any fair use exceptions suitable for the use of copyrighted materials in training LLMs. A Vietnamese court could therefore reject the AI side’s fair use defence without even reaching the question of likelihood of substitution or market harm.
“The creative industry should not lose all hope because of this ruling. They should, of course, study its reasoning and the gaps in evidence it identified, so that they can be better prepared when presenting their case,” Manh said. “On the other hand, if empirical evidence shows that certain activities related to AI/LLM operations are not harmful to rightsholders after all, increasing the chance of losing in copyright disputes, creators should consider taking alternative approaches to protect their work from unauthorized use, such as setting up more technological protection measures or digital rights management guardrails (TPMs/DRMs) over their works.”
“In addition, although copyright law generally disregards high creativity and artistic merit, the value of a work of art in the market still largely lies in these factors,” he noted. “Given the current advancement of GenAI technology, it might take some more time for any existing model to reach the level of creativity a human can potentially have.”
Are copyright laws ready for AI?
“If copyright systems are designed to protect the economic interests of creators, uses that build upon the original works in a transformative manner without impacting those economic interests should be permitted,” said Webb. “Against the backdrop of free speech that underpins U.S. policy and rhetoric, any limit upon creative expression is rightfully scrutinized and confined.”
However, he said that it appears perverse to describe the use of copyrighted material to train AI models as “fair” when the ultimate end product is a service capable of generating (at unprecedented speed) precisely the type of material that competes with the original work.
“In fact, the desirability of high-quality copyrighted material itself (such as books and artistic works) demonstrates some level of intention to dilute the market, given that the better the training material, the higher quality the end product,” he said. “These observations are echoed in the recent complaint filed by Disney and Universal Studios against Midjourney Inc., describing the latter company’s image generator as a ‘bottomless pit of plagiarism’ capable of generating innumerable copies of its famous copyrighted characters.”
Vietnam’s IP laws currently provide for fair use by listing very specific acts that do not infringe copyright, such as reproducing a single copy for personal study or research, or reasonably quoting a work for certain purposes such as comment, introduction and illustration.
“These regulations are in dire need of change if they are to adapt to AI-related disputes,” said Manh. “However, this only means that the former type of copyright laws can handle these disputes because they are generally broad enough and easier to apply. If we wish to respect the core purpose of copyright laws mentioned above, there will need to be a reassessment of the cost-benefit dichotomy that takes into consideration economic aspects such as market substitution, potential anti-competitive effects, market demands and consumer perception to understand which stakeholders will sustain the harm, the scale of that harm and whether a certain amount of harm or restriction should be compromised in the interest of the public.”
For Kobrata: “In the past, determining the author of a work was relatively straightforward. Typically, it was the individual who created the work using their skill, judgment and creativity. With AI-generated work, this concept becomes far more complicated. Who should be considered the author when a work is entirely produced by an AI? Is it the company that owns the AI, the user, or is there no author at all? For some countries, this question is still open and subject to ongoing discussion.”
“The growing use of copyrighted materials in AI development clearly shows that new legal frameworks are needed. These frameworks must strike a balance between protecting creators’ rights and allowing technological innovation to thrive. Without clear and updated rules, both legal certainty and creative trust are at risk,” he said.