“In the beginning, the cases were largely met with scepticism, but now there’s more of a consensus that it is important to protect the generated outputs. I think the reasons have more to do with the growing capabilities of AI and the growing industry adoption. People are coming to understand that it is very critical to use AI in R&D, and that we want a system that encourages rather than discourages that,” Abbott said.
Underlying these rulings is statutory language that presumes inventors and authors are human. Where the law bends is in ownership and disclosure: who claims rights, and how AI’s role is recorded.
The current compromise is to secure the patent while attributing it to a person. Yet this sidesteps thorny legal and philosophical dilemmas that will only grow more pressing. Such challenges are not unique to patents; AI is disrupting the entire intellectual property landscape, with copyright being a major frontier.
The emerging framework for AI copyright protection
In China, courts evaluating AI‑generated works focus on authorship and require proof of human intellectual contribution before granting copyright protection. In a 2025 ruling, the Beijing Internet Court denied protection for an AI‑generated image because the plaintiff failed to provide original prompt records, generation logs or evidence of creative decision‑making, offering only after‑the‑fact recreations that the court deemed inadequate under Chinese law. The court emphasized that claimants must document their creative thinking, input commands, and the steps taken to select or modify AI outputs, consistent with the principle that the party asserting rights bears the evidentiary burden.
This approach aligns with the landmark Li v. Liu decision in November 2023, where the Beijing Internet Court recognized copyright in an AI‑assisted image after finding that the plaintiff exercised substantial human authorship through iterative prompting, adjustment of over 150 prompts and parameters, and aesthetic refinements. Together, these rulings illustrate a clear judicial trend requiring detailed documentation of human input to establish genuine authorship in AI‑supported works.
“Legal determination of authorship will depend on jurisdiction, context and the specific facts of creation and use. Prompt logs, tool settings, timestamps and generation records form valuable supporting evidence, but by themselves, they are not generally sufficient to legally establish authorship of AI-generated content. They merely demonstrate that a particular person interacted with an AI tool at a particular time,” said Sher Hann Chua, TMT/IP counsel at Linklaters in Hong Kong.
The court also highlighted that AI-generated content raises unique evidentiary challenges, as human contribution is less apparent than in traditional creative processes. To address this, the Beijing Internet Court recommended that AI platforms incorporate traceability features capable of preserving prompts, generation histories and iterative modifications, enabling creators to meet the evidentiary standard for proving authorship.
“Without this traceable record of human input, AI-generated works are unlikely to meet China’s legal threshold for originality,” said Wong.
“Creators must ensure that they maintain original generation records when using AI tools, or risk being unable to demonstrate that copyright subsists in the AI-generated content. AI platform developers operating in China may now consider implementing features that automatically preserve generation logs, prompts and iterative processes, if such features are not yet readily available on their platforms,” said Chua.
This Chinese framework, centred on verifying the human creative process behind AI-assisted works, sits within a broader global landscape of divergent legal standards. In the United States, courts have reaffirmed that copyright protection is available only for works authored by humans, most notably in Thaler v. Perlmutter (2025), where the D.C. Circuit held that an AI system cannot be the legal author of a work because the Copyright Act requires human authorship. In that case, the court did not address how much human input would be required if a human contributed to an AI-assisted work because the AI had been listed as the sole author.
Japan’s guidance similarly states that AI-generated materials qualify as copyrightable works only when the human user has exercised creative intention and made a creative contribution, treating the AI as a tool. To date, however, no Japanese ruling has been identified that grants copyright solely on the basis of highly detailed prompts.
In the United Kingdom, litigation such as Getty Images v. Stability AI (2025) has focused less on authorship of AI outputs and more on whether the training and distribution of AI models constitute copyright or trademark infringement. The High Court ultimately dismissed Getty’s secondary copyright claim, finding that the Stable Diffusion model did not store or reproduce Getty’s copyrighted images, although the court did identify limited trademark infringement where AI outputs resembled Getty watermarks.
Taken together, these varying approaches mean that the protectability of AI-assisted works depends heavily on jurisdiction, requiring businesses operating internationally to navigate differing evidentiary burdens and authorship doctrines.
Does AI challenge creativity?
Since OpenAI’s breakthroughs pushed AI into the mainstream, debate has intensified. Some warn that machines could overtake humanity; others are building new companies on a wave of publicly accessible tools. AI is clearly transforming creative work, but does it ultimately challenge or amplify human creativity?
“It’s not surprising that you get fewer distinct ideas, because they all come from the same underlying distribution,” wrote Lennart Meincke, principal investigator at the Wharton Generative AI Lab at the University of Pennsylvania’s Wharton School and a Ph.D. candidate at WHU – Otto Beisheim School of Management, in an article published by the University of Pennsylvania.
Research by Meincke and Wharton School professors Gideon Nave and Christian Terwiesch shows that while ChatGPT can improve the quality of individual ideas, it also causes groups to converge on more similar ideas, reducing the variety needed for breakthrough innovation.
These findings press against the boundaries of intellectual property laws, a system historically designed to reward human originality.