The human denominator: Why AI’s creativity tests the limits of ownership

29 April 2026

AI is rapidly transforming creative work, but is it a rival or a partner to human imagination? Cathy Li explores how AI is reshaping global intellectual property regimes and redefining what creativity means. 

The United States Patent and Trademark Office (USPTO) issued revised inventorship guidance on November 28, 2025, formally withdrawing its February 13, 2024 directive on AI‑assisted inventions. The update reaffirms a foundational principle of U.S. patent law: only humans can be inventors, and the presence of artificial intelligence, no matter how advanced, does not alter the legal standard for conception. The move eliminates the earlier attempt to apply the Pannu factors, traditionally used to assess joint inventorship among multiple natural persons, to situations involving artificial intelligence, reasoning that AI systems are not persons and thus cannot be joint inventors. 

In practical terms, the new guidance treats AI as a tool, no different in principle from lab equipment or software, and keeps the traditional inventorship analysis intact for human collaborators, even when they use AI. The revision arrives amid rapid adoption of AI across research and development (R&D) and creative workflows, a shift that is forcing fresh debates about the boundaries of inventorship and authorship in the AI era. 

Does AI threaten human creativity and originality?

“This blurring of authorship questions the value we assign to human intuition, emotion and intent. The traditional understanding of creativity as uniquely human, grounded in consciousness and subjective experience, is being tested by algorithms capable of generating seemingly novel results without awareness or intent,” said Elizabeth Wong, a partner at DLA Piper in Hong Kong.  

She explained that AI does not nullify human creativity, but rather true creativity in the AI era will likely hinge on human discernment, the ability to infuse machine-generated content with meaning, context and intentionality. 

“The synthesis of human imagination and computational capability may lead to richer creative outcomes, but only if humans remain conscious curators of technology’s expressive potential, not passive consumers of its convenience,” said Wong. 

But the law’s human‑centric framework now rubs sharply against a technological reality in which machines no longer merely assist; they generate. As AI systems increasingly produce sophisticated designs, compounds, code and artistic works, the boundary between human conception and machine output becomes harder to pinpoint. When a human cannot definitively identify where their own creativity ends and autonomous machine generation begins, the legal system defaults to the only source of authorship it has ever recognized: the human mind. That default, rooted in centuries of precedent, is now at the centre of some of the world’s most closely watched patent disputes. 

Few scholars have articulated this tension more sharply than Ryan Abbott, a professor of law and health sciences at the University of Surrey School of Law and adjunct assistant professor of medicine at the David Geffen School of Medicine at UCLA. Abbott’s work nearly a decade ago challenged the assumption that patent inventorship must be human at all. In his influential article, I Think, Therefore I Invent, Abbott chronicled autonomous outputs from systems, such as Stephen Thaler’s Creativity Machine and John Koza’s genetic‑programming Invention Machine, arguing that in some cases, computers already satisfy the criteria for inventorship. He warned that insisting on human inventors could distort incentives, undermine fairness and impede accurate disclosure. His questions, which once seemed abstract, are now central to policy debates: Who owns an invention conceived by a machine, and do creative algorithms challenge the very foundations of patent law? 

Recognizing machine creativity 

Many IP systems still define authorship and inventorship around human activity. That reflects history, and some still prefer it, arguing IP exists to reward human effort or protect human moral rights. Others say IP’s purpose is to benefit the public by spurring creation and sharing, so protection should not depend on whether a person or a machine produced the output. 

AI also raises hard ownership questions. Because AI has no legal personality, it cannot own IP, so the law must decide who does – the developer or trainer of the model, the user who prompts it, the organization that owns it, or some combination of these. The answer is complicated further by the patchwork of national and international IP rules. 

Stephen Thaler, who earlier built the Creativity Machine, later developed DABUS, an AI that integrates idea‑generation and idea‑evaluation networks in a single system. Thaler said DABUS can assemble complex ideas from simpler concepts, use memory, anticipate consequences, and produce novel outputs without human direction. However, he added, he could not reconstruct how DABUS arrived at two specific inventions, connecting this to the Lovelace Test for machine creativity. On that basis, Thaler filed U.S. patent applications naming DABUS as the sole inventor and assigning the rights to himself. 

This effort became the modern test case on AI inventorship. In Thaler v. Vidal, the U.S. Court of Appeals for the Federal Circuit in August 2022 held that, under the Patent Act, an inventor must be a “natural person,” relying on the statute’s use of “individual” and human‑specific pronouns; it therefore rejected listing DABUS as an inventor. The ruling aligned the United States with other major jurisdictions that likewise require a human inventor, while contrasting with South Africa’s formal grant that listed DABUS. The decision has intensified calls to modernize U.S. law. 

Advocates for reform (including Abbott, who leads the Artificial Inventor Project) argue that current statutory language defining “inventor” as an “individual” and tying procedures like the §115 oath to a human signer locks in a human‑only assumption and may discourage candour or push AI‑enabled advances into secrecy rather than disclosure. Proposed updates focus on redefining “inventor,” adjusting pronouns and oath mechanics, and clarifying ownership/disclosure so that human applicants can accurately record an AI’s role while securing rights. 

Abbott has also emphasized in public writing and interviews that, as AI capabilities and adoption accelerate, IP systems should encourage R&D on AI‑assisted invention and provide a path to protect valuable AI‑generated outputs, positions he and collaborators have articulated in scholarship and project summaries. 

The decision has had far-reaching implications: it reaffirmed a human-only boundary for U.S. inventorship, highlighted tensions between rapid AI-driven innovation and an inflexible statutory framework, and triggered calls for Congress to update the Patent Act. 

These necessary updates would specifically target the law’s core definition of an “inventor” as an “individual,” a term the court interpreted to mean only a human being, along with the supporting human-specific pronouns and the oath requirement that an AI cannot fulfill. The purpose of these changes is to avoid discouraging AI-enabled invention, incentivizing secrecy, and undermining the moral and utilitarian goals of the patent system. 

“In the beginning, the cases were largely met with scepticism, but now there’s more of a consensus that it is important to protect the generated outputs. I think the reasons have more to do with the growing capabilities of AI and the growing industry adoption. People are coming to understand that it is very critical to use AI in R&D, and that we want a system that encourages rather than discourages that,” Abbott said. 

Underlying these rulings is statutory language that presumes inventors and authors are human. Where the law bends is in ownership and disclosure: who claims rights, and how AI's role is recorded.

The current compromise is to secure the patent while attributing it to a person. Yet this sidesteps thorny legal and philosophical dilemmas that will only grow more pressing. Such challenges are not unique to patents; AI is disrupting the entire intellectual property landscape, with copyright being a major frontier. 

The emerging framework for AI copyright protection 

In China, courts evaluating AI‑generated works focus on authorship and require proof of human intellectual contribution before granting copyright protection. In a 2025 ruling, the Beijing Internet Court denied protection for an AI‑generated image because the plaintiff failed to provide original prompt records, generation logs or evidence of creative decision‑making, offering only after‑the‑fact recreations that the court deemed inadequate under Chinese law. The court emphasized that claimants must document their creative thinking, input commands, and the steps taken to select or modify AI outputs, consistent with the principle that the party asserting rights bears the evidentiary burden.  

This approach aligns with the landmark Li v. Liu decision in November 2023, where the Beijing Internet Court recognized copyright in an AI‑assisted image after finding that the plaintiff exercised substantial human authorship through iterative prompting, adjustment of over 150 prompts and parameters, and aesthetic refinements. Together, these rulings illustrate a clear judicial trend requiring detailed documentation of human input to establish genuine authorship in AI‑supported works. 

“Legal determination of authorship will depend on jurisdiction, context and the specific facts of creation and use. Prompt logs, tool settings, timestamps and generation records form valuable supporting evidence, but by themselves, they are not generally sufficient to legally establish authorship of AI-generated content. They merely demonstrate that a particular person interacted with an AI tool at a particular time,” said Sher Hann Chua, TMT/IP counsel at Linklaters in Hong Kong. 

The Beijing Internet Court also highlighted that AI-generated content raises unique evidentiary challenges, as human contribution is less apparent than in traditional creative processes. To address this, the court recommended that AI platforms incorporate traceability features capable of preserving prompts, generation histories and iterative modifications, enabling creators to meet the evidentiary standard for proving authorship. 

“Without this traceable record of human input, AI-generated works are unlikely to meet China’s legal threshold for originality,” said Wong. 

“Creators must ensure that they maintain original generation records when using AI tools, or risk the ability to demonstrate that copyright subsists in the AI-generated content. AI platform developers operating in China may now consider implementing features that automatically preserve generation logs, prompts and iterative processes, if such features are not yet readily available on their platforms,” said Chua. 

This Chinese framework, centred on verifying the human creative process behind AI-assisted works, sits within a broader global landscape of divergent legal standards. In the United States, courts have reaffirmed that copyright protection is available only for works authored by humans, most notably in Thaler v. Perlmutter (2025), where the D.C. Circuit held that an AI system cannot be the legal author of a work because the Copyright Act requires human authorship. In that case, the court did not address how much human input would be required if a human contributed to an AI-assisted work because the AI had been listed as the sole author. 

Japan’s guidance similarly states that AI-generated materials qualify as copyrightable works only when the human user has exercised creative intention and made a creative contribution, treating the AI as a tool. To date, however, no Japanese ruling has granted copyright solely on the basis of highly detailed prompts. 

In the United Kingdom, litigation such as Getty Images v. Stability AI (2025) has focused less on authorship of AI outputs and more on whether the training and distribution of AI models constitute copyright or trademark infringement. The High Court ultimately dismissed Getty’s secondary copyright claim, finding that the Stable Diffusion model did not store or reproduce Getty’s copyrighted images, although the court did identify limited trademark infringement where AI outputs resembled Getty watermarks. 

Taken together, these varying approaches mean that the protectability of AI-assisted works depends heavily on jurisdiction, requiring businesses operating internationally to navigate differing evidentiary burdens and authorship doctrines. 

Does AI challenge creativity? 

Since OpenAI’s breakthroughs pushed AI into the mainstream, debate has intensified. Some warn that machines could overtake humanity; others are building new companies on a wave of publicly accessible tools. AI is clearly transforming creative work, but does it ultimately challenge or amplify human creativity? 

“It’s not surprising that you get fewer distinct ideas, because they all come from the same underlying distribution,” wrote Lennart Meincke, principal investigator at the Wharton Generative AI Lab at the University of Pennsylvania’s Wharton School and a Ph.D. candidate at WHU – Otto Beisheim School of Management, in an article published by the University of Pennsylvania. 

Research by professors Gideon Nave and Christian Terwiesch of the Wharton School and Meincke shows that while the ChatGPT chatbot can improve the quality of individual ideas, it also causes groups to converge on more similar ideas, reducing the variety needed for breakthrough innovation. 

These findings press against the boundaries of intellectual property laws, a system historically designed to reward human originality. 

“Human creativity is not threatened, but it is evolving into a collaborative process between human direction and AI execution. The real challenge is ensuring that our IP systems continue to incentivize innovation, rather than creating legal black holes where valuable creations cannot be protected,” said Chua. 

“That being said, it is possible that AI can get so good so quickly that may cause significant unemployment amongst people, but I think if AI is going to do that then it is going to do that. There is not a lot you could do to legislate that away. I don’t think there is a good reason to legislate it away. Then the challenge becomes: if AI does so much of the work, how do we all share the benefits of that, instead of it going to Elon Musk and Sam Altman?” said Abbott.  

Abbott believes the changes AI brings are inevitable. He pointed out that every wave of new technology, from the first Industrial Revolution to today’s AI, has stirred fears about job loss. Some workers have been displaced, yet overall the economy has grown, new roles have emerged and living standards have improved. The problem, he argued, is that those most affected haven’t been adequately supported; the responsibility now is to ensure that people who lose jobs to technological change receive stronger social services. 

“In my opinion, AI undeniably challenges long-held notions of human creativity, but it does so in ways that are both disruptive and generative,” Wong said. She explained that, on one hand, generative AI forces us to rethink what “creativity” means when machines can rapidly produce outputs that mimic human originality – from paintings and music to literature and design. But rather than diminishing human creativity, she argued, AI can amplify and extend it.  

In the hands of an artist, designer or writer, AI becomes a powerful collaborator – expanding the palette of tools available for ideation and experimentation. By automating routine or generative aspects of creation, AI frees humans to focus on conceptual depth, emotional resonance and aesthetic vision. This hybrid model of creativity suggests that human ingenuity lies not in competing with algorithms, but in curating, shaping and contextualizing their outputs into meaningful cultural expressions.
