Of copyright, Denmark and deepfakes: The Asian experience

30 September 2025

Danish legislators are proposing a digital copyright amendment to give people control over their image, voice and facial features to curb deepfakes. Espie Angelica A. de Leon examines the bill’s impact, legal debates and how other jurisdictions in Asia are responding to the misuse of AI-generated content. 

Danish legislators have a plan and it’s stirring up a buzz in the IP world. The plan: Give people the copyright to their own image, facial features and voice to arrest the rise of deepfakes. 

In June 2025, it was reported that a group of Danish legislators was crafting a bill to amend Denmark’s digital copyright law. The bill aims to stop the creation and sharing of AI-generated deepfakes featuring the likeness of individuals, including their voice, without their consent. Victims may ask that the deepfake content be removed and demand compensation. Social media platforms that fail to take down such content may also be fined.

Merriam-Webster defines “deepfake” as “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.” Deepfakes may be used for entertainment, education, information and translation, but they are also often misused. They may be used as tools for false endorsements, disinformation and defamation campaigns, harassment, financial scams and fraud. In the age of social media, deepfake technology easily rears its ugly head. According to media intelligence platform DeepMedia, around 500,000 deepfakes consisting of videos and voice recordings were shared globally in 2023.  

Famous figures are easy targets. In 2023, deepfake photos of Pope Francis wearing a fashionable puffer jacket outside the Vatican went viral. In 2024, pornographic deepfake images of Taylor Swift also went viral. One particular post amassed more than 45 million views – and that’s just the views, before counting how many times it was shared.

Former U.S. president Joe Biden, Ukrainian president Volodymyr Zelenskyy, Tom Hanks, Oprah Winfrey, Elon Musk, Scarlett Johansson, Jamie Lee Curtis – the list of famous victims goes on. Unfortunately, the list also includes ordinary people.

Denmark’s copyright law amendment: Precedent or problematic? 

According to an article in Time, Denmark’s department of culture was scheduled to submit the proposed amendment for consultation in the summer of 2025. 

This amendment, said Akshata Kamath, an associate partner at Krishnamurthy & Co. (also known as K Law) in Bengaluru, is a step forward in controlling the misuse of generative AI. She reasoned: “It will codify the protection for performers, performing artists and natural persons against unauthorized use of their realistic digitally generated imitations. Extending protection against realistic digitally generated imitations of personal characteristics to foreign nationals will provide them with clarity on their rights in Denmark and enable their access to redressal channels. Since these rights will last until 50 years have elapsed after the death of the performing artist, performer or imitated person, they will serve to protect the person’s integrity even posthumously.”  

Kamath added that the amendment sets out exceptions to ensure that the scope of the provisions is not abused.

“If adopted, the amendment will set a new precedent at the international level and provide further support to judicial systems,” she stated. 

For David Haskell, a partner at Abacus IP in Phnom Penh, the amendment is problematic. “I think it’s a well-intended proposal, but somewhat problematic from a legal point of view. Copyright law is an odd way to address this issue. Copyright protects original works of authorship and is meant as an incentive to create works. I don’t see how one’s facial features or voice is an original work of authorship,” he explained. 

According to Haskell, deepfakes used for nefarious purposes are a problem better addressed by laws governing privacy rights and rights of publicity. Fortunately for Cambodia, a data protection and privacy bill is currently making its way through the legislature. But challenges remain. “In the end, it’s a game of cat and mouse, as these deepfakes can be created and spread much faster than the legal system can react to. Ultimately, we’ll need a combination of laws and technological measures if we’re ever to get a grip on the problem,” Haskell explained. 

Divina Ilas-Panganiban, a partner at Quisumbing Torres in Manila, agreed with Haskell, calling the proposed amendment a “somewhat rebellious principle.” 

“From a Philippine copyright law perspective, it may be difficult to adopt the somewhat rebellious principle of providing copyright protection over an individual’s own body, facial features and voice. Copyright law grants authors and creators legal protection over their literary, artistic, dramatic and other types of creations or works. A critical condition to enjoy the bundle of exclusive rights is the act of creation,” she pointed out. “‘Creation’ is premised on an action or process of bringing something into existence.”  

Explaining further, Panganiban said the author or creator of the artistic work exerted a certain level of effort to come up with his/her original creation as an expression of an idea. “This idea-expression dichotomy does not appear to be aligned with Denmark’s bill which seeks to grant copyright over something, which has been in existence in the first place, and without any need for an individual to create, express or work on the copyrighted material,” she noted. 

“Applying this basic policy [of copyright protection] in the case of deepfakes, where the copyrighted material involves one’s body, voice or facial features, the copyright will have to be best attributed to the divine creator, or to some higher form or process of bringing the human being into existence, over and beyond the human being itself,” Panganiban added.  

Like Haskell, Panganiban also expressed her belief that a person’s likeness and voice are better protected by data privacy, human rights, cybersecurity or similar civil and penal laws.  

Deepfakes in Asia 

Deepfakes are among the top five types of identity fraud committed around the world. 

According to the Identity Fraud Report by verification and monitoring platform Sumsub released on November 19, 2024, Singapore saw the highest year-on-year rise in incidence of identity fraud in Asia Pacific in 2024: a jump of 207 percent from 2023. Thailand recorded the second highest year-on-year increase with 206 percent, followed by Indonesia with 201 percent. 

In terms of deepfakes in particular, the report indicated that the number of deepfakes quadrupled globally. Among Asia-Pacific jurisdictions, South Korea recorded the highest increase at 735 percent, followed jointly by Singapore and Cambodia at 240 percent. 

“I have not heard of any high-profile deepfakes in Cambodia, though it is probably only a matter of time. A real issue is the scam centres operating here,” Haskell shared, “which I suspect are already making use of deepfakes.”  

In Singapore in 2023, then prime minister Lee Hsien Loong appeared in a manipulated video showing him speaking about an investment opportunity. Even students of the Singapore Sports School fell prey to the technology when AI-generated nude photos of them circulated online. Scams involving deepfakes have also surfaced in Singapore.

Legislation is in place to address such cases, such as the Online Criminal Harms Act. To combat the proliferation of deepfakes using the likeness of election candidates, the Singapore Parliament also passed the Elections (Integrity of Online Advertising) (Amendment) Bill on October 15, 2024. The law, which came into effect before the general election on May 3, 2025, prohibits the publication, boosting, sharing and reposting of such deepfake content.

Election-related deepfakes have also targeted leaders elsewhere in the region, including the late Indonesian president Suharto and Philippine president Ferdinand Marcos Jr.

Suharto figured in a manipulated video showing him endorsing candidates for the presidential election. The same thing happened to current Indonesian president Prabowo Subianto. Along with other government officials, Prabowo appeared in deepfake videos urging Indonesians to sign up for government benefits and pay administration fees. The benefits turned out to be a hoax. An investigation ensued, and two suspects, believed to be part of a larger syndicate engaged in illicit AI-related activities, have been identified.

Meanwhile, a deepfake audio clip mimicking Marcos’s voice surfaced. The president is heard ordering the Philippine military to take action against China. The two countries are currently embroiled in a territorial dispute.  

A number of bills have been filed in Congress to address the growing problem of deepfakes in the Philippines. One of these is House Bill No. 3214, or the proposed Deepfake Regulation Act. Under the bill, anybody who knowingly creates, distributes or refuses to take down deepfake content may face prison time of two to five years and a fine of P50,000 (US$875) to P200,000 (US$3,500). Online platforms that refuse to take down the deepfake will also face fines of P50,000 (US$875) per day of non-compliance with the takedown order. Furthermore, the bill encourages people to register their likeness as a trademark with the Intellectual Property Office of the Philippines. According to Sumsub’s 2023 report, the Philippines registered the highest increase in deepfakes in Asia Pacific from 2022 to 2023.

“In the Philippines, while there are laws on data privacy and anti-discrimination, there are no laws specifically governing the use of AI. In order to regulate and guide the country in the prevalence of AI, various government agencies have launched AI roadmaps or strategies. Yet, there is still a need to combat the malicious use of AI. These AI-generated fabrications pose serious threats to privacy, dignity and personal security,” wrote congressman Brian Raymund Yamsuan in the explanatory note for House Bill No. 3214, as reported in an article in Inquirer.net.

In India, the creation and sharing of AI-generated deepfakes featuring celebrities is prevalent. One of the earliest such cases was Amitabh Bachchan v. Rajat Nagi & Ors. in 2022. Indian actor and TV personality Amitabh Bachchan filed a case against nine individuals for using his voice and photographs without his consent to help sell clothing items on their website and encourage people to download apps that they developed.

The court granted an ad-interim ex parte order of injunction in favour of the plaintiff. It stated: “The defendants appear to be using the plaintiff’s celebrity status for promoting their own activities, without his authorization or permission. The plaintiff is, therefore, likely to suffer grave irreparable harm and injury of his reputation. In fact, some of the activities complained of may also bring disrepute to him.”  

Several other similar cases involving celebrities and public personalities have piled up in Indian courts. Among these are Anil Kapoor v. Simply Life India; Arijit Singh v. Codible Ventures and Ors.; Sadhguru Jagadish Vasudev and Anr. v. Igor Isakov & Ors.; and Ankur Warikoo and Anr. v. John Doe & Ors.

“In India, personality and publicity rights are not covered under a single regulation,” Kamath revealed. “Primarily, the right to privacy – which includes protection from dilution, tarnishment, blurring, misappropriation – is covered under the law of Torts. Further, Article 21 of the Constitution of India which guarantees the ‘protection of life and personal liberty’ also encompasses the right to privacy as a fundamental right. They are often applied in legal matters concerning violations of personality rights resulting from the use of deepfakes.” 

India’s Copyright Act, 1957 also provides protection against any unauthorized distortion, mutilation or any other modification, dissemination or communication of a person’s performances, voice, video recordings and the like which is likely to cause prejudice or harm to such person’s reputation. 

Another avenue is trademark protection. India’s Trade Marks Act, 1999 allows an individual to protect his/her name and image as a trademark under two conditions: 1) It is capable of being represented graphically; and 2) It can distinguish the goods and services it represents from others in the market. 

How will Denmark’s proposed amendment to its digital copyright law affect Asian jurisdictions? Are they likely to follow suit?

“Considering that the misuse of AI-generated deepfakes is not limited to a single jurisdiction and most Asian jurisdictions including India are already having to deal with matters relating to violation of personality rights, this amendment will certainly act as a means of awareness to the masses,” said Kamath. “If it is able to practically control the unauthorized creation and use of deepfakes and reduce the burden on the judiciary, lawmakers in Asian jurisdictions will certainly assess its impact and hopefully consider making changes to their legislation.” 

Denmark’s culture minister Jakob Engel-Schmidt told The Guardian: “In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI.” 

People may have different reactions to the proposed amendment to Denmark’s digital copyright law. And different jurisdictions may have varied measures in place or in the pipeline to counter this growing global problem. Yet, the fact remains that bad actors are in every corner of the world, out to exploit or harm anyone’s image or likeness, facial features and voice for their own benefit. Something must be done in every jurisdiction, immediately. 
