
Denmark’s Deepfake Regulation Proposal
Deepfake technology, which uses artificial intelligence to create hyper-realistic videos, images, and audio, has evolved from a creative tool into a global threat. Initially developed to enhance creativity in entertainment, healthcare, and education, synthetic media is now used for disinformation, fraud, and societal harm, raising ethical, cybersecurity, and political challenges that demand international policy responses. The accessibility of deepfake tools, such as Meta’s 'AI Twin' feature or mobile platforms like Captions.ai and Character.ai, has amplified the threat.
Deepfakes have become tools for sophisticated cybercrime. In one case, a deepfake video call impersonating a senior executive resulted in a $25 million loss for a company. In politics, deepfakes undermine democratic integrity: recent elections have seen widespread use of synthetic media in disinformation campaigns, including AI-generated endorsements of candidates by historical figures. These manipulations distort political discourse and erode public trust. The proliferation of deepfake technology among minors has had alarming consequences. School-aged children are increasingly using AI tools to create explicit and degrading images of peers, causing profound psychological harm. In the United States, the National Center for Missing & Exploited Children recorded 485,000 reports of AI-generated child sexual abuse material in the first half of 2025. In conflict zones, deepfakes have been used to sow instability by falsifying military communications and manipulating public sentiment.
In this troubling context, Denmark’s proposed law is a positive step toward addressing deepfake misuse. It mandates explicit consent for the use of digital likenesses, grants individuals ownership of their digital identity, and imposes legal consequences for violations. The legislation includes likeness protection, allowing individuals to sue over unauthorized recreations of their appearance or voice; performance protection, which extends to scripted, improvised, and non-verbal performances; and performer-specific protection, which shields public figures from AI-based mimicry. Platforms like Meta and TikTok could face fines if they fail to remove unauthorized deepfakes featuring Danish citizens. If implemented effectively, this framework could set a global standard for digital identity protection and influence updates to the European Union’s AI Act.
The rise of deepfake technology demands a multifaceted response to balance its transformative potential with its capacity for harm. Effective policy must integrate technical, educational, and regulatory measures. Technical safeguards, such as watermarking, content provenance certification, and automated detection systems, are critical to enhancing accountability and verifying digital content. Simultaneously, robust digital literacy programs, emphasizing ethical use of synthetic media, are essential to empower younger generations to navigate this digital landscape responsibly. Social media platforms must also face stringent accountability with clear legal frameworks mandating swift removal of infringing content.
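To make the idea of content provenance certification concrete, the sketch below shows one minimal pattern: a publisher issues a cryptographic tag over a media file, and any later edit to the bytes breaks verification. This is an illustration only, not the mechanism of any specific standard or platform; the key name and functions are hypothetical, and real provenance schemes (such as public-key signatures embedded in media metadata) are considerably more elaborate.

```python
import hmac
import hashlib

# Hypothetical signing key held by the publishing platform (illustration only).
SECRET_KEY = b"platform-provenance-key"

def certify(media_bytes: bytes) -> str:
    """Issue a provenance tag: an HMAC-SHA256 over the media content."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, tag: str) -> bool:
    """Check media against its provenance tag; any byte-level edit breaks the match."""
    expected = hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"\x89PNG...frame data..."    # stand-in for real media bytes
tag = certify(original)

assert verify(original, tag)             # untouched media passes
assert not verify(original + b"x", tag)  # any alteration is detected
```

A symmetric key, as used here, only lets the issuing platform verify its own tags; systems meant for public verification would instead sign with a private key so that anyone holding the public key can check authenticity.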
Denmark’s legislative proposal, which prioritizes explicit consent and digital identity ownership, offers a pioneering model for curbing misuse. However, a comprehensive global strategy, uniting policy, technology, and education, remains imperative.