The Sound of Music in the Age of AI: When Your ‘Likeness’ Becomes Both Input and Output — Can Personality Rights Keep in Tune, or Is a Copyright Renaissance Due?
A practical legal explainer for artists in the music industry
This blog by Cézanne Van den Bergh offers a practical explainer for artists, especially musicians, on the legal aspects of how Generative AI (GenAI) is reshaping the music industry. The legal complexities around AI can sound intimidating, but understanding how it may influence an artist’s identity and career is indispensable, not only to guard against its risks but also to harness its potential. This guide is designed with artists in mind, helping you navigate this fast-changing landscape and keep creating music on your own terms. For clarity, in this article ‘(mis)use’ by AI refers both to the use of an artist’s likeness as input for training AI models and to its reproduction in AI-generated outputs like voice cloning. The article begins by outlining the tension between GenAI’s role in the music industry and the EU and US legal frameworks, along with the recent case law responding to it. It then explains how established laws, such as personality rights and copyright, can protect your creative work from GenAI. The discussion then turns to whether recent initiatives in some EU Member States could effectively and practically reshape copyright law and give artists greater control. Finally, the article ends on a positive note, highlighting GenAI’s potential as an ally that expands creative and revenue possibilities for artists.
Current State of Affairs: Legal Tension Between GenAI and the Music Industry
This section outlines the fast-evolving relationship between GenAI and the music industry, emphasising key AI technologies shaping the field and how EU and US laws, along with recent case law, have generally responded so far.
The Rise of AI Voice Cloning
In April 2023, AI deepfake music flooded the internet, most notably the song Heart on My Sleeve, which mimicked the voices of Drake and The Weeknd. Most tracks were soon removed, likely under personality and trademark rights rather than copyright, and no formal case followed. The episode illustrates how AI-generated songs cloning artists’ voices are rapidly reshaping music production, while traditional copyright law lags behind in addressing GenAI. These tools raise pressing legal questions: Who owns the rights to an AI-generated song? Where is the line between using AI as a tool within your own artistic work and a work considered to have been created by the AI itself?
As a rule, both US and EU copyright law require a human author, so works created entirely or predominantly by AI, with little to no human input, are not protected. By contrast, the UK’s Copyright, Designs and Patents Act 1988 (CDPA) grants copyright in a computer-generated work to the person who made the arrangements necessary for its creation, a rare approach internationally, though the musician must still show some creative input. In any case, whether in the US, EU, or UK, musicians using AI must demonstrate original human input, whether by editing AI outputs, writing lyrics, or arranging material creatively, to claim authorship and secure copyright protection. As a first rule, therefore, always keep a human element in your creative process when using AI tools. Additionally, although the law remains unsettled, you may be able to challenge or request removal if an AI-generated song imitates your distinctive voice or style without consent, as it may constitute an illegal derivative work. Keep in mind, however, that your voice itself is not currently protected under traditional EU or US copyright law, and courts are still grappling with how to address AI’s implications, as illustrated below.
The Rise of Training AI Models on Copyrighted Music
Training AI models on large music libraries is another highly contentious issue. By design, AI must analyse and feed on large numbers of recordings to “learn” and create content “in the style of” an artist, which, without permission, may infringe copyright. US tech companies argue this qualifies as “transformative fair use,” a flexible American doctrine that evaluates multiple factors to determine whether copyrighted works can be used without consent. In contrast, artists and record labels argue in ongoing lawsuits that unlicensed training misappropriates their creative identity, raising complex legal questions. At the end of 2024, the UK government proposed letting AI developers use copyrighted music for training unless rights holders opted out. In response, over a thousand artists, including Paul McCartney and Ed Sheeran, released a “silent” album with track titles condemning what they called “legalised music theft”. By early 2025, officials hinted the opt-out plan might be reconsidered. Though no rules are final, the UK episode highlights how important it is for artists to monitor the evolving legal landscape.
EU Case Law
In the EU, using copyrighted material to train AI models may fall under the text-and-data-mining (TDM) exception, set out in Article 4 of the DSM Directive and referenced in Article 53 of the AI Act. The TDM exception permits copying copyrighted material for computational analysis to extract data patterns, whether for scientific research or certain commercial uses, including in the music industry. Full clarity, however, will likely depend on the Court of Justice of the European Union’s (CJEU) forthcoming ruling in Like Company v. Google Ireland (Case C-250/25). The case asks whether AI model training amounts to the reproduction of copyright-protected works and their communication to a “new public” and, if so, whether such commercial training nonetheless falls within the scope of the TDM exception. Regardless of the CJEU’s decision, rights holders can always opt their music out of AI training uses, either through machine-readable indicators, such as robots.txt or metadata (a minimal illustrative sketch follows below), or through human-readable terms in website agreements. For instance, Sabam, the Belgian collective management organisation for artists’ royalties, has already excluded its entire catalogue from the TDM exception, marking a significant move for the Belgian music industry.
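For readers curious what a “machine-readable indicator” can look like in practice, here is a minimal illustrative sketch in Python that generates a robots.txt file disallowing some well-known AI-training crawlers. The user-agent names (GPTBot, Google-Extended, CCBot) are examples only and may change over time, and whether a robots.txt entry alone amounts to a rights reservation expressed “in an appropriate manner” under Article 4 of the DSM Directive is still debated, so treat this as a sketch rather than a guaranteed opt-out.

```python
# Illustrative sketch only (not legal or technical advice): one way a site owner
# might signal an AI-training opt-out in machine-readable form, by disallowing
# known AI crawlers in robots.txt. The user-agent names below are assumed
# examples and may change; check each provider's current documentation.

AI_CRAWLERS = ["GPTBot", "Google-Extended", "CCBot"]  # example AI-related user agents


def build_robots_txt(crawlers):
    """Return a robots.txt body that disallows the listed crawlers site-wide."""
    records = []
    for agent in crawlers:
        records.append(f"User-agent: {agent}\nDisallow: /")
    return "\n\n".join(records) + "\n"


if __name__ == "__main__":
    body = build_robots_txt(AI_CRAWLERS)
    print(body)  # in practice, this text would be served as /robots.txt at the site root
```

In practice, rights holders and collective management organisations tend to combine such technical signals with explicit human-readable reservations in their terms, as Sabam has done for its entire catalogue.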
US Case Law
Over the past two years, US courts have seen a surge of cases involving the use of copyrighted works, including visual art, books, and music, for AI training, with mixed results for artists. A key case which was considered a positive step for artists is Andersen v. Stability AI Ltd. (2024). Visual artists sued Stability AI for training its model on billions of copyrighted images without consent. The judge allowed the case to proceed, finding both direct and induced infringement claims plausible. The ruling cited evidence that AI outputs could reproduce training images with precise prompts, supporting the argument that the model itself could constitute an infringing copy and that distributing it could amount to distributing copyrighted works. The case demonstrates that US copyright law can be applied, at least when AI outputs closely replicate original works.
On the other hand, the recent Authors v. Anthropic ruling (2025) held that training the Claude AI chatbot on copyrighted books qualifies as transformative fair use, but that acquiring those books from pirated sources would constitute direct copyright infringement. However, the US Copyright Office’s May 2025 conclusion that fair use will not always apply to AI training highlights ongoing regulatory uncertainty, with standards likely to be shaped by fragmented court rulings. Anthropic ultimately agreed to a $1.5 billion out-of-court settlement, sending a strong moral and operational signal to AI companies. The fair-use debate continues in the courts, with split approaches such as the recent Meta case, which suggested that using copyrighted works for AI training could still be unlawful in many circumstances. This judicial uncertainty raises the question whether copyright is the appropriate tool to address AI training and deepfake music, or whether another type of right, like personality rights, might be better suited.
Can Personality Rights Save Us?
Personality rights are private fundamental rights that everyone possesses simply by being a person. They protect against the unconsented use of aspects of one’s identity that make someone unique and recognisable, such as image, portrait, likeness, voice, name, reputation, and privacy. These rights give individuals control over the (commercial) use of their identity and are generally protected under the right to privacy in the European Convention on Human Rights (ECHR) as well as national constitutions. Public figures, however, are usually expected to tolerate greater use of their identity due to public interest and the right to information. Nevertheless, personality rights play a key role in preventing the unauthorised exploitation of your identity, such as in AI deepfake music, taking into account factors like context, prominence, purpose, distribution channels, and duration.
Personality rights offer an advantage over copyright because they do not require proof that an “original” work of authorship has been copied. If only your voice, style, or likeness is imitated in an AI song, rather than a specific recorded or published track, the subject matter is “not identifiable with sufficient precision and objectivity” to qualify as a protected work, so traditional copyright offers little help without significant adaptation. Under national laws, personality rights generally require claimants to show a “reasonable interest”, such as defamation or privacy concerns, to challenge the use of their likeness, with a higher threshold for public figures. In the context of deepfaked songs and AI training databases, artists will almost always have such an interest, given their loss of financial control and their inability to manage their image or reputation. Yet because courts must still weigh that interest against competing interests, such as freedom of expression and information, protection is neither automatic nor guaranteed, so personality rights, while useful, are not always sufficient to help musicians against AI deepfake music and training. With AI here to stay, it is worth exploring a recently proposed, copyright-based system that would not legally require any such weighing of interests.
A Step Further: Denmark’s Groundbreaking Copyright for the Human Body
In June 2025, Denmark introduced a groundbreaking legal proposal granting individuals copyright-like protection over their face, voice, and body, giving them the right to oppose any unauthorised digital use of their identity. The approach treats a person’s physical and vocal identity as a form of intellectual property, a first-of-its-kind measure that would make AI deepfake music and similar imitations unlawful by default. Under this system, artists would no longer need to rely on personality or privacy rights, or prove a “reasonable interest,” to request the removal of AI-generated songs. This is game-changing for artists whose likeness and sound are central to their income, branding, and creative control. During its 2025 presidency of the Council of the EU, Denmark aims to promote similar protections across the EU. The Netherlands is preparing comparable legislation, meaning two EU countries are already leading the way on this new approach, with more potentially to follow.
Nevertheless, this pioneering move to grant copyright protection not just to an original work but to an individual’s entire identity raises significant legal, practical and ethical questions. For example: how much overlap must there be for an AI tune to count as copyright infringement? What “percentage” of one’s voice or style is required to represent one’s identity, and how do you measure it? Ethically, can we even draw fair distinctions in a world where human artists have always borrowed, imitated and reinterpreted one another? By the same logic, would not a musician writing a rock song owe royalties to all previous rock artists, since inspiration, conscious or not, builds on earlier works just as AI does? These questions are just some food for thought, illustrating the practical and ethical difficulties. Still, musicians, whose work depends on their voice, appearance, and personal expression, are particularly vulnerable if the law does not keep pace. For instance, major Belgian retailers like Carrefour are switching to AI songs to replace real musicians, creating royalty-free playlists to cut costs, while public music use can account for up to a quarter of Belgian artists’ income. Urgency aside, the Danish and Dutch approach remains controversial. Many question whether copyright is the appropriate tool to protect physical identity, some doubt it will meaningfully change industry practices, while others see the proposal as a bold and innovative response to large-scale digital misappropriation.
AI as an Ally: Opportunities for Artists
Finally, it is indispensable to also look at the opportunities that AI can offer artists, rather than sounding entirely dismissive of the technology. Some artists have already embraced it creatively and commercially. For example, Grimes has pioneered a licensing model in which fans can use her voice for AI songs in exchange for royalties, showing that artists can engage with AI on their own terms. While AI poses real risks to artistic identity, it also opens new opportunities for creativity and revenue. Used strategically, it can work to an artist’s advantage, at least to some extent, offering a more balanced perspective on this complex relationship with the technology. Yet the partnership must be navigated carefully, as AI continues to rapidly and unpredictably reshape the creative landscape.
🎶 Key Notes 🎶
- When experimenting with AI tools to create music, whether in the EU or US, always ensure a clear human creative contribution, so that there is demonstrable human input if you ever need to claim authorship or copyright.
- Regarding the use of your copyrighted music as input for AI training in the EU, regardless of the outcome of the CJEU Google case, make sure to opt out in a machine-readable format if you do not want your works included. This is especially important to keep in mind if you are a member of a collective management organisation. In the US, the legal landscape remains uncertain amid ongoing lawsuits, but out-of-court settlements may eventually provide a complementary way for musicians to be fairly compensated where judicial precedents do not sufficiently protect them.
- If AI deepfake music imitates your voice, likeness or style without permission, the output could be argued to constitute an illegal derivative work. However, given current legal uncertainties around AI and copyright in both the US and EU, it is wise for artists to also invoke personality rights. These rights can provide meaningful protection, though claimants generally must show a “reasonable interest,” with a higher threshold for public figures. This ongoing balancing of the artist’s interests against those of the deepfake’s creator means that illegality is neither immediate nor guaranteed.
- As an alternative, new proposals in Denmark and the Netherlands aim to give copyright-like protection to a person’s whole body, including their voice and likeness, as an absolute right. This means anyone would need permission before using a person’s likeness to create AI deepfake music. Still, this approach brings its own difficulties and leaves many practical and ethical questions unanswered.
- Finally, it is important not to overlook the opportunities AI can offer. Some artists are already using it creatively and commercially, though smaller artists may face more challenges in maintaining control. While more popular artists may benefit most through royalties, AI also has the potential to create new revenue streams and business models for everyone. As a gentle reminder, optimism is not lost, and it never was. The AI revolution is here, but so is our ability to shape it positively through vigilance and creativity.
Author: Cézanne Van den Bergh

Cézanne holds a Master of Laws in European and International Law and Economic Law from KU Leuven, and an interdisciplinary Master’s in Human Rights from the GC of Human Rights and Lund University. Fascinated by the friction between law, technology, and artistic freedom, she explores how techno-cultural transformations are reshaping our online rights and digital well-being. Her traineeships at institutions like the EU Fundamental Rights Agency, OSCE/ODIHR, and Belgium’s Court of Cassation, combined with her involvement in artistic events and festivals, have contributed to her unique blend of critical legal analysis with a creative, practice-oriented lens.
