The Human Voice in the Age of AI – Who Actually Controls Our Voices?
The debate about artificial voices arrived in the media industry long ago. What sounded like science fiction only a few years ago is now part of many production processes: AI can generate voices, read texts aloud, and even imitate individual vocal profiles.
This raises a fundamental question:
Who actually decides how human voices are used?
As a professional voice actor, I have worked with my voice for more than 20 years — in audiobooks, dubbing, commercials, games, and many other formats. For me, my voice is not just a tool of the trade; it is the central instrument of my artistic work and personal expression.
That is precisely why many colleagues and I are currently concerned about a development that goes far beyond technological innovation: voice recordings are increasingly being treated as training material for AI systems. This often happens through contractual clauses or platform models that remain largely opaque to voice actors.
So the question is not only what technology can do.
The real question is:
Who retains control over human voices?
When Voices Become Training Data
At the beginning of 2026, new contractual clauses in the dubbing industry attracted considerable attention. These clauses were designed to allow voice recordings to be used in the future for training artificial intelligence.
Many voice actors reacted with strong criticism. The concern: once voices are used as training material, artificial versions could emerge that may eventually replace the original performers.
This debate became particularly visible in connection with major streaming productions. For many colleagues, a very fundamental question suddenly emerged:
Who actually controls what happens to our voices after they are recorded?
The Spirit Legal Report
The German Voice Actors Association (Sprecherverband VDS), of which I am a member, commissioned a legal review of such contractual clauses. A report by the law firm Spirit Legal concluded that key parts of these provisions may be legally problematic.
One major point of criticism: the clauses often fail to clearly define
- to what extent voice recordings may be used for AI training,
- whether voice actors must explicitly consent to such use,
- and what kind of compensation is intended.
This brings the debate directly into the realm of personality rights and copyright law. After all, a voice is not just a technical signal — it is a personal characteristic of an individual and the professional tool of many creative professionals.
When Voice Actors Voluntarily License Their Voices for AI
At the same time, there are colleagues who consciously choose to make their voices available for AI systems.
Platforms such as ElevenLabs allow voice actors to create so-called voice clones and license them for various uses. The idea is simple: the voice is digitally reproduced and can then be used by clients for different projects.
At first glance, this model may seem attractive.
In practice, however, it raises many questions.
For one thing, the compensation is often relatively low compared to the potential scale of usage. In addition, many platforms offer extensive free-tier access, allowing “free users” to generate speech using these voices without meaningful compensation for the original speaker.
There is also a structural problem: once a voice has been digitally replicated, its further use becomes extremely difficult to control.
My Personal Position
I follow these developments very closely.
Under the current conditions, however, I am not willing to feed my own voice into such systems.
Not because I fundamentally reject technological innovation. AI will undoubtedly become part of media production.
But innovation must not come at the expense of creative work. If voices are used to develop artificial systems, there must be clear rules, transparent usage models, and fair compensation.
Many voice actors are currently advocating exactly that — including within the German Voice Actors Association (VDS).
Why Clear Rules Are Necessary
The crucial question is therefore not whether AI will be used in audio production.
The crucial question is under what conditions.
In my view, at least three fundamental principles are necessary:
- explicit consent from the voice actors involved,
- transparent usage conditions, and
- fair compensation models.
Only when these conditions are met can the use of AI voices be fair for those whose voices form the basis of these systems.
The Human Voice in the Age of AI
Media production will continue to evolve. New technologies have always shaped the industry — from the introduction of sound film to digital recording technologies and streaming platforms.
AI will be part of that evolution.
But amid all technological innovation, one thing should not be forgotten:
The human voice is more than a collection of audio data.
It is an expression of personality, experience, and interpretation.
Why Clear Rules Are Now Crucial
This debate is no longer hypothetical. In recent years, multiple cases have emerged in which voices were used, copied, or imitated without consent — sometimes even for commercial purposes.
At the same time, many voice actors are confronted with new contractual models that leave them little real choice. Clauses allowing voice recordings to be used for AI training are increasingly presented as standard conditions, often without clear transparency about how these voices may be used in the future.
This is exactly what is currently being fiercely debated in the dubbing industry. Many dubbing actors are pushing back against contractual clauses introduced by large platforms and studios because they fear that their voices could become the foundation of artificial systems — without clear consent or fair compensation.
This leads to a fundamental question:
Who controls the use of human voices?
Technology can analyse, imitate, and reproduce voices.
But it must not decide who those voices belong to.
The human voice is not a raw material for technology.
It is part of a person’s identity — and therefore deserves protection, control, and fair compensation.