Digital sound-alikes

<center><big>Latest version of this article can be found in [https://stop-synthetic-filth.org/wiki/Synthetic_human-like_fakes#Digital_sound-alikes '''Synthetic human-like fakes''' § '''Digital sound-alikes''' at the stop-synthetic-filth.org wiki]</big></center>
When it cannot be determined by human testing whether some synthesized recording is a simulation of some person's speech or an actual recording of that person's real voice, it is a '''digital sound-alike'''.


Living people can defend¹ themselves against a digital sound-alike by denying the things it says, if the fake is presented to them, but dead people cannot. Digital sound-alikes offer criminals new disinformation attack vectors and wreak havoc on provability.


[[File:Spectrogram-19thC.png|thumb|right|640px|A [[w:spectrogram|spectrogram]] of a male voice saying 'nineteenth century']]
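A spectrogram like the one above shows how the energy of a voice is distributed over frequency as time passes; spectrograms of this kind are also the intermediate representation that the synthesizers discussed below produce before a vocoder turns them into audio. As a minimal sketch (the filename 'voice.wav' is a hypothetical placeholder), such a plot can be computed with SciPy and Matplotlib:

<syntaxhighlight lang="python">
# Minimal sketch: compute and plot a spectrogram of a short voice recording.
# The filename "voice.wav" is a hypothetical placeholder.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy import signal

rate, data = wavfile.read("voice.wav")   # sample rate in Hz, raw samples
if data.ndim > 1:                        # mix stereo down to mono
    data = data.mean(axis=1)

# Short-time Fourier analysis: frequencies, time bins, power per (frequency, time) cell
freqs, times, Sxx = signal.spectrogram(data, fs=rate)

plt.pcolormesh(times, freqs, 10 * np.log10(Sxx + 1e-12))  # power in dB
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.title("Spectrogram of the recording")
plt.show()
</syntaxhighlight>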
----
== Timeline of digital sound-alikes ==
* In '''2016''' [[w:Adobe Inc.]]'s [[w:Adobe Voco|Voco]], an unreleased prototype, was publicly demonstrated. ([https://www.youtube.com/watch?v=I3l4XLZ59iw&t=5s View and listen to Adobe MAX 2016 presentation of Voco])
 
* In '''2016''' [[w:DeepMind]]'s [[w:WaveNet]], owned by [[w:Google]], also demonstrated the ability to steal people's voices
 
* In '''2018''' the work [http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis'] ([https://arxiv.org/abs/1806.04558 at arXiv.org]) was presented at the [[w:Conference on Neural Information Processing Systems|Conference on Neural Information Processing Systems]]. The pre-trained model is able to steal voices from a sample of only '''5 seconds''' with almost convincing results (a structural sketch of this pipeline is given below the timeline)
** Listen [https://google.github.io/tacotron/publications/speaker_adaptation/ 'Audio samples from "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis"']
** View [https://www.youtube.com/watch?v=0sR1rU3gLzQ Video summary of the work at YouTube: 'This AI Clones Your Voice After Listening for 5 Seconds']
 
* As of '''2019''' Symantec researchers know of 3 cases where digital sound-alike technology '''has been used for crimes'''.<ref name="WaPo2019">
https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/</ref>
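The 2018 work above chains three separately trained components: a speaker encoder that turns a short reference recording into a fixed-size voice-identity embedding, a Tacotron-style synthesizer that turns text plus that embedding into a mel spectrogram, and a WaveNet-style vocoder that renders the spectrogram as audio. The following is a structural sketch only, not the authors' code; every class and method name in it is a hypothetical placeholder.

<syntaxhighlight lang="python">
# Structural sketch (NOT the paper's code) of the SV2TTS voice-cloning pipeline:
# speaker encoder -> synthesizer -> vocoder. All names are hypothetical placeholders.
from dataclasses import dataclass
from typing import Protocol

import numpy as np


class SpeakerEncoder(Protocol):
    """Stage 1: maps a short (~5 s) reference recording to a fixed-size
    embedding that captures the speaker's voice identity."""
    def embed(self, waveform: np.ndarray, sample_rate: int) -> np.ndarray: ...


class Synthesizer(Protocol):
    """Stage 2: a Tacotron-style sequence-to-sequence model that turns text
    plus a speaker embedding into a mel spectrogram in that voice."""
    def text_to_mel(self, text: str, speaker_embedding: np.ndarray) -> np.ndarray: ...


class Vocoder(Protocol):
    """Stage 3: a WaveNet-style neural vocoder that converts the mel
    spectrogram into an audible waveform."""
    def mel_to_waveform(self, mel: np.ndarray) -> np.ndarray: ...


@dataclass
class VoiceCloningPipeline:
    encoder: SpeakerEncoder
    synthesizer: Synthesizer
    vocoder: Vocoder

    def clone(self, reference: np.ndarray, sample_rate: int, text: str) -> np.ndarray:
        """Make the reference speaker appear to say `text` they never said."""
        embedding = self.encoder.embed(reference, sample_rate)   # voice identity
        mel = self.synthesizer.text_to_mel(text, embedding)      # content + identity
        return self.vocoder.mel_to_waveform(mel)                 # audible fake
</syntaxhighlight>

Because only the speaker encoder needs audio of the target, and only a few seconds of it, covertly obtained recordings are enough to drive the whole pipeline.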
 
----
 
== Examples of speech synthesis software not quite able to fool a human yet ==
Some other contenders to create digital sound-alikes exist, though as of 2019 their speech synthesis in most use scenarios does not yet fool a human, because the results contain telltale signs that give them away as coming from a speech synthesizer.
 
* '''[https://lyrebird.ai/ Lyrebird.ai]''' [https://www.youtube.com/watch?v=xxDBlZu__Xk (listen)]
* '''[https://candyvoice.com/ CandyVoice.com]''' [https://candyvoice.com/demos/voice-conversion (test with your choice of text)]
* '''[https://cstr-edinburgh.github.io/merlin/ Merlin]''', a [[w:neural network]] based speech synthesis system by the Centre for Speech Technology Research at the [[w:University of Edinburgh]]
 
 
== Documented digital sound-alike attacks ==
* [https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/?noredirect=on 'An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft'], a 2019 Washington Post article
 
----
 
== Example of a hypothetical digital sound-alike attack ==
A very simple example of a digital sound-alike attack is as follows:


# Victim #4 - Our judiciary which prosecutes and possibly convicts the innocent victim #1.


Thus it is high time to act and to '''[[Law proposals to ban covert modeling|criminalize the covert modeling of human appearance and voice]]'''!
----
== Examples of software capable of making digital sound-alikes ==
* [[w:Adobe Inc.]]'s [[w:Adobe Voco|Voco]], an unreleased prototype publicly demonstrated in 2016 ([https://www.youtube.com/watch?v=I3l4XLZ59iw&t=5s View and listen to Adobe MAX 2016 presentation of Voco])
* [[w:DeepMind]]'s [[w:WaveNet]] (DeepMind was acquired by [[w:Google]] in 2014); a sketch of the dilated-convolution idea behind WaveNet is given below
 
Neither of these programs is available to the masses at large according to the "official truth", but as is well known, software has a high tendency to get pirated very quickly.
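WaveNet, listed above, models raw audio sample by sample with stacks of dilated causal convolutions, so that each generated sample can depend on a long window of preceding samples. The snippet below only illustrates that receptive-field idea, it is not DeepMind's code; the kernel size and dilation pattern are assumptions based on the commonly cited configuration.

<syntaxhighlight lang="python">
# Illustration only (not DeepMind's code): how stacked dilated causal convolutions
# give WaveNet a long receptive field over raw audio samples.

def receptive_field(kernel_size: int, dilations: list[int]) -> int:
    """Number of past samples one output sample can 'see'."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Assumed pattern: dilations double within each block (1, 2, 4, ..., 512),
# and the block is repeated three times.
dilations = [2 ** i for i in range(10)] * 3

samples = receptive_field(kernel_size=2, dilations=dilations)
print(f"receptive field: {samples} samples "
      f"(~{samples / 16000:.2f} s of 16 kHz audio)")
</syntaxhighlight>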


----
* [[Digital look-alikes]]


== See also in Wikipedia ==
* [[w:Speech synthesis]]
----
Footnote 1. Whether a suspect can defend against faked synthetic speech that sounds like him/her depends on how up-to-date the judiciary is. If no information and instructions about digital sound-alikes have been given to the judiciary, they likely will not believe the defense of denying that the recording is of the suspect's voice.
----
= Transcluded Wikipedia articles =
== Speech synthesis article transcluded from Wikipedia ==
{{wikipedia::speech synthesis}}