Digital sound-alikes

== Timeline of digital sound-alikes ==
* In '''2016''' [[w:Adobe Inc.]]'s [[w:Adobe Voco|Voco]], an unreleased prototype, was publicly demonstrated. ([https://www.youtube.com/watch?v=I3l4XLZ59iw&t=5s View and listen to the Adobe MAX 2016 presentation of Voco])
* In '''2016''' [[w:DeepMind]]'s [[w:WaveNet]], owned by [[w:Google]], also demonstrated the ability to steal people's voices
* In '''2018''' at the [[w:Conference on Neural Information Processing Systems|Conference on Neural Information Processing Systems]] the work [http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis'] ([https://arxiv.org/abs/1806.04558 at arXiv.org]) was presented. The pre-trained model is able to steal voices from an audio sample of only '''5 seconds''' with almost convincing results. (A sketch of the pipeline it describes appears after this timeline.)
** Listen [https://google.github.io/tacotron/publications/speaker_adaptation/ 'Audio samples from "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis"']
** View [https://www.youtube.com/watch?v=0sR1rU3gLzQ Video summary of the work at YouTube: 'This AI Clones Your Voice After Listening for 5 Seconds']
* As of '''2019''' Symantec researchers know of 3 cases where digital sound-alike technology '''has been used for crimes'''.<ref name="WaPo2019">
https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/</ref>
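
The 2018 work above describes a three-stage pipeline: a speaker encoder trained on a speaker-verification task maps a short reference clip to a fixed-size embedding, a Tacotron 2-style synthesizer generates a mel spectrogram from arbitrary text conditioned on that embedding, and a WaveNet-style vocoder turns the spectrogram into a waveform. The following is a minimal sketch of that structure only; the function names, array shapes and placeholder bodies are illustrative assumptions, not the authors' code.

<syntaxhighlight lang="python">
# Minimal sketch of the three-stage SV2TTS pipeline from the 2018 paper.
# All function bodies are hypothetical placeholders; a real system uses
# trained neural networks for each stage.

import numpy as np

def speaker_encoder(reference_audio: np.ndarray) -> np.ndarray:
    """Map ~5 seconds of reference speech to a fixed-size speaker embedding.
    (Placeholder: a real encoder is a network trained on speaker verification.)"""
    return np.zeros(256)  # illustrative embedding dimension

def synthesizer(text: str, speaker_embedding: np.ndarray) -> np.ndarray:
    """Generate a mel spectrogram for `text`, conditioned on the embedding.
    (Placeholder for a Tacotron 2-style sequence-to-sequence model.)"""
    n_frames = 8 * max(len(text), 1)          # illustrative frame count
    return np.zeros((80, n_frames))           # 80 mel channels

def vocoder(mel_spectrogram: np.ndarray) -> np.ndarray:
    """Convert the mel spectrogram into an audio waveform.
    (Placeholder for a WaveNet-style neural vocoder.)"""
    return np.zeros(mel_spectrogram.shape[1] * 256)

# Usage: one short reference clip is enough to condition synthesis of any text.
reference = np.zeros(16000 * 5)               # ~5 seconds of audio at 16 kHz
embedding = speaker_encoder(reference)        # one embedding per target speaker
mel = synthesizer("Arbitrary text in the target voice.", embedding)
audio = vocoder(mel)
</syntaxhighlight>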