Digital sound-alikes
When it cannot be determined by human testing whether some synthesized recording is a simulation of some person's speech or a recording made of that person's actual voice, it is a '''digital sound-alike'''.
As of '''2019''' [[w:Symantec]] research knows of three cases where digital sound-alike technology '''has been used for crimes'''.<ref name="WaPo2019">
{{cite web
 |url= https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/
 |title= An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft
 |last= Harwell
 |first= Drew
 |date= 2019-09-04
 |website= [[w:The Washington Post|The Washington Post]]
 |access-date= 2019-09-08
 }}
</ref>
Living people can defend¹ themselves against a digital sound-alike by denying the things the sound-alike says, if the fabricated recordings are presented to them, but dead people cannot. Digital sound-alikes offer criminals new disinformation attack vectors and wreak havoc on provability.
----
== Examples of speech synthesis software capable of making digital sound-alikes ==
* [[w:Adobe Inc.]]'s [[w:Adobe Voco|Voco]], an unreleased prototype publicly demonstrated in 2016 ([https://www.youtube.com/watch?v=I3l4XLZ59iw&t=5s view and listen to the Adobe MAX 2016 presentation of Voco])
* [[w:DeepMind]]'s [[w:WaveNet]] (DeepMind was acquired by [[w:Google]] in 2014)
Neither of these programs is available to the masses at large according to the "official truth", but as is known, software has a high tendency to get pirated very quickly.
----
== Examples of speech synthesis software not quite able to fool a human yet ==
There are other contenders seeking to create digital sound-alikes, though as of 2019 their speech synthesis in most use scenarios does not yet fool a human, because the results contain tell-tale signs that give them away as synthesized speech.
* '''[https://cstr-edinburgh.github.io/merlin/ Merlin]''', a [[w:neural network]]-based speech synthesis system by the Centre for Speech Technology Research at the [[w:University of Edinburgh]]
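The "tell-tale signs" that give synthesized speech away are acoustic artifacts in the audio. Real detectors of synthesized speech are far more sophisticated than anything shown here, but as a minimal illustrative sketch (assuming only Python with NumPy, and using spectral flatness as a crude stand-in for a genuine detection feature), one can at least measure how tone-like versus noise-like a signal's spectrum is:

```python
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Ratio of the geometric to the arithmetic mean of the power spectrum.
    Values near 1.0 indicate a noise-like (flat) spectrum; values near 0.0
    indicate a tonal, overly "clean" spectrum."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # epsilon avoids log(0)
    geometric_mean = np.exp(np.mean(np.log(power)))
    arithmetic_mean = np.mean(power)
    return geometric_mean / arithmetic_mean

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
tonal = np.sin(2 * np.pi * 440 * t)   # artificially clean, tone-like signal
noisy = rng.standard_normal(8000)     # broadband, noise-like signal

# The pure tone has a far less flat spectrum than the broadband signal.
assert spectral_flatness(tonal) < spectral_flatness(noisy)
```

This is only an illustration of extracting one spectral property; distinguishing a state-of-the-art digital sound-alike from a genuine recording requires trained classifiers, not a single hand-picked feature.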
== Example of a digital sound-alike attack ==
A very simple example of a digital sound-alike attack is as follows:
Thus it is high time to act and to '''[[Law proposals to ban covert modeling|criminalize the covert modeling of human appearance and voice!]]'''
----
== Documented digital sound-alike attacks ==
* [https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/?noredirect=on 'An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft'], a 2019 Washington Post article
----