Digital sound-alikes


When it cannot be determined by human testing whether a synthesized recording is a simulation of some person's speech or a recording made of that person's actual voice, it is a digital sound-alike.
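The phrase "determined by human testing" can be made concrete with a simple forced-choice listening test: if listeners asked to pick the real recording perform no better than chance, the synthesis passes as a digital sound-alike. Below is a minimal sketch of how such a test could be scored; the trial counts are made up for illustration.

<syntaxhighlight lang="python">
# Minimal sketch: do listeners beat chance at telling real from synthetic?
# The numbers below are hypothetical, for illustration only.
from scipy.stats import binomtest

n_trials = 100   # forced-choice trials: "which recording is the real voice?"
n_correct = 54   # number of correct answers across all listeners

# One-sided binomial test against pure guessing (p = 0.5). A large p-value
# means listeners could not reliably tell real from synthetic, i.e. the
# synthesis behaves as a digital sound-alike under this test.
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"p-value: {result.pvalue:.3f}")
</syntaxhighlight>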

Living people can defend¹ themselves against digital sound-alikes by denying the things the sound-alike says if they are presented to the target, but dead people cannot. Digital sound-alikes offer criminals new disinformation attack vectors and wreak havoc on provability.


----
== Examples of speech synthesis software capable of making digital sound-alikes ==
* [[w:Adobe Inc.]]'s [[w:Adobe Voco|Voco]], an unreleased prototype publicly demonstrated in 2016 ([https://www.youtube.com/watch?v=I3l4XLZ59iw&t=5s view and listen to the Adobe MAX 2016 presentation of Voco])
* [[w:DeepMind]]'s [[w:WaveNet]] (DeepMind was acquired by [[w:Google]] in 2014)

According to the "official truth" neither of these programs is available to the public, but as is well known, software has a high tendency to get pirated very quickly.


----
== Examples of speech synthesis software not quite able to fool a human yet ==

Some other contenders also aim to create digital sound-alikes, though as of 2019 their speech synthesis in most use scenarios does not yet fool a human, because the results contain telltale signs that give them away as synthesized speech; a rough sketch of what such signs can look like in measurable terms follows the list below.
* [https://lyrebird.ai/ Lyrebird.ai] [https://www.youtube.com/watch?v=xxDBlZu__Xk (listen)]
* [https://candyvoice.com/ CandyVoice.com] [https://candyvoice.com/demos/voice-conversion (test with your choice of text)]
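The sketch below compares the long-term average spectra of two recordings. It is only an illustration under assumed conditions (both files share one sample rate, file names are hypothetical), not a reliable detector of synthesized speech.

<syntaxhighlight lang="python">
# Illustrative sketch only, not a reliable detector: compare the long-term
# average spectra of a known-real recording and a suspected synthetic one.
# Assumes both WAV files share the same sample rate; file names are
# hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def long_term_average_spectrum(path):
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                 # mix stereo down to mono
        samples = samples.mean(axis=1)
    freqs, power = welch(samples.astype(np.float64), fs=rate, nperseg=2048)
    return freqs, 10 * np.log10(power + 1e-12)   # power in dB

_, real_db = long_term_average_spectrum("real_voice.wav")
_, synth_db = long_term_average_spectrum("suspected_synthetic.wav")

# Large systematic differences, such as missing high-frequency energy or an
# unnaturally smooth spectrum, are the measurable counterpart of the
# telltale signs a human listener may notice.
print("Mean absolute spectral difference (dB):",
      np.mean(np.abs(real_db - synth_db)))
</syntaxhighlight>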

== Example of a digital sound-alike attack ==

    A very simple example of a digital sound-alike attack is as follows:

Someone uses a digital sound-alike to call somebody's voicemail from an unknown number and to speak, for example, illegal threats. In this example there are at least two victims:

# Victim #1 - the person whose voice has been stolen into a covert model and a digital sound-alike made from it to frame them for crimes
# Victim #2 - the person to whom the illegal threat is presented in recorded form by a digital sound-alike that deceptively sounds like victim #1
# Victim #3 - our law enforcement systems could also be viewed as a victim, as they are put to chase after and interrogate the innocent victim #1
# Victim #4 - our judiciary, which prosecutes and possibly convicts the innocent victim #1

Thus it is high time to act and to '''[[Law proposals to ban covert modeling|criminalize the covert modeling of human appearance and voice]]!'''
----



== See also in Ban Covert Modeling! wiki ==

== See also in Wikipedia ==
* [[w:Speech synthesis]]


----
Footnote 1. Whether a suspect can defend against faked synthetic speech that sounds like him/her depends on how up-to-date the judiciary is. If the judiciary has been given no information and instructions about digital sound-alikes, it will likely not believe a defense that denies the recording is of the suspect's voice.