Digital look-alikes

From Ban Covert Modeling! wiki
Image 1: Separating specular and diffuse reflected light

(a) Normal image in dot lighting

(b) Image of the diffuse reflection, captured by placing a vertical polarizer in front of the light source and a horizontal one in front of the camera

(c) Image of the specular highlight together with the diffuse reflection, captured by placing both polarizers vertically (parallel)

(d) Subtracting (b) from (c), which yields the specular component

Images are scaled to appear to have the same luminosity.

Original image by Debevec et al. – Copyright ACM 2000 – http://dl.acm.org/citation.cfm?doid=311779.344855 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
Image 2 (low resolution rip)

(1) Sculpting a morphable model to a single picture

(2) Produces 3D approximation

(4) Texture capture

(3) The 3D model is rendered back to the image with weight gain

(5) With weight loss

(6) Looking annoyed

(7) Forced to smile

Image 2 by Blanz and Vetter – Copyright ACM 1999 – http://dl.acm.org/citation.cfm?doid=311535.311556 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.

When the camera does not exist, but the subject being imaged with a simulation of a (movie) camera deceives the viewer into believing it is some living or dead person, it is a digital look-alike.

Introduction to digital look-alikes[edit]

In the cinemas we have seen digital look-alikes for over 15 years. These digital look-alikes have "clothing" (a simulation of clothing is not clothing) or "superhero costumes" and "superbaddie costumes", and they don't need to care about the laws of physics, let alone the laws of physiology. It is generally accepted that digital look-alikes made their public debut in the sequels of The Matrix, i.e. w:The Matrix Reloaded and w:The Matrix Revolutions, released in 2003. It can be considered almost certain that it was not possible to make these before the year 1999, as the final piece of the puzzle needed to make a (still) digital look-alike that passes human testing, the reflectance capture over the human face, was achieved for the first time in 1999 at the w:University of Southern California and was presented to the crème de la crème of the computer graphics field at their annual gathering, SIGGRAPH 2000.[1]


“Do you think that was Hugo Weaving's left cheekbone that Keanu Reeves punched in with his right fist?”

~ Trad on The Matrix Revolutions


The problems with digital look-alikes[edit]

Extremely unfortunately for humankind, organized criminal leagues that possess the weapons capability of making believable-looking synthetic pornography are producing synthetic terror porn[footnote 1] on industrial production pipelines by animating digital look-alikes and distributing it on the murky Internet in exchange for money stacks that are getting thinner and thinner as time goes by.

These industrially produced pornographic delusions are causing great human suffering, especially in their direct victims, but they are also tearing our communities and societies apart, sowing blind rage, perceptions of deepening chaos and feelings of powerlessness, and provoking violence. This hate illustration increases and strengthens hate thinking, hate speech and hate crimes, tears our fragile social constructions apart and, with time, perverts humankind's view of humankind into an almost unrecognizable shape, unless we interfere with resolve.

For these reasons the bannable raw materials, i.e. covert models, needed to produce this disinformation terror on the information-industrial production pipelines should be prohibited by law in order to protect humans from arbitrary abuse by criminal parties.


See also in Ban Covert Modeling! wiki[edit]

Footnotes[edit]

  1. It is terminologically more precise, more inclusive and more useful to talk about 'synthetic terror porn', if we want to call things by their real names, rather than about 'synthetic rape porn', because synthesizing recordings of consensual-looking sex scenes can also be terroristic in intent.

References[edit]


Transcluded Wikipedia articles[edit]

Human image synthesis article transcluded from Wikipedia[edit]

An image generated by StyleGAN, a generative adversarial network (GAN), that looks deceptively like a portrait of a young woman. This image was generated by an artificial intelligence based on an analysis of portraits.

Human image synthesis can be applied to make believable and even photorealistic renditions[1][2] of human likenesses, moving or still. This has effectively been the situation since the early 2000s. Many films using computer-generated imagery have featured synthetic images of human-like characters digitally composited onto the real or other simulated film material. Towards the end of the 2010s, deep learning artificial intelligence has been applied to synthesize images and video that look like humans without the need for human assistance once the training phase has been completed, whereas the old-school 7D route required massive amounts of human work.
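As a rough illustration of what "the training phase" of such a generative adversarial network involves, below is a minimal sketch of one GAN training step in Python with PyTorch. The network sizes, image dimensions and hyperparameters are illustrative assumptions; this is not the actual StyleGAN architecture.

```python
# Minimal GAN training-step sketch (illustrative, not StyleGAN).
import torch
import torch.nn as nn

latent_dim = 64
img_dim = 32 * 32 * 3  # tiny stand-in for a flattened face image

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)
criterion = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    batch = real_images.size(0)
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator: label real portraits 1, generated ones 0.
    d_opt.zero_grad()
    d_loss = (criterion(discriminator(real_images), torch.ones(batch, 1)) +
              criterion(discriminator(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator label its output as real.
    g_opt.zero_grad()
    g_loss = criterion(discriminator(fake), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# Toy usage with random stand-in "portraits"; real training loops over a large portrait dataset.
print(train_step(torch.rand(16, img_dim) * 2 - 1))
```

Once trained, only the generator is kept: feeding it random latent vectors yields new faces that correspond to no existing person.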

Timeline of human image synthesis

BRDF vs. subsurface-scattering-inclusive BSSRDF, i.e. the bidirectional scattering-surface reflectance distribution function

Key breakthrough to photorealism: reflectance capture

ESPER LightCage is an example of a spherical light stage with multi-camera setup around the sphere suitable for capturing into a 7D reflectance model.

In 1999 Paul Debevec et al. of USC did the first known reflectance capture over the human face with their extremely simple light stage. They presented their method and results at SIGGRAPH 2000.[4]

A bidirectional scattering distribution function (BSDF) for human skin likeness requires both the BRDF and a special case of the BTDF, where light enters the skin, is transmitted within it and exits the skin.

The scientific breakthrough required finding the subsurface light component (the simulation models glow slightly from within), which can be found using the knowledge that light reflected from the oil-to-air layer retains its polarization while the subsurface light loses its polarization. So, equipped only with a movable light source, a movable video camera, two polarizers and a computer program doing extremely simple math, the last piece required to reach photorealism was acquired.[4]

For a believable result, both light reflected from the skin (BRDF) and light scattered within the skin (a special case of BTDF), which together make up the BSDF, must be captured and simulated.
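A minimal sketch of the separation step described above, assuming two registered photographs of the same face: one cross-polarized (subsurface/diffuse only) and one parallel-polarized (diffuse plus specular). The file names and the display scaling are illustrative assumptions.

```python
# Polarization-based separation sketch: specular = parallel - cross, per pixel.
import numpy as np
import imageio.v3 as iio

parallel = iio.imread("parallel_polarized.png").astype(np.float32)  # diffuse + specular
cross = iio.imread("cross_polarized.png").astype(np.float32)        # diffuse (subsurface) only

diffuse = cross
specular = np.clip(parallel - cross, 0.0, None)  # the "extremely simple math"

# Rescale for display so the components appear at a similar luminosity.
iio.imwrite("specular.png", (specular / max(specular.max(), 1.0) * 255).astype(np.uint8))
iio.imwrite("diffuse.png", (diffuse / max(diffuse.max(), 1.0) * 255).astype(np.uint8))
```

This is the same subtraction shown in panels (b) to (d) of Image 1 above.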

Capture

Synthesis

The whole process of making digital look-alikes, i.e. characters so lifelike and realistic that they can be passed off as pictures of humans, is a very complex task, as it requires photorealistically modeling, animating, cross-mapping and rendering the soft-body dynamics of the human appearance.

Synthesis with an actor and suitable algorithms is applied using powerful computers. The actor's part in the synthesis is to mimic human expressions in still-picture synthesis and also human movement in motion-picture synthesis. Algorithms are needed to simulate the laws of physics and physiology and to map the models and their appearance, movements and interaction accordingly.

Often both physics/physiology-based modeling (e.g. skeletal animation) and image-based modeling and rendering are employed in the synthesis part. Hybrid models employing both approaches have shown the best results in realism and ease of use.
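As a concrete, minimal example of the skeletal-animation side of such a hybrid, the sketch below implements linear blend skinning with NumPy: each vertex is deformed by a weighted blend of its influencing bones' transforms. The array shapes and toy data are illustrative assumptions.

```python
# Linear blend skinning sketch.
#   rest_verts: (V, 3), bone_mats: (B, 4, 4), weights: (V, B) with rows summing to 1.
import numpy as np

def skin_vertices(rest_verts, bone_mats, weights):
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])           # (V, 4) homogeneous coords
    per_bone = np.einsum("bij,vj->bvi", bone_mats, homo)      # (B, V, 4) each bone's transform result
    blended = np.einsum("vb,bvi->vi", weights, per_bone)      # (V, 4) weighted blend per vertex
    return blended[:, :3]

# Toy usage: two bones, the second rotated 90 degrees about the z-axis.
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
identity = np.eye(4)
rot90 = np.eye(4)
rot90[:2, :2] = [[0.0, -1.0], [1.0, 0.0]]
bones = np.stack([identity, rot90])
w = np.array([[1.0, 0.0], [0.5, 0.5]])  # second vertex is half-influenced by each bone
print(skin_vertices(rest, bones, w))
```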

Displacement mapping plays an important part in getting a realistic result with fine detail of skin such as pores and wrinkles as small as 100 µm.
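A minimal sketch of that idea: each vertex is pushed along its surface normal by a height sampled from a displacement map at the vertex's UV coordinate, which is how pore- and wrinkle-scale relief can be added to a coarse face mesh. The array shapes, the nearest-neighbour sampling and the scale factor are illustrative assumptions.

```python
# Displacement-mapping sketch: offset each vertex along its normal by a height
# looked up from a displacement map via the vertex's UV coordinates.
#   verts/normals: (V, 3), uvs: (V, 2) in [0, 1], height_map: (H, W).
import numpy as np

def displace(verts, normals, uvs, height_map, scale=1e-4):  # scale ~100 µm in metres
    h, w = height_map.shape
    # Nearest-neighbour sample of the map at each vertex's UV coordinate.
    rows = np.clip((uvs[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    cols = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    heights = height_map[rows, cols]                         # (V,)
    return verts + normals * heights[:, None] * scale        # push along the normal

# Toy usage: a single upward-facing vertex and a random 8x8 displacement map.
verts = np.array([[0.0, 0.0, 0.0]])
normals = np.array([[0.0, 0.0, 1.0]])
uvs = np.array([[0.5, 0.5]])
height_map = np.random.rand(8, 8)
print(displace(verts, normals, uvs, height_map))
```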

Applications

Main applications fall within the domains of virtual cinematography, computer and video games and covert disinformation attacks.

Furthermore, some research suggests that it can have therapeutic effects, as "psychologists and counselors have also begun using avatars to deliver therapy to clients who have phobias, a history of trauma, addictions, Asperger's syndrome or social anxiety."[21] The strong memory imprint and brain-activation effects caused by watching a digital look-alike avatar of oneself are dubbed the doppelgänger effect.[21]

The doppelgänger effect can also heal when a covert disinformation attack is exposed as such to the targets of the attack.

Related issues

Speech synthesis is verging on being completely indistinguishable from a real human's voice with the 2016 introduction of the voice-editing and generation software Adobe Voco, a prototype slated to become a part of Adobe Creative Suite, and DeepMind WaveNet, a prototype from Google.[22] The ability to steal and manipulate other people's voices raises obvious ethical concerns.[23]

Coupled with the fact that (as of 2016) techniques allowing near-real-time counterfeiting of facial expressions in existing 2D video have been believably demonstrated, this increases the stress on the disinformation situation.[11]

See also

References

  1. ^ Physics-based muscle model for mouth shape control on IEEE Xplore (requires membership)
  2. ^ Realistic 3D facial animation in virtual space teleconferencing on IEEE Xplore (requires membership)
  3. ^ "Images de synthèse : palme de la longévité pour l'ombrage de Gouraud".
  4. ^ a b c Debevec, Paul (2000). "Acquiring the reflectance field of a human face". Proceedings of the 27th annual conference on Computer graphics and interactive techniques - SIGGRAPH '00. ACM. pp. 145–156. doi:10.1145/344779.344855. ISBN 978-1581132083. Retrieved 2017-05-24.
  5. ^ Pighin, Frédéric. "Siggraph 2005 Digital Face Cloning Course Notes" (PDF). Retrieved 2017-05-24.
  6. ^ In this TED talk video, at 00:04:59 you can see two clips, one with the real Emily shot with a real camera and one with a digital look-alike of Emily shot with a simulation of a camera; which is which is difficult to tell. Bruce Lawmen was scanned using USC light stage 6 in a still position and was also recorded running there on a treadmill. Many, many digital look-alikes of Bruce are seen running fluently and looking natural in the ending sequence of the TED talk video.
  7. ^ ReForm - Hollywood's Creating Digital Clones (youtube). The Creators Project. 2017-05-24.
  8. ^ Debevec, Paul. "Digital Ira SIGGRAPH 2013 Real-Time Live". Retrieved 2017-05-24.
  9. ^ "Scanning and printing a 3D portrait of President Barack Obama". University of Southern California. 2013. Retrieved 2017-05-24.
  10. ^ Giardina, Carolyn (2015-03-25). "'Furious 7' and How Peter Jackson's Weta Created Digital Paul Walker". The Hollywood Reporter. Retrieved 2017-05-24.
  11. ^ a b Thies, Justus (2016). "Face2Face: Real-time Face Capture and Reenactment of RGB Videos". Proc. Computer Vision and Pattern Recognition (CVPR), IEEE. Retrieved 2017-05-24.
  12. ^ Suwajanakorn, Supasorn; Seitz, Steven; Kemelmacher-Shlizerman, Ira (2017), Synthesizing Obama: Learning Lip Sync from Audio, University of Washington, retrieved 2018-03-02
  13. ^ Roettgers, Janko (2018-02-21). "Porn Producers Offer to Help Hollywood Take Down Deepfake Videos". Variety. Retrieved 2018-02-28.
  14. ^ Takahashi, Dean (2018-03-21). "Epic Games shows off amazing real-time digital human with Siren demo". VentureBeat. Retrieved 2018-09-10.
  15. ^ Kuo, Lily (2018-11-09). "World's first AI news anchor unveiled in China". Retrieved 2018-11-09.
  16. ^ Hamilton, Isobel Asher (2018-11-09). "China created what it claims is the first AI news anchor — watch it in action here". Retrieved 2018-11-09.
  17. ^ Harwell, Drew (2018-12-30). "Fake-porn videos are being weaponized to harass and humiliate women: 'Everybody is a potential target'". The Washington Post. Retrieved 2019-03-14. In September [of 2018], Google added “involuntary synthetic pornographic imagery” to its ban list
  18. ^ "NVIDIA Open-Sources Hyper-Realistic Face Generator StyleGAN". Medium.com. 2019-02-09. Retrieved 2019-10-03.
  19. ^ a b Paez, Danny (2019-02-13). "This Person Does Not Exist Is the Best One-Off Website of 2019". Inverse (website). Retrieved 2018-03-05.
  20. ^ Mihalcik, Carrie (2019-10-04). "California laws seek to crack down on deepfakes in politics and porn". cnet.com. CNET. Retrieved 2019-10-14.
  21. ^ a b Murphy, Samantha (2011). "Scientific American: Your Avatar, Your Guide" (.pdf). Scientific American / Stanford University. Retrieved 2013-06-29.
  22. ^ "WaveNet: A Generative Model for Raw Audio". Deepmind.com. 2016-09-08. Retrieved 2017-05-24.
  23. ^ "Adobe Voco 'Photoshop-for-voice' causes concern". BBC.com. BBC. 2016-11-07. Retrieved 2016-07-05.