Adequate Porn Watcher AI
<center><big>Latest version of this article can be found in [https://stop-synthetic-filth.org/wiki/Adequate_Porn_Watcher_AI_(concept) '''Adequate Porn Watcher AI (concept)''' at the stop-synthetic-filth.org wiki]</big></center>
'''Adequate Porn Watcher AI''' is a working title for an AI to watch and model all porn ever found on the Internet to police porn for contraband and especially to '''protect humans''' by '''exposing [[digital look-alikes|digital look-alike]] attacks'''.
An ''adequate'' implementation should be nearly free of false positives, very good at finding true positives, and absolutely able to process more porn than is ever uploaded.
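As an illustration only, these three adequacy criteria could be written down as a concrete acceptance check. The Python sketch below is hypothetical: the thresholds, the field names and the idea of measuring throughput in hours of material per day are assumptions, not part of any existing implementation.

<syntaxhighlight lang="python">
from dataclasses import dataclass


@dataclass
class EvaluationReport:
    false_positive_rate: float            # fraction of innocent material wrongly flagged
    true_positive_rate: float             # fraction of genuine matches actually found
    processing_rate_hours_per_day: float  # hours of material the system can analyse per day
    upload_rate_hours_per_day: float      # hours of new material uploaded per day


def is_adequate(report: EvaluationReport,
                max_false_positive_rate: float = 0.001,
                min_true_positive_rate: float = 0.99) -> bool:
    """True only if all three adequacy criteria hold: nearly free of false positives,
    very good at finding true positives, and able to process more porn than is uploaded."""
    return (report.false_positive_rate <= max_false_positive_rate
            and report.true_positive_rate >= min_true_positive_rate
            and report.processing_rate_hours_per_day > report.upload_rate_hours_per_day)


# Example: a system that clears all three bars.
print(is_adequate(EvaluationReport(0.0005, 0.995, 900_000, 720_000)))  # True
</syntaxhighlight>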
The purpose of the '''APW_AI''' is to provide safety and security to its users, who can briefly upload a model they have gotten of themselves; the APW_AI will then either say <font color="green">'''nothing matching found'''</font> or it will be of the opinion that <font color="red">'''something matching found'''</font>.
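Purely as a sketch of how such a lookup could work, assuming a user's model can be reduced to a face-embedding vector and that the APW_AI keeps embeddings of everything it has modelled: all names, the embedding representation and the 0.6 threshold below are hypothetical.

<syntaxhighlight lang="python">
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def check_model(user_embedding: np.ndarray,
                corpus_embeddings: list,
                threshold: float = 0.6) -> str:
    """Compare the briefly uploaded model against everything the AI has modelled
    and answer with one of the two verdicts described above."""
    for candidate in corpus_embeddings:
        if cosine_similarity(user_embedding, candidate) >= threshold:
            return "something matching found"
    return "nothing matching found"


# Example with random vectors standing in for real face embeddings.
rng = np.random.default_rng(0)
corpus = [rng.standard_normal(128) for _ in range(1000)]
print(check_model(rng.standard_normal(128), corpus))
</syntaxhighlight>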
If people are '''able to check''' whether there is '''synthetic porn''' that looks like themselves, the synthetic hate-illustration industrialists' product loses much of its destructive potential, and the attacks that do happen are less destructive, as they may be exposed by the APW_AI, which decimates the monetary value of these disinformation weapons to the criminals.
Looking up whether matches are found for '''anyone else's model''' is '''forbidden''', and this should probably be enforced with a facial biometric app that checks that the model you want checked is yours and that you are awake.
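As a hedged sketch of this enforcement idea only: the lookup could be gated on a live camera capture matching the submitted model plus a liveness check. <code>embed_face()</code> and <code>verify_liveness()</code> below are hypothetical placeholders for real biometric components, not existing library calls.

<syntaxhighlight lang="python">
import numpy as np


def embed_face(live_capture: np.ndarray) -> np.ndarray:
    """Placeholder: a real app would run a face-recognition network on the camera frame."""
    return live_capture.flatten()[:128]


def verify_liveness(live_capture: np.ndarray) -> bool:
    """Placeholder: a real app would use blink, pose or challenge-response checks
    to confirm the requester is awake and present."""
    return True


def may_run_lookup(live_capture: np.ndarray,
                   submitted_model: np.ndarray,
                   match_threshold: float = 0.6) -> bool:
    """Allow the query only for the person the submitted model depicts, and only while awake."""
    if not verify_liveness(live_capture):
        return False
    live_embedding = embed_face(live_capture)
    similarity = float(np.dot(live_embedding, submitted_model)
                       / (np.linalg.norm(live_embedding) * np.linalg.norm(submitted_model)))
    return similarity >= match_threshold
</syntaxhighlight>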
If you feel comfortable leaving your model with the good people at the benefactor for safekeeping, you get alerted and helped if you ever get attacked with a synthetic porn attack.
People who openly do porn can help by opting in to the development, providing training material and material to test the AI on. People and companies who help in training the AI naturally get credited for their help.
There are of course lots of people-questions to this, and those questions need to be identified by professionals of psychology and the social sciences.