The current wave of politically correct moralism has reared its head in recent debates about the need to regulate relations between humans and sexbots (sexual robots).
First, for context, allow me to quote from a news report: “Last year a sex robot named Samantha was ‘molested’ and seriously damaged at a tech industry festival; the incident spurred debate on the need to raise the issue of ethics in relation to machines... while the developers of sexbots have claimed that their projects will do anything to indulge their customers’ desires, it seems that they might start rejecting some persistent men... people ignore the fact that they may seriously damage the machine, just because it cannot say ‘no’ to their ‘advances’... future humanoid sex robots might be sophisticated enough to ‘enjoy a certain degree of consciousness’ to consent to sexual intercourse, albeit, to their mind, conscious feelings were not necessary components of being able to give or withhold consent... in legal terms, introduction of the notion of consent into human-robot sexual relationships is vital in a way similar to sexual relations between humans and it will help prevent the creation of a ‘class of legally incorporated sex-slaves.’”
Although these ideas are just a specific application of a proposal for the EU to impose basic “rights” on AI (artificial intelligence), the domain of sexbots brings out clearly the implicit presuppositions that determine such thinking. We are basically dealing with laziness in thinking: by adopting such “ethical” attitudes, we comfortably avoid the complex web of underlying problems.
Indeed, the initial suspicion is that the proponents of such demands do not really care about the AI machines (they are well aware that these cannot really experience pain or humiliation) but about aggressive humans: what they want is not to alleviate the suffering of the machines but to squash the problematic aggressive desires, fantasies and pleasures of us humans.
This becomes clear the moment we include video games and virtual reality. If, instead of sexbots, actual plastic bodies whose (re)actions are regulated by AI, we imagine escapades in virtual reality (or, even more plastic, augmented reality) in which we can sexually torture and brutally exploit people, then it is clear that no actual entity is suffering; yet the proponents of the rights of AI machines would in all probability still insist on imposing some limitations on what we humans can do in virtual space.
The argument that those who fantasize about such things are prone to do them in real life is very problematic: the relationship between imagining something and doing it in real life is much more complex, in both directions. We often do horrible things while imagining that we are doing something noble, and vice versa. Not to mention that we often secretly daydream about doing things we would in no way be able to perform in real life. We thereby enter the old debate: if someone has brutal tendencies, is it better to allow him to play them out in virtual space or with machines, in the hope that, in this way, he will be satisfied enough not to act them out in real life?
Another question: if a sexbot rejects our rough advances, does this not simply mean that it was programmed in this way? So why not re-program it in a different way? Or, to go to the end, why not program it in such a way that it welcomes our brutal mistreatment? (The catch is, of course, will we, the sadistic perpetrators, still enjoy it in this case? Because a sadist wants his victims to be terrified and ashamed.)
And one more: what if an evil programmer makes the sexbots themselves sadists who enjoy brutally mistreating us, their partners? If we confer rights on AI sexbots and prohibit their brutal mistreatment, this means that we treat them as minimally autonomous and responsible entities; so should we also treat them as minimally “guilty” if they mistreat us, or should we just blame their programmer?
Nevertheless, the basic mistake of advocates for AI rights is that they presuppose our human standards (and rights) as the highest form. What if, with the explosive development of AI, new entities emerge with what we could conditionally call a “psychology” (a series of attitudes or mindsets) that is incompatible with ours, but in some sense definitely “higher” than ours (measured by our standards, they could appear either more “evil” or more “good” than us)? What right do WE (humans) have to measure them with our ethical standards?
So let us conclude this detour with a provocative thought: maybe a true sign of the ethical and subjective autonomy of a sexbot would be not that it rejects our advances but that, even if it was programmed to reject our brutal treatment, it secretly starts to enjoy it. In this way, the sexbot would become a true subject of desire, divided and inconsistent as we humans are.
Source: RT