Culture & Ethics
Blade Runner 2049 Poses Questions about AI Machines and Moral Value
With the sequel to Blade Runner, the first movie in some time that I really want to see, we are again witnessing a discussion about when AI machines should be given human-style rights. From “Are Blade Runner’s Replicants Human?” published by Smithsonian Magazine:
So replicants arguably do feel emotions, and they have memories. Does that make them human? For Schneider, a definitive answer doesn’t necessarily matter. The replicants share enough qualities with humans that they deserve protection. “It’s a very strong case for treating [a non-human] with the same legal rights we give a human. We wouldn’t call [Rachel] a human, but maybe a person,” she says.
For Eric Schwitzgebel, professor of philosophy at University of California at Riverside, the conclusion is even more dramatic. “If we someday create robots with human-like cognitive and emotional capacities, we owe them more moral consideration than we would normally owe to otherwise similar human beings,” he writes in Aeon. “We will have been their creators and designers. We are thus directly responsible both for their existence and for their happy or unhappy state.”
Nope. Machines can’t “feel” anything. They are inanimate. Whatever “behavior” they might exhibit would be mere programming, albeit highly sophisticated.
Sci-fi aside, scientists are trying to create a way of measuring whether a machine should have rights.
This year, Schneider published a paper on the test she developed with astrophysicist Edwin Turner to discover whether a mechanical being might actually be conscious. Like the Voight-Kampff test, it is based on a series of questions, but instead of demanding the presence of empathy — feelings directed towards another — it looks at feelings about being a self. The test, called the AI Consciousness Test, is in the process of being patented at Princeton.
Please. That would be software, a product of programming, either by us or, in a true AI computer, perhaps by the machine itself. But those wouldn’t be true experiences that impact not only the consciousness but also the subconscious.
Unless one accepts an entirely mechanistic view of human life — the idea that we are all mere meat computers without free will, driven by genetic determinism — then the notion of machines and computers exhibiting true empathy, or love, grief, joy, etc., for that matter, is nonsense.
But the intellectual set considers planning for machine rights to be a matter of pressing concern:
Work like this is urgent, she says, because humanity is not ethically prepared to deal with the repercussions of creating sentient life.
What will make judging our creations even harder is the human reliance on anthropomorphism to indicate what should count as a being worthy of moral consideration. “Some [robots] look human, or they’re cute and fluffy, so we think of our cats and dogs,” Schneider says. “It makes us believe that they feel. We’re very gullible. It may turn out that only biological systems can be conscious, or that the smartest AIs are the conscious ones, those things that don’t look human.”
It’s just the opposite: The denial of human exceptionalism inherent in this discussion actually demonstrates our uniqueness by claiming that rights should come when a machine can adequately mimic distinctly human traits.
And what about moral accountability? Could a robot, say, commit a crime? Never. Its concept of right and wrong would be entirely dependent on programming. True moral agency is another uniquely human trait that machines will never possess.
Rather than get all caught up in esoteric musing, I suggest an entry-level test for determining whether an entity has any moral value, much less that of a human: Is it alive, i.e., is it an organism?
If not, it may be a highly valuable piece of property; it may be highly sophisticated and made to appear as if it actually feels and thinks in the human manner; but it is of no greater intrinsic value than a toaster.
Cross-posted at The Corner.