Good Grief, Now They’re Pushing for "Machine Rights"

As if the world isn’t already mad enough, the futurists are at it again, assuming that machines programmed to the level of artificial intelligence (AI) — in other words, able to self-program independently, without human input or control — should be treated as if they were moral beings capable of rights, responsibilities, and being harmed.

An article in Nature by Hutan Ashrafian seriously proposes that AI machines could be victims of injustice. From “Intelligent Robots Must Uphold Human Rights”:

We must consider interactions between intelligent robots themselves and the effect that these exchanges may have on their human creators. For example, if we were to allow sentient machines to commit injustices on one another — even if these ‘crimes’ did not have a direct impact on human welfare — this might reflect poorly on our own humanity. Such philosophical deliberations have paved the way for the concept of ‘machine rights’.

Most discussions on robot development draw on the Three Laws of Robotics devised by science-fiction writer Isaac Asimov: robots may not injure humans (or through inaction allow them to come to harm); robots must obey human orders; and robots must protect their own existence. But these rules say nothing about how robots should treat each other.

It would be unreasonable for a robot to uphold human rights and yet ignore the rights of another sentient thinking machine.

Animals that exhibit thinking behaviour are already afforded rights and protection.

No, animals aren’t provided “rights,” at least not yet. Nor should they ever be. Rights lie exclusively in the human realm. Animal welfare should be, and is, protected — an obligation of human exceptionalism — because animals are living and truly sentient beings that can experience real pain and actual suffering.

But AI contraptions would, at most, appear to be sentient — they would only mimic it. They wouldn’t really “think.” If they were destroyed, who would be harmed beyond, perhaps, their owners?

Nor are they capable of committing crimes or doing injustice — they would be neither moral beings nor moral agents. Whatever they did would be a product of their programming. They would not be alive, not capable of actual pain or true emotion, which manifest both mentally and physically in the bodies of living beings.

Do not underestimate the likelihood of artificial thinking machines. Humankind is arriving at the horizon of the birth of a new intelligent race. Whether or not this intelligence is ‘artificial’ does not detract from the issue that the new digital populace will deserve moral dignity and rights, and a new law to protect them.

Baloney. We still have too much work to do in the realm of human rights to worry about machines. If anything, we need laws to keep us from doing something as stupid as developing machines that could act independently of our control.

Why care about this? Because the concept of “machine rights” diminishes human exceptionalism, and it both demeans the importance and saps the vitality of the very concept of rights. That has the potential to hurt us all.

Image by Zarateman (Own work) [CC0], via Wikimedia Commons.

Cross-posted at Human Exceptionalism.

Wesley J. Smith

Chair and Senior Fellow, Center on Human Exceptionalism
Wesley J. Smith is Chair and Senior Fellow at the Discovery Institute’s Center on Human Exceptionalism. Wesley is a contributor to National Review and is the author of 14 books, in recent years focusing on human dignity, liberty, and equality. Wesley has been recognized as one of America’s premier public intellectuals on bioethics by National Journal and has been honored by the Human Life Foundation as a “Great Defender of Life” for his work against suicide and euthanasia. Wesley’s most recent book is Culture of Death: The Age of “Do Harm” Medicine, a warning about the dangers to patients of the modern bioethics movement.

Tags

Mind and Technology, technology, The War on Humans, Views