
Don’t Fear the Robots; Fear the Robot Philosophers


In my forthcoming book The Human Advantage, I take on the growing worry that robots and AIs will take all our jobs. (I argue that we should prepare, not panic.) I also discuss the bad philosophy that leads intellectuals to confuse man and machine.

Believe it or not, many officially smart people think that (1) we are just computers made of meat, (2) computers will soon become conscious, or (3) both.

There’s no evidence for any of these claims, though I see how an atheist might believe them. After all, if you think we’re the mere product of physics, random mutations and natural selection, why wouldn’t you think that the machines we design will surpass us at some point?

You might expect Christian institutions to push back. Unfortunately, far too many buy into the hype. Take Don Howard. He’s a philosophy professor at Notre Dame, a leading Catholic university. In a “Think” piece at NBC.com, he asks whether robots deserve human rights.

I read it expecting a Notre Dame philosopher to take on the bad arguments for “strong AI” (the idea that computers will become conscious persons). Instead, he takes strong AI for granted.

Robots Like Us

He starts out by describing the problem:

A being that knows fear and joy, that remembers the past and looks forward to the future and that loves and feels pain is surely deserving of our embrace, regardless of accidents of composition and manufacture — and it may not be long before robots possess those capacities.

So far as I can tell, he accepts strong AI hokum hook, line, and sinker, despite well-known objections to it. In truth, there’s no more reason to think computers will become conscious than to think that strong tractors will become oxen. The whole argument rests on the assumption that we and computers are basically the same kinds of things.

I don’t blame the man on the street for worrying about killer robots. His ideas are formed by thousands of hours watching sci-fi movies like Star Trek and Terminator. But a philosopher should know better. A philosopher at Notre Dame should really know better.

No Rights

After reading the first two paragraphs, I expected Howard to argue that robots have, or should have, rights just like we do. But instead, he goes on to argue that there are no rights, human or otherwise!

His argument for that claim is … somewhere between bad and non-existent.

First, he quotes another (often wrong) Notre Dame philosopher: “The eminent moral philosopher, Alasdair MacIntyre, put it nicely in his 1981 book, ‘After Virtue’: ‘There are no such things as rights, and belief in them is one with belief in witches and in unicorns.’”

That’s a sneer, not an argument.

No Objective Grounding?

He then claims that the rights referred to in the Declaration of Independence have “no objective grounding.” But what about God? Can’t He endow us with rights, as Jefferson wrote? No. Because, Howard writes, “almost no one today takes seriously a divine theory of rights.”

Really? There are surely a hundred million Americans who do take the theory seriously. But I guess they don’t count. In any case, an idea’s lack of status is not a serious argument against it.

Besides, lots of secular academics do defend rights. What about that? “Most of us,” Howard explains, “think that rights are conferred upon people by the governments under which they live — which is precisely the problem. Who gets what rights depends, first and foremost, on the accident of where one lives.” Translate “Most of us” in the previous sentence to mean: “People who think just like me on this question.”

Begging the Question

Philosophers call this “begging the question.” Howard assumes we don’t have genuine rights. Governments merely bestow them on some people. And that’s a problem, since it means that folks in, say, Ghana, have different rights than Americans do.

But that’s a flaw with his theory of rights, not the “divine theory” he dismisses. The divine theory says we all have the same rights since we are creatures made in the image of God. On this view, governments don’t bestow rights on us. Rather, they recognize our real rights more or less well. The U.S. government recognizes them better than does the People’s Republic of China.

Finally, he points to disagreement about rights as evidence that they don’t exist. But if that were a good argument, it would mean that pretty much nothing exists. Including his argument.

What to Do?

So, what does Howard propose in place of rights? We should turn to “civic virtues” such as “loyalty, service, civility, tolerance and participation.” These will allow us to have a society that is nice to people, and nice to robots. “The world would be a better place,” he claims, “if we spent less time worrying, in a self-focused way, about our individual rights and more time worrying about the common good.”

He seems not to realize that if his arguments against rights were sound, they could just as easily be turned against “civic virtues.” Fortunately, his arguments aren’t sound.

For my part, I’m not that worried about the robots. In the short run, they will disrupt how we do things, but they will also make us wealthier and more productive. I’m far more worried about the bad thinking that leads academics to view man as a machine, and future machines as men. Nothing good can come from that.

Jay Richards, PhD, is the Executive Editor of The Stream, an Assistant Research Professor in the Busch School of Business and Economics at The Catholic University of America, and a Senior Fellow at Discovery Institute. The Human Advantage: The Future of American Work in an Age of Smart Machines will be out June 19, 2018, but is available for pre-order now.

Image credit: TheDigitalArtist, via Pixabay.

This article is reposted with permission from The Stream.