
Terrified of a Robopocalypse? Don’t Be


Elon Musk and Stephen Hawking are scared of artificial intelligence and the prospect of a “Terminator” future. But the U.S. military isn’t.

One result of our culture of materialistic science is a fear that somehow we’ll create computers that are smarter than we are — which is a different thing, of course, from being a lot faster at crunching numbers and running algorithms. Intelligent design proponents know otherwise: mind is special, and matter cannot generate it.

Here’s a rubber-meets-the-road analysis of why AI, while useful, certainly isn’t emergent consciousness. It comes from a recent Bulletin of the Atomic Scientists interview, “Don’t fear the robopocalypse,” with Paul Scharre, who headed the drafting group for the Defense Department’s Directive 3000.09 on Autonomy in Weapon Systems.

He doesn’t think fully automated machines are flexible enough for military use:

There’s this maxim that no plan survives contact with the enemy. Militaries plan, and do battle drills, and train so warfighters have a good concept for how to fight. But at the end of the day, militaries expect that their soldiers will be adaptive and flexible on the battlefield, and that’s actually what’s going to win the day. Machines are very, very poor at that. We don’t know how to build machines that can do that very well at all. That’s another reason why I think you’d want people nearby, to be able to have this very flexible, real-time control over systems. To not have that would be, I think, a very brutal kind of military that might be able to do things with a lot of precision, and accuracy, and speed, but could end up failing quite badly if the enemy does something creative to trick the system.

Scharre thinks that autonomous weapons mainly extend a long trend toward greater distance in warfare.

But I think there’s also this broader philosophical point. When I look at robotics systems, I see them as a new step on a continuum of increasing distance in warfare, [which dates] back to the first time somebody picked up a rock and threw it at someone. At every point in time, people have moved toward weapons that give them greater distance, but people are still fighting wars against other people, and they’re still killing other people. They’re just now killing them with cannons, or missiles, or what I think in the future will be robotics systems. But at the end of the day, [what forces wars to end, at] the political level, will be violence inflicted on humans. That’s tragic and terrible, but I think that is the reality of what war is, so I don’t think that’s likely to change.

These statements sound remarkably like what Robert Marks, Distinguished Professor of Electrical and Computer Engineering at Baylor University, noted in a recent ID the Future podcast:

Anything you can think of writing an algorithm to do will eventually be replaced. But then there are other things such as webmasters, computer programmers, and such, that won’t be replaced. I’ve heard these people referred to as knowledge-workers. … So, yeah I see this quite a bit. The other thing which is also intriguing is the ability of artificial intelligence to augment our performance. I think that all technology augments a human trait — cars go faster than we do, calculators add faster than we do. So, we’re going to be augmented by access to the knowledge of the world. I have that at my fingertips now on my cell phone — it blows my mind! I’m old enough to remember going to the library looking for a paper, and going through the stacks and finally finding the volume and opening it to the page and some jerk has ripped it out and taken it away. And I don’t have to worry about that anymore. I can just look it up on my phone; it’s incredible. Now, there’s a trade-off because I’ve sacrificed totally my privacy. Google knows where I am every second of the day, I imagine. So there are going to be trade-offs of that sort too.

No, technology can’t save us. Neither can it possess the autonomy to pose an existential threat in itself, unless humans put it to such a use or are careless in how they use it. Simply put, matter can’t give birth to mind.

Robert Marks is the co-author of the new book Introduction to Evolutionary Informatics.

Image credit: Gamezona, via Pixabay.