Ask Siri, “Am I just a meat computer?”
Siri will respond, “I don’t really know.”
Siri doesn’t give a thoughtful answer because, well, it can’t think. Siri can follow the instructions (algorithms) human programmers laid out for it, but it can’t cope with open-ended questions or with anything requiring it to think outside the box. It can evade; it can be vague; but it can’t give a meaningful response unless human programmers provide it with that response. This isn’t a flaw in Siri. It’s a hard-and-fast limitation of artificial intelligence generally.
A Meaningful Answer
But ask AI expert Robert J. Marks, “Am I just a meat computer?” That question concerns whether humans, like computers, lack free will; whether the soul is nothing but an illusion; whether we humans just grind out algorithms in our conscious and subconscious minds, algorithms shaped by brain neurons and hormones and, beneath those, the laws and constants of physics and chemistry. Ask Marks the question and all it implies, and he can give you a meaningful, even fascinating answer.
He does so in his new book Non-Computable You: What You Do That Artificial Intelligence Never Will. I was one of the editors for this book, and I not only learned a lot, but very much enjoyed it — even, amazingly, the parts with math. Marks writes:
For computers — for artificial intelligence — there’s no other game in town. All computer programs are algorithms; anything non-algorithmic is non-computable and beyond the reach of AI. But it’s not beyond you.
Humans can behave and respond non-algorithmically. You do so every day. For example, you perform a non-algorithmic task when you bite into a lemon. The lemon juice squirts on your tongue and you wince at the sour flavor.
Now, consider this: Can you fully convey your experience to a man who was born with no sense of taste or smell? No. You cannot. The goal here is not a description of the lemon-biting experience, but its duplication. The lemon’s chemicals and the mechanics of the bite can be described to the man, but the true experience of the lemon’s taste and aroma cannot be conveyed to someone without the necessary senses.
If biting into a lemon cannot be explained to a man without all his functioning senses, it certainly can’t be duplicated in an experiential way by AI using computer software. Like the man born with no sense of taste or smell, machines do not possess qualia — experiential sensory perceptions such as pain, taste, and smell. Qualia are a simple example of the many human attributes that escape algorithmic description.
In addition to qualia, many other non-algorithmic traits distinguish humans from machines. Emotion, creativity, mercy, and understanding — true understanding, including common sense and a sense of humor — are some of the traits forever beyond the capacity of machines, no matter how “intelligent” those machines may be, no matter how fast their processing speeds, no matter how much data they can access.
AI Hype and Hyperbole
So says Marks, and he makes a good case, explaining in detail why certain human attributes simply don’t — and can’t — translate into the realm of artificial intelligence.
Marks has spent more than three decades in the field of AI, consulting for places like Microsoft, Boeing, and DARPA. His research supporters include NASA, JPL, NIH, NSF, Raytheon, the Army Research Lab, and the Office of Naval Research; he’s written hundreds of peer-reviewed papers; he’s a Distinguished Professor of Electrical and Computer Engineering at Baylor University; and he’s the director of the Walter Bradley Center for Natural & Artificial Intelligence. Clearly Marks knows what he’s talking about — and many other experts in the field agree.
Why, then, do news headlines continually suggest that AI is practically human already, and soon will become fully human? Why have smart people like Stephen Hawking and Elon Musk expressed fears that AI will surpass humanity and take over the world?
Marks points out that hype sells, and that expertise in one field doesn’t translate into expertise in another, leaving brilliant people as vulnerable to hype as those of us with more ordinary minds.
Further, he says — and this is a crucial point — a materialistic, atheistic culture naturally wants to explain away human exceptionalism. If humans are just machines, then we aren’t made in the image of a free and creative God, with all the remarkable immaterial traits that entails.
Marks also touches on one other temptation driving the denial of any difference between humans and computers. Some want to believe that AI can be made superhuman and become godlike, perhaps even allowing us to “upload our minds” and merge with it, or to incorporate more and more mechanical components and computer devices into our own bodies and brains until we’re one with this godlike AI.
This idea is known as transhumanism. It’s essentially an AI religion that seeks to replace traditional religion. There’s already a Way of the Future AI Church.
You can see, then, why this topic matters so much.
Hard Boundaries and the Image of God
In truth, you are so much more than a meat computer. You are so much more than wetware. You possess myriad remarkable traits that can never be reduced to algorithms. You possess, among other things, a mind (which is not the same thing as a brain), a heart, and a soul.
In short, there are some hard boundaries that can’t be crossed. You will never be God, but at the same time, artificial intelligence will never be you. You are non-computable, in no small part because you are made in the image of God.
Cross-posted from Salvo Magazine with permission of the author. Robert J. Marks recently answered questions about humans, computers, and algorithms in a conversation with William Dembski and John West.