Neuroscience & Mind
Hype and Fearmongering About Artificial Intelligence Passes Its Sell-By Date
The attribution of superpowers to coming generations of artificially intelligent machines entered self-parody territory with a headline at Wired:
GOD IS A BOT, AND ANTHONY LEVANDOWSKI IS HIS MESSENGER
Many people in Silicon Valley believe in the Singularity—the day in our near future when computers will surpass humans in intelligence and kick off a feedback loop of unfathomable change.
When that day comes, Anthony Levandowski will be firmly on the side of the machines. In September 2015, the multi-millionaire engineer at the heart of the trade secrets lawsuit between Uber and Waymo, Google’s self-driving car company, founded a religious organization called Way of the Future. Its purpose, according to previously unreported state filings, is nothing less than to “develop and promote the realization of a Godhead based on Artificial Intelligence.”
Yet at the same time, a scan of other headlines leaves the unmistakable impression that AI hype has passed its sell-by date. Writing at The Stream, Robert J. Marks, co-author of Introduction to Evolutionary Informatics, nails the fundamental reason that machine “brains” are not going to become conscious and take over the world. He explains why in “Why You Shouldn’t Worry About A.I. Taking Over the World.”
It comes down to human exceptionalism:
A show-stopping reason that artificial intelligence and robots will never gain the higher abilities of humans is that features such as consciousness, understanding, sentience and creativity are beyond the reach of what we currently define as computers. Alan Turing invented the Turing Machine in the 1930s. The Church-Turing thesis states that anything that can be done on a computer today can be done on Turing’s original machine. It might take a billion or a trillion times as long, but it can be done. Therefore, operations that can’t be performed by a Turing Machine can’t be performed by today’s supercomputers.
Turing showed there were many deterministic operations beyond the powers of the computer. For example, a computer program can’t be written that will always analyze what another arbitrary computer program will do. Will an arbitrarily selected computer program eventually stop, or will it run forever? Turing showed that a computer can’t solve this problem. Turing machines, and therefore today’s computers, have fundamental limits on what they can do. In terms of understanding, our brains function beyond Turing machines in many ways.
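The limit Turing identified is easy to feel in code. A minimal Python sketch, assuming we try to dodge the halting problem with a step budget (all names here are illustrative, not from the article): any fixed budget misclassifies some program that merely halts slowly, so raising the budget never turns the heuristic into a true decider.

```python
def naive_halts(prog, steps=1000):
    """Heuristic 'halting test': run the program (a generator) for at most
    `steps` steps. True means it finished; False means it was still running,
    an answer that may simply be wrong."""
    it = prog()
    for _ in range(steps):
        try:
            next(it)
        except StopIteration:
            return True   # the program really did halt
    return False          # gave up, but the program might halt one step later

def short_loop():   # halts quickly
    for _ in range(10):
        yield

def long_loop():    # halts, but only after 10,000 steps
    for _ in range(10_000):
        yield

def forever():      # never halts
    while True:
        yield

print(naive_halts(short_loop))  # True:  correct
print(naive_halts(long_loop))   # False: wrong, it does halt, just slowly
print(naive_halts(forever))     # False: correct, but only because the cutoff happened to agree
```

Turing’s proof shows that no amount of cleverness repairs this: a genuine halting decider cannot exist, so every practical checker is a fallible heuristic like this one.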
Searle’s Chinese Room
Philosopher John Searle offered another reason in his Chinese Room argument. Imagine a room with a little man named Pudge. He receives messages in Chinese slipped through a slot in the door. Pudge looks at the message and goes to a large bank of file cabinets in the room where he looks for an identical or similar message. Each folder in the file cabinet has two sheets of paper. On one is written the message that might match the message slipped through the door slot. The second sheet of paper in the file is the corresponding response to that message. Once Pudge matches the right message, he copies the corresponding response. After refiling the folder and closing the file drawer, Pudge walks back to the slot in the door through which he delivers the response and his job is done.
Here’s the takeaway.
Does Pudge understand the question or the response? No. Pudge does his job and doesn’t even read Chinese! He’s simply matching patterns. It might look from the outside as if Pudge understands Chinese, but he doesn’t. He’s simply following an algorithm, a step-by-step procedure to accomplish some goal.
When one follows a step-by-step procedure to bake a cake, i.e., a recipe, one is executing an algorithm. That’s all a computer can do. It can follow instructions from an algorithm.
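Pudge’s filing cabinet is, in programming terms, a lookup table. A minimal Python sketch (the Chinese phrases and the fallback reply are invented placeholders, not from Searle’s original argument): the room produces sensible-looking answers even though no step in the procedure involves understanding the symbols.

```python
# Pudge's bank of file cabinets, reduced to a dictionary:
# each key is an incoming message, each value the canned response.
# The phrases are illustrative placeholders.
filing_cabinet = {
    "你好": "你好！",           # "Hello" -> "Hello!"
    "你会说中文吗？": "会。",    # "Do you speak Chinese?" -> "Yes."
}

def pudge(message):
    """Match the slip of paper against the cabinet and copy out the
    corresponding response. Pure pattern matching; no meaning involved."""
    return filing_cabinet.get(message, "不明白。")  # fallback: "I don't understand."

print(pudge("你好"))  # 你好！
```

The point of the thought experiment survives the translation to code: nothing in `pudge` reads Chinese, yet from outside the door the replies look fluent.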
I Lost on Jeopardy, Baby
Remember when IBM’s Watson Supercomputer beat everyone at the game show Jeopardy!? I can imagine Pudge in the Chinese room being reassigned to the Wikipedia room. When Watson is asked a question, Pudge goes to a Wikipedia file cabinet and retrieves the right response and slips it through the slot to the outside. Watson the computer doesn’t understand the questions or the answers. Watson is following a preprogrammed algorithm. It’s not conscious.
So what allows our brain, or rather us, to do things computers can’t? What makes us different?
Some researchers are seeking a materialistic explanation of our remarkable brains. With attention to the microtubules found in the brain’s neurons, Sir Roger Penrose and Dr. Stuart Hameroff propose a quantum mechanical model. Hameroff notes that their microtubule theory of the brain “is in conflict with a major premise of [strong] AI and Singularity.”
The theory of Penrose and Hameroff proposes a physical brain process that is nonalgorithmic. Computers are limited to executing algorithms. Since nonalgorithmic means noncomputable, what Penrose and Hameroff are proposing cannot be simulated on a computer. If the Penrose-Hameroff theory or other work on so-called quantum consciousness is successful and can be engineered into a working model, we will be able to generate machines that do what the brain does. This new technology will not be a computer. We’ll need to give it another name.
If we can build a human-like brain, be afraid. Be very afraid. Skynet might be right around the corner. But as long as computers simply get faster and use more memory, there’s no reason to worry on this account.
Don’t misunderstand. There are real perils accompanying the advance of AI, and you don’t have to be a Luddite reactionary (like me) to appreciate them. But as Marks explains, the problems are all ones of human failure: inattention to the consequences of our own decisions vis-à-vis computers, whether as users or programmers. The computers themselves are, and will remain, dumb machines, however “smart” in their applications.
Photo credit: StockSnap, via Pixabay.