Neuroscience & Mind

No, Your Brain Isn’t a Three-Pound Meat Computer


In all the latest sound and fury over Artificial Intelligence — Will some future Terminator run us over like ants (as Michio Kaku worries)? Must we act quickly to prevent the rise of an evil AI overlord (per Elon Musk)? — one notes an important, if unstated, assumption: Computers can be intelligent like humans are intelligent. If so, well, perhaps Kaku, Musk, and others are right to stoke fear and thus propel us to action to avert disaster.

On the other hand, if it’s not possible, then, like a well-timed magician’s trick, the fear-mongering may have us looking for one kind of problem even as we wholly miss another.

Now Robert Epstein, former editor-in-chief of Psychology Today and an active researcher, writes at Aeon to reject the assumption that the brain works like a computer in any way (“The Empty Brain”):

But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers — design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them — ever.

. . .

Computers, quite literally, process information — numbers, letters, words, formulas, images… Humans, on the other hand, do not — never did, never will.

Calling the brain, as some have, a three-pound computer made of meat presses the metaphor too far. Epstein recalls other metaphors, all now rejected, drawn from the technology of their day. He predicts that the day is coming when we’ll look back and consider the “brain is a computer” metaphor equally quaint.

Metaphors always trail what is; the best of them still leave behind much of what we are. Metaphors and models give us a convenient shorthand, but when flipped around to prescribe what we are, they mislead and delude. Computers do not play games like humans play games. Computers do not create like humans create. Computers, at their most fundamental level, do not even solve computational problems like humans solve computational problems.

Computers are machines we’ve made, into which we can put reformulated pieces of ourselves. They are a tool, not a replacement. And they most certainly are not electronic versions of whatever it is that goes on inside our heads. Epstein continues:

Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been “stored” in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions.

If Epstein is correct, and I believe he is, then the entire AI endeavor, at least in its most extreme forms, will fail. We will not succeed at downloading our minds onto a computer. We will not succeed at creating a computer whose (so-called) intelligence is anything like ours. Machines will not suddenly become self-aware and sweep humanity aside.

What should give us pause, however, is that a flawed metaphor inspiring false fears will lead us to miss the real problems with AI and the true magnitude of our own minds. AI works because we deposit portions of our intelligence, recoded into algorithms a computer can run, into the machines. And because our off-the-cuff guesses about any complex endeavor are often incorrect, the results of those algorithms can surprise us and may produce solutions we would otherwise have missed. That fact in itself demonstrates that whatever a computer is doing, it is not the same thing we do in our own heads.

But since those algorithms capture, at best, only a small portion of our intelligence, they have edges or boundaries: places where they fail because the machine has gone outside the map we coded into it. AlphaGo, which defeated a world-ranked Go player, made a few moves that were not just poor or weak, but of the what-the-heck-were-you-thinking variety. Why? Because it wasn’t thinking: It was following a map, an algorithm, even if a self-adjusting one, and it became lost once it wandered past that map’s borders.
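To make those “edges” concrete, here is a minimal sketch in Python. It is emphatically not how AlphaGo works (AlphaGo combines deep neural networks with tree search); it is a toy policy implemented as a lookup table, with invented position and move names, showing how a program that merely follows a map produces nonsense the moment its input falls outside that map.

```python
# A toy "policy" implemented as a lookup table over positions the
# program was trained on. Inside the table it looks competent; one
# step outside, it falls back to an arbitrary default and plays
# nonsense. All position and move names here are hypothetical.

policy_map = {
    "corner_opening": "defend_corner",
    "center_push": "counter_center",
    "edge_probe": "reinforce_edge",
}

def choose_move(position: str) -> str:
    """Return the mapped move, or an arbitrary fallback off the map."""
    # Off the map, the program has no understanding to fall back on.
    return policy_map.get(position, "random_legal_move")

print(choose_move("center_push"))     # looks intelligent: counter_center
print(choose_move("novel_invasion"))  # lost: random_legal_move
```

A real system fails less crudely than a lookup table, but the principle is the same: outside the coded map, there is nothing like comprehension to fall back on.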

AI machines are more a form of mimicry than anything even approaching intelligence. That we’re better at creating mimicking machines does not change the reality that these machines, like Polly requesting a cracker, do their work without understanding a single word uttered.
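The parrot analogy can also be made concrete in a few lines of Python. The sketch below is an ELIZA-style responder (the patterns and replies are invented for illustration): it produces plausible-sounding answers by keyword matching alone, without representing the meaning of anything it “says.”

```python
# A minimal keyword-matching "chatbot" in the spirit of ELIZA. It can
# produce a plausible reply without representing the meaning of a
# single word. All patterns and replies are invented for illustration.

import re

rules = [
    (re.compile(r"\bmother\b", re.I), "Tell me more about your family."),
    (re.compile(r"\bI feel (\w+)\b", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bcracker\b", re.I), "Polly wants a cracker!"),
]

def reply(utterance: str) -> str:
    for pattern, template in rules:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # canned fallback: mimicry, not comprehension

print(reply("I feel anxious"))   # "Why do you feel anxious?"
print(reply("Want a cracker?"))  # "Polly wants a cracker!"
```

Joseph Weizenbaum’s original ELIZA (1966) worked on essentially this principle, and users still attributed understanding to it; that misplaced trust is exactly what bad metaphors encourage.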

We need metaphors to talk about complex things. But when we replace the real with a model, we lose that which we were originally trying to understand.

The real problem with AI, then, is not the prospect of a Terminator but the likelihood of our blindly depending on machines, lulled into trusting them by bad metaphors. The danger is that computers will fail us, and possibly do so in very bad ways.

Image credit: © Tatiana Shepeleva — stock.adobe.com.

Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked as both a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, and for numerous start-ups. While he has spent most of that time on other types of software, he’s remained engaged with and interested in Artificial Intelligence.


Tags

Computational Sciences, Science, Views