Yes, "We’ve Been Wrong About Robots Before," and We Still Are

[Image: IBM's Watson supercomputer]

More hype on artificial intelligence, now from Bloomberg News ("Robot Brains Catch Humans in 25 Years, Then Speed Right On By") and promoted by Drudge:

We’ve been wrong about these robots before.

Soon after modern computers evolved in the 1940s, futurists started predicting that in just a few decades machines would be as smart as humans. Every year, the prediction seems to get pushed back another year. The consensus now is that it’s going to happen in … you guessed it, just a few more decades.

There’s more reason to believe the predictions today. After research that’s produced everything from self-driving cars to Jeopardy!-winning supercomputers, scientists have a much better understanding of what they’re up against. And, perhaps, what we’re up against.

Nick Bostrom, director of the Future of Humanity Institute at Oxford University, lays out the best predictions of the artificial intelligence (AI) research community in his new book, "Superintelligence: Paths, Dangers, Strategies."

This sort of thing aggravates me to no end. I've read most of Bostrom's book. "Superintelligence" and the supposed "dangers" it poses to humanity are about where the global-warming scare was circa 2005. Every decade or so, it seems, media and academia whip up some pseudo-scientific issue to terrify the unknowing public, while promising that a few elite thinkers, armed with money-making "science," can figure out our path forward.

Nothing has happened with IBM’s "supercomputer" Watson (pictured above) other than the probabilistic scoring of questions to answers using Big Data, with minimal actual natural language understanding. No surprise there: language understanding frustrates computation and has embarrassed AI since its inception. Outside of playing Jeopardy — in an extremely circumscribed only-the-game-of-Jeopardy fashion — the IBM system is completely, perfectly worthless.
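
To make "probabilistic scoring of questions to answers" concrete, here is a minimal sketch of the general strategy. It is an invented illustration, not IBM's DeepQA pipeline: the toy corpus and the scoring function are my assumptions. But the flavor is right: candidate answers are ranked by shallow word overlap with stored text, with no grasp of what any sentence means.

```python
# Minimal, invented sketch of statistical question answering:
# rank candidate answers by how much their supporting documents
# overlap with the clue's words. Counting, not understanding.

corpus = {
    "Alexander Graham Bell": "bell patented the telephone in 1876",
    "Thomas Edison": "edison invented the phonograph and the light bulb",
}

def score(candidate: str, clue: str) -> int:
    """Count how many clue words appear in the candidate's document."""
    clue_words = set(clue.lower().split())
    doc_words = set(corpus[candidate].split())
    return len(clue_words & doc_words)

clue = "This inventor patented the telephone"
best = max(corpus, key=lambda c: score(c, clue))
print(best)  # "Alexander Graham Bell" -- lookup, not understanding
```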

Watson's worthlessness as a model of human intelligence is obvious from the fact that we clearly don't have a "play Jeopardy" program running inside us, but rather a general intelligence that gives us a qualitatively different facility for language understanding. IBM, by the way, has a penchant for upping its market cap by unveiling a supercomputer that performs a carefully circumscribed task with superfast computing techniques. Take Deep Blue beating Kasparov at chess in 1997. Deep Blue, like Watson, is useless outside the task it was designed for, and so it too told us nothing about human intelligence, or really about "intelligence" at all, in the sense of a scientific theory or an insight into general thinking.

Self-driving cars are another source of confusion. Heralded as evidence of a coming human-like intelligence, they are actually made possible by brute-force data: full-scale replicas of street grids built from massive volumes of location data. The roof-mounted laser (which costs $80,000 a unit, so don't expect self-driving cars on the market anytime soon), together with onboard GPS and cameras, fixes the location of the vehicle on the "game" grid using real-time coordinates and images. The car is then driven around the physical world, and its navigation is improved by (what else?) machine learning on data from prior runs, until it can get around in the area Google has plotted for it.
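
Here is a toy sketch of what that kind of map-based navigation amounts to. Everything in it is an assumption for illustration: the waypoints are invented, and real systems fuse lidar, camera, and GPS data in far more sophisticated ways. But the core move is the same: snap a live position reading to a route that was surveyed in advance.

```python
import math

# Pre-plotted route: (latitude, longitude) waypoints gathered on
# earlier mapping runs. Values are invented for illustration.
mapped_route = [
    (37.4220, -122.0841),
    (37.4221, -122.0850),
    (37.4223, -122.0860),
]

def nearest_waypoint(lat: float, lon: float) -> tuple:
    """Snap a live GPS fix to the closest pre-surveyed waypoint."""
    return min(mapped_route,
               key=lambda wp: math.hypot(wp[0] - lat, wp[1] - lon))

# The "navigation" is a lookup against a world mapped in advance,
# not reasoning about an unfamiliar road.
print(nearest_waypoint(37.42215, -122.0853))  # -> (37.4221, -122.085)
```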

Computers are fast and have large memories for Big Data. Got it. But, again, this has nothing to do with human intelligence. The hallmark of human intelligence is general thinking across different tasks, not brute-force computation of circumscribed ones. But even this observation gives too much away to superintelligence enthusiasts.

Many commonsense "tasks" remain black boxes to computation, Google's Big Data or not. Interestingly, where brute computation and big data fail is in surprisingly routine situations that give humans no difficulty at all. Take this statement, originally from computer scientist Hector Levesque (it also appears in Nicholas Carr's 2014 book about the dangers of automation, The Glass Cage):

The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam, the large ball or the table?

Watson would not perform well in answering this question, nor would Deep Blue. In fact, no extant AI system has a shot at getting the right answer here, because it requires a tiny slice of knowledge about the actual world: not "data" about word frequencies in languages, or GPS coordinates, or probability scores for next-best chess moves, or canned questions matched to canned answers in Jeopardy. It requires what AI researchers call "world knowledge" or "commonsense knowledge."
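
To see why shallow statistics point the wrong way here, consider a hypothetical sketch. The co-occurrence counts below are invented, but the pattern is realistic: "Styrofoam ball" is a far more common phrase than "Styrofoam table," so a frequency-driven system would resolve "it" to the ball and get the answer wrong.

```python
# Invented co-occurrence counts standing in for corpus statistics.
# "Styrofoam ball" is a common phrase; "Styrofoam table" is rare.
cooccurrence = {
    ("ball", "styrofoam"): 40,
    ("table", "styrofoam"): 3,
}

def resolve_pronoun(candidates, cue):
    """The shallow strategy: pick whichever candidate co-occurs
    most often with the cue word. No physics, no meaning."""
    return max(candidates, key=lambda c: cooccurrence.get((c, cue), 0))

print(resolve_pronoun(["ball", "table"], "styrofoam"))  # -> "ball"
# Wrong. The right answer is "table": you need to know Styrofoam is
# too flimsy to stop a large ball -- world knowledge, not word counts.
```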

The problem of commonsense knowledge — having actual, not "simulated" knowledge about the real world — is a thorny issue that has relegated dreams of true, real AI to the stone ages so far.

When we look closely at real intelligence, things get even gloomier (or more embarrassing) for the superintelligence enthusiasts who trumpet endlessly these days about the coming rise of smart robots. It is, indeed, much like the way scientists, politicians, and celebrities reached a fever pitch about imminent global catastrophe from climate change a decade ago.

(Admittedly the analogy falls short in one respect. Superintelligence enthusiasts like Bostrom warn us about smart robots, while others, such as Ray Kurzweil, assure us of a coming Golden Age. Take your pick: it's either the Apocalypse, as robots outsmart and eliminate humans like pests, or we merge with them and create a new heaven on Earth.)

Even the commonsense knowledge problem lets superintelligence enthusiasts off easy, because the task of picking out which pieces of world knowledge are relevant to a given situation is trickier still. Having real knowledge about the world and bringing it to bear on our everyday cognitive problems is the hallmark of human intelligence, but it remains a mystery to AI scientists, and has been for decades.

The issue here is not one of degree, where the problem of general intelligence is slowly yielding to increased knowledge about machine intelligence, but rather one of kind. General human intelligence — as opposed to automated techniques for circumscribed tasks — is of a qualitatively different sort. There aren’t any engineered systems that understand basic language, from Google or IBM or anyone else. The prospect isn’t imminent either. There aren’t even any good theories pointing in promising directions. It’s simply one of life’s mysteries, and perhaps a fundamental limitation of the underlying theory — the theory of computation, the theory of Turing machines, that is. Depressingly, too, the current fad touting a coming superintelligence is in fact obscuring this mystery, rather than illuminating it as good science should.

Can an alligator run a steeplechase? As Levesque has pointed out, Big Data can't "crack" the answer here, because the terms "alligator" and "steeplechase" are unlikely to co-occur anywhere, let alone with enough frequency for shallow techniques to exploit.
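
A hypothetical sketch makes the point. The counts below are invented, but the situation they depict is exactly what Levesque describes: the statistical cupboard is bare, so a frequency-based system has nothing to work with.

```python
# Invented corpus statistics. Each word has plenty of data on its
# own; the pair has essentially none.
cooccurrence_counts = {
    ("alligator", "swamp"): 9041,
    ("steeplechase", "horse"): 5217,
    # ("alligator", "steeplechase"): absent from any realistic corpus
}

evidence = cooccurrence_counts.get(("alligator", "steeplechase"), 0)
print("evidence:", evidence)  # 0 -- no frequencies to exploit
# Answering "no" takes world knowledge: alligators have short legs
# and cannot jump hurdles. No corpus count encodes that.
```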

Given that minds produce language, and that there are effectively infinite things we can say and talk about and do with language, our robots will seem very, very stupid about commonsense things for a very long time. Maybe forever.

Image source: Wikipedia.

Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and artificial intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.

Tags: Mind and Technology, News, Science, technology, Views