
Robert Marks: What Computers Will Never Do

About an hour into the second game of its match against a world-ranked Go player, the Google Artificial Intelligence program AlphaGo made a move that stunned the commentators:

“I don’t really know if it’s a good or bad move,” said Michael Redmond, a commentator on a live English broadcast. “It’s a very strange move.” Redmond, one of the Western world’s best Go players, could only crack a smile.

“I thought it was a mistake,” his broadcast partner, Chris Garlock, said with a laugh.

South Korean Lee Sedol, AlphaGo’s challenger and one of the world’s best Go players, seemed to sense the move’s force: he stared at the board, then got up from the table and left the room. He returned shortly to resume play, but he never recovered and eventually lost the match.

AlphaGo’s move was not a mistake. Commentators later described it as a “brilliant” move of the kind that almost no human player would think to make. To some, the move appeared creative in a manner surpassing human players. Others demurred. Jerry Kaplan, a widely respected computer scientist and entrepreneur, was not impressed: “This is the latest in a long history of overhyped [artificial intelligence] demonstrations. It’s a machine engaging in taking a certain set of actions that are a result of the clever programming that has been done to build it.”

So, which is it? Was AlphaGo demonstrating creativity exceeding that of humans (at least within the game of Go)? Or was it simply, albeit in ways difficult to tease apart, following its programming?

Dr. Robert Marks, Distinguished Professor of Electrical and Computer Engineering at Baylor University, recently addressed this question in a talk at Caltech. Even though its title, “Some Things Computers Will Never Do,” gives away his conclusion, I encourage you to watch the entire session to understand why computers, at least those we know how to build (including the overhyped AlphaGo), will never succeed at being creative in the sense that all humans have experienced at one time or another.

Defining intelligence will get you into trouble. Every definition, it seems, leaves out something, or someone, that we’d otherwise consider intelligent. When it comes to such a definition, possibly for selfish reasons, the AI community plays fast and loose. A new book, Machine Learning: The New AI (MIT Press), by Ethem Alpaydin, a professor at Bogaziçi University in Istanbul, offers this observation:

Intelligence seems not to originate from some outlandish formula, but rather from the patient, almost brute force use of simple, straightforward algorithms.

To be fair, this is the kind of definition I expect from someone whose career builds on Machine Learning and (so-called) Big Data. The definition underlies the assumptions of the entire field; falsifying it would invalidate at least some of the work, and all the hype, on the subject. Regardless, many AI researchers believe intelligence arises from algorithms. It is not magic. It does not ooze out of some undefinable, especially immaterial, property, but results from ordinary brain processing. Build a machine like the brain, or sufficiently similar, or at least similar in the right way, and, poof, intelligence emerges.

One point Dr. Marks makes early in his talk echoes Jerry Kaplan’s sentiments, noted above. Obtaining grants, selling a product, or winning attention in the noisy, overpopulated arena of new technology nearly demands that you overstate your case. In much the same way, to get noticed at a crowded party, you might choose to wear loud colors and outrageous shoes. Marks suggests that many claims amount to little more than marketing. He calls out the hedges (e.g., “may be able to…,” “In the near future…”) and the “seductive semantics” (e.g., “simulates the brain,” “neural network,” “deep learning”) used to sell and describe AI. Blind faith in the assumptions, propelled by marketing, has fostered a cacophony of promotion and hype: The Terminator is coming.

Marks rejects the hot-air hype and weasel words. He grounds his argument in something fundamental: a result that well-educated AI theorists and researchers should know, but that gushing pride in their achievements pushes into the shadows. This result, known since the founding of computer science, shows that algorithmic approaches to AI (and those are the only kind we know how to build) cannot produce the creative intelligence that AI researchers themselves employ when they create their machines.

What is this result? It is one that Alan Turing, in many ways the progenitor of modern computing, thought hard about: the halting problem. You cannot write a program that tells, for every possible program, whether that program will run forever or eventually come to a halt. Turing proved in 1936 that such an algorithm, one that works for all possible programs, cannot exist. Robert Marks explains the thinking behind the proof, connecting it to famous open problems in mathematics, such as Goldbach’s conjecture, which a universal halting checker could settle, and to the work of Kurt Gödel (whose incompleteness theorem ruined Alfred North Whitehead and Bertrand Russell’s all-encompassing mathematics project). The big point, however, is that Turing’s proof sets insuperable limits on what algorithmic computers can do.
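The logic is easy to sketch in code. Below is a minimal Python illustration, not anything Marks presents in the talk; the names halts, goldbach_search, and trouble are hypothetical. The first function halts exactly when Goldbach’s conjecture is false, so a universal halting checker would settle the conjecture; the diagonal construction at the end shows why no such checker can ever be written.

```python
# A sketch of the halting problem and its link to Goldbach's conjecture.
# None of this comes from Marks's talk; the function names are hypothetical.

def is_prime(n: int) -> bool:
    """Trial-division primality test: slow but correct."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_search() -> int:
    """Halts exactly when Goldbach's conjecture is FALSE: returns the
    first even number > 2 that is not a sum of two primes, and runs
    forever if no counterexample exists."""
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
            return n  # counterexample found; the conjecture fails
        n += 2

def halts(program_source: str, input_data: str) -> bool:
    """A claimed universal halting checker. Turing's 1936 proof shows
    no correct implementation of this function can exist."""
    raise NotImplementedError

def trouble(program_source: str) -> None:
    """Does the opposite of whatever halts() predicts about a program
    run on its own source code."""
    if halts(program_source, program_source):
        while True:  # predicted to halt, so loop forever
            pass
    # predicted to loop forever, so halt immediately

# Feed trouble() its own source: if halts() says it halts, it loops;
# if halts() says it loops, it halts. Either answer is wrong, so no
# universal halts() can be written, and no halting oracle exists to
# settle Goldbach's conjecture for us.
```

If a working halts() did exist, one call asking whether goldbach_search halts would resolve a conjecture that has stood open since 1742; that no such call is possible is exactly the insuperable limit Turing identified.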

Human creativity, Dr. Marks notes, does not occur algorithmically, by following a (possibly very complex) set of steps; it often arrives in a “flash of insight.” Mathematicians, musicians, writers, engineers, and artists all testify to this. Deep insights often occur unbidden. Roger Penrose, a longtime collaborator of Cambridge physicist Stephen Hawking, argued years ago that the human mind is not a computer and that, as a result, computers cannot be creative. Marks leaves it as an open question whether non-algorithmic computers (which we do not have and do not know how to build) could demonstrate creativity. AlphaGo, no matter how unexpected its move, was not creative in the way humans are creative. It was, like the machines before it and all that follow its lead, just doing what it was told to do.

AI researchers grasping for grants and start-ups bounding after venture capital are far from the only ones to engage in a bit of self-promoting hyperbole. The problem with the hype is that it redirects our attention from what does matter to what cannot occur. While looking for and worrying about one thing (such as the Terminator), we miss the very real and possibly damaging impacts that AI can have, what data scientist Cathy O’Neil calls “Weapons of Math Destruction.”

Watch Dr. Marks’s talk, learn what computers cannot do, and quit worrying about the Terminator. Then pay more attention to the real uses, and possible abuses, of these amazing machines.

Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked as both a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, and for numerous start-ups. While he spent most of that time on other types of software, he’s remained engaged with and interested in Artificial Intelligence.

Tags

Computational Sciences, science, technology