The moment that humanity is forced to take the threat of artificial intelligence seriously might be fast approaching, according to futurist and theoretical physicist Michio Kaku.
In an interview with CNBC’s “The Future of Us,” Kaku voiced concern over the earlier-than-expected victory Google’s deep-learning machine notched this past March, when it beat a human master of the ancient board game Go. Unlike chess, which features far fewer possible moves, Go allows more possible positions than there are atoms in the observable universe, and thus cannot be mastered by the brute force of computer simulation.
“This machine had to have something different, because you can’t calculate every known atom in the universe — it has learning capabilities,” Kaku said. “That’s what’s novel about this machine, it learns a little bit, but still it has no self-awareness … so we have a long way to go.”
But that self-awareness might not be far off, according to prominent minds like Elon Musk and Stephen Hawking, who have warned that self-aware machines should be avoided for the sake of future human survival.
And while Kaku agreed that accelerating advances in artificial intelligence could present a dilemma for humanity, he was hesitant to predict such a problem would evolve in his lifetime. “By the end of the century this becomes a serious question, but I think it’s way too early to run for the hills,” he said.
“I think the ‘Terminator’ idea is a reasonable one — that is that one day the Internet becomes self-aware and simply says that humans are in the way,” he said. “After all, if you meet an ant hill and you’re making a 10-lane super highway, you just pave over the ants. It’s not that you don’t like the ants, it’s not that you hate ants, they are just in the way.”
Unlike others, Kaku is cautious, suggesting that few if any of us will live long enough to actually see the Terminator arise. Fears of our own creations coming to life are as old as history, from golems to Frankenstein’s monster to, now, the ascent of sentient computers. The publicized successes of artificial intelligence and our deep faith in technology spur this fear’s most recent form.
But should they?
Kaku makes the case that something significant took place when DeepMind’s AlphaGo beat Lee Sedol, considered one of the strongest Go players in the world, in March. Go is a computationally intractable game; that is, the game is too big for a computer, even one the size of the physical universe, to win through sheer brute force (i.e., by trying every conceivable position). To create a winning machine, DeepMind’s developers had to design heuristics capable of taking on ranked players.
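The intractability claim is easy to check with back-of-the-envelope arithmetic. Here is a minimal sketch, using the commonly cited ballpark figures of roughly 250 legal moves per turn over a game of roughly 150 moves (rough estimates, not exact values):

```python
import math

# Rough lower bound on Go's game tree: ~250 legal moves per turn
# over a typical game of ~150 moves (commonly cited ballpark figures).
branching, depth = 250, 150
game_tree_digits = depth * math.log10(branching)  # log10(250 ** 150)

# The observable universe holds roughly 10**80 atoms.
atoms_digits = 80

print(f"Go game tree: ~10^{game_tree_digits:.0f} sequences of play")
print(f"Atoms in the observable universe: ~10^{atoms_digits}")
# Even checking one position per atom would leave the vast majority
# of the search space untouched.
```

With ~10^360 lines of play against ~10^80 atoms, no conceivable hardware exhausts the game by enumeration, which is why learned heuristics were required.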
Prior game-playing systems built their heuristics from known rules and strategies, but since even the best Go players cannot articulate why they make the moves they do, encoding rules and strategies for Go had yielded only moderate success. DeepMind’s breakthrough came in creating a neural network that “learned” from prior play what good moves and good board positions look like. It is this ability to learn that Kaku believes puts us, ultimately, on the path to the Terminator.
AlphaGo used two neural networks, one to help it evaluate the board and one to select a move. A neural network learns, through controlled training, by adjusting the strength of the connections between the nodes in the network. Think of it as a grid of points with strings connecting each point to its nearest neighbors. Learning consists of adjusting how much tension each point puts on its neighbors through those strings, pulling the entire grid into a shape corresponding to the pattern the programmers want it to detect.
Programmers do not know in advance the tension values that will match a pattern. So, instead, they build into the network a mathematical feedback system that lets each point in the grid adjust the tension it puts on its neighbors as the network succeeds and fails at detecting the desired pattern.
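The feedback idea above can be sketched in a few lines. This is a toy illustration, not AlphaGo’s actual training code: a single “connection strength” (weight) is nudged after every trial in proportion to the error, so the program self-adjusts toward the pattern. The data here is a made-up example, learning the relationship y = 2x without ever being told the value 2:

```python
# Toy training set for the hypothetical pattern y = 2 * x.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # initial guess for the connection strength (weight)
lr = 0.05  # learning rate: how hard each error tugs on the weight

for _ in range(200):           # repeated training passes
    for x, y in samples:
        pred = w * x           # the network's current output
        error = pred - y       # feedback: how wrong was it?
        w -= lr * error * x    # nudge the weight to reduce the error

print(f"learned weight: {w:.3f}")  # settles near 2.0
```

No one typed in the correct value; the feedback loop pulled the weight there. Scaled up to millions of weights, this is the sense in which AlphaGo’s networks “learned” good moves from prior play.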
Creating neural networks that work is hard; they do not always succeed. Sometimes small changes in the pattern cause the network to fail. Sometimes the training plateaus or oscillates rather than converging on working tensions. Creating networks that matched patterns well enough to win at Go took very clever programming and skill.
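The same toy weight-learning loop shows how easily training can fail to converge. With the step size set too large (an illustrative choice; real failure modes in deep networks are more varied but analogous), each correction overshoots the target, and the weight swings around it with growing amplitude instead of settling:

```python
# Same toy pattern y = 2 * x, but with the learning rate set too high:
# every correction overshoots, so the weight oscillates and diverges.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w, lr = 0.0, 0.5   # 0.5 is far too aggressive for this data
history = []
for _ in range(5):
    for x, y in samples:
        w -= lr * (w * x - y) * x
        history.append(round(w, 2))

print(history)  # values swing above and below 2.0, drifting further away
```

Nothing in the loop itself warns that this has happened; the programmers must detect and diagnose it. That fragility is part of why working networks demanded so much skill.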
“Learning,” then, is a loose term. The “learning” a neural network undergoes is a very far cry from what happens when we learn. All it means is that, using procedures developed by clever programmers, the system adjusts itself. To leap from self-adjusting programs to Terminator-style computers paving over humans who happen to be in the way is not grounded in the data. It is a leap of faith worthy of a committed mystic.
The real problem is in what such leaps obscure. AlphaGo-like systems behave in ways that, because they self-adjust, we cannot predict. Because we cannot predict their behavior, we cannot know how, or when, a system will fail.
Making a bad decision in a game of Go does not threaten humanity. But putting such systems in control where human life or safety is at stake does matter. And we do ourselves no favor worrying about a future that is nothing more than a statement of faith while the real problem lies closer at hand: the encroaching use of so-called artificially intelligent machines to control critical systems. The final behavior of those systems is best left in the minds and hands of the only intelligent agents we know of: humans.