Neuroscience & Mind

Artificial Intelligence and the Language Barrier

If you have a few free minutes, try, for fun, filling them with Google Translate. And you need not be multilingual to enjoy it. Start with something straightforward: Enter an English phrase or sentence (idioms bring particular pleasure). Click a language, say, Spanish, and then “translate.” Copy and paste the translated results over your original English phrase, reverse both languages (so that, in this example, Spanish is now where you begin and English is where you end), and again click “translate.” Did you get anything remotely like your original phrase? Possibly. But not likely and certainly not always. I tried this myself and got the results below:

There is a sale on sails on the boat by the bay. (English original)

No es una venta de velas en el barco por la bahía. (Spanish translation)

It is not a sale of sails on the boat in the bay. (English translation of the Spanish)

And this (from a text conversation I had with my wife):

Talked to her about the email addresses, she seemed very on board. Told her we didn’t want to hold her back. Wanted to see her get a job and get on her way. So as not to seem overbearing. (English original)

Parlé à elle des adresses e-mail, elle semblait très à bord. Je lui ai dit que nous ne voulions pas la retenir. Je voulais voir son obtenir un emploi et d’obtenir sur son chemin. Afin de ne pas paraître arrogant. (French translation)

Talked to her e-mail addresses, it seemed very on board. I told him that we did not want to remember. I wanted to see her get a job and get his way. In order not to appear arrogant. (English translation of the French)
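The round trip is easy to script if you would rather automate the game. Below is a minimal sketch; the translate(text, source, target) helper is a hypothetical stand-in for whichever translation API you have access to (Google's real API differs), so you would need to wire one in yourself.

```python
# Round-trip translation test: English -> another language -> English.
# NOTE: `translate` is a hypothetical placeholder, not a real library
# call; swap in whatever machine-translation service you can reach.

def translate(text: str, source: str, target: str) -> str:
    """Placeholder: call your translation service of choice here."""
    raise NotImplementedError("wire up a real translation API")

def round_trip(text: str, via: str, base: str = "en") -> str:
    """Translate `text` into the `via` language, then back into `base`."""
    foreign = translate(text, source=base, target=via)
    return translate(foreign, source=via, target=base)

original = "There is a sale on sails on the boat by the bay."
# print(round_trip(original, via="es"))  # rarely matches the original
```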

People with far more free time than I have translated well-known songs this way, publishing their results on YouTube. Like I said, it can be an entertaining way to fill a few free minutes. Google’s translator works well in many situations. But, as I’ve noted before, it, like any computer program, falls off cliffs as it wanders into the unexpected. That is what computers do when they encounter data outside the bounds of their programming.

To be fair, translation is difficult even for skilled humans. Good, nuanced translation is high art. Language translation entails far more than word substitution, replacing words from one language with those in another. Languages vary in what they can express, so words do not neatly match each other across the linguistic divide (encouraging “loan” words and terms, such as “hors d’oeuvres,” to pass between languages). Idioms and slang almost never translate without loss. Grammar differs, which affects emphasis and meaning. Some languages offer grammatical clues (such as declensions), while others do not. Language does not succumb to rules. And it is rules, in one form or another, that guide computer behavior, including Artificially Intelligent machines.

Will Knight, a senior editor at the MIT Technology Review online magazine, laments AI’s language problem. Knight suggests that, “without language understanding,” our dealings with AI will remain a cold, inhuman relationship. He then recounts AI’s recent victories — notably Google’s AlphaGo defeating a world-ranked Go champion a couple of months ago — hopeful that, through these, AI can finally break its language barrier. But language is a human skill we do not fully understand, one whose origin remains a mystery.

Knight’s hopes are based on AI’s latest innovations, especially something called “Deep Learning.” Deep Learning is not what it may sound like. It does not describe a computer that gains in-depth knowledge of a topic. (Even computer scientists succumb to marketing temptations, employing terms that promise more than they deliver.) Rather, deep learning, in the context of AI, describes a technique in which multiple layers within a program get automatically tweaked and tuned through the consumption of enormous quantities of data. The results of each layer are then combined to achieve a result superior to what any one layer can do on its own. For example, a deep learning system trained to recognize pictures of cats may have a layer that “sees” shapes, another that detects lines and boundaries, and another that identifies possibly furry regions. Specialization and consolidation, much like the software minions in IBM’s Watson, yield better answers than do systems using a single, “all-knowing” layer. Divide and conquer is as successful a strategy in computer science as it is in warfare.
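To make the layering concrete, here is a deliberately tiny sketch of stacked layers, using NumPy and random (untrained) weights. The layer sizes and the shapes-to-cats story are invented for illustration; a real system tunes millions of such weights against mountains of data.

```python
# A toy picture of "layers": each layer is a weighted combination of its
# inputs followed by a nonlinearity, and layers stack so that later ones
# build on features computed by earlier ones. Training would "tweak and
# tune" these weight matrices from data; here they are simply random.
import numpy as np

rng = np.random.default_rng(0)

def layer(x: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One layer: weighted sums of the inputs, then a ReLU nonlinearity."""
    return np.maximum(0.0, weights @ x)

# Three stacked layers: say, edges -> shapes -> a final "cat score."
w1 = rng.standard_normal((16, 64))  # raw pixels -> low-level features
w2 = rng.standard_normal((8, 16))   # low-level  -> mid-level features
w3 = rng.standard_normal((1, 8))    # mid-level  -> one score

pixels = rng.standard_normal(64)    # stand-in for a tiny 8x8 image
score = w3 @ layer(layer(pixels, w1), w2)
print(score[0])  # meaningless until trained; training is the whole trick
```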

Yet Knight mistakes what the Deep Learning in AlphaGo (and other AI machines) implies. He is correct that AlphaGo “represents a true milestone in machine intelligence and AI,” but not for the reason he says. Knight claims that Google did not “teach” AlphaGo to play the game of Go, that the machine developed an “intuitive sense of strategy.” That is false. The research paper describing AlphaGo’s programming is clear: AlphaGo is not that different from other game-playing computer programs. More or less, it looks at the current board position, rapidly explores possible moves, evaluates their results, and chooses where to lay its next stone. For simple games, such as Tic-Tac-Toe, a program can examine all possible moves in a fraction of a fraction of a second. It is easy to win when you know all possible outcomes. For other games, such as chess and, even worse, Go, the choices explode rapidly beyond what any computer could ever examine. Successful searches, then, require measuring the quality of each move and board position without having to search to the game’s end.
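To see the difference in scale, here is a complete minimax search for Tic-Tac-Toe in Python: it examines every possible continuation before choosing a move, which is feasible only because the game is tiny. Nothing like it could finish on a Go board; that gap is what the quality measures described next must fill.

```python
# Exhaustive game-tree search for Tic-Tac-Toe. The board is a 9-character
# string ("X", "O", or " "); the search plays every game to the end.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 forced win, 0 draw, -1 loss."""
    win = winner(board)
    if win is not None:                      # the previous move ended it
        return (1 if win == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                       # board full: a draw
    opponent = "O" if player == "X" else "X"
    best_score, best_move = -2, None
    for move in moves:
        child = board[:move] + player + board[move + 1:]
        score, _ = minimax(child, opponent)
        score = -score                       # opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

score, move = minimax(" " * 9, "X")
print(score, move)  # 0: with best play, Tic-Tac-Toe is always a draw
```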

Champion human Go players do rely on their trained intuition, guided by general principles and strategies. AlphaGo, on the other hand, relies on two pattern-matching networks, similar in kind to those that detect cats in pictures. One evaluates board positions (the “value network”) and the other possible moves (the “policy network”). AlphaGo, running hundreds of computers using thousands of processors, uses these to limit its search, to trim the choices. The machine, however, was still programmed to play Go. It did not learn to play Go. At best, the (so-called) Deep Neural Networks “learned” (in a loose, mathematical sense) what better choices looked like: It “recognizes” high-quality boards and moves just like other machines “recognize” cats. AlphaGo has no more “intuitive sense” of play than do the programs on your iPhone.
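One way to picture the trimming is a depth-limited lookahead in which the policy network keeps only a few promising moves and the value network scores positions so the search can stop before the game's end. This is a simplified sketch under my own toy assumptions; policy_net and value_net are hypothetical stand-ins, and the real AlphaGo couples its networks with a Monte Carlo tree search over vastly larger boards.

```python
# Sketch: how two pattern-matchers can trim a search. `policy_net` maps a
# position to a few promising successor positions; `value_net` estimates
# how good a position is without playing the game out. Both are
# hypothetical stand-ins, not AlphaGo's actual networks.

def search(board, depth, policy_net, value_net, top_k=3):
    """Depth-limited lookahead guided by the two networks."""
    if depth == 0:
        return value_net(board)            # estimate instead of playing on
    children = policy_net(board)[:top_k]   # trim: only the likely moves
    if not children:
        return value_net(board)
    # Negamax convention: the opponent's best reply is our worst case.
    return max(-search(child, depth - 1, policy_net, value_net, top_k)
               for child in children)

# Toy stand-ins so the sketch runs: "positions" are just integers.
def toy_policy(board):
    return [board + 1, board + 2, board + 3]

def toy_value(board):
    return float(board)

print(search(0, depth=3, policy_net=toy_policy, value_net=toy_value))
```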

Language is not a game. Researchers describe games, like Go, as “perfect information” environments. That is, there are no surprises. There may be more to it than a computer can compute, but nothing appears that violates the rules. Language is anything but a “perfect information” environment. Words are vague. Meaning is unclear. (And arguments thus ensue.) Context is determinative. Language has no rules of the kind used to play Go. The beauty of language is its lack of strict rules. Plays and poetry, romance and humor, even daily conversation, would wither under inviolate restrictions. The one attempt to create a well-behaved, uniform language failed. Knight’s review of various AI attempts at language, even at Google, exemplifies the challenge.

This is the perennial problem of AI advocates: They continue both to overestimate the impacts of their breakthroughs and to underestimate the problem at hand. They especially underestimate the marvel of the human mind with its elusive quality of intelligence. Consider Watson, as I wrote here previously: IBM researchers achieved a remarkable win and advanced the science behind question-and-answer systems, but, even with a few high-profile accomplishments, they have yet to achieve the same success in applying Watson to medicine.

Deep Learning may help researchers to construct machines that recognize more complex phrases and sentences. I fully expect that Watson and similar systems will improve. These machines, however, will never have an “intuitive sense” and they will likely continue to stumble over the vagaries of language. We’ll see better translations, improved question-and-answer systems, and more reliable command-driven interfaces (such as Siri). However, truly conversational machines will remain science fiction. And what improvements do come will arrive through the hard work and intelligence of the scientists and researchers building those systems. They will not emerge, magically, from the circuits and software.

Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked as both a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, as well as numerous start-ups. While he has spent most of that time building other types of software, he’s remained engaged with and interested in Artificial Intelligence.


Tags

Computational Sciences, Science, Views