Alan Turing and the New Emergentists


The acclaimed Alan Turing biographical film The Imitation Game is up for multiple Oscars on Sunday. It is a tale of Turing as a tragic hero and misunderstood genius, irascible and certainly idiosyncratic, who insinuates himself into a job interview at Bletchley Park as a self-proclaimed mathematical genius, a claim later borne out as true. He "invents" the digital computer to solve the decryption challenge posed by the German Enigma machines, and thus saves the Allied powers from Hitler.

The film is a human-interest story, and accurate enough, though John von Neumann in the U.S. was busy engineering a prototype as well. However, you wouldn't watch it with an eye toward learning about the history of computing, or, perhaps most interestingly, about Turing's legacy in current thought about Artificial Intelligence.

Well, what shall we say of that legacy?

To decide whether a machine has a mind, Turing famously said, talk to the machine. Language is for minds, and so if we can’t tell the difference between a machine and a human in conversation (say, by teletype or text), then we should grant the machine the status of a human mind.

Practitioners of AI often call natural language understanding "AI-Complete," meaning a computer that interprets and generates discourse can do anything else that a human can do. Turing's famous test is thus a behavioral definition that ignores what's happening inside a machine, and focuses on what the machine can actually do. Specific tasks like playing a game of chess or even the game of Jeopardy! don't count, because programmers can make a special-purpose machine to "play" those games.

In contrast, as Turing noted, language is domain independent (we can talk about anything), and so all these special purpose techniques inevitably fall short. Doubt it? Just keep talking to the machine, and eventually it’ll show that it doesn’t understand, and (running the test in reverse) that it therefore doesn’t deserve credit as having a mind.
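To make the setup concrete, here is a minimal sketch of the imitation game as a text protocol, in Python. It is illustrative only: the contestant and judge functions are hypothetical placeholders, not any real chatbot or API.

```python
# A purely illustrative sketch of Turing's "imitation game" as a text protocol.
# The helpers below (respond_human, respond_machine, judge_guess) are
# hypothetical stand-ins, not any real system.
import random

def respond_human(question: str) -> str:
    # Stand-in for a human contestant replying over teletype/text.
    return input(f"[contestant] {question}\n> ")

def respond_machine(question: str) -> str:
    # Stand-in for the candidate program. A real contender would need
    # open-ended language understanding, not a canned deflection like this.
    return "I don't know -- what do you think?"

def imitation_game(questions, judge_guess) -> bool:
    """Run one round: the judge interrogates a hidden contestant, then guesses
    'machine' or 'human'. Returns True if the judge guessed wrong."""
    contestant_is_machine = random.choice([True, False])
    respond = respond_machine if contestant_is_machine else respond_human
    transcript = [(q, respond(q)) for q in questions]
    guess = judge_guess(transcript)  # expected to return 'machine' or 'human'
    return (guess == "machine") != contestant_is_machine
```

The catch, of course, is that a serious candidate must survive arbitrarily many rounds on arbitrary topics against a persistent judge, which is exactly where special-purpose tricks give out.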

Many people interested in questions about Artificial Intelligence still endorse some version of Turing’s iconic test. Toronto computer scientist Hector Levesque, in a shot-across-the-bow paper delivered to an International Joint Conference on Artificial Intelligence (IJCAI) audience in 2013, pointedly challenged the Turing Test, accusing it of being biased towards what he called "bag of tricks" programming approaches.

For instance, when we ask a machine a question it doesn’t "know," it can always reply with duplicity or trickery: "I don’t know, what do you think?" and so on. Levesque is right; but in the big picture, Turing was too. The Turing Test is hard precisely because understanding a natural language like English or French or Swahili is hard. No wonder AI scientists often resort to a "bag of tricks."

It's telling that no computer has come close to passing the Turing Test, decades after Turing first proposed it, and after seemingly eons of exponential increases in memory and computing power (a smartphone today easily has more processing power than a supercomputer of the 1950s). Turing was right: language is domain independent (not topic-constrained), and so it is hard for a machine running a program to "get." Language is effectively infinite; programs are finite. This is one quick way of putting the issue that still captures the essence of the problem.

But there’s another issue lurking here. Does the machine, even one that might somehow pass the Turing Test, really have a mind? Where do minds come from, after all? The current hype about a looming threat from "superintelligence" reveals something striking about this age-old philosophical question.

A quick review of some old philosophical debates is in order here. When I was in graduate school in the 1990s, it was difficult to walk into a seminar discussing issues in AI or the philosophy of mind without someone mentioning functionalism. Functionalism is the view that mind is like a computer program running on the brain. And, like software generally, the hardware specifics don’t matter as much as running the right program.

Hence, mind is software running on "wetware" (the brain) for humans, and it might equally be software running on silicon for thinking machines (digital computers). Functionalism thus liberated the philosophy of mind from the speciesism inherent in the view that only human brains could have minds. Given the right program running, it shouldn't matter (says the functionalist) whether it's running on biological or computational hardware.

And so the functionalist view of mind was born, Phoenix-like, out of the ashes of failed behaviorism (cf. Skinner and his rats) and a brief embrace of "identity" theories (cf. J.J.C. Smart and logical positivism), which identified mental states directly with some physical states (the belief that "shortbread is good" just is the firing of such-and-such neurons in my brain right now).

Functionalism made better sense of puzzles about mind than these earlier theories, and with the success of electronic computers, functionalism became the only game in town. The new field of cognitive science, an umbrella discipline including psychology, neuroscience, computer science, and AI among others, quickly fit functionalist theory to the computer metaphor, and that’s how we got the Computational Theory of Mind. (It replaced the electric-wire model of brains inspired by telegraphs and telephones, which itself replaced the earlier steam engine view. Before that it was a clock.)

All is good. Only, functionalism as a philosophical theory is pretty much dead today. Savvy former functionalists such as Harvard philosopher Hilary Putnam became reluctant critics of the once golden theory, as they realized that the basic problems with identity theories of mind inevitably plagued functionalist accounts, too.

The issues here get thorny and thoroughly academic, but the end result of all the philosophical debates of the Eighties and Nineties is that "meaning ain't in the head," or, in other words, that whatever we're doing when we believe or feel or think cannot be isolated and defined locally, i.e., in your head. Language and language users are ultimately understandable only in a "holistic" sense (an unfortunate word, because it too is holistic), which is to say, embedded in a larger linguistic context that includes facts about the environment, other language users, and so on. So functionalism, at least in its original philosophical sense, is dead now, too. This should have spelled trouble for the Computational Theory of Mind, but surprisingly (or not), it seems hardly to have mattered.

None of this bothered Alan Turing, mind you. Turing's stated interest in his 1950 paper "Computing Machinery and Intelligence" (arguably the most famous AI paper ever, and certainly the first in the modern sense) was to abstract away — really to ignore — such issues in the philosophy of mind and to provide a purely behaviorist litmus test for intelligence. He avoided defining intelligence in theoretical terms; he wanted rather to know when something was intelligent, whatever "intelligence" turned out to be in the end.

Turing was, in this sense too, a genius. While puzzles about the nature of mind seemed a perennial coffee table discussion, Turing offered a plausible path forward. But the question of what a "mind" and "intelligence" really are was left open, in 1950 and still today.

Artificial Intelligence research and much of neuroscience now defend reductionist accounts of mind, often using some version of functionalism. And neuroscientists — even more than AI scientists, they're apt to give short shrift to philosophical debates anyway — even embrace identity theories or eliminativism: the latter being the view that mind and consciousness and belief are "folk concepts" that have no scientifically respectable description, and thus don't exist, and thus should be eliminated from our discourse.

And so it goes. Philosophy rages on in a teacup as often as it effects any change in scientific discourse. But there’s another view of mind that’s increasingly the rage today, and superintelligence enthusiasts and AI proponents wear it on their sleeve: emergence, or "emergentism."

Emergentist theories of mind are popular for the same reason that magic shows or mystical experiences are: they don’t need to be explained. For the emergentist, when we say "such-and-such has a mind" we just mean that "such-and-such became so complicated that a mind sprang forth." Minds emerge from complexity, according to this view. Hence, when a stodgy philosopher complains that we can’t get rid of cognitive states like beliefs, because they have non-truth-theoretic consequences in a first-order calculus and (insert more musty complaining here), the New Emergentist — a Kurzweil, say, or a Nick Bostrom, or Elon Musk or Bill Gates or Stephen Hawking perhaps — can simply say "Well, yes, but you see those aspects of mind just emerge when an AI program is run fast enough."

Emergentist theories of mind, in other words, fit nicely with the gee-whiz enthusiasm today for fast computing. Headline: "IBM Blue Gene/Q supercomputer cracks mind-body problem."

Sarcasm here is hard to contain, because the emergentist thesis is a fantastically sterile philosophical position. It allows anyone to explain cognitive properties or entities like minds simply by relegating their occurrence to something else that’s poorly understood, like complexity. The magic trick is then given a suitably scientific sounding label like "emergence."

Lazy views like this can, and should, be attacked with hard questions. It’s reasonable to ask the New Emergentist, for instance, the following. One: How do you know mind emerges? What do we know in the natural world that definitely does emerge? And how could we ever tell if a mind did in fact "emerge"? What are the necessary and sufficient conditions? And two: Is this Dualism, then? What emerges? A property or substance? And how does this square with scientific materialism, anyway?

Let's look at these questions in more detail. In the first case, the issue is epistemic. We may believe that minds pop into existence when certain programs are run on fast hardware, or (even worse) when the totality of routers and servers and computers and laptops linked together into the Internet "runs" on planet Earth. The latter is the belief of those who champion something called the global brain or "noosphere" — the notion that our technology is collectively evolving a mind. In that case, the Ultimate Mind is somehow obsessed with collecting our personal data, uploading and downloading pornography, and selling us products we don't need.

Fine, but then we must ask how we know or have any rational basis to believe this is actually true. By "true," I mean "True." Factual. Most of these folks are also skeptical of and even hostile to historical ideas like religion and the belief in a soul, so the issue is how they maintain a thoroughgoing faith in the emergence of minds from complicated technology.

The second issue is metaphysical. It too is closely linked to the epistemic issues, but in its metaphysical guise it's the question of ontological commitments — what exists in the Universe? Minds apparently do, though they simply "emerge" into it mysteriously from complex systems. (Here I have to suppress, constantly, an urge to exclaim "Presto!") The ontological issue can be classified as strong, in which case we say that a new substance emerges when a mind does, or weak, in which case we're committed only to the view that some property (possibly epiphenomenal) emerges, but no new substance in the Aristotelian or everyday sense springs forth.

This all brings us back to Alan Turing. Whatever his faults, Turing wasn’t much interested in envisioning a Singularity, or a future eschatology involving smarter-than-human machines with minds. He was interested in the limits of machines — the question of whether they could think at all.

It is clear particularly from his 1950 paper that he felt somewhat hopeful and even optimistic that Turing machines could be made to exhibit a range of intelligent behaviors, and even to learn, so that they could eventually be made to think like humans. He was aware, too, of the standard philosophical and scientific objections to his view.

A century earlier, Lady Lovelace had articulated the central worry of AI hopefuls everywhere, in what has come to be known as the "Lady Lovelace Objection." Lovelace worked with the once world-famous (and now largely forgotten) 19th-century scientist Charles Babbage on his Analytical Engine, an early progenitor of modern computers that never quite got off the ground, so to speak (it was massive). Lovelace, reflecting on the monstrous Analytical Engine, remarked that a machine could only be made to do what it is programmed to do, and nothing more.

Turing felt the Lovelace Objection deeply, and almost personally, and took pains in his 1950 defense of machine intelligence to refute it. Random elements could be incorporated into programs, mused Turing, and machines could eventually be made to learn using randomizing techniques. (Monte Carlo methods, used today in everything from physics to financial prediction, exploit randomness in a similar spirit. As usual, many of Turing's musings proved fruitful, if not in the full sense he may have intended.)
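As a concrete illustration of that general idea, and not of anything in Turing's paper or any particular financial model, here is a minimal Monte Carlo sketch in Python: estimating pi by random sampling, the same let-randomness-do-the-work principle at play.

```python
# Minimal Monte Carlo sketch: estimate pi by random sampling.
# Illustrative only; the point is that randomness can do real computational
# work, not that this is how Turing (or anyone in finance) actually proceeds.
import random

def estimate_pi(samples: int = 1_000_000) -> float:
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()  # random point in the unit square
        if x * x + y * y <= 1.0:                 # falls inside the quarter circle
            inside += 1
    return 4.0 * inside / samples                # area ratio approximates pi

if __name__ == "__main__":
    print(estimate_pi())  # roughly 3.14, converging slowly as samples grow
```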

A program might, Turing mused further, "scintillate on its own accord." Later, his wartime statistician I.J. Good would take Turing's seminal ideas and inaugurate the official beginning of AI as a Grand Vision, of Artificial Intelligence as the faith in the coming of Mind and the emergence of novel beings in the Universe. Good's 1965 speculation is supremely relevant to today's discussion:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Turing, always the scientist, never said such things. But in his hope that computers would come to say and think more than they were programmed to, he sowed the initial seeds of Good’s broader vision. Today, Good’s thoughts seem more relevant than Turing’s. Minds are coming, say the New Emergentists. Functionalism may be dead, but who cares about philosophy when one has a Grand Vision, anyway?

It's hard to combat such a view, perhaps, but it's notable that Turing himself never endorsed it. He never echoed (in writing, anyway) the full-blown claims of his statistician Good, and while he would no doubt be elated at the success of modern computation, he might also notice something that superintelligence enthusiasts and bandwagon emergentists have missed.

No computer has passed the Turing Test to date. Not even close; not even using the "bag of tricks" that Levesque felt should be eliminated to make the test a fairer measure (and, if anything, a harder one). It's a cautionary tale, and a lesson that seems somehow hopelessly lost in all of today's hype. Reading his original paper, and reflecting on who he was as a scientist and a philosopher, it's hard to believe that Turing, were he alive today, would endorse the New Emergentists and their Grand Vision of our future without some good old-fashioned evidence: passing, first, his test.

That day is very likely a long way off, and so we would all do well to rein in our speculations about imminent superintelligence. Turing, one can only believe, would likely approve.

Image: “Turing in slate at Bletchley Park,” by Jon Callas from San Jose, USA (Alan Turing) [CC BY 2.0], via Wikimedia Commons.

Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.
