I am reviewing Jason Rosenhouse’s new book, The Failures of Mathematical Anti-Evolutionism (Cambridge University Press), serially. For the full series so far, go here.
Rosenhouse devotes a section of his book (sec. 6.10) to conservation of information, and prefaces it with a section on artificial life (sec. 6.9). These sections betray such ignorance and confusion that it’s best to clean the slate. I’ll therefore highlight some of the key problems with Rosenhouse’s exposition, but focus mainly on providing a brief history and summary of conservation of information, along with references to the literature, so that readers can determine for themselves who’s blowing smoke and who’s got the beef.
Rosenhouse’s incomprehension of conservation of information becomes evident in his run-up to it with artificial life. Anyone who has understood conservation of information recognizes that artificial life is a fool’s errand. Yet Rosenhouse’s support of artificial life is unmitigated. The term artificial life has been around since the late 1980s, when Christopher Langton, working out of the Santa Fe Institute, promoted it and edited the proceedings of a conference on the topic. I was working in chaos theory at the time. I followed the Santa Fe Institute’s research in that area, and thus as a side benefit (if it may be called that) witnessed first-hand the initial wave of enthusiasm over artificial life.
Artificial life consists of computer simulations that produce life-like virtual things, often via a form of digital evolution that mimics selection, variation, and heredity. The field has had its ups and downs over the years: it would generate a burst of enthusiasm, then lose it as people started to ask “What’s this got to do with actual biology?”, after which the nagging concerns would be forgotten, a new generation of researchers would get excited, and the cycle would repeat. Rosenhouse, it seems, represents the latest wave of enthusiasm. As he writes: “[Artificial life experiments] are not so much simulations of evolution as they are instances of it. In observing such an experiment you are watching actual evolution take place, albeit in an environment in which the researchers control all the variables.” (p. 209)
Conservation of information, as developed by my colleagues and me, arose in reaction to such artificial life simulations. We found, as we analyzed them (see here for several analyses that we did of specific artificial life programs such as Avida, which Rosenhouse lauds), that the information that researchers claimed to get out of these programs was never invented from scratch and never amounted to any genuine increase in information, but rather always reflected information that was inputted by the researcher, often without the researcher’s awareness. The information was therefore smuggled in rather than created by the algorithm. But if smuggling information is a feature rather than a bug of these simulations (which it is), that undercuts using them to support biological evolution. Any biological evolution worth its salt is supposed to create novel biological information, and not simply redistribute it from existing sources.
For my colleagues and me at the Evolutionary Informatics Lab (EvoInfo.org), it therefore turned into a game to find where the information supposedly gotten for free in these algorithms had in fact been surreptitiously slipped in (as in a shell game: find the pea). The case of Dave Thomas, a physicist who wrote a program to generate Steiner trees (a type of graph for optimally connecting points in certain ways), is instructive. Challenging our claim that programmers were always putting as much information into these algorithms as they were getting out, he wrote: “If you contend that this algorithm works only by sneaking in the answer into the fitness test, please identify the precise code snippet where this frontloading is being performed.”
We found the code snippet, which included the incriminating comment “over-ride!!!”:
x = (double)rand() / (double)RAND_MAX;
num = (int)((double)m_varbnodes * x);
num = m_varbnodes; // over-ride!!!
As we explained in an article about Thomas’s algorithm:
The claim that no design was involved in the production of this algorithm is very hard to maintain given this section of code. The code picks a random count for the number of interchanges; however, immediately afterwards it throws away the randomly calculated value and replaces it with the maximum possible, in this case, 4. The code is marked with the comment “override!!!,” indicating that this was the intent of Thomas. It is the equivalent of saying “go east” and a moment later changing your mind and saying “go west.” The most likely occurrence is that Thomas was unhappy with the initial performance of his algorithm and thus had to tweak it.
A Famous Algorithm
We saw this pattern, in which artificial life programs snuck in information, repeated over and over again. I had first seen it in reading The Blind Watchmaker. There Richard Dawkins touted his famous “WEASEL” algorithm (which Rosenhouse embraces without reservation as capturing the essence of natural selection; see pp. 192–194). Taking from Shakespeare’s Hamlet the target phrase METHINKS IT IS LIKE A WEASEL, Dawkins found that if he tried to “evolve” it by randomly varying letters while needing them all to spell the target phrase at once (compare simultaneous mutations or tossing all coins at once), the improbability would be enormous and it would take practically forever. But if instead he could vary letters a few at a time and if intermediate phrases sharing more letters with the target phrase were in turn subject to further selection and variation, then the probability of generating the target phrase in a manageable number of steps would be quite high. Thus Dawkins was able on average to generate the target phrase in under 50 steps, which is far less than the 10^40 steps needed on average if the algorithm had to climb Mount Improbable by jumping it in one fell swoop.
Dawkins, Rosenhouse, and other fans of artificial life regard Dawkins’s WEASEL as a wonderful illustration of Darwinian evolution. But if it illustrates Darwinian evolution, it illustrates that Darwinian evolution is chock-full of prior intelligently inputted information, and so in fact illustrates intelligent design. This should not be controversial. To the degree that it is controversial, to that degree Dawkins’s WEASEL illustrates the delusional power of Darwinism. To see through this example, ask yourself where the fitness function that evolves intermediate phrases toward the target phrase came from. The fitness function in question assigns highest fitness to METHINKS IT IS LIKE A WEASEL and varying fitness to intermediate phrases depending on how many letters they have in common with the target phrase. Clearly, the fitness function was constructed on the basis of the target phrase. All the information about the target phrase was therefore built into — or as computer scientists would say, hard-coded into — the fitness function. And what is hard-coding but intelligent design?
But There’s More
The fitness function in Dawkins’s example is gradually sloping and unimodal, thereby gradually evolving intermediate phrases into the target. But for any letter sequence of the same length as the target phrase, there’s a fitness function exactly parallel to it that will evolve intermediate phrases to this new letter sequence. Moreover, there are many more fitness functions besides these, including multimodal ones where evolution may get stuck on a local maximum, and some that are less smooth but that still get to the target phrase with reasonably large probability. The point to appreciate here is that selecting the right fitness function from among all these possibilities requires at least as much information as specifying the target sequence directly. It’s this insight that’s key to conservation of information.
I began using the term conservation of information in the late 1990s. Yet the term itself is not unique to me and my colleagues. Nobel laureate biologist Peter Medawar introduced it in the 1980s. In the mid-1990s, computer scientists were using the term and similar language as well. We may not all have meant exactly the same thing, but we were all in the same ballpark. From 1997 to 2007, I preferred the term displacement to conservation of information. Displacement gets at the problem of explaining one item of information in terms of another without doing anything to elucidate the origin of the information in question. For instance, if I explain a Dürer woodcut by reference to an inked woodblock, I haven’t explained the information in the woodcut but merely displaced it to the woodblock.
Darwinists are in the business of displacing information. Yet when they do, they typically act all innocent and pretend that they have fully accounted for all the information in question. Moreover, they gaslight anyone who suggests that biological evolution faces an information problem. Information follows precise accounting principles, so it cannot magically materialize in the way that Darwinists desire. What my colleagues and I at the Evolutionary Informatics Lab found is that, apart from intelligent causation, attempts to explain one item of information in terms of another do nothing to alleviate, and may actually intensify, the problem of explaining the information’s origin. It’s like filling one hole by digging another, but where the newly dug hole is at least as deep and wide as the first one (often more so). The only exception is one pointed out by Douglas Robertson, writing in the Santa Fe Institute journal Complexity back in 1999: the creation of new information is an act of free will by intelligence. That’s consistent with intelligent design. But it’s a no-go for Darwinists.
Next, “Conservation of Information — The Theorems.”
Editor’s note: This review is cross-posted with permission of the author from BillDembski.com.