
Information, Past and Present

Editor’s Note: ENV asked the Evolutionary Informatics Lab to respond to a critique of the work of William Dembski by Joe Felsenstein, Professor of Genome Sciences and of Biology at the University of Washington. The context for Felsenstein’s comments was a post at Panda’s Thumb volunteering his and his readers’ help and criticism to Stephen Meyer in advance of the publication of Dr. Meyer’s forthcoming book Darwin’s Doubt (which no one at PT has yet seen). Winston Ewert, whom ENV contributor Robert J. Marks II commends to us as “the brains behind many of our recent publications in evolutionary informatics,” contributed the following response. Dembski replied to Felsenstein earlier; see here.
Writing at the group blog Panda’s Thumb, Joe Felsenstein claims that objections made to earlier work by Dembski were valid but that Dembski ignored them. Furthermore, he implies that Dembski is attempting to adapt his ongoing work to these criticisms without admitting their validity. In fact, Dembski’s more recent work is consistent with his previous work, and the objections to it are based on misunderstandings.
In addition, Felsenstein claims that “The Search for a Search: Measuring the Information Cost of Higher Level Search,” by Dembski and Robert Marks, and related papers do not make an argument that the designer needed to intervene in the evolutionary process. This is true, but it reflects a misunderstanding of the nature of the argument.
Probability in the Design Inference
Felsenstein describes Dembski’s 2005 paper, “Specification, the Pattern that Signifies Intelligence,” as a reformulation of the design inference. According to Felsenstein, Dembski has added the probability term that differentiates it from prior versions and renders the inference redundant. However, this is simply not the case. Probability has always been part of the design inference.
The subtitle of Dembski’s book The Design Inference is Eliminating Chance Through Small Probabilities. The explanatory filter introduced in that work includes the elimination of chance as a step in the process. On page 58, Dembski discusses Richard Dawkins’s rejection of a design inference: Dawkins rejects it because he holds that the probability of life arising is relatively large. As a result, Dembski indicates that “Dawkins is under no obligation to draw the conclusion of the design inference.” On page 184, Dembski discusses the Generic Chance Elimination Argument, whose Step 4 instructs the subject to calculate the probability of the event occurring given the chance hypothesis.
On page 72 of No Free Lunch, Dembski again presents the Generic Chance Elimination Argument. Step 7 instructs the subject to calculate the probability of the event under all relevant chance hypotheses. On page 193, Dembski considers a search algorithm producing the phrase, “METHINKS IT IS LIKE A WEASEL.” He argues that since the probability of producing that phrase is effectively 1, it contains no complexity and thus does not exhibit specified complexity. In all discussion in the book regarding the Law of Conservation of Information, Dembski is using Shannon information, which is defined as the negative logarithm of the probability. In discussing a design inference for the bacterial flagellum, Dembski attempts a sketch of the probability of its arising through natural selection.
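The relationship between probability and Shannon information that this argument relies on can be illustrated with a short sketch (my own illustration, not code from No Free Lunch):

```python
import math

def self_information_bits(p: float) -> float:
    """Shannon self-information of an outcome with probability p, in bits."""
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    return -math.log2(p) if p < 1 else 0.0

# A 28-character phrase drawn uniformly from a 27-symbol alphabet
# (26 letters plus the space), as in the WEASEL example:
p_blind = (1 / 27) ** 28
print(self_information_bits(p_blind))  # roughly 133 bits

# An outcome produced with probability effectively 1 carries no
# information, which is why Dembski denies it any complexity:
print(self_information_bits(1.0))  # 0.0
```

On this measure, halving an outcome's probability always adds exactly one bit, so "low probability" and "high information" are the same claim in different units.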
Felsenstein claims that Dembski has introduced a probability term in his 2005 paper. However, the opposite is the case. The probability in the form of Shannon information existed in No Free Lunch. The change in the 2005 paper was to add a term for specification. This was an improvement in the way that specification was handled; it was not a change in the way that probability was handled.
The design inference in all its forms has always involved calculating the probability of relevant chance hypotheses including natural selection. This is not a new feature of a revamped design inference, but a critical component of the design inference since its inception.
Felsenstein objects that “the declaration that Specified Complexity is observed in nature is not obvious,” which echoes Dembski’s own statement (in “Explaining Specified Complexity”): “Does nature exhibit actual specified complexity? The jury is still out.” The specified complexity argument was never intended to show that natural selection has a low probability of success. It was intended to show that if natural selection has a low probability of success, then it cannot be the explanation for life as we know it.
Objections to the Law of Conservation of Information
Felsenstein draws attention to objections raised against the Law of Conservation of Information as explored in No Free Lunch. In that argument, the output of some stochastic process is assumed to be an instance of Complex Specified Information, and the claim is that the input to such a process must also be an instance of Complex Specified Information. Establishing this requires showing that the input was both complex and specified; the question under consideration is whether the input was specified.
Dembski argued that the input is specified because, when run through the stochastic process, it produces specified output. Elsberry and Shallit argue that this specification is not detachable. However, they have misunderstood Dembski’s definition of detachability.
First, they appear to believe that K in Dembski’s definition refers to the entire background knowledge of a subject. For several reasons, this can be seen to be incorrect. K is required to explicitly and uniquely identify the function f, which is used to define the specification; the entirety of a subject’s background knowledge cannot possibly uniquely and explicitly identify a single function. Remark 2.5.4 on page 64 states that f would be substituted for K were that not an abuse of notation, further demonstrating that K is equivalent in some sense to f, not to the entirety of background knowledge. Part 4 of the Generic Chance Elimination Argument on page 72 has the subject identifying the background knowledge K, which would be unnecessary if K were simply all background knowledge. Finally, the conditional independence requirement would be extremely hard to satisfy if K consisted of all background knowledge. K corresponds only to the background knowledge underlying the particular definition of the specification.
Second, Elsberry and Shallit object that the natural process under consideration might not be in the background knowledge of the subject. However, Dembski has never claimed that every subject will be able to identify specified complexity in every case. The design inference is argued to be resilient against false positives, but not false negatives. Furthermore, after investigation, the subject will learn about the natural process and thus it will enter the background knowledge of the subject. At that time, the subject will be able to make the design inference.
Third, this raises the question of whether knowledge gained about the process might invalidate the conditional independence requirement. The requirement states that Pr(E|K&H) = Pr(E|H). This requires that any knowledge used to determine the specification does not somehow affect the probability of the event occurring. The natural process happens after the event E, and thus is independent of that event. Therefore, that background knowledge is in fact correctly detached.
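What the conditional independence requirement demands can be checked numerically on a toy example. The joint probabilities below are invented purely for illustration; the point is only to show what Pr(E|K&H) = Pr(E|H) looks like in practice:

```python
# Toy numerical check of the detachability condition Pr(E|K&H) = Pr(E|H).
# The joint distribution is invented for illustration: under the chance
# hypothesis H, the side information K is chosen so that learning K
# leaves the probability of the event E unchanged.

# Pr(E, K | H) as a dict mapping (e_occurs, k_holds) -> probability.
joint = {
    (True, True): 0.02,
    (True, False): 0.08,
    (False, True): 0.18,
    (False, False): 0.72,
}

def pr_E(joint):
    """Pr(E | H): marginal probability that the event occurs."""
    return sum(p for (e, _), p in joint.items() if e)

def pr_E_given_K(joint):
    """Pr(E | K & H): probability of the event given the side information."""
    p_k = sum(p for (_, k), p in joint.items() if k)
    return joint[(True, True)] / p_k

# The two probabilities agree (both 0.10), so in this toy case K is
# conditionally independent of E and could legitimately be used to
# define a detached specification.
print(pr_E(joint), pr_E_given_K(joint))
```

If the two numbers disagreed, the side information would be doing probabilistic work, and a specification built on it would not be detached.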
Felsenstein has his own objection to the Law of Conservation of Information proof. He argues that Dembski uses two different specifications, one before and one after the natural process. This is true. What Felsenstein fails to make clear is why he thinks this is a problem. One cannot simply assert that the argument should have taken a particular form; one must show why the form it took was invalid. The problem appears to be that Felsenstein is trying to recast the Law of Conservation of Information as an argument that no process can raise the probability of high-fitness genomes. But the law makes no such argument; rather, it takes the improbability as a premise.
Search for a Search
I agree with Felsenstein on one point. Conservation of Information does not logically show that a designer would have to intervene after the start of the universe. Section 6.6 of No Free Lunch discusses this and states that such front-loaded design is a logical possibility. Dembski also discusses a number of reasons that he thinks it unlikely. But the argument makes no attempt to completely rule out the possibility of a front-loaded universe.
However, it does show that natural selection is not an adequate explanation for life. Search processes, to succeed, depend on external information, such as a useful fitness landscape. Credit for biological life cannot go to natural selection, but must go to whatever information sources natural selection is exploiting. To invoke natural selection as the explanation for life, we must also explain why the fitness landscape is useful to the evolutionary process. While conservation of information does not rule out natural selection, it points to a hole in the theory of evolution.
Felsenstein suggests that the properties of physics predict smoother-than-random fitness surfaces. However, it is easy to devise fitness surfaces on which evolution performs terribly despite their smoothness. For example, suppose that within a given fold, all proteins exhibit a smooth fitness surface, while outside any fold proteins exhibit no functionality at all, giving a trivially smooth (flat) surface. Suppose further that islands of folding proteins are rare and distributed evenly throughout sequence space. Despite the smoothness of the fitness surface, evolution will perform terribly on it.
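The sort of landscape described above can be sketched in a few lines. This is a toy construction of my own, with a single "island" standing in for the rare folding regions, not a model taken from either side of the exchange:

```python
import random

# Toy "rare smooth islands" landscape. Sequence space is bit strings of
# length L. Fitness slopes smoothly toward a target inside a small
# island and is flat zero everywhere else, so selection gets no guidance
# until blind drift happens to land inside the island.

random.seed(0)
L = 40
ISLAND_RADIUS = 3
target = [random.randrange(2) for _ in range(L)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def fitness(genome):
    d = hamming(genome, target)
    # Smooth slope inside the island, flat zero outside it.
    return (ISLAND_RADIUS - d + 1) if d <= ISLAND_RADIUS else 0

def hill_climb(steps=10_000):
    genome = [random.randrange(2) for _ in range(L)]
    for _ in range(steps):
        child = genome[:]
        child[random.randrange(L)] ^= 1        # single point mutation
        if fitness(child) >= fitness(genome):  # keep if no worse
            genome = child
    return fitness(genome)

# The island covers only about 1e-8 of the 2**40 space, so the climber
# almost surely finishes with fitness 0 despite the smooth surface.
print(hill_climb())
```

The surface is everywhere smooth, yet off the island selection reduces to blind drift; the climber's success depends on the island being findable, which is exactly the external information at issue.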
Biological life clearly needs to be explained. Conservation of information does not exclude the possibility of a Darwinian process as the explanation. However, it does pose a challenge to Darwinian evolution as being incomplete. Darwinian evolution does not satisfactorily explain the information in the genome. It depends on an explanation that does not yet exist. Until such an explanation exists and is tested, Darwinian evolution does not explain biological life.
Closing Thoughts
While the more recent work of the Evolutionary Informatics Lab has improved and developed Dembski’s earlier work, the original work was not fundamentally flawed. Felsenstein’s objections to that work rest on a misunderstanding of what Dembski claimed. His grasp of Conservation of Information is better, but his objections fail to comprehend the nature of the problem it poses for Darwinian evolution.
Winston Ewert is Research Assistant at the Evolutionary Informatics Lab.
