
The Displacement Fallacy: Evolution’s Shell Game

Photo credit: Holger.Ellgaard, CC BY-SA 3.0, via Wikimedia Commons.

Author’s note: Conservation of information is a central result of the intelligent design literature, even if to date it hasn’t gotten the attention it deserves. It quantifies the amount of information needed to increase the probability of finding a needle in a haystack so that the needle can actually be found. The upshot of conservation of information is that the information needed to find a needle in a haystack in turn requires finding another needle in a haystack, implying there is no free lunch in search. I just wrote up a full account of conservation of information for the journal BIO-Complexity in a paper titled “The Law of Conservation of Information: Natural Processes Only Redistribute Existing Information.” What follows is a section from that paper on the displacement fallacy. This section is accessible and helps clarify the intuitions underlying conservation of information.

The discovery of conservation of information didn’t start with proving a mathematical theorem. Rather, its discovery came from repeatedly noticing how efforts to account for the success of searches whose odds of success were seemingly hopeless always smuggled in information that wasn’t properly accounted for. One hole was filled, but only by digging another, and the new hole in turn needed to be explained. This failure of explanation became especially evident in the evolutionary literature. Darwinian approaches to biological evolution and evolutionary computing sought to explain the origin of information through some process that directly used or else mimicked natural selection. Yet rather than admit a fundamental gap in explanation, this literature simply invoked selection as a backstop to explain the origin of information, the backstop itself being exempt from further explanation.

The move to explain the origin of information by invoking some separate unexplained source of information, typically via a selection process, was so common in the evolutionary literature that it deserved its own name: displacement.1 Displacement became the tool of choice among evolutionary critics of intelligent design as they tried to invalidate the logic of the design inference, which inferred design for events both specified and improbable. Critics claimed that once natural selection came into play, it acted as a probability amplifier that removed any seeming improbability that might otherwise have made for a valid design inference. Accordingly, critics argued that seeming products of design could be explained away through evolutionary processes requiring no design.2

Improbable Products

But this attempt to invalidate the design inference was too easy. Products can be designed, but also processes that build products can be designed (compare a Tesla automobile with a Tesla factory that builds Tesla automobiles — both are designed). The design inference makes sense of improbable products. Conservation of information, through the search for a search, makes sense of improbable processes that output probable products. Making sense of displacement was a crucial step in developing a precise mathematical treatment of conservation of information.

Whereas conservation of information was a mathematically confirmed theoretical finding, displacement was an inductively confirmed empirical finding. Over and over, information supposedly created from scratch was surreptitiously introduced under the pretense that the information was already adequately explained when in fact it was merely presupposed. In effect, displacement became a special case of the fallacy of begging the question, obscuring rather than illuminating evolutionary processes.

One of the more brazen examples of displacement that I personally encountered occurred in a 2001 interview with Darwinist Eugenie Scott on Peter Robinson’s program Uncommon Knowledge. Scott and I were discussing evolution and intelligent design when Robinson raised the trope about a monkey, given enough time, producing the works of Shakespeare by randomly typing at a typewriter. Scott responded by saying that contrary to this example, where the monkey’s typing merely produces random variation, natural selection is like a technician who stands behind the monkey and whites out every mistake the monkey makes in typing Shakespeare.3 But where exactly do you find a technician who knows enough about the works of Shakespeare to white out mistakes in the typing of Shakespeare? What are the qualifications of this technician? How does the technician know what to erase? Scott never said. That’s displacement: The monkey’s success at typing Shakespeare is explained, but at the cost of leaving the technician who corrects the monkey’s typing unexplained.

About That Weasel

In his book The Blind Watchmaker, Richard Dawkins claims to show how natural selection can create information by appealing to his well-known METHINKS IT IS LIKE A WEASEL computer simulation.4 Randomly typing the 28 characters of this target phrase, each drawn from an alphabet of 26 letters plus the space, has a probability of only 1 in 27^28, or roughly 1 in 10^40, of success. In evolving METHINKS IT IS LIKE A WEASEL, Dawkins’s simulation overcame this improbability by carefully choosing a fitness landscape that assigns higher fitness to character sequences sharing more letters with the target phrase.
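The arithmetic behind these odds is easy to check in a few lines. (This sketch is in Python and is not part of the original discussion; the constants come straight from the text.)

```python
# Checking the odds quoted for pure random typing: a 28-character target
# phrase drawn from a 27-symbol alphabet (26 letters plus the space).
import math

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET_SIZE = 27

assert len(TARGET) == 28

one_in = ALPHABET_SIZE ** len(TARGET)  # 27^28 equally likely strings
print(f"odds: 1 in {one_in:.2e}")                  # about 1 in 1.2e40
print(f"log10(27^28) = {28 * math.log10(27):.2f}") # about 40.08
```

So "roughly 1 in 10^40" slightly understates the true figure of about 1 in 1.2 × 10^40.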

Essentially, in place of pure randomness, Dawkins substituted a hill-climbing algorithm with exactly one peak and with a clear way to improve fitness at any place away from the peak (smooth and increasing gradients all the way!).5 But where did this fitness landscape come from? Such a fitness landscape exists for any possible target phrase whatsoever, and not just for METHINKS IT IS LIKE A WEASEL. Dawkins explains the evolution of METHINKS IT IS LIKE A WEASEL in terms of a fitness landscape that with high probability allows for the evolution to this target phrase. Yet he leaves the fitness landscape itself unexplained.6 In so doing, he commits a displacement fallacy.7
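For concreteness, the kind of cumulative selection at issue can be sketched in a few lines. (This is a minimal Python sketch; the population size and mutation rate are illustrative assumptions, not Dawkins’s actual parameters.) Note that the fitness function has the complete target phrase built into it — which is precisely the information left unexplained.

```python
# Minimal cumulative-selection sketch. The fitness landscape is hand-built
# around the target phrase -- the "technician" standing behind the monkey.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 letters plus the space

def fitness(phrase: str) -> int:
    # Positions agreeing with the target: knowledge of the target is built in.
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase: str, rate: float = 0.05) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

def weasel(pop_size: int = 100, seed: int = 0) -> int:
    """Return the number of generations needed to reach the target."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generations = 0
    while parent != TARGET:
        generations += 1
        # Keep the parent among the candidates so fitness never decreases,
        # then select the best match: hill climbing on a single-peak landscape.
        parent = max([parent] + [mutate(parent) for _ in range(pop_size)],
                     key=fitness)
    return generations

print(weasel())  # typically converges within a few hundred generations
```

The speed of convergence comes entirely from `fitness`, which compares every candidate against the full target — remove that comparison and the search collapses back to blind chance.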

Displacement is also evident in the work of Dawkins as he shifts from computer simulations to biological evolution. Indeed, his entire book Climbing Mount Improbable can be viewed as an exercise in displacement as applied to biology.8 In that book, Dawkins compares the emergence of biological complexity to climbing a mountain. He calls it Mount Improbable because if you had to get all the way to the top in one fell swoop (that is, achieve a massive increase in biological complexity all at once), it would be highly improbable. But does Mount Improbable have to be scaled in one leap? Darwinism purports to show how Mount Improbable can be scaled in small incremental steps. Thus, according to Dawkins, Mount Improbable always has a gradual serpentine path leading to the top that can be traversed in baby steps.

But where is the verification for this claim? It could be that Mount Improbable is sheer on all sides and getting to the top via baby-steps is effectively impossible. Consequently, it is not enough to presuppose that a fitness-increasing sequence of baby steps always connects biological systems. Such a connection must be demonstrated, and to date it has not, as Michael Behe’s work on irreducible complexity shows.9 But even if such a connection could be demonstrated, what would this say about the conditions for the formation of Mount Improbable in the first place?

Mountains, after all, do not magically materialize — they have to be formed by some process of mountain formation. Of all the different ways Mount Improbable might have emerged, how many are sheer so that no gradual path to the summit exists? And how many do allow a gradual path to the summit? A Mount Improbable with gradual paths to the top may itself be improbable. Dawkins simply assumes that Mount Improbable must be such as to facilitate Darwinian evolution. But in so doing, he commits a displacement fallacy, presupposing what must be explained and justified, and thus illicitly turning a problem into its own solution.10

Examples of Displacement

In the evolutionary computing literature, examples of displacement more sophisticated than Dawkins’s WEASEL can readily be found. But the same question-begging displacement fallacy underlies all these examples. The most widely publicized instance of displacement in the evolutionary computing literature appeared in Nature back in 2003. Richard Lenski, Charles Ofria, Robert Pennock, and Christoph Adami had developed a computer simulation called Avida.11 They claimed that this simulation was able to create complex Boolean operators without any special input or knowledge. One of the co-authors, Pennock, then went further to claim that Avida decisively refuted Michael Behe’s work on irreducible complexity.12 And given that irreducible complexity is a linchpin of intelligent design, Pennock in effect claimed that Avida had also refuted intelligent design.

But in fact, as Winston Ewert and George Montañez showed by tracking the information flow through Avida, the amount of information outputted through newly formed complex Boolean operators never exceeded the amount of information inputted. Indeed, Avida was jury-rigged to produce the very complexity its creators claimed to get for free: Avida rewarded ever-increasing complexity simply for complexity’s sake and not for independent functional reasons. Other examples like Thomas Schneider’s ev, Thomas Ray’s Tierra, and David Thomas’s Steiner tree search algorithm all followed the same pattern.13 Ewert and Montañez were able to show precisely where the information supposedly created from scratch in these algorithms had in fact been embedded from the outset.14 Displacement, as their research showed, is pervasive in this literature.

The empirical work of showing displacement for these computer simulations set the stage for the theoretical work on conservation of information. These simulations, and their consistent failure to explain the origin of information, prompted an investigation into the precise numerical relation between information inputted and information outputted. Showing displacement started out as a case-by-case effort to uncover where precisely information had been smuggled into a computer simulation. Once the mathematics of conservation of information was developed, however, the need to find exactly where the information was smuggled in was no longer so important, theory stepping in where observation fell short.

The Pigeonhole Principle

Theory guaranteed that the information was smuggled in even if the evolutionary simulations became so byzantine that it was hard to follow their precise information flow. By analogy, if you have a hundred and one letters that must go into a hundred mailboxes, the pigeonhole principle of mathematics guarantees that one of the mailboxes must have more than one letter.15 Checking this empirically could be arduous if not practically impossible because of all the many possible ways that these letters could fill the mailboxes. Theory in this case comes to the rescue, guaranteeing what observation alone cannot.
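The mailbox analogy is easy to spot-check empirically, even though exhausting every possible allocation is not. (A sketch in Python; the random allocation is just one of the many ways the letters could fill the mailboxes.)

```python
# Pigeonhole spot check: however 101 letters land in 100 mailboxes,
# some mailbox must end up holding at least two letters.
import random
from collections import Counter

LETTERS, BOXES = 101, 100

for seed in range(1_000):  # many random allocations, all obeying the principle
    random.seed(seed)
    counts = Counter(random.randrange(BOXES) for _ in range(LETTERS))
    assert max(counts.values()) >= 2  # guaranteed by theory, not by sampling

print("every sampled allocation had a mailbox with two or more letters")
```

The sampling only illustrates the point; the guarantee itself comes from the theorem, since no run of checks could cover all 100^101 possible allocations.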

Displacement is a shell game. In a shell game, an operator places a small object, like a pea, under one of three cups and then rapidly shuffles the cups to confuse observers about the object’s location. Participants are invited to guess which cup hides the pea, but the game often relies on sleight of hand and misdirection to increase the likelihood that participants guess incorrectly. So long as the game is played fairly, the pea is under one cup and remains under one cup. It cannot magically materialize or dematerialize. The game can become more sophisticated by increasing the number of cups and by the operator moving the cups with greater speed and agility. But by carefully tracking the operator, it is always possible to determine where the pea started out and where it ended up. The pea here is information. To diagnose displacement is to say that the pea was always there. Conservation of information provides the underlying mathematics to demonstrate that it was indeed always there.

Notes

  1. My first serious treatment of displacement occurred in Chapter 4 of William A. Dembski, No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (Lanham, MD: Rowman & Littlefield, 2002).
  2. For an account of natural selection as a probability amplifier as well as a refutation of trying to use it to overturn the logic of the design inference, see William A. Dembski and Winston Ewert, The Design Inference: Eliminating Chance Through Small Probabilities, 2nd ed. (Seattle: Discovery Institute Press, 2023), Chapter 7.
  3. “Darwinism under the Microscope,” PBS television interview of William Dembski and Eugenie Scott by Peter Robinson for Uncommon Knowledge, filmed December 7, 2001, on the Stanford campus, with video available online at https://www.hoover.org/research/darwin-under-microscope-questioning-darwinism (last accessed December 9, 2024).
  4. Richard Dawkins, The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe Without Design (New York: Norton, 1986), 45–50.
  5. For hill climbing, see Sheldon H. Jacobson and Enver Yücesan, “Analyzing the Performance of Generalized Hill Climbing Algorithms,” Journal of Heuristics 10, no. 4 (2004): 387–405.
  6. As Stuart Kauffman puts it, “Life uses mutation, recombination, and selection. These search procedures seem to be working quite well. Your typical bat or butterfly has managed to get itself evolved and seems a rather impressive entity… Mutation, recombination, and selection only work well on certain kinds of fitness landscapes, yet most organisms are sexual, and hence use recombination, and all organisms use mutation as a search mechanism… Where did these well-wrought fitness landscapes come from, such that evolution manages to produce the fancy stuff around us?” Kauffman answers his own question: “No one knows.” Stuart A. Kauffman, Investigations (New York: Oxford University Press, 2000), 18–19.
  7. For a counter-simulation of the Dawkins WEASEL simulation, see “Weasel Ware — Evolutionary Simulation,” by Winston Ewert and George Montañez at https://www.evoinfo.org/weasel.html. This counter-simulation shows how sensitive Dawkins’s simulation is to initial inputs and how easily it is set adrift when the fitness landscape is not as neat and tidy as Dawkins’s simulation demands.
  8. Richard Dawkins, Climbing Mount Improbable (New York: Norton, 1996).
  9. See Michael J. Behe, A Mousetrap for Darwin (Seattle: Discovery Institute Press, 2020).
  10. The three previous paragraphs are drawn in part from a lecture I gave at Oxford University’s Ian Ramsey Centre on October 30, 2003 titled “Gauging Intelligent Design’s Success.” Though on faculty at Oxford, Richard Dawkins was not in attendance. The lecture is available at https://billdembski.com/documents/2003.11.Gauging_IDs_Success.pdf (last accessed December 13, 2024).
  11. Richard E. Lenski, Charles Ofria, Robert T. Pennock, and Christoph Adami, “The Evolutionary Origin of Complex Features,” Nature 423 (May 8, 2003): 139–144.
  12. Pennock, citing the 2003 Nature article, claims that “colleagues and I have experimentally demonstrated the evolution of an IC system.” IC here is “irreducibly complex.” Quoted from Robert T. Pennock, “DNA by Design? Stephen Meyer and the Return of the God Hypothesis,” in William A. Dembski and Michael Ruse, eds., Debating Design: From Darwin to DNA, 130–148 (Cambridge: Cambridge University Press, 2004), 141.
  13. For ev, see Thomas D. Schneider, “Evolution of Biological Information,” Nucleic Acids Research 28, no. 14 (2000): 2794–2799. For the best place to understand Tierra, see Thomas Ray’s website https://tomray.me/tierra. For a search algorithm purported to solve the Steiner Tree problem without the need for full prior information, see Dave Thomas, “War of the Weasels: An Evolutionary Algorithm Beats Intelligent Design,” Skeptical Inquirer 34, no. 3 (2010): 42–46 and then a follow-up by Thomas titled “Target? TARGET? We Don’t Need No Stinkin’ Target!” https://pandasthumb.org/archives/2006/07/target-target-w-1.html (last accessed December 10, 2024).
  14. See the counter-simulations by Ewert and Montañez at EvoInfo.org: contra Avida, see their “Minivida – Dissection of Avida Digital Evolution” at https://www.evoinfo.org/minivida; contra ev, see their “Ev Ware – Evolutionary Simulation” at https://www.evoinfo.org/ev (last accessed December 13, 2024). See also Robert J. Marks II, William A. Dembski, and Winston Ewert, Introduction to Evolutionary Informatics (Singapore: World Scientific Publishing, 2017), where we critique all these evolutionary simulations that purport to create novel information that exceeds their prior informational input. Dave Thomas is critiqued in this book on pages 119–120 and 241–242.
  15. Martin Aigner, Discrete Mathematics, trans. D. Kramer (Providence, RI: American Mathematical Society, 2007), 30.

Cross-posted at Bill Dembski on Substack.