
More on "Complexity by Subtraction": You Think They’re Kidding? They’re Not.

As David Klinghoffer noted earlier, a new alternative Darwinian hypothesis has surfaced to answer the argument for intelligent design from irreducible complexity. Darwinists still intensely feel the sting of Michael Behe’s book Darwin’s Black Box (1996), which introduced that term, even as they routinely deny it. In a nutshell, the new theory goes like this: Things started complex, with complexity emerging somehow through a process of, well, emergence. Afterward, things got simpler. Darwinian evolution winnowed it down into irreducibly complex molecular machines like the cilium and flagellum.

That proposal was put forward by two evolutionists, Daniel W. McShea of Duke University and Wim Hordijk of Switzerland. Let’s take a closer look at their paper.

The bright idea is the subject of their paper, “Complexity by Subtraction,” published in Evolutionary Biology. A draft of the paper is available free online. Under “Acknowledgements” at the end of the paper, the authors reveal the genesis of their inspiration:

The main ideas described in this paper originated at a catalysis meeting at the National Evolutionary Synthesis Center (NESCent) in Durham, NC, USA. They were developed further and finalized into the current paper during a subsequent short-term research visit of WH [Wim Hordijk] at, and supported by, NESCent. We thank Robert Brandon for suggesting the apt and evocative phrase “complexity by subtraction.” (Emphasis added.)

NESCent is funded by the National Science Foundation — American taxpayer dollars at work in a recession. You can only wonder what was said at that “catalysis meeting.” Maybe it was something like, “Nobody has satisfactorily answered the irreducible complexity argument with Darwinian gradualism. What are we going to do? We need alternatives. Think of SOMETHING! Well, we’re not getting anywhere with addition. How about subtraction?”

The Challenge
The authors are well aware of the challenge set by intelligent design. It’s rare for Darwinists to cite the work of ID advocates: Behe is not mentioned, and the famous phrase “irreducible complexity” appears only in scare quotes. Dembski gets a reference for Debating Design, the 2004 book he co-edited with Michael Ruse; no more recent ID work is cited. There’s no doubt, however, that they have ID in mind.

They explain that their purpose is not to do away with the standard Darwinian model that proposes bottom-up gradualism. But why offer alternatives if that one works so well?

Darwin’s answer has been a model for evolutionists answering modern challenges to evolution, from the argument from design to the problem of “irreducible complexity.” For example, skeptics have charged that the bacterial flagellum is so complex — consisting as it does of so many interdependent parts — that it could not possibly have arisen by incremental addition. Intermediates would not have been adaptive, the complaint goes. They would not have been preserved by natural selection. Half a flagellum does not propel a bacterium. The Darwinian reply has been to argue that intermediates could indeed have been adaptive. And the modern argument adds to Darwin’s tactic the possibility of exaptation, or change of function. The intermediates in the incremental build-up to the bacterial flagellum could have functioned for something other than propulsion.
The Darwinian route to complexity works, and indeed it could be right — right in the sense that it could correctly describe the most common route by which complex structures arise in evolution. But there are other possible routes. Here we show how complexity could arise, not by incremental addition but by incremental subtraction. We offer an evolutionary logic in which function arises in structures that are already complex, sometimes more complex than they need to be. Natural selection then favors a reduction in the complexity of these structures. They lose parts, to produce structures that are still functional, sometimes improvements, and often still sporting considerable residual complexity. There is nothing undarwinian about this route. It relies heavily on the principle of natural selection.

Astonishingly, McShea and Hordijk argue in their paper that the original bacterial flagellum must have started out even more complex than what we see today. Over time, natural selection winnowed it down to just the essential parts. That’s why it only appears to be irreducibly complex!

The Contestants
Hordijk is a computer programmer at SmartAnalytiX.com in Lausanne. McShea is an associate professor of biology at Duke. He’s interested in the emotions of animals (like whales) and the emergence of complexity (see McShea Lab). Robert Brandon collaborated with McShea on a book called Biology’s First Law (2010).
What is that “first law”? It’s not Pasteur’s law of biogenesis, cell theory, the germ theory of disease, or even Darwin’s law of natural selection. It’s something new they came up with: the “Zero-Force Evolutionary Law” or ZFEL. Judging from the eight references to his own work in the paper, McShea must have been working on this since 2000 at least.

First Pitch: Complexity Redefined
McShea explains his “zero-force evolutionary law” or ZFEL in the paper:

In any system with reproduction and heritable variation, the expectation in the absence of opposing forces or limiting constraints is increasing complexity. The reason is that parts in a system will tend, on average, to accumulate variation and therefore tend to become more different from each other. And to the extent that variation is or becomes discontinuous, the number of part types is also expected to increase. The underlying principle is what McShea and Brandon (2010) call the zero-force evolutionary law (ZFEL).

In two words: clutter accumulates. That’s what McShea means by complexity in its “pure” sense. (He’ll get to irreducible complexity later.) He just means that reproducing systems tend to spontaneously vary, adding parts to the mess, with “zero force” involved. (Natural selection, he believes, is a force that can counteract the ZFEL.) Parts accumulate whether or not they do anything. He doesn’t want to “contaminate” clutter with function just yet.
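To see how little the “zero force” claim amounts to, here is a minimal sketch in Python — our own toy illustration, not McShea’s model, with every name and parameter ours — in which copies of a part pick up random heritable variation under no selection at all, and the count of distinct part “types” (McShea’s “pure” complexity) drifts upward:

```python
import random

# Toy illustration of the ZFEL intuition (our sketch, not McShea's model):
# parts accumulate random heritable variation with no selection, and the
# number of distinct part "types" tends to rise.

def zfel_drift(n_parts=20, n_generations=50, mutation_rate=0.1):
    """Each part is a small integer 'type'; mutation nudges it at random."""
    parts = [0] * n_parts                   # one undifferentiated type at the start
    trajectory = []
    for _ in range(n_generations):
        parts = [p + random.choice((-1, 1)) if random.random() < mutation_rate
                 else p
                 for p in parts]
        trajectory.append(len(set(parts)))  # "pure" complexity: count of part types
    return trajectory

print(zfel_drift())  # the type count tends upward, with zero selective force
```

On average the count climbs, precisely because nothing stops it. That much is uncontroversial; the question is what it buys.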

Notice that complexity as number of part types includes no notion of function. This is complexity in what might be called its “pure” sense (McShea and Brandon, 2010), uncontaminated with any consideration of the degree of adaptedness, sophistication, or function. It is not that functionality is unimportant. On the contrary, in studies of the evolution of complexity, a central question has to do with a possible connection between complexity and functionality. (Indeed, it is central in this one.) Rather, it is that in order to investigate that connection, it is essential to keep the concepts separate.

McShea’s first law of biology almost sounds like a form of the Second Law of Thermodynamics: parts change and spread out in random ways. Look at all the cell types that emerged from the first unicellular organism, he suggests. Even non-reproducing systems with “memory” show complexity: look at all the craters on the moon. It’s simple:

Complexity is easy. It is spontaneous. No special mechanism beyond the simple tendency for parts to become different from each other is needed to account for it. No selective advantage to complexity needs to be invoked. Of course, selection can favor complexity, in which case one would expect differentiation to occur even more quickly, but a rapid rise in complexity is not, all by itself, evidence for a selective advantage. Of course, selection is necessary to explain adaptive complexity, to explain why a system with many part types functions, why it does something. But not to explain pure complexity, not to explain the existence of many part types. (Italics in original.)

There’s the challenge. The parts can’t just sit there; they have to do something. How are McShea and Hordijk going to connect the clutter of parts to some useful function? It’s not automatic: “there is no reason to think that their initial state — characterized by high levels of differentiation — is more than minimally functional.”
Wait a minute: where did that minimal function sneak in? McShea snatched it out of Behe’s hand. Then he grabs the ball he just stole and runs with it:

What there is evidence for, and what we draw attention to, is the reduction in complexity that followed, apparently from selection for improved function, which in turn seems to have required simplification. The resulting structure still has considerable residual complexity. But that complexity was arrived at not by accumulation, not by a build-up from a simple starting condition, not by addition. Rather it was produced by reduction, by building down from an even-more-complex starting condition, by subtraction.

Computers to the Rescue
McShea’s whole story to this point is based on sleight of hand. He snuck in “minimal function” from behind his back, then assumed Darwinism selected that contraband for “improved function.” But all that clutter was in the way, so Darwin subtracted the clutter, leaving behind elegant systems of wondrous complexity. Evidence, please?
Here’s where Hordijk comes in. He produces a whiz-bang presentation about cellular automata subjected to evolutionary (genetic) algorithms that supposedly illustrates the complexity-by-subtraction hypothesis. Workers in the field of evolutionary informatics know that you can get computerized organisms to solve problems and do calculations by “evolution,” provided that intelligent programmers pull the strings. For instance, give digital organisms a fitness function, reward the winners, and turn them loose, and Eureka! A solution “emerges” by selection. Digital organisms can take a row of black and white digital marbles and turn them into just black ones, or all white ones, or a 50-50 mix, even if each one is only allowed to contact its three nearest neighbors. If they know the goal, and are rewarded for approaching it, the solution arrives as an “emergent property” of the evolutionary algorithm, unforeseen by the programmers.
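For readers who want concreteness, here is a minimal sketch of the genre — our own reconstruction, not Hordijk’s code, with every parameter illustrative. A one-dimensional binary cellular automaton, each cell seeing only a few nearest neighbors, is evolved by a genetic algorithm whose fitness function rewards settling to all-black when black is the starting majority, and all-white otherwise:

```python
import random

# Density-classification sketch (our reconstruction, not Hordijk's code).
RADIUS = 3                    # cells see 3 neighbors on each side
NBHD = 2 * RADIUS + 1         # neighborhood width: 7 cells
N_CELLS = 29                  # odd ring size, so "majority" is well defined
N_STEPS = 60                  # steps the CA may run before being judged

def step(state, rule):
    """Apply the rule table once, with periodic (ring) boundaries."""
    n = len(state)
    out = []
    for i in range(n):
        idx = 0
        for j in range(-RADIUS, RADIUS + 1):
            idx = (idx << 1) | state[(i + j) % n]
        out.append(rule[idx])
    return out

def verdict(rule, state):
    """Run the CA; report 1 (all black), 0 (all white), or None (undecided)."""
    for _ in range(N_STEPS):
        state = step(state, rule)
    if sum(state) == len(state):
        return 1
    if sum(state) == 0:
        return 0
    return None

def fitness(rule, n_trials=20):
    """Fraction of random starting rows classified correctly.
    This reward signal IS the goal, and the programmer supplies it."""
    correct = 0
    for _ in range(n_trials):
        row = [random.randint(0, 1) for _ in range(N_CELLS)]
        majority = 1 if 2 * sum(row) > N_CELLS else 0
        if verdict(rule, row) == majority:
            correct += 1
    return correct / n_trials

def evolve(pop_size=16, generations=10, mutation_rate=0.02):
    """Toy genetic algorithm: keep the fittest rule tables, mutate to refill."""
    pop = [[random.randint(0, 1) for _ in range(2 ** NBHD)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]
        pop = elite + [[b if random.random() > mutation_rate else 1 - b
                        for b in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    pop.sort(key=fitness, reverse=True)
    return pop[0]

best = evolve()
print("fitness of best evolved rule:", fitness(best))
```

Notice where the goal lives: in the fitness function the programmer wrote. The “emergence” is bought and paid for up front.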

Too bad Hordijk didn’t read No Free Lunch (Dembski, 2002). There, Dembski used the No Free Lunch theorems to show that, averaged over all possible fitness landscapes, no evolutionary algorithm is superior to blind search. One cannot sneak extra information, like rewards and goals, into the algorithm without abandoning Darwinism and employing intelligent design. Hordijk’s contribution, therefore, collapses into irrelevance.
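The flavor of the theorems — due originally to Wolpert and Macready, and covering adaptive searchers as well — can be seen in a tiny computation of our own devising: average the cost of finding a maximum over every possible fitness landscape on a toy search space, and all fixed search orders perform identically.

```python
import itertools

# No Free Lunch in miniature (our illustration): averaged over ALL possible
# fitness landscapes on a tiny search space, every fixed search order takes
# the same average number of probes to find a maximum.

POINTS = range(4)            # a four-point search space
VALUES = (0, 1)              # binary "fitness" values

def probes_to_find_max(order, landscape):
    """Probes a fixed search order spends before hitting a maximal point."""
    best = max(landscape)
    for i, x in enumerate(order, start=1):
        if landscape[x] == best:
            return i

def average_over_all_landscapes(order):
    """Average search cost over every possible landscape."""
    costs = [probes_to_find_max(order, landscape)
             for landscape in itertools.product(VALUES, repeat=len(POINTS))]
    return sum(costs) / len(costs)

print(average_over_all_landscapes([0, 1, 2, 3]))  # plodding left-to-right
print(average_over_all_landscapes([3, 1, 0, 2]))  # a scrambled order
# Both averages are identical: no search order beats another once you
# average over all landscapes.
```

Superior performance on a particular landscape has to come from somewhere, and that somewhere is the information the programmer builds in.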

One more well-worn analogy made it into the paper. A stone arch looks irreducibly complex, doesn’t it? Well, the arch may have been there all along in a pile of rocks. By the subtraction of certain rocks, through weathering or other forces, the arch manifests itself. This is like Michelangelo’s famous remark that all you have to do to sculpt David is chip away the parts that don’t look like him. But who is the sculptor in the mindless, intelligence-free world of Darwinian evolution? (See “Behe’s Critics’ Scaffolding Falls Down.”) Why, it’s a hidden engineer: “The engineer’s job, then, is not to build an arch out of stones but to remove the excess, the stones that do not participate in the already existing arch,” the authors say. How picturing an intelligent engineer supports their proposal, and fends off the challenge from ID, is anyone’s guess.

How about a biological example? McShea spends some time contemplating vertebrate skulls. The earliest fish skulls, he says, had lots of parts. Over time, selection winnowed the parts, suturing some together and getting rid of extraneous skull bones. Not quite a bacterial flagellum, but hey. He even argues for another “biological law,” that the number of parts decreases over time, citing Sidor’s 2001 “skull simplification metric” (SSM) for support. Some literary-sounding similes couldn’t hurt, could they? “It is as though a strong selective wind were blowing to the right throughout the SSM space, carrying the group like a plume of smoke toward lower complexity.” Evolution goes from high complexity to low complexity! Therefore: the flagellum.
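The metric itself is no great mystery. As a toy stand-in — our invention, not Sidor’s published formula, with hypothetical taxa and bone lists — a skull’s “complexity” can be scored as its count of distinct bone types:

```python
# Toy skull-simplification score (our invention, NOT Sidor's metric):
# count the distinct bone types per skull. Taxa and bones are hypothetical.
lineage = [
    ("ancestral form",    ["frontal", "parietal", "postparietal", "tabular",
                           "supratemporal", "squamosal", "quadratojugal"]),
    ("intermediate form", ["frontal", "parietal", "supratemporal",
                           "squamosal", "quadratojugal"]),
    ("derived form",      ["frontal", "parietal", "squamosal"]),
]
for taxon, bones in lineage:
    print(f"{taxon}: {len(set(bones))} bone types")
# The score falls down the lineage; "complexity" here is nothing more
# than a part count, which is the very point at issue.
```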

Clearing the Air
For a scientific paper, this one is remarkably free of logical and evidential rigor. It’s essentially a collection of imaginative suppositions, propped up by weak examples that are irrelevant to the issue at hand: can the origin of the bacterial flagellum be explained by undirected natural processes? What does counting skull bones have to do with it? What do cellular automata programmed by intelligent designers have to do with it?
The authors admit their notion is merely a proposal for others to flesh out. They end with two graphs contrasting Darwinian gradualism with their “complexity by subtraction” model. Gradualism is like Dawkins’s Mount Improbable: selection ascends it by small steps. In their model, complexity rises rapidly to a peak, drops as parts are subtracted, then levels off. Both models are thoroughly Darwinian in the sense that they both start from zero; no intelligence allowed.

We conclude by noting a perhaps-obvious consequence of the complexity-by-subtraction view and by posing some questions, rhetorically, in the hope of inspiring others to pursue answers. The consequence is that, if complexity-by-subtraction is the rule in evolution, then the complexity of functional biological devices is merely residual. And that residual complexity is to some degree a secondary effect of the route taken, not necessarily favored in its own right. Indeed, what is favored is streamlined simplicity. If functional structures are complex, it may be in part because they start that way, because initial complexity is easy.

“It may be.” Note how the word “could” appears 17 times in this imaginative proposal.

Bottom Line
Advocates of intelligent design can take heart at this paper. That scientists at the NSF-funded National Evolutionary Synthesis Center would seriously entertain this absurd hypothesis is a sign of general desperation. Darwinists know what they’re up against, they recognize the power of the challenge posed by ID, and they have no rigorous answers.

Image: Wikicommons.

Evolution News
