
BIO-Complexity Article Offers an Objective Method for Weighing Darwinian Explanations


George D. Montañez has a new paper out in the journal BIO-Complexity, “A Unified Model of Complex Specified Information.” Skeptics of evolutionary theory have argued that proposed Darwinian processes don’t rise to the level of plausible detailed explanations for the emergence of complex life. The paper by Dr. Montañez, a computer scientist at Harvey Mudd College, offers an objective way of demonstrating this directly.

What Does It Mean for ID?

The Abstract summarizes:

A mathematical theory of complex specified information is introduced which unifies several prior methods of computing specified complexity. Similar to how the exponential family of probability distributions have dissimilar surface forms yet share a common underlying mathematical identity, we define a model that allows us to cast Dembski’s semiotic specified complexity, Ewert et al.’s algorithmic specified complexity, Hazen et al.’s functional information, and Behe’s irreducible complexity into a common mathematical form. Adding additional constraints, we introduce canonical specified complexity models, for which one-sided conservation bounds are given, showing that large specified complexity values are unlikely under any given continuous or discrete distribution and that canonical models can be used to form statistical hypothesis tests, by bounding tail probabilities for arbitrary distributions. 

There is some heavy mathematics in the article, and it undoubtedly needs a bit of translation for the lay reader. What does it mean for the intelligent design community? Here’s a start.

In (Relative) Layman’s Terms

The paper gives a detailed mathematical theory of what specified complexity is, what it does, and how it can be used to rule out proposed explanations. It defines a common and rigorous foundation for all specified complexity models. It demonstrates that high levels of specified complexity must be rare, and it shows how to create new specified complexity models for domains of interest (such as converting irreducible complexity into a quantitative form of specified complexity, as shown in the paper).
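
To make this concrete, here is the unifying template, in notation adapted from the paper (a sketch; see the paper for the precise definitions). Every model in the family scores an outcome x against a chance hypothesis p, using a nonnegative specification function ν and a positive scaling constant r:

```latex
% Common form of specified complexity (notation adapted from the paper):
%   p(x)  -- probability of outcome x under the proposed chance hypothesis
%   nu(x) -- nonnegative specification function (how "special" x is)
%   r     -- positive scaling constant
SC(x) = -\log_2 \frac{r\, p(x)}{\nu(x)}
```

Different choices of ν and r recover the different models named in the abstract; the canonical models add the constraint that the total specification mass, the sum of ν(x) over all outcomes, is at most r.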

Given the explicit connection made between specified complexity and statistical hypothesis tests (by way of p-values), we can reverse the relationship: any proposed explanation must exceed a minimum probabilistic baseline before it can be considered plausible. This sets an objective quantitative bar (relative to a simple uniform distribution) by which we can measure the causal adequacy of any naturalistic explanation. As Dr. Montañez notes in the paper, meeting this requirement “is the entry fee for a probabilistic mechanism even to enter the tournament of competing explanations.”
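
Why an “entry fee”? A short derivation (a sketch, using the canonical constraint from above that the total specification mass is at most r) shows that no chance hypothesis can make large specified complexity values probable. If SC(x) ≥ b, then p(x) ≤ 2^(-b)·ν(x)/r, and summing over all such outcomes gives:

```latex
\Pr\left[\, SC(X) \ge b \,\right]
  \;\le\; \frac{2^{-b}}{r} \sum_{x} \nu(x)
  \;\le\; 2^{-b}
```

So a mechanism that is supposed to account for b bits of specified complexity assigns that observation probability at most 2^(-b), which is what lets specified complexity function like a p-value.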

The Fields Are Ripe

Here are ten takeaways from the paper:

1) The paper gives a general mathematical definition of specified complexity, of which all previously examined specified complexity models can be shown to be special cases. This is called “common form” in the paper.

2) Adding a single constraint to common form models yields what Montañez calls “canonical form,” a form with important properties, such as functioning as a statistical hypothesis test statistic, much like a p-value. It is shown that every p-value hypothesis test has an equivalent canonical specified complexity hypothesis test, and that every canonical specified complexity model can be used to bound tail probabilities in exactly the way a p-value can (and in some cases where p-values cannot be used). Montañez gives examples of how to do specified complexity hypothesis testing, such as a table of specified complexity cutoff values for desired alpha rejection levels (see the Python sketch after this list).

3) Because canonical form is now defined, it puts previous research in a clearer light. For example, a particular form of specified complexity (the “algorithmic significance method”; Milosavljević, 1993) has been used in machine learning and bioinformatics for over 25 years! This was brought to light by an ID critic, who pointed out its similarity to algorithmic specified complexity. Now that we have canonical specified complexity, however, we can show that it isn’t just similar: it is an actual mathematical specified complexity model. Specified complexity has therefore already found direct applications outside of ID for over a quarter century.

4) You can create your own specified complexity models! Using any specification function you can come up with (one that is nonnegative and applied to a finite domain), the paper gives a recipe for constructing new specified complexity models; the recipe is sketched in code after this list. For example, if you think functional coherence is important, you can define Coherence Specified Complexity using the recipe given.

5) The paper defines a quantitative model of irreducible complexity, called “quantitative irreducible complexity,” as a canonical model.

6) The paper shows that Robert Hazen’s functional information (Hazen et al., 2007) and William Dembski’s semiotic specified complexity (Dembski, 2005) are common form models, and defines canonical variants of both.

7) Montañez provides many mathematical results (implications) for common form and canonical models.

8) Winston Ewert et al.’s algorithmic specified complexity (Ewert et al., 2012) is shown to be a canonical model, so the paper provides new theoretical results for that model.

9) The paper gives an intuitive explanation for why high levels of specified complexity must be rare. Basically, there is a conservation property on “specification mass.” This means that when your specification function does not allow everything to be highly specified (namely, it actually measures something nontrivial), then as a direct result any specified complexity model using that specification function can only have large values for a small subset of possibilities.

10) Rather than being a dead-end for study, specified complexity represents a rich area for theoretical and empirical exploration, with enough “low-hanging fruit” for a single researcher to produce over 20 pages of mathematical results using such models. 
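
For readers who want to experiment, here is a minimal Python sketch of the recipe from takeaways 2 and 4, assuming the canonical form SC(x) = -log2(r·p(x)/ν(x)) sketched earlier; the function names and the toy specification function are illustrations, not code from the paper:

```python
import math

def make_canonical_sc(spec, domain, p):
    """Build SC(x) = -log2(r * p(x) / nu(x)) from any nonnegative
    specification function `spec` on a finite `domain`, under a
    chance hypothesis `p`. Setting r to the total specification mass
    makes the model canonical (sum of nu is at most r) by construction."""
    r = sum(spec(x) for x in domain)
    def sc(x):
        nu = spec(x)
        if nu == 0:
            return float("-inf")  # wholly unspecified outcomes score nothing
        return -math.log2(r * p(x) / nu)
    return sc

def sc_cutoff(alpha):
    """Cutoff for a level-alpha test: P(SC(X) >= -log2(alpha)) <= alpha."""
    return -math.log2(alpha)

# Toy usage: 2**16 equally likely outcomes, exactly one of them specified.
n = 2 ** 16
sc = make_canonical_sc(lambda x: 1.0 if x == 0 else 0.0, range(n), lambda x: 1 / n)

print(sc(0))            # 16.0 bits of specified complexity for the special outcome
print(sc_cutoff(0.01))  # ~6.64 bits; sc(0) exceeds it, so chance is rejected at alpha = 0.01
```

Choosing r equal to the summed specification mass is simply the easiest way to satisfy the canonical constraint; any larger r also works, at the cost of lower scores.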

In short, the laborers may be few, but the fields are ripe. Keep an eye on this space for more on this important new contribution.

Bibliography

Dembski, W. (2005). Specification: The Pattern That Signifies Intelligence. Philosophia Christi, 7(2): 299–343.

Ewert, W. et al. (2012). Algorithmic Specified Complexity. Engineering and Metaphysics. 

Hazen, R. et al. (2007). Functional information and the emergence of biocomplexity. Proceedings of the National Academy of Sciences, 104(suppl 1): 8574–8581.

Milosavljević, A. (1993). Discovering Sequence Similarity by the Algorithmic Significance Method. Proc Int Conf Intell Syst Mol Biol, pp. 284–291.

Photo credit: Reuben Teo on Unsplash.