###### Intelligent Design

# Measuring Surprise — A Frontier of Design Theory

The sunlight shines bright on the cold winter’s morning as you begin your trek towards the retreat. Snow covers the ground and steam from your breath rises ahead of you. Accompanying you is Bertrand, your Russell terrier, who runs ahead of you jumping in the snow. Chasing a bird, he climbs over a hill as you call after him, but he is too focused on the pursuit to heed you.

Clumsily chasing after him, you come upon a strange-looking stone protruding from one of the rock faces. Its odd shape catches your eye, as does its relatively smooth surface. There appear to be runes carved into its surface, though you aren’t sure, since you don’t recognize the symbols or know of any literate ancient cultures from the area.

You decide to leave the stone as you found it, but mark its location and pull a notepad from your backpack to sketch the stone with its symbols. Bertrand, tired from his chase, joins you and begins digging nearby, where he unearths what appears to be a piece of aged metal, again with symbols you do not recognize. The symbols differ from those carved in the rock, are more refined, and almost appear to be numeric.

Gently moving more earth, you discover a second piece of twisted metal, and you add drawings of these pieces to your sketchbook, resisting the urge to take the pieces with you. After sketching, you continue your trek towards your retreat. On arriving, you contact the local university about your discovery, helping them to locate the artifacts on the following day.

You’ve come to the retreat to study. You’ve brought several books from your office, along with a manuscript on the subject of complex specified information. As you read the manuscript, you begin applying the ideas to your discovery in the hills. What could have created the carvings?

The carvings look sustained (there are many of them) and deliberate, unlike creases created by splitting and pitting of surfaces over ages. You’re no geologist, but you are also no stranger to rock surfaces, possessing a mature mental model of the types of patterns that can be expected to appear on stone faces. The patterns are geometric but irregular, complex and without any apparent repetition, unlike other geological anomalies such as the Giant’s Causeway of Ireland.

The runes were most likely carvings, made by people in some unknown past. Could you compute some estimate of how likely a series of runes like this (or in any other symbol system) would be to appear through a process of weathering? That seems like a challenging task, but the metal pieces present perhaps a less formidable challenge, since you are almost certain they represent numbers.

You set out to discover whether you can quantify your intuition that the carvings are special, using the tool of specified complexity.

## Unlikely Yet Structurally Organized

What is specified complexity? Almost a decade before the discovery of the structure of the DNA molecule, physicist Erwin Schrödinger predicted that hereditary material must be stored in what he called an *aperiodic crystal*, stable yet without predictable repetition, since predictable repetition would greatly reduce its information-carrying capacity (Schrödinger 1944).

Starting from first principles, he reasoned that life would need an informational molecule that could take on a large number of possible states without strong bias towards any one particular state (thus making individual states *improbable*), yet needed structural stability to counteract the forces of Brownian motion within cells (thus making the molecule match a functional specification of being *structurally organized*).

This combination of unlikely objects that simultaneously match a functional specification later came to be known as *specified complexity* (Dembski 1998; Dembski 2001; Dembski 2002; Dembski 2005; Ewert, Dembski, and Marks II 2012). Specified complexity has been proposed as a signal of design (Dembski 1998; Dembski 2001; Dembski 2002). An object exhibiting specified complexity is unlikely to have been produced by the probabilistic process under which it is being measured, and it is also specified, matching some independently given pattern called a *specification*. More precisely, an object is specified to the degree that it meets some independently defined criterion that few other objects meet.

Because complex objects typically contain many parts, each of which makes the object as a whole less likely to be encountered, the improbability aspect has historically been referred to as the *complexity* of the object (though *improbability* would perhaps be more fitting). Therefore, specified complex objects are those that are both unlikely and functionally specified, often having to meet minimum thresholds in both categories.

## Quantifying Surprise

Specified complexity allows us to measure how surprising random outcomes are, in reference to some probabilistic model. But there are other ways of measuring surprise. In Shannon’s celebrated information theory (Shannon 1948), improbability alone can be used to measure the surprise of observing a particular random outcome, using the quantity of *surprisal*, which is simply the negative logarithm (base 2) of the probability of observing the outcome, namely,

-log₂ *p*(*x*)

where *x* is the observed outcome and *p(x)* is the probability of observing it under some distribution *p*. Unlikely outcomes generate large surprisal values, since they are in some sense unexpected.
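To make this concrete, here is a minimal Python sketch of the surprisal calculation (the function name and example probabilities are illustrative, not drawn from any particular source):

```python
import math

def surprisal(p: float) -> float:
    """Surprisal (self-information) of an outcome with probability p, in bits."""
    return -math.log2(p)

# A fair coin: each side has probability 1/2, giving 1 bit of surprisal.
print(surprisal(0.5))      # 1.0
# A rarer outcome, such as rolling a given face of a fair 20-sided die,
# yields a larger surprisal (log2(20), about 4.32 bits).
print(surprisal(1 / 20))
```

The less probable the outcome, the larger the surprisal, which matches the intuition that rare events are more unexpected.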

But let us consider a case where all events in a set of possible outcomes are equally very unlikely. (This can happen when you have an extremely large number of equally possible outcomes, so that each of them individually has a small chance of occurring.)

Under these conditions, asking “what is the probability that an unlikely event occurs?” yields the somewhat paradoxical answer that it is guaranteed to occur! *Some* outcome must occur, and since each of them is unlikely, an unlikely event (with large surprisal) is guaranteed to occur. Therefore, surprisal alone cannot tell us how likely we are to witness an outcome that surprises us.
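A few lines of arithmetic show the paradox (the value of N here is an arbitrary illustrative choice):

```python
# With N equally likely outcomes, each individual outcome has probability
# 1/N -- vanishingly small for large N -- yet the probability that *some*
# outcome occurs is always N * (1/N) = 1, a certainty.
N = 2 ** 100
p_each = 1 / N
print(p_each)      # ~7.9e-31: every individual outcome is unlikely
print(N * p_each)  # 1.0: an "unlikely" event is guaranteed to occur
```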

As a concrete example, consider any sequence of one hundred flips of a fair coin. Every sequence has an equal probability of occurring, giving the same surprisal for each possible sequence. Therefore a sequence of all heads has exactly the same surprisal as a randomly mixed sequence of heads and tails, even though the former is surely more surprising than the latter under a fair-coin model.
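We can check this equivalence directly. In the sketch below (sequences encoded as strings of `H` and `T`, a representation chosen for illustration), both the all-heads sequence and a random sequence come out to exactly 100 bits of surprisal:

```python
import math
import random

def surprisal_bits(sequence: str) -> float:
    """Surprisal of a specific coin-flip sequence under a fair-coin model."""
    p = 0.5 ** len(sequence)  # each flip independently has probability 1/2
    return -math.log2(p)

all_heads = "H" * 100
mixed = "".join(random.choice("HT") for _ in range(100))

# Both print 100.0: surprisal alone cannot distinguish the two sequences.
print(surprisal_bits(all_heads))
print(surprisal_bits(mixed))
```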

We need another way to capture what it means for an outcome to be special and surprising, one that would allow us to say that a sequence of all heads generated by a fair coin is surprising, but a randomly mixed sequence of heads and tails is not. Specified complexity provides a mathematical means of doing so, combining a surprisal term with a specification term, allowing us to determine precisely how surprising it is to witness an outcome of one hundred heads in a row, assuming a fair coin.
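As a rough sketch of how the two terms combine, consider a canonical-style measure of the form −log₂(r·p(x)/ν(x)), where p(x) is the outcome’s probability and ν(x) is a nonnegative specification value whose total mass over all outcomes is bounded by the constant r. The specification values below are invented for illustration, not taken from any published analysis:

```python
import math

def specified_complexity(p_x: float, nu_x: float, r: float) -> float:
    """Specified complexity in bits, canonical-style form:
    SC(x) = -log2(r * p(x) / nu(x)).
    Large values require the outcome to be both unlikely under p
    and highly specified under nu."""
    return -math.log2(r * p_x / nu_x)

p = 0.5 ** 100  # probability of any specific 100-flip sequence

# Assumed toy specification: all-heads carries the full specification
# mass (nu = 1), while a generic mixed sequence carries almost none.
print(specified_complexity(p, nu_x=1.0, r=1.0))        # 100.0 bits: surprising
print(specified_complexity(p, nu_x=2 ** -100, r=1.0))  # 0.0 bits: not surprising
```

Under this toy model, the surprisal term is identical for both sequences; it is the specification term that separates the meaningful pattern from the generic one.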

## Diving into Specified Complexity

How does specified complexity allow us to do this? A recently published paper in *BIO-Complexity*, “A Unified Model of Complex Specified Information” by machine learning researcher George D. Montañez, offers some insight. For a reader-friendly summary, see “*BIO-Complexity* Article Offers an Objective Method for Weighing Darwinian Explanations.”

The paper, which is mathematical in nature, ties together several existing models of specified complexity and introduces a canonical form for which objects exhibiting large specified complexity values are unlikely (surprising!) under any given distribution. Montañez builds on much previous work, fleshing out the equivalence between specified complexity testing and p-value hypothesis testing introduced by A. Milosavljević (Milosavljević 1993; Milosavljević 1995) and later William Dembski (Dembski 2005), and giving bounds on the probability of encountering large specified complexity values for existing specified complexity models.

The paper defines new canonical specified complexity model variants, and gives a recipe for creating specified complexity models using specification functions of your choice. It lays out a framework for reasoning quantitatively about what it means for a probabilistic outcome to be genuinely surprising, and explores what implications this has for technology and for explanations of observed outcomes.

We’ll have more to say about this important paper, which represents a frontier for the theory of intelligent design. Stay tuned.

## Bibliography

Dembski, William A. 1998. *The Design Inference: Eliminating Chance Through Small Probabilities*. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511570643.

———. 2001. “Detecting Design by Eliminating Chance: A Response to Robin Collins.” *Christian Scholar’s Review* 30 (3): 343–58.

———. 2002. *No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence*. Lanham: Rowman & Littlefield.

———. 2005. “Specification: The Pattern That Signifies Intelligence.” *Philosophia Christi* 7 (2): 299–343. https://doi.org/10.5840/pc20057230.

Ewert, Winston, William A. Dembski, and Robert J. Marks II. 2012. “Algorithmic Specified Complexity.” *Engineering and Metaphysics*. https://doi.org/10.33014/isbn.0975283863.7.

Milosavljević, Aleksandar. 1993. “Discovering Sequence Similarity by the Algorithmic Significance Method.” In *ISMB*, 284–91.

———. 1995. “Discovering Dependencies via Algorithmic Mutual Information: A Case Study in DNA Sequence Comparisons.” *Machine Learning* 21 (1–2): 35–50.

Schrödinger, Erwin. 1944. *What Is Life? The Physical Aspect of the Living Cell and Mind*. Cambridge: Cambridge University Press.

Shannon, Claude Elwood. 1948. “A Mathematical Theory of Communication.” *Bell System Technical Journal* 27 (3): 379–423.

*Photo credit: A stone carved with ancient runes, by **Lindy Buckley**, via **Flickr** (cropped).*