Editor’s note: This is Part 1 in a series. Read Part 2 here.
In a public square where the case for intelligent design is typically mocked as unworthy of serious engagement, it’s refreshing to find an exception to this disappointing rule. This past June provided such an exception, as the channel Moot Points hosted a conversation between Stephen Meyer and philosopher James Croft to discuss Meyer’s Return of the God Hypothesis. Michael Shermer likewise set a good-faith example by hosting Meyer on his podcast. Could it be that this signals a broader shift in the landscape of Christian-atheist dialogue? Good news for theistic ID proponents, if so!
A History of Engagement
Dr. Croft is Brit-born but St. Louis-based, where he leads the local Ethical Society and busies himself with humanist political activism. However, he has a history of interest and engagement with ID, having previously debated philosopher-physicist David Glass on the Unbelievable? show in 2015. Although he believes philosophical debates ultimately have little bearing on the practical problems raised by our day’s pressing issues, he still enjoys occasionally wearing his “philosophy PhD hat” in dialogues like this. He is also a polished speaker and presenter, which made him a worthy opponent for Meyer. Unfortunately, technical difficulties plagued what might otherwise have been a smoother dialogue.
Croft ultimately had the worse of the argument on substance, as I intend to show over several forthcoming posts. Low-profile and low-tech as it was, this exchange, followed up by Croft’s helpfully organized Substack summary, brought out a surprising number of subtle points that are worth addressing, even belatedly. So, let’s get into it.
A Delicate Distinction
Croft begins his Substack by explaining what he believes Meyer is not arguing. Meyer is not arguing that we can “prove” design, or, equivalently, “prove” the impossibility of natural causes. In other words, the form of Meyer’s argument is not deductive. Rather, his inference to the best explanation (IBE) is abductive.
By contrast, Croft says, ID pioneer William Dembski does attempt in his books to argue that we are “logically required to conclude that [complex specified information] was not the result of an unguided process.” As Croft frames the argument, Dembski is not just saying that unguided processes are less likely than design, he is attempting to reason his way mathematically to the conclusion that they decidedly can’t be a sufficient cause. He believes it’s important to draw out this distinction between the two approaches, since he sees them being conflated.
This is a delicate point. Is it fair? A little mathematical background may be helpful here: In probabilistic terms, a logically impossible event is an event with probability zero. Now, technically, Dembski does not conclude that chance causes have probability zero. However, he does conclude their probability is as close to zero as makes no difference: to make this precise, a number at most 10^-150, i.e., a decimal point followed by 149 zeroes before the first non-zero digit, thus falling below what Dembski calls the “universal probability bound.” It’s not that events that improbable never happen. It’s just that they aren’t specified events.
Granted, Dembski has been challenged (even by ID allies such as David Berlinski) for, as the critics see it, leaving the notion of “specification” under-described. I will not dive into that debate now. But the gist of Dembski’s argument is that specified events of sufficiently low probability, i.e., below the universal bound, do not occur by chance. Functionally, one treats events below that cut-off as though they had probability zero.
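The cut-off logic described above can be sketched numerically. The bound itself (10^-150) is Dembski’s; the function name, the sentence example, and the way “specification” is reduced to a boolean flag here are my own illustrative simplifications, not his formal apparatus:

```python
# Rough numerical sketch of the "universal probability bound" cut-off logic.
# The bound (10^-150) is Dembski's; the function name and example event are
# illustrative simplifications, not his notation.

UNIVERSAL_PROBABILITY_BOUND = 1e-150

def chance_is_a_live_option(probability, is_specified):
    """Treat specified events below the bound as effectively impossible by chance."""
    if not is_specified:
        return True  # improbable but unspecified events happen all the time
    return probability >= UNIVERSAL_PROBABILITY_BOUND

# A 150-character string typed at random from 27 symbols (26 letters plus a
# space) has probability 27^-150, roughly 10^-215, far below the bound:
p = 27.0 ** -150
print(chance_is_a_live_option(p, is_specified=True))   # False
print(chance_is_a_live_option(p, is_specified=False))  # True
```

The point of the second call is the one made above: the bound rules out chance only for specified events, not for improbable events as such.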
So, while Croft’s wording fudges things a bit, at the end of the day I think he is more or less right to draw this contrast. Dembski’s statistical “knockout” approach to the design question does indeed have a different flavor from Meyer’s inference to the best explanation. Though, whether or not that approach was successful, Croft’s passing credentialist hand-waving that the idea of complex specified information has been “totally rejected by the mainstream academic community” certainly doesn’t advance the discussion. Still, it’s not unreasonable for him to distinguish Dembski and Meyer, and in fact it’s helpful to note the differences in their approaches.
Meyer’s Argument Summarized
With this out of the way, Croft next distills the flow of Meyer’s argument down to a few key points. He calls this a “syllogism,” though it’s not properly a syllogism, because it’s not a deductively valid argument (in the sense that the conclusion necessarily follows from its premises). But that’s a small point. Here’s how he lays it out:
1. Current scientific theories are unable to explain, and are unlikely to explain, certain observed phenomena, such as the origin of life.
2. All these phenomena exhibit characteristics which are also and only found in phenomena we know to be the result of intelligent agency. Therefore,
3. These phenomena were most likely also the result of intelligent agency.
This is a decent summation, although Premise 1 could be improved by the modifier “naturalistic” before “scientific theories.” Meyer doesn’t take issue with scientific theories because they are current or scientific. Rather, he is specifically contending that purely naturalistic theories are insufficient to the task of explaining our data. This point is small but worth drawing out, given that naturalism is a major area of contention in their disagreement.
Moreover, a further intermediary step could be added between 2 and 3 that would sharpen the reasonableness of the inference. The way Croft expands on this move tacitly implies that Meyer is making a simple argument from analogy:
Note that 2. appeals to a similarity between things we know to have been the product of intelligent agency (he gives examples like computer code and radio communications) and life, and suggests that because both share this particular feature, it is reasonable to think they had a similar cause.
This isn’t quite right. It’s not just that certain characteristics we observe in biological phenomena are also observed in phenomena we know to be always and only the result of intelligent agency. It’s that intelligent agency provides a good explanation for these characteristics of biological phenomena. And in fact, it is the only known explanation for that shared feature in the case of computer code, radio communications, etc. Again, a subtle point, but worth making: It’s not the mere presence of similarity, but rather the specific character of the similarity that calls for explanation.
The Magic Ratio
After this distillation, Croft makes his first key assertion: This inference is “illegitimate” unless it can be shown “that the feature is only in principle a feature of designed things.” But Meyer doesn’t need to demonstrate a claim this strong. He only needs to show that the feature is overwhelmingly more expected on design than on its absence: that the extrapolation of currently known naturalistic causes explains it poorly, while intelligence explains it well. If this can be demonstrated, then in the Bayesian probability terms Meyer employs, the probability of the feature given design (let’s call this P(F|D)) is far greater than the probability of the feature given not design (call this P(F|~D)). This means that the fraction P(F|D)/P(F|~D), which Bayesians call the “likelihood ratio,” is top-heavy.
This concludes my analysis of Croft’s introductory setup. I’ll begin unpacking the meat of his counter-argument next time. Stay tuned!