What happens when a professor of neurosurgery who is a Darwin-skeptic visits a popular Darwinist blog? He leaves with unanswered questions. Last week Rob Crowther highlighted how Dr. Michael Egnor visited Time magazine’s science blog, where a reporter admitted his Darwinist bias and was unable to answer Egnor’s question: “how much new information can Darwinian mechanisms generate?” Egnor is professor of neurosurgery and pediatrics at the State University of New York, Stony Brook, and an award-winning brain surgeon who has been named one of New York’s best doctors by New York Magazine. Egnor recently took his questions to P.Z. Myers’ popular science blog Pharyngula, where he continues–unanswered–to press Darwinists to explain how Darwinian mechanisms produce new information.
I did a PubMed search just now. I searched for ‘measurement’, and ‘information’ and ‘random’ and ‘e coli’. There were only three articles, none of which have any bearing on my question. The first article, by Bettelheim et al, was entitled ‘The diversity of Escherichia coli serotypes and biotypes in cattle faeces’. I searched for an actual measurement of the amount of new information that a Darwinian process can generate, and I got an article on ‘cattle faeces’. I love little ironies.
Did the Darwinists respond to Egnor’s question? Most tried to explain how there can be an increase in Shannon information, but as Egnor explained, “Shannon information is not relevant to biological information.” Egnor points out: “Your example of Labbe’s paper on gene duplication is, presumably, not to be taken too seriously. If you count copies as new information, you must have a hard time with plagiarism in your classes. All that the miscreant students would have to say is ‘It’s just like gene duplication. Plagiarism is new information- you said so on your blog!'” One Darwinist tried to respond by calling him names–“EGNORamu[s].” As of now–about 3 days since Egnor’s comment–there have been only 4 responding comments, none of which explain how Darwinian mechanisms produce specified complexity. But don’t take my word for it: Read the whole thread here and decide for yourself if anyone answered Egnor. The text of Egnor’s full comment is below:
Thanks for your comments on my correspondence with Mike Lemonick, and I appreciate the comments by the folks on this blog (most of them, at least!).
I’m not an enemy of evolutionary biology. I believe in evolution as much as you do, in the sense that living things have changed over time. I just don’t think that the evidence supports the view that all biological complexity arose by the process of random variation (and natural selection). I think that some aspects of living things, particularly the specified information in biological molecules, are more reasonably explained as the consequence of design.
Obviously, this raises profound philosophical and theological questions. If some aspects of living things show evidence of design, where does the design come from? This philosophical conundrum isn’t anything new in science. We don’t know where the laws of physics come from either, but that doesn’t preclude the scientific inference that there are laws of physics. Newton’s demonstration of a ‘clockwork’ physical world that adhered to mathematical laws played an important role in the rise of Deism in the 18th century. The religious implications of Newton’s work didn’t place Newtonian mechanics outside science.
I am not an evolutionary biologist, and my research (on cerebrospinal fluid dynamics and cerebral blood flow) is certainly not closely related to evolutionary biology. There isn’t any area of medicine that makes much routine use of evolutionary biology, except perhaps microbiology, and most of microbiology is molecular and cellular biology. Doctors don’t deal much with evolutionary biology, since eugenics went out of fashion. So I’m not an expert. My questions shouldn’t present much of a challenge to you.
How much new specified information can random variation and natural selection generate? Please note that my question starts with ‘how much’- it’s quantitative, and it’s quantitative about information, not literature citations. I didn’t ask ‘how many papers can I generate when I go to PubMed and type “gene duplication evolution”?’ I asked for a measurement of new specified information, empirically determined, the reference(s) in which this measurement was reported, and a thoughtful analysis as to whether this ‘rate of acquisition’ of new specified information by random heritable variation and natural selection can account for the net information measured in individuals of the species in which the measurement was made. Mike Lemonick was wrong that this isn’t an important question in evolutionary biology. This is the central question.
Your example of Labbe’s paper on gene duplication is, presumably, not to be taken too seriously. If you count copies as new information, you must have a hard time with plagiarism in your classes. All that the miscreant students would have to say is ‘It’s just like gene duplication. Plagiarism is new information- you said so on your blog!’.
Duplication of information isn’t the generation of new information. No one doubts that living things can copy parts of themselves. You have presented no evidence that the process of (slightly imperfect) copying is the source of all that can be copied and the source of what actually does the copying. I was hoping (but not really expecting) that experts like you could do more than Mike Lemonick did. Lemonick just threw out cant, and you just gave me citation chaff. No measurements of actual new information.
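Egnor’s duplication point can be put in rough quantitative terms with a crude compression experiment. The sketch below uses an invented pseudo-random “genome,” with compressed length standing in loosely for information content: duplicating a sequence barely increases its compressed size, because the second copy is pure redundancy.

```python
import random
import zlib

# Invented illustration: a pseudo-random "genome" of 10,000 bases.
random.seed(0)
genome = bytes(random.choice(b"ACGT") for _ in range(10_000))

# Compressed length is a rough proxy for information content.
single = len(zlib.compress(genome, 9))
doubled = len(zlib.compress(genome + genome, 9))  # simulate a duplication

# The duplicated sequence is twice as long, but compresses to barely
# more than the original: the copy contributes almost no new information.
print(single, doubled)
```

Whatever one makes of the larger argument, this is the standard intuition behind compression-based (Kolmogorov-style) measures: a copy of existing data is cheap to describe.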
There is obviously a threshold of the information-generating power of RM+NS. If I were to leave a culture of S. aureus, mixed with a few drops of penicillin, in an incubator, and came back a few weeks later to find penicillin-resistant organisms growing in the culture, I would have no problem with the theory that RM+NS accounted for it.
Yet, if I found cockroaches crawling in the bacterial culture after two weeks, I wouldn’t accept RM+NS as an explanation for the cockroaches in the culture. They came from somewhere else. They didn’t evolve from the bacteria, in two weeks.
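The penicillin scenario Egnor says he would accept can at least be framed computationally. Below is a toy single-locus simulation of random mutation plus strong selection; every parameter here is invented for illustration, not drawn from any experiment.

```python
import random

random.seed(1)
POP_SIZE, MUT_RATE, GENERATIONS = 1000, 1e-3, 50  # invented parameters

# 0 = penicillin-sensitive, 1 = resistant; start with no resistance.
pop = [0] * POP_SIZE

for _ in range(GENERATIONS):
    # Selection: resistant cells always survive; sensitive ones rarely do.
    survivors = [g for g in pop if g == 1 or random.random() < 0.1] or [0]
    # Reproduction back to full size, with rare mutation at the locus.
    children = []
    for _ in range(POP_SIZE):
        g = random.choice(survivors)
        if random.random() < MUT_RATE:
            g = 1 - g  # random heritable variation
        children.append(g)
    pop = children

print(sum(pop) / POP_SIZE)  # fraction resistant after selection
```

Under these assumed numbers the one-bit resistance allele arises and sweeps within a few dozen generations, which is exactly the modest, below-threshold kind of change Egnor says RM+NS can explain.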
So what’s the threshold, quantitatively? It seems to be a threshold of information-generating capability. But the information in living things is specified; it does things, specific things. In that sense, it differs completely from Shannon information, which is a measure of randomness and the extent to which a message can be compressed. Shannon information is not relevant to biological information.
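For reference, Shannon entropy is defined over symbol frequencies alone, H = -Σ pᵢ log₂ pᵢ, which is precisely why it is blind to what a sequence does. A minimal sketch (the sequence is invented) shows that a “gene” and a random shuffle of its letters score identically:

```python
import math
import random
from collections import Counter

def shannon_entropy(seq):
    """Per-symbol Shannon entropy in bits: H = -sum(p_i * log2(p_i))."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

gene = "ATGGCGTACCGTTAAC"  # hypothetical sequence, 4 of each base
random.seed(0)
shuffled = "".join(random.sample(gene, len(gene)))  # same letters, scrambled

# Equal base composition means identical entropy (2 bits per symbol here),
# whatever the sequence "does" biologically.
print(shannon_entropy(gene), shannon_entropy(shuffled))  # 2.0 2.0
```

This is the uncontroversial part: Shannon’s measure quantifies statistical surprise, not function, so entropy alone cannot distinguish a working gene from noise with the same composition.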
Regarding your PubMed literature search, I must not have used the words ‘Information’, ‘Measurement’, and ‘Random’ often enough in my discussion with Mike Lemonick, and you thought I said ‘gene’, ‘duplication’, and ‘evolution’. I understand; we all make mistakes. If you actually want to answer my question, type ‘information’, and (not ‘or’!) ‘measurement’, and ‘random’, and the name of the species in which you wish to look for experimental measurement of information generation by random processes.
I did a PubMed search just now. I searched for ‘measurement’, and ‘information’ and ‘random’ and ‘e coli’. There were only three articles, none of which have any bearing on my question. The first article, by Bettelheim et al, was entitled ‘The diversity of Escherichia coli serotypes and biotypes in cattle faeces’.
I searched for an actual measurement of the amount of new information that a Darwinian process can generate, and I got an article on ‘cattle faeces’. I love little ironies.
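The search Egnor describes can be reproduced programmatically: NCBI’s E-utilities ESearch endpoint accepts AND-joined PubMed terms over HTTP. A minimal sketch follows (the helper name is mine; actually fetching results requires network access, so only the URL construction is shown):

```python
from urllib.parse import urlencode

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(*terms):
    """Build a PubMed ESearch URL joining the terms with AND (not OR)."""
    query = " AND ".join(terms)
    return ESEARCH + "?" + urlencode(
        {"db": "pubmed", "term": query, "retmode": "json"}
    )

url = pubmed_search_url("measurement", "information", "random", "e coli")
print(url)
```

Retrieving the URL returns a JSON result count and ID list, so anyone can rerun the query and see what the literature turns up for themselves.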