I previously responded to an article by Vincent Torley on the origin of life by correcting the errors in his understanding of thermodynamics and in his account of the state of origins research. Today, I will correct mistakes related to information theory, and I will identify the fundamentally different approaches that ID advocates and critics take toward assessing evidence.
The first issue relates to the comparison of the sequencing of amino acids in proteins to the letters in a sentence. This analogy is generally disliked by design critics since it so clearly reveals the powerful evidence for intelligence from the information contained in life. It also helps lay audiences see past the technobabble and misdirection often used to mislead the public, albeit unintentionally.
Torley’s criticism centers on the claim that sequences of amino acids in life demonstrate functional but not semantic information.
Dr. Miller, like Dr. Axe, is confusing functional information (which is found in living things) with the semantic information found in a message…functional information is much easier to generate than semantic information, because it doesn’t have to form words, conform to the rules of syntax, or make sense at the semantic level.
Unfortunately, this assertion completely contradicts the opinion of experts in the field such as Shen and Tuszynski.
Protein primary structures have the same language structure as human languages, especially English, French, and German. They are both composed of several basic symbols as building blocks. For example, English is composed of 26 letters, while proteins are composed of 20 common amino acids. A protein sequence can be considered to represent a sentence or a paragraph, and the set of all proteins can be considered to represent the whole language. Therefore, the semantic structure is similar to a language structure which goes from “letters” to “words,” then to “sentences,” to “chapters,” “books,” and finally to a “language library.”
The goals of semantic analysis for protein primary structure and that for human languages are basically the same. That is, to find the basic words they are composed of, the meanings of these words and the role they play in the whole language system. It then goes on to the analysis of the structure of grammar, syntax and semantics.
Just as letters combine to form meaningful sentences, the amino acids in proteins form sequences that cause chains to fold into specific 3D shapes, which achieve such functional goals as forming the machinery of a cell or driving chemical reactions. And just as sentences combine to form a book, multiple proteins work in concert to form highly integrated cellular structures and to maintain the cell’s metabolism. The comparison is nearly exact.
A second issue Torley raises is the question of the rarity of protein sequences. In particular, he argues that the research of Doug Axe, which demonstrated extreme rarity, was invalid. Criticisms of Axe’s work have been addressed in the past, but the probability challenge is so severe that such a response is unnecessary here. The most essential early enzymes would have needed to couple the breakdown of some high-energy molecule such as ATP to a metabolic reaction that moves energetically uphill. One experiment examined the likelihood of a random amino acid sequence binding to ATP, and the results indicated that the chance was on the order of one in a trillion. Already, the odds of finding such a functional sequence on the early Earth strain credibility. However, a useful protein would have required at least one other binding site, which alone squares the improbability, plus an active site that properly oriented the target molecules and created the right chemical environment to drive and interconnect two reactions: the breakdown of ATP and the target metabolic one. The odds of a random sequence stumbling on such an enzyme would have been far less than 1 in a trillion trillion, clearly beyond the reach of chance.
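The arithmetic behind the "squares the improbability" step can be sketched as follows. The one-in-a-trillion figure for ATP binding comes from the experiment described above; treating the two binding sites as independent random searches is a simplifying assumption made here for illustration:

```python
from fractions import Fraction

# Estimated chance that a random amino acid sequence binds ATP,
# per the experiment cited above: roughly one in a trillion (10^12).
p_single_site = Fraction(1, 10**12)

# A minimally useful enzyme would need at least a second binding site.
# If the two sites are treated as independent searches, the combined
# probability is the product of the two, i.e. the single-site
# improbability squared.
p_two_sites = p_single_site ** 2

# One in a trillion trillion (10^24), matching the figure in the text.
print(p_two_sites == Fraction(1, 10**24))
```

Exact rational arithmetic (`fractions.Fraction`) is used rather than floats so the tiny probabilities do not lose precision.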
The challenge for nucleotide-based enzymes (ribozymes) is equally daunting. Stumbling across a random sequence that could perform even one of the most basic reactions also requires searching a library of trillions of sequences, so any multistage process would likewise be beyond the reach of chance. A glimmer of hope was offered by Jack Szostak when he published a paper that purported to show RNA could self-replicate without the aid of any enzyme. Unaided self-replication would have greatly aided the search process. However, he later retracted the paper after the results could not be reproduced.
The problem has since been shown to be even worse. In particular, Eugene Koonin determined that the probability of an RNA-to-protein translation system forming through random arrangements of nucleotides is less than 1 in 10^1,000, a figure that equates to impossibility in our universe. His solution to this mathematical nightmare was to propose a probabilistic deus ex machina: he argued for the existence of a multiverse containing a virtually infinite number of Earth-like planets. We just happen to reside in a lucky universe, on the right planet, where life won a vast series of lotteries.
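To put 1 in 10^1,000 in perspective, one can compare it against a generous upper bound on the universe’s total "probabilistic resources." The ~10^120 figure used below is Seth Lloyd’s well-known estimate of the maximum number of elementary operations performable in the observable universe’s history; it is an assumption introduced here for scale, not a number from the article. The exact bound hardly matters, since the shortfall dwarfs any plausible figure:

```python
# Work in base-10 logarithms throughout: a probability of 10^-1000
# underflows an IEEE double straight to 0.0.

# Koonin's estimate: probability of a translation system arising by
# chance is less than 1 in 10^1,000.
log10_p_translation = -1000

# Generous upper bound on elementary events in the observable universe
# (~10^120 operations, per Seth Lloyd's estimate; an illustrative
# assumption, not a figure from the article).
log10_universe_events = 120

# Expected number of chance successes if every event in cosmic history
# were an independent trial: trials * probability, or in logs, a sum.
log10_expected = log10_universe_events + log10_p_translation

print(log10_expected)  # -880: expect ~10^-880 successes, effectively zero
```

Even granting every event in cosmic history as a trial, the expected number of successes remains vanishingly far below one.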
The next issue relates to the problem of explaining how a protein sequence was encoded into RNA or DNA using a genetic code in which each amino acid corresponds to a set of three nucleotides known as a codon. The key challenge is finding a causal process for the encoding when no physical or chemical connection exists between a given amino acid and its corresponding codons. Torley argues that a connection does exist. He quotes Dennis Venema, who stated that certain codons bind directly to their amino acids. Unfortunately, this claim is false. Venema was referencing the research of Michael Yarus, but he misinterpreted it. Yarus states that no direct physical connection exists between individual amino acids and individual codons. He instead argues for correlations between amino acids and codons residing within short chains of nucleotides (aptamers) that bind those amino acids. However, Koonin argued that such correlations exist for only a handful of amino acids, and those are the least likely ones to have formed on the early Earth.
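For readers unfamiliar with the mechanics, the genetic code maps each three-nucleotide codon to an amino acid, and the mapping is arbitrary in the sense described above: nothing chemical about, say, AUG compels it to specify methionine. A minimal sketch using a few entries from the standard codon table:

```python
# A small excerpt of the standard genetic code (RNA codons -> amino acids).
# The full table assigns all 64 codons to 20 amino acids plus stop signals.
CODON_TABLE = {
    "AUG": "Met",   # methionine (also the start codon)
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "UAA": "Stop",  # stop signal
}

def translate(rna: str) -> list[str]:
    """Decode an RNA string codon by codon, halting at a stop codon."""
    protein = []
    for i in range(0, len(rna) - len(rna) % 3, 3):
        residue = CODON_TABLE[rna[i:i + 3]]
        if residue == "Stop":
            break
        protein.append(residue)
    return protein

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```

The lookup table is pure convention: swapping any two entries yields an equally workable code, which is precisely why the encoding demands a causal explanation beyond chemistry.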
Torley references the article in which Koonin dismisses Yarus’s model, but he misinterprets Koonin by implying that the code could be partly explained by some chemical connection. Koonin does mention the possibility that the evolution of the modern translation system was aided by chemical attractions between amino acids and pockets in tRNA. But he states that the sequences in those pockets would have been “arbitrary,” so they would not relate to the actual code. As a result, no physical explanation exists for the encoding of amino acid sequences into codons, nor can the decoding process be explained or directly linked to the encoding process. Such a linkage is crucial, since the encoding and decoding must use the same code. Yet without any physical connection, the code must have preexisted the cell, particularly since both processes would have had to be instantiated at roughly the same time. The only place a code can exist outside of physical space is in a mind.
In my responses to Torley I have addressed several problems with his interpretation of specific experiments. However, a more fundamental issue is the difference between our overall approaches to evaluating evidence, which I will illustrate with an analogy. Imagine that a boxing match is scheduled between Daniel Radcliffe, the actor who played Harry Potter, and Manny Pacquiao, a former world boxing champion. You learn that the fight will take place in three days and that Radcliffe recently broke a leg and both arms in a skiing accident. You tell your friend that you are certain Pacquiao will win. Your friend then says that you are mistaken, since Radcliffe will simply heal his body with a flick of his magic wand and then turn Pacquiao into a rat. You suddenly realize that your friend is conceiving of the fight in the imaginary world of Hogwarts from the fantasy series.
The same difference in perspectives exists between ID proponents and materialist scientists. The former wish to focus on experiments that attempt to accurately model conditions on the early Earth and on actual physical processes that have been demonstrated. In contrast, the latter wish to focus on highly orchestrated experiments that have no connection to realistic early conditions and on physical processes that reside only in the imaginations of researchers or in artificial worlds created through simulations. For instance, Torley references an article that proposes hydrogen peroxide could have assisted in generating homochiral mixtures of nucleotides, but the author fully acknowledges that his ideas are purely speculative. Likewise, Koonin describes a scenario for how the protein translation system could have evolved, but nearly every step is plausible only if intelligently guided. In other words, he is constantly smuggling in design without giving due credit. To accept any of these theories requires blind faith in materialist philosophical assumptions.
At the end of his article, Torley navigates out of the stormy seas of scientific analysis into the calmer waters of philosophical discourse, which is his specialty. He argues that one can never prove design. On this point he is correct, if by “prove” one means demonstrating with mathematical certainty. The ID program does not claim to offer the type of absolute proof a mathematician would use to demonstrate the truth of the Pythagorean Theorem. Instead, we are arguing that the identification of design is an inference to the best explanation, which can be made with the same confidence one would have in identifying design in the pattern of faces on Mount Rushmore or in a signal from space containing the schematics of a spaceship.
The skeptic could always argue that some materialistic explanation might eventually be found to explain those patterns, so design cannot be proven. Yet, the identification of design is still eminently reasonable. The evidence for design in the simplest cell is unambiguous since it contains energy conversion technology, advanced information processing, and automated assembly of all of its components, to name just a few features. The real issue is not the evidence but whether people’s philosophical assumptions would allow them to deny the preposterous and embrace the obvious.
Image source: Wikimedia Commons.