
The Evolution-Lobby’s Useless Definition of Biological Information

Links to our 8-Part Series, “The NCSE, Judge Jones, and Citation Bluffs About the Origin of New Functional Genetic Information”:

Part 1: Judge Jones’s Misguided NCSE-Scripted Kitzmiller Ruling and the Origin of New Functional Genetic Information
Part 2 (This Article): The Evolution-Lobby’s Useless Definition of Biological Information
Part 3: The Evolution-Lobby’s Misguided Definition of “New”
Part 4: Finding Darwin in All the Wrong Places
Part 5: How to Play the Gene Evolution Game
Part 6: Asking the Right Questions about the Evolutionary Origin of New Biological Information
Part 7: Assessing the NCSE’s Citation Bluffs on the Evolution of New Genetic Information
Part 8: The NCSE’s Citation Bluffs Reveal Little About the Evolutionary Origin of Information

Read the Full Article: “The NCSE, Judge Jones, and Citation Bluffs About the Origin of New Functional Genetic Information”

For the NCSE/Ken Miller/Judge Jones to claim that there is an explanation for “the origin of new genetic information by evolutionary processes,” they must equivocate on the definitions of both the words “information” and “new.” Following the NCSE, Judge Jones probably would define information as “Shannon information,” which means mere complexity. Under this definition, a functionless stretch of randomly garbled junk DNA might have the same amount of “information” as a fully functional gene of the same sequence length. For example, under Shannon information, which the NCSE would claim is “the sense used by information theorists,” the following two strings contain identical amounts of information:

  • String A:
  • String B:

Both String A and String B are composed of exactly 54 characters, and each string has exactly the same amount of Shannon information, about 254 bits.9 Yet clearly String A conveys much more functional information than String B, which was generated using a random character generator.10 For obvious reasons, Shannon complexity has a long history of being criticized as an unhelpful metric of functional biological information. After all, biological information is finely tuned to perform a specific biological function, whereas random strings are not. A useful measure of biological information must account for the function of the information, and Shannon information does not take function into account.
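The 254-bit figure can be reproduced directly from the assumptions stated in footnote 9 (a uniform, case-insensitive 26-letter alphabet with no spaces): each character carries log2(26) ≈ 4.70 bits, so a 54-character string carries about 254 bits regardless of its content. A minimal sketch, with placeholder strings since the article's original String A and String B are not reproduced here:

```python
import math

def shannon_bits(text, alphabet_size=26):
    """Shannon information of a string drawn from a uniform alphabet.

    Each symbol contributes log2(alphabet_size) bits, so the score
    depends only on length -- not on meaning or function.
    """
    return len(text) * math.log2(alphabet_size)

# Any two 54-character strings score identically, whether one is an
# English sentence and the other random gibberish:
print(round(shannon_bits("a" * 54)))  # -> 254
```

This makes the article's point concrete: the metric assigns the same value to a functional sentence and to noise of equal length, which is exactly why it cannot distinguish functional biological information from random sequence.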

Some leading theorists recognize this point. In 2003, Nobel Prize-winning origin-of-life researcher Jack Szostak wrote a review article in Nature lamenting that the problem with “classical information theory” is that it “does not consider the meaning of a message” and instead defines information “as simply that required to specify, store or transmit the string.”11 According to Szostak, “a new measure of information — functional information — is required” in order to take account of the ability of a given protein sequence to perform a given function. Likewise, a paper in the journal Theoretical Biology and Medical Modelling observes:

[N]either RSC [Random Sequence Complexity] nor OSC [Ordered Sequence Complexity], or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life. FSC [Functional Sequence Complexity] includes the dimension of functionality. Szostak argued that neither Shannon’s original measure of uncertainty nor the measure of algorithmic complexity are sufficient. Shannon’s classical information theory does not consider the meaning, or function, of a message. Algorithmic complexity fails to account for the observation that “different molecular structures may be functionally equivalent.” For this reason, Szostak suggested that a new measure of information–functional information–is required.12

In 2007 Szostak co-published a paper in Proceedings of the National Academy of Sciences with Carnegie Institution origin-of-life theorist Robert Hazen and other scientists furthering these arguments. Attacking those who insist on measuring biological complexity using the outmoded tools of Shannon information, the authors wrote, “A complexity metric is of little utility unless its conceptual framework and predictive power result in a deeper understanding of the behavior of complex systems.” Thus they “propose to measure the complexity of a system in terms of functional information, the information required to encode a specific function.”13
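The functional-information measure Hazen and Szostak propose in the PNAS paper is I(Ex) = −log2[F(Ex)], where F(Ex) is the fraction of all possible sequences that achieve at least a given degree of function Ex. A brief sketch of that definition, using purely hypothetical counts for illustration:

```python
import math

def functional_information(n_functional, n_total):
    """Hazen et al. (2007): I(Ex) = -log2(F(Ex)).

    F(Ex) is the fraction of all possible configurations that meet or
    exceed the specified degree of function Ex. Rare function means a
    small fraction and therefore a high information value.
    """
    fraction = n_functional / n_total
    return -math.log2(fraction)

# Hypothetical example: if 1,000 of the 26**10 possible 10-letter
# strings performed some function, the functional information would be
# about 37 bits -- far less than the ~47 Shannon bits of the full string.
print(functional_information(1_000, 26 ** 10))
```

Unlike the Shannon measure, this value changes with how selectively the sequences perform the function, which is the "dimension of functionality" the quoted authors say Shannon's metric lacks.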

Stephen C. Meyer follows this approach, writing in a peer-reviewed scientific paper that it is useful to adopt “‘complex specified information’ (CSI) as a synonym for ‘specified complexity’ to help distinguish functional biological information from mere Shannon information–that is, specified complexity from mere complexity.”14 Meyer’s suggested definition of “specified complexity” is useful in describing functional biological information. Specified complexity is a concept derived from the mainstream scientific literature and is not an invention of critics of neo-Darwinism. In 1973, origin of life theorist Leslie Orgel distinguished specified complexity as the hallmark of biological complexity:

[L]iving organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple, well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures which are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity.15

Orgel thus captures the fact that specified complexity, or CSI, requires both an unlikely sequence and a specific functional arrangement. Specified complexity is a much better measure of biological complexity than Shannon information, a point which the NCSE must resist because it’s much harder to generate specified complexity via Darwinian processes than mere Shannon complexity.

By wrongly implying that Shannon information is the only “sense used by information theorists,” the NCSE avoids answering more difficult questions like how the information in biological systems becomes functional, or in its own words, “useful.” Rather, the NCSE seems more interested in addressing simplistic, trivial questions like how one might add additional characters to a string, or duplicate a string, without regard for the all-important question of whether those additional characters convey some new functional message. Since biology is based upon functional information, Darwin-skeptics are interested in the far more important question: Does neo-Darwinism explain how new functional biological information arises?

References Cited:
[9.] This calculation uses a 26-letter English alphabet that is not case-sensitive and, as seen in the strings, does not use spaces.
[10.] String B was generated using a random character generator from the website Random.org.
[11.] Jack W. Szostak, “Molecular messages,” Nature, Vol. 423:689 (June 12, 2003).
[12.] Kirk K. Durston, David K. Y. Chiu, David L. Abel, Jack T. Trevors, “Measuring the functional sequence complexity of proteins,” Theoretical Biology and Medical Modelling, Vol. 4:47 (2007) (internal citations removed).
[13.] Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak, “Functional information and the emergence of biocomplexity,” Proceedings of the National Academy of Sciences, USA, Vol. 104:8574–8581 (May 15, 2007).
[14.] Stephen C. Meyer, “The origin of biological information and the higher taxonomic categories,” Proceedings of the Biological Society of Washington, Vol. 117(2):213-239 (2004).
[15.] Leslie E. Orgel, The Origins of Life: Molecules and Natural Selection, pg. 189 (Chapman & Hall: London, 1973).