
Does Life Use a Non-Random Set of Amino Acids?

Jonathan McLatchie

An interesting research paper was recently published in the journal Astrobiology by Gayle K. Philip and Stephen J. Freeland, which asks, “Did Evolution Select a Nonrandom Alphabet of Amino Acids?”

The article notes that there exists no strict limitation on the character or type of amino acids which can be used in living systems. Indeed, biology could conceivably have used a different amino acid alphabet, and there appears to be a fairly wide range from which it could have chosen. But is there anything special — is there anything unique or unusual — about the set of 20 amino acids (some organisms use one or two additional amino acids) that life does use? And, if there is, how might this fundamentally non-random contingency be explained?

There are two broad categories of natural (dysteleological) explanation: (i) chance and (ii) necessity. These explanatory principles may be applied singly or in combination. Some things in our world, however, are the result of neither chance nor necessity; such phenomena reflect the activity of agent causality.

Intelligent agents also have the capacity to interact with the material world and produce outcomes of which neither chance nor necessity is capable. Fortunately, agents routinely leave characteristic, tell-tale patterns indicative of their activity, which can be used to infer their explanatory role in a phenomenon of interest.

So, which of our explanatory devices is most appropriate with regard to the demonstrably non-random selection of amino acids present in nearly every facet of living systems?

The abstract of the paper states:

The last universal common ancestor of contemporary biology (LUCA) used a precise set of 20 amino acids as a standard alphabet with which to build genetically encoded protein polymers. Considerable evidence indicates that some of these amino acids were present through nonbiological syntheses prior to the origin of life, while the rest evolved as inventions of early metabolism. However, the same evidence indicates that many alternatives were also available, which highlights the question: what factors led biological evolution on our planet to define its standard alphabet? One possibility is that natural selection favored a set of amino acids that exhibits clear, nonrandom properties–a set of especially useful building blocks. However, previous analysis that tested whether the standard alphabet comprises amino acids with unusually high variance in size, charge, and hydrophobicity (properties that govern what protein structures and functions can be constructed) failed to clearly distinguish evolution’s choice from a sample of randomly chosen alternatives. Here, we demonstrate unambiguous support for a refined hypothesis: that an optimal set of amino acids would spread evenly across a broad range of values for each fundamental property. Specifically, we show that the standard set of 20 amino acids represents the possible spectra of size, charge, and hydrophobicity more broadly and more evenly than can be explained by chance alone. [emphasis added]

The authors compared the coverage of the standard alphabet of 20 amino acids for “size, charge, and hydrophobicity with equivalent values calculated for a sample of 1 million alternative sets (each also comprising 20 members) drawn randomly from the pool of 50 plausible prebiotic candidates.”
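The comparison the authors describe can be sketched in miniature. The following Python snippet is purely illustrative: the candidate pool, the single property, and the coverage score (breadth weighted by evenness of spacing) are simplified stand-ins I have invented for this sketch, not the paper's actual data or metric.

```python
import random
import statistics

def coverage(values):
    """Illustrative coverage score for one property: the breadth of the
    range spanned, discounted by the variance of the gaps between sorted
    values (so evenly spread sets score higher). Not the paper's metric."""
    vals = sorted(values)
    breadth = vals[-1] - vals[0]
    gaps = [b - a for a, b in zip(vals, vals[1:])]
    evenness = 1.0 / (1.0 + statistics.pvariance(gaps))
    return breadth * evenness

random.seed(42)

# A hypothetical pool of 50 prebiotically plausible candidates, each given
# one made-up property value (the real study scored size, charge, and
# hydrophobicity for chemically characterised candidates).
pool = [random.uniform(-5.0, 5.0) for _ in range(50)]

# Score many random 20-member alphabets drawn from the pool.
trials = 10_000  # the paper sampled 1,000,000 alternative sets
scores = [coverage(random.sample(pool, 20)) for _ in range(trials)]

# Compare a deliberately broad, evenly spaced 20-member alphabet
# against the random draws.
lo, hi = min(pool), max(pool)
even_set = [lo + i * (hi - lo) / 19 for i in range(20)]
even_score = coverage(even_set)

better = sum(s >= even_score for s in scores)
print(f"random sets matching the even alphabet: {better} / {trials}")
```

Under this toy scoring rule, a set spread broadly and evenly across the property's range outscores essentially every random draw, which is the shape of the result the paper reports for the standard alphabet.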

The results?

The authors noted that:

…the standard alphabet exhibits better coverage (i.e., greater breadth and greater evenness) than any random set for each of size, charge, and hydrophobicity, and for all combinations thereof. In other words, within the boundaries of our assumptions, the full set of 20 genetically encoded amino acids matches our hypothesized adaptive criterion relative to anything that chance could have assembled from what was available prebiotically.

The authors are thus quick to dismiss the chance hypothesis as a non-viable option. In their concluding remarks, they note,

Whether we consider a starting point of genetic coding within (i) the pool of prebiotically plausible amino acids, (ii) the end point of the standard alphabet relative to this prebiotic pool of candidates, or (iii) the process by which evolution escaped these prebiotic boundaries, we see a consistent, unambiguous pattern; random chance would be highly unlikely to represent the chemical space of possible amino acids with such breadth and evenness in charge, size, and hydrophobicity (properties that define what protein structures and functions can be built). Further analysis indicated that, even under this simple criterion, any selection of an optimal amino acid alphabet is likely to include some of those found within contemporary genetic coding.

The significance of this extends further, for the researchers also examine the eight prebiotically plausible amino acids found among the 20 currently used in biological proteins. They compared the properties of these amino acids with alternative sets of eight drawn randomly, establishing, once again, the fundamentally non-random nature of those utilised.

But if random chance doesn’t cut it, what about necessity? Since law-like processes produce only predictable, regular patterns, necessity cannot be invoked on its own to explain the presence of specified irregularity. If neither chance nor necessity is adequate alone, what about their combination? Does that fare any better? The paper, of course, assumes that a fundamentally non-random alphabet entails the pre-eminence of selection over (but in combination with) the role of random chance. But can this duality deliver the goods? As I noted in a previous article on a related topic, it is extremely difficult to envision an evolving genetic code that wouldn’t wreak havoc on the organism. A change in the genetic code would alter the amino acids in every polypeptide made by the cell, and it seems implausible that this would be a selectable trait. Moreover, a pool of biotic amino acids substantially smaller than 20 would likely reduce considerably the variability of proteins synthesised by the ribosomes. And prebiotic selection is unlikely to sift the variational grist for amino-acid optimality prior to the origin of self-replicating life (in many respects, “prebiotic selection” is somewhat oxymoronic). The synthesis of some of the amino acids (particularly the eight considered essential) is also quite complex, and it is difficult to envision such pathways being constructed without foresight of their utility.

Given these problems (and there are many more), the burden of evidence must lie with those who seek to establish the adequacy of chance and selection in fine-tuning this pool of amino acids. And this is by no means the only feature that appears to have been delicately optimised. The genetic code is also finely tuned to minimise the detrimental effects of mutations (for details, see my previous article).

If chance and necessity are inadequate, whether alone or in combination, what about agent causality? Such delicately balanced, finely tuned parameters are routinely associated with purposive agents. Agents are uniquely endowed with foresight, and they can visualise and subsequently actualise a complex, finely tuned, information-rich system otherwise unattainable by chance and law. If, in every other realm of experience, such features are routinely attributed to intelligent causes, and we have seen no reason to think that this intuition is mistaken, are we not justified in inferring that the systems we are finding in biology also originated at the will of a purposive, conscious agent? Let me conclude by quoting from Stephen C. Meyer’s groundbreaking book, Signature in the Cell (page 452):

Everywhere in our high-tech environment we observe complex events, artifacts, and systems that impel our minds to recognize the activity of other minds: minds that communicate, plan, and design. But to detect the presence of mind, to detect the activity of intelligence in the echo of its effects, requires a mode of reasoning — indeed, a form of knowledge — that science, or at least official biology, has long excluded. If living things — things that we manifestly did not design ourselves — bear the hallmarks of design, if they exhibit a signature that would lead us to recognize intelligent activity in any other realm of experience, then perhaps it is time to rehabilitate this lost way of knowing and to rekindle our wonder in the intelligibility and design of nature that first inspired the scientific revolution.

Jonathan McLatchie

Fellow, Center for Science and Culture
Dr. Jonathan McLatchie holds a Bachelor's degree in Forensic Biology from the University of Strathclyde, a Master's (M.Res) degree in Evolutionary Biology from the University of Glasgow, a second Master's degree in Medical and Molecular Bioscience from Newcastle University, and a PhD in Evolutionary Biology from Newcastle University. Currently, Jonathan is an assistant professor of biology at Sattler College in Boston, Massachusetts. Jonathan has been interviewed on podcasts and radio shows including "Unbelievable?" on Premier Christian Radio, and many others. Jonathan has spoken internationally in Europe, North America, South Africa, and Asia promoting the evidence of design in nature.