Over the past few years, organic chemist James Tour has detailed how origin-of-life (OOL) researchers have made no progress for the greater part of a century in explaining how natural processes could generate the first self-replicating cell, despite highly misleading claims to the contrary (here, here, here). At the same time, I have written on how such dismal progress was entirely predictable based upon challenges from the fields of thermodynamics and information theory (here, here, here, here). I consolidated and expanded my analysis in an article recently published in the journal Inference. The piece is highly technical and mathematical, but the basic argument can be quickly summarized with only a marginal loss of technical accuracy.
To begin, the driving tendency of most physical systems is to move toward either higher entropy, lower energy, or both. The concept of entropy is often crudely associated with the level of a system’s disorder (think messy room versus neat room), but a more precise description relates it to Shannon’s Measure of Information. In the context of OOL, large molecules with higher-energy chemical bonds will tend to break down into smaller molecules with lower-energy bonds. And, highly complex, specified structures will tend to break apart into a disorganized mixture of simple molecules, tars, and other biologically inert products.
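The link between entropy and Shannon's measure can be illustrated with a short calculation. The probability distributions below are invented purely for illustration; they contrast a system dominated by one arrangement with a system in which all arrangements are equally likely:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A highly ordered system: one arrangement overwhelmingly likely.
ordered = [0.97, 0.01, 0.01, 0.01]
# A disordered system: all four arrangements equally likely.
disordered = [0.25, 0.25, 0.25, 0.25]

print(round(shannon_entropy(ordered), 3))     # -> 0.242
print(round(shannon_entropy(disordered), 3))  # -> 2.0
```

The uniform distribution attains the maximum of two bits for four states; organization corresponds to the sharply peaked, low-entropy case.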
Physicists and chemists often combine entropy and energy into a single quantity known as the free energy. This quantity is of particular importance since natural processes tend to move toward lower free energy. The free energy also corresponds to the energy that is available to do work. For instance, the energy in a battery can be easily accessed to run a clock, so the battery contains high free energy. In contrast, the molecules in the air on a hot summer day have a great deal of energy, but that energy cannot be directly accessed to drive a car up a hill. As such, the hot air has low free energy. The challenge is that the genesis of a cell requires simple chemicals on the ancient earth to coalesce into an arrangement of both low entropy and high energy. In other words, life's origin requires molecules to move toward significantly higher free energy, a task that is nearly always physically impossible.
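The relation between energy, entropy, and free energy can be written explicitly. The sketch below uses the standard Gibbs free energy found in any thermodynamics text, not a formulation specific to the article:

```latex
G = H - TS, \qquad \Delta G = \Delta H - T\,\Delta S
```

A process proceeds spontaneously only when \(\Delta G < 0\). Forming a cell requires raising the energy (\(\Delta H > 0\)) while lowering the entropy (\(\Delta S < 0\)), so both terms push \(\Delta G\) upward, which is precisely the direction natural processes resist.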
Many OOL researchers have responded to this challenge by arguing that a system driven far from equilibrium could self-organize into a functional cell (here, here). The basic hope is that some new physical principles could overcome the barriers mandated by classical thermodynamics. That hope was accentuated by the research of MIT physicist Jeremy England. He purportedly developed a “physics theory for life” based on what are termed fluctuation theorems. His research was portrayed by the media as holding the key to solving the mystery of life’s origin.
My article demonstrates that the fluctuation theorems actually prove the exact opposite. Namely, systems driven far from equilibrium also tend to move toward greater entropy, and they tend to release heat into the environment. The origin of life requires the reverse: energy must be extracted from the environment in such a way as to raise the free energy of a local collection of molecules and, in the process, assemble them into a highly specified, low-entropy state. Mathematically, the fluctuation theorems reveal that the origin of a cell is just as fantastically improbable away from equilibrium as near it.
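The quantitative content of this point can be made concrete. In their detailed form, the fluctuation theorems (e.g., the Evans-Searles and Crooks relations) state that trajectories which decrease the total entropy are exponentially suppressed relative to those which increase it. The sketch below is the standard textbook statement, not the specific formulation in the article:

```latex
% Detailed fluctuation theorem for the total entropy production
\frac{P(\Delta S_{\mathrm{tot}} = +A)}{P(\Delta S_{\mathrm{tot}} = -A)} = e^{A/k_B}
```

Since the local assembly of a cell corresponds to a large entropy decrease (a trajectory with \(\Delta S_{\mathrm{tot}} = -A\) for large \(A\)), the theorem assigns it a probability suppressed by a factor of roughly \(e^{-A/k_B}\), whether the system is near equilibrium or far from it.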
Engines and Information
Only one solution exists to overcome the thermodynamic challenges, and that is the presence of the following:
- Engines that convert some external source of energy into a form useful to power cellular operations.
- Information that directs that energy toward driving a highly targeted set of chemical reactions that manufacture, maintain, and direct the cell.
In cells, engines include such molecular machines as ATP synthase, and the information is embodied, nonexclusively, in the long chains of amino acids known as proteins and in DNA. The information in a protein corresponds to its precise amino-acid sequence, which is analogous to the ordering of letters in a sentence. The specified sequence causes a newly formed chain to fold into the specific three-dimensional shape that allows it to serve as a component in a cellular structure or as an enzyme that accelerates a target reaction. The information in DNA stores (encodes) the proteins’ sequences, so more can be manufactured. And, it controls the timing and quantity of their production.
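The information carried by an amino-acid sequence can be bounded with simple arithmetic. The protein length below is an illustrative assumption, and the figure is an upper bound, since many different sequences can fold to similar functional shapes:

```python
import math

# Maximum information per position: 20 possible amino acids.
bits_per_residue = math.log2(20)   # ~4.32 bits

# A typical enzyme length (illustrative assumption, not a measured value).
residues = 300

# Upper-bound sequence information for one such protein.
protein_bits = residues * bits_per_residue
print(f"{bits_per_residue:.2f} bits/residue, ~{protein_bits:.0f} bits total")
# -> 4.32 bits/residue, ~1297 bits total
```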
A complete suite of enzymes, or their equivalent, must have existed at the very instantiation of life. For a viable cell — one that would not quickly break apart and decompose into simpler chemicals — needs a highly specific set of interconnected chemical reactions that support the essential operations. Yet the chance of the correct reactions manifesting on the early earth is astronomically small. Moreover, the different reactions require mutually exclusive environmental conditions. In other words, they cannot all occur in the same location at the same time. And, many reactions will not progress spontaneously without outside energy to drive them in the correct direction. Even most of those that could proceed naturally advance too slowly to support cellular activity.
Only enzymes can meet all of these requirements and overcome all of these hurdles. For enzymes form crevices that create the correct environments for their target reactions, and they dramatically accelerate the reactions' speed. They can also extract energy from the high-energy molecules generated by the engines and direct that energy toward driving the nonspontaneous reactions forward. Alternative molecules, such as ribozymes, lack these needed capacities (here, here). And, of paramount importance, the required set of enzymes corresponds to a quantity of information that is staggering.
The Minimal Requirements
Cellular biologists have attempted to identify the minimally complex genome: the minimal number of genes and proteins that would be required for the simplest viable cell. Independently, engineers, computer programmers, and mathematicians have attempted to identify the minimal components required for a machine to make a fully functional copy of itself (i.e., self-replicate). The results from these separate studies have converged to identify components that must have existed at life’s origin (here, here, here), which include the following:
- Large repositories of information and information processing.
- Manufacturing centers that construct all of the essential pieces.
- Assembly and installation processes for all components.
- Energy production and distribution machinery.
- Automated repair and replacement of parts.
- A boundary with control gates that allow specific materials in and out.
- Global communication and coordination with feedback control systems.
- Sensing of environment and calculation of needed responses.
- Self-replication, which draws upon nearly all other essential functions.
These minimal requirements entail several hundred enzymes and over a million bits of information. That amount is roughly equivalent to the information contained in Charles Dickens’s A Christmas Carol.
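The million-bit figure can be checked with a back-of-the-envelope calculation. Every number below is an illustrative assumption (gene count, gene length, and novella length are round estimates, not measured values):

```python
# Rough information content of a minimal genome.
genes = 400            # on the order of several hundred minimal genes
bases_per_gene = 1200  # typical gene length in nucleotides
bits_per_base = 2      # log2(4) for the four DNA bases

genome_bits = genes * bases_per_gene * bits_per_base
print(genome_bits)   # -> 960000 (on the order of a million bits)

# A Christmas Carol: roughly 160,000 characters of 8-bit ASCII text.
novella_bits = 160_000 * 8
print(novella_bits)  # -> 1280000
```

Both estimates land near a million bits, which is the sense in which the comparison is made.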
The Evidence for Design
Many scientists have come to recognize the essential role of information in life. For instance, OOL researchers Sarah Walker and Paul Davies have described the need for “information to gain causal purchase over matter.” In other words, preexistent information is required to direct the arrangement of molecules. And chemical and biomolecular engineer Yaşar Demirel has demonstrated the essential role of information in achieving and maintaining a cell’s low-entropy state. Yet no natural process can generate large quantities of meaningful information (here, here).
I elucidate the implications of this evidence in The Mystery of Life's Origin: The Continuing Controversy as follows:
In summary, the formation of the original cell cannot plausibly be explained by any undirected process. In addition, its minimal requirements demonstrate unmistakable signs of intelligence. In any other context, the identification of a nanotechnology vessel capable of energy production, information processing, and the other identified requirements would immediately be recognized as a product of design by any reasonable criteria. In particular, cellular structures and operations demonstrate unmistakable evidence of foresight, coordination, and goal direction, which are telltale signs for intelligent agency.