
Origin of Life Is Not Reducible to Physics

Image credit: geralt, Pixabay, CC0, via Wikimedia Commons.

Yesterday, we critiqued a proposal by Eugene V. Koonin and three colleagues who presented an expanded theory of evolution as “multilevel learning.” (See “Evolution Is Not Like Physics.”) The proposal commits the fallacy of equating the properties of biological “laws of evolution” with those of the laws of physics, and it borders on vitalism, which undermines the authors’ goal of naturalizing evolution. The proposal was published in two papers in PNAS last month. This time, we look at the second paper, which applies the theory to the special case of the origin of life. Their attempt to incorporate thermodynamics into a highly negentropic process is sure to provoke interest.

From Vanchurin, Wolf, Koonin, and Katsnelson, “Thermodynamics of evolution and the origin of life”:

We employ the conceptual apparatus of thermodynamics to develop a phenomenological theory of evolution and of the origin of life that incorporates both equilibrium and nonequilibrium evolutionary processes within a mathematical framework of the theory of learning. The threefold correspondence is traced between the fundamental quantities of thermodynamics, the theory of learning, and the theory of evolution. Under this theory, major transitions in evolution, including the origin of life, represent specific types of physical phase transitions. [Emphasis added.]

How Can Nature Learn?

Perceptive readers will want to know how they deal with several well-known issues: (1) probability, (2) entropy increase, and (3) harmful byproducts. The authors have already presented their view of the universe as a “neural network” in which natural selection operates at multiple levels, not just in biology. The only neural networks that any human has observed coming into existence were designed by a mind. How, then, can physical nature learn things?

Under this perspective, all systems that evolve complexity, from atoms to molecules to organisms to galaxies, learn how to predict changes in their environment with increasing accuracy, and those that succeed in such prediction are selected for their stability, ability to persist and, in some cases, to propagate. During this dynamics, learning systems that evolve multiple levels of trainable variables that substantially differ in their rates of change outcompete those without such scale separation.

The vitalistic tendencies in this proposal become evident when the authors claim that nonliving entities are able to predict, train, and compete, and again when they say the environment selects among them according to specific criteria. How do Koonin and his colleagues know this happens? Just look around: there are atoms, stars, and brains that survived the competition by natural selection. Their existence confirms the theory. This is like the anthropic principle supporter who says, “If the universe weren’t this way, we wouldn’t be here to talk about it.”

To deal with the entropy problem, the authors say that learning decreases entropy. They add a second variable, Q, to the entropy equation, which lets them sidestep the problem: “Q is the learning/generalized force for the trainable/external variables q.”

In the context of evolution, the first term in Eq. 3.1 represents the stochastic aspects of the dynamics, whereas the second term represents adaptation (learning, work). If the state of the entire learning system is such that the learning dynamics is subdominant to the stochastic dynamics, then the total entropy will increase (as is the case in regular, closed physical systems, under the second law of thermodynamics), but if learning dominates, then entropy will decrease as is the case in learning systems, under the second law of learning: The total entropy of a thermodynamic system does not decrease and remains constant in the thermodynamic equilibrium, but the total entropy of a learning system does not increase and remains constant in the learning equilibrium.

Very clever: introduce a magic variable that allows the theory to avoid the consequences of the second law. Entropy increases overall (as it must) but can stabilize or decrease locally in an evolving system, like a warm little pond.
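In schematic form (our paraphrase, not the paper’s exact notation), the bookkeeping behind their Eq. 3.1 amounts to splitting the total entropy change into two contributions:

dS_total = dS_stochastic + dS_learning, with dS_stochastic ≥ 0 and dS_learning ≤ 0.

When the stochastic term dominates, entropy rises as the second law requires; when the learning term dominates, entropy falls and the system is declared to be learning. Everything then turns on what physical process is supposed to drive the second term downward.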

The maximum entropy principle states that the probability distribution in a large ensemble of variables must be such that the Shannon (or Boltzmann) entropy is maximized subject to the relevant constraints. This principle is applicable to an extremely broad variety of processes, but as shown below is insufficient for an adequate description of learning and evolutionary dynamics and should be combined with the opposite principle of minimization of entropy due to the learning process, or the second law of learning (see Thermodynamics of Learning and ref. 17). Our presentation in this section could appear oversimplified, but we find this approach essential to formulate as explicitly and as generally as possible all the basic assumptions underlying thermodynamics of learning and evolution.
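For reference, the maximum entropy principle itself is standard material: among all probability distributions consistent with the stated constraints, pick the one with the largest Shannon entropy. In symbols, one maximizes

H(p) = -Σ_x p(x) log p(x)

subject to Σ_x p(x) = 1 and to constraints of the form Σ_x p(x) f(x) = c; the solution is the familiar exponential (Boltzmann) family, p(x) ∝ exp(-λ f(x)). What is not standard is the “opposite principle” of entropy minimization that the authors bolt onto it.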

Special Pleading with Handwaving 

If this sounds like special pleading with handwaving, watch how they take a wrong turn earlier in the argument by ascribing vitalistic properties to matter:

The crucial step in treating evolution as learning is the separation of variables into trainable and nontrainable ones. The trainable variables are subject to evolution by natural selection and, therefore, should be related, directly or indirectly, to the replication processes, whereas nontrainable variables initially characterize the environment, which determines the criteria of selection.

Assume a replication process. It’s like the proverbial can opener: assume it, and endless things most beautiful can be visualized emerging from the can. Theoretically, trainable variables q overcome the increasing entropy generated by the nontrainable variables x if the probability distribution p(x|q) favors q. “We postulate that a system under consideration obeys the maximum entropy principle but is also learning or evolving by minimizing the average loss function U(q),” they say. Natural selection, or learning, does that. Therefore, life can emerge naturally.
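Rendered as an optimization problem (our notation, not necessarily the authors’), the postulate reads roughly as follows: for fixed trainable variables q, the nontrainable variables x settle into the maximum-entropy distribution p(x|q) allowed by the constraints, while q itself drifts toward

q* = argmin_q U(q),

with U(q) understood as the loss averaged over p(x|q). Every interesting question is hidden in where the loss function U, and the machinery that minimizes it, are supposed to come from in prebiotic chemistry.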

Convinced? They derive their conclusions with some whiz-bang calculus, but if a magic variable q is inserted at the outset, the derivation is unreliable no matter how sound the intervening operations are. For instance, if you define q as “a miracle occurs,” then of course you can prove that life is an emergent property of matter. Subdividing q into different categories of miracles does not make the model any more convincing. Watch them define learning as a decrease in entropy:

If the stochastic entropy production and the decrease in entropy due to learning cancel out each other, then the overall entropy of the system remains constant and the system is in the state of learning equilibrium… This second law, when applied to biological processes, specifies and formalizes Schrödinger’s idea of life as a “negentropic” phenomenon. Indeed, learning equilibrium is the fundamental stationary state of biological systems. It should be emphasized that the evolving systems we examine here are open within the context of classical thermodynamics, but they turn into closed systems that reach equilibrium when thermodynamics of learning is incorporated into the model.

Further handwaving is seen in their definitions of “evolutionary temperature” as “stochasticity in the evolutionary process” and “evolutionary potential” as “a measure of adaptability.” Does anyone really want to hear them go on to compare a population of organisms to an ideal gas?

The origin of life can be identified with a phase transition from an ideal gas of molecules that is often considered in the analysis of physical systems to an ideal gas of organisms that is discussed in the previous section.

A Cameo by Malthus

Reality left the station long ago. Malthus makes a cameo appearance: “Under the statistical description of evolution, Malthusian fitness is naturally defined as the negative exponent of the average loss function, establishing the direct connection between the processes of evolution and learning.” Learning solves every problem in evolution: even thermodynamics! Tweaking Dobzhansky, they say, “[n]othing in the world is comprehensible except in the light of learning.”
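In symbols, that quoted definition comes to something like F(q) = exp(-U(q)), our rendering of their words: the lower the average loss, the higher the Malthusian fitness, so “selection” and “learning” become two names for the same minimization.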

The key idea of our theoretical construction is the interplay between the entropy increase in the environment dictated by the second law of thermodynamics and the entropy decrease in evolving systems (such as organisms or populations) dictated by the second law of learning.

What is this “second law of learning”? It’s Vanchurin’s idea that variables can be defined as ones that “adjust their values to minimize entropy.” A miracle happens! Minds can do this; but matter? Sure. It’s bound to happen.
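As a purely formal matter, “adjusting values to minimize entropy” is just an optimization statement, and any gradient-descent routine can satisfy it. Here is a toy sketch in Python (ours, not anything from the paper): a trainable parameter q controls a probability distribution, and descending the entropy gradient drives that distribution toward certainty. The math goes through trivially once an optimizer is assumed; what the paper never supplies is the physical process that plays the optimizer’s role.

import numpy as np

# Toy sketch only: a "trainable" parameter vector q defines a softmax
# distribution over three states. Gradient descent on the Shannon entropy
# makes q "adjust its values to minimize entropy," collapsing the
# distribution toward a single state.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def shannon_entropy(p):
    return -np.sum(p * np.log(p))

q = np.array([0.10, 0.00, -0.10])   # start near (not at) the uniform distribution
learning_rate = 0.5

for step in range(500):
    p = softmax(q)
    H = shannon_entropy(p)
    grad = -p * (np.log(p) + H)      # analytic gradient of H with respect to q
    q -= learning_rate * grad        # descend: entropy drops at every step

print(shannon_entropy(softmax(q)))   # small: the distribution is now nearly certain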

The origin of life scenario within the encompassing framework of the present evolution theory, even if formulated in most general terms, implies that emergence of complexity commensurate with life is a general trend in the evolution of complex systems. At face value, this conclusion might seem to be at odds with the magnitude of complexification involved in the origin of life [suffice it to consider the complexity of the translation system] and the uniqueness of this event, at least on Earth and probably, on a much greater cosmic scale. Nevertheless, the origin of life appears to be an expected outcome of learning subject to the relevant constraints, such as the presence of the required chemicals in sufficient concentrations. Such constraints would make life a rare phenomenon but likely far from unique on the scale of the universe. The universe is sometimes claimed to be fine-tuned for the existence of life. What we posit here is that the universe is self-tuned for life emergence.

We’re Here, Aren’t We?

Koonin and his colleagues never get around to addressing the extreme improbability of obtaining even the simplest building blocks of life by chance. They never discuss harmful cross-reactions, which are certain to occur given known chemical laws. And they wave the entropy problem away by positing learning systems whose variables “adjust their values to minimize entropy.” These systems also magically possess memories! How do they know that? Well, neural networks have them, and life has them. Genes must have evolved to be the carriers of long-term memory. After all, we’re here, aren’t we?

Evidently, the analysis presented here and in the accompanying paper is only an outline of a theory of evolution as learning. The details and implications, including directly testable ones, remain to be worked out.

Indeed.