
Haarsma’s Pykaryotes — Another Failed Evolution Simulation


Loren Haarsma is a member of the physics faculty at Calvin College. If his name sounds familiar, that may be because he is the husband of BioLogos president Deborah Haarsma. Writing in the Journal of Theoretical Biology, Dr. Haarsma recently published a paper with an intriguing title, “Simulating evolution of protein complexes through gene duplication and co-option.” In the paper he presents a model of evolution named Pykaryotes. The name is a play on the Python programming language and prokaryotic/eukaryotic cells.

With Pykaryotes, Haarsma seeks to demonstrate the evolution of irreducible complexity and provide a compelling model for the evolution of complex structure by co-option and gene duplication. Broadly speaking, however, there are three issues with his paper and model. First, he gives a false history for a key term, “interlocking complexity.” Second, the model assumes that co-option is easy, and bypasses simulating any actual co-option. Third, it allows even a random collection of proteins to be functional.

Now to the details. The job of the Pykaryotes is to collect chemicals from their environment. The more chemicals collected, the fitter the Pykaryote and the more offspring it has. A Pykaryote may also combine a string of chemicals to create a protein. Most of the time this is useless, a waste of the chemicals used to create the protein. However, approximately 3 percent of the time (the percentage is configurable; 3 percent is simply the default), the protein is functional. It acts to accelerate the Pykaryote’s chemical collection, thereby improving its fitness.

The proteins can themselves be combined into complexes. Any two proteins have a 5 percent probability (again, the default) of binding together and thus forming a complex. The complex itself has a 3 percent default probability of being functional. That is, 3 percent of complexes improve chemical collection twice as much as a single functional protein does. Thus, by forming a complex, the organism can perform even better than with a protein alone.

Proteins can also bind to existing complexes, again with the same 5 percent probability, producing even bigger complexes of three, four, or five proteins. Haarsma capped the maximum size of a complex at five to reduce the computational cost of his simulation. A complex of three proteins is three times as effective as a single protein, a complex of four proteins four times as effective, and so on.
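To make these rules concrete, here is a minimal sketch in Python (an apt choice, given the model’s name). To be clear, this is my own illustration, not Haarsma’s code; the function names and data structures are invented for clarity, though the probabilities are the defaults stated above.

```python
import random

# A minimal sketch of the model rules described above (my own
# illustration, not Haarsma's code). Probabilities are the stated defaults.
P_PROTEIN_FUNCTIONAL = 0.03  # a new protein is functional ~3% of the time
P_BINDING = 0.05             # a protein binds a given protein/complex 5% of the time
P_COMPLEX_FUNCTIONAL = 0.03  # a new, larger complex is functional ~3% of the time
MAX_COMPLEX_SIZE = 5         # cap on complex size, to limit computation

def new_protein():
    """Create a protein; note that functionality is assigned at random,
    not derived from any interaction among parts."""
    return {"size": 1, "functional": random.random() < P_PROTEIN_FUNCTIONAL}

def try_bind(complex_, protein):
    """Attempt to add a protein to an existing complex (co-option).
    Returns the new complex, or None if binding fails or the cap is hit."""
    if complex_["size"] >= MAX_COMPLEX_SIZE:
        return None
    if random.random() >= P_BINDING:
        return None
    return {"size": complex_["size"] + 1,
            "functional": random.random() < P_COMPLEX_FUNCTIONAL}

def collection_rate(unit):
    """A functional complex of n proteins boosts chemical collection
    n times as much as a single functional protein."""
    return unit["size"] if unit["functional"] else 0
```

The detail to notice is in new_protein and try_bind: whether a protein or complex works is decided by a coin flip, not by anything about the parts themselves. That detail will matter below.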

In the runs shown in Haarsma’s paper, over the course of a thousand generations the Pykaryotes are able to evolve complexes of five proteins. Evolution gradually builds up the larger complexes by adding onto the smaller complexes in a model demonstrating co-option.

Haarsma presents this as demonstrating the evolution of “interlocking complexity,” as he writes:

In living cells, many protein complexes are made from several different proteins, and all of the parts must be present in order for the complex to function, a term which Hermann Muller (1918) called “interlocking complexity.”

What is “interlocking complexity”? This definition sounds a lot like irreducible complexity. Haarsma attributes the term to geneticist Hermann Muller in a 1918 paper, but Muller does not use the term “interlocking complexity,” and the word complexity does not even appear in the cited paper. Haarsma is incorrectly attributing this term to Muller.

The term actually derives from a 2006 page at the website Talk Origins, “The Mullerian Two-Step: Add a part, make it necessary.” The author of that page, Douglas Theobald, proposed “Mullerian interlocking complexity” as a new term to replace irreducible complexity. That proposal has since “devolved” into the claim that Muller coined the term “interlocking complexity,” which is not the case.

Haarsma initially gives the above definition for interlocking complexity, but he quickly replaces it with another:

Thus in our model the interlocking complexity of a protein complex is defined as the number of proteins in the complex which do not, themselves, have functions.

This definition, used throughout the rest of the paper, has nothing to do with irreducible complexity. Whether or not the parts of a system have alternate uses is orthogonal to whether or not a system is irreducibly complex.

If Haarsma is attempting to address irreducible complexity, there is a simple reason why his model cannot be an example of it. The first half of the definition of irreducible complexity states that it is “a single system composed of several well-matched, interacting parts that contribute to the basic function.” It is common, especially amongst critics, to ignore this half of the definition, but no system can be claimed to be irreducibly complex unless it fulfills both halves.

An irreducibly complex system functions because the various parts of the system each interact, fulfilling a particular role in bringing about the basic function of the entire system. We can see this interaction in various simulations, such as the interactions of instructions in an Avida program, the interaction of a perceptron with binding sites in Ev, etc. The functionality of those systems exists due to that interaction. But in Haarsma’s case, there is no interaction — functionality is simply randomly assigned.

Why does this matter? In Haarsma’s model, if you take any protein complex and try to add a random protein, 0.15 percent of the time (the 5 percent binding probability times the 3 percent functionality probability) the protein will bind and produce a brand-new function, increasing the chemical intake of the Pykaryote. This probability remains the same regardless of how large the complex is. That figure, 0.15 percent, may sound small, but it is more than one in a thousand, an event that will easily occur repeatedly in the timeframe of the simulation. This means that in the Pykaryotes model co-option is easy, and that is why evolution is able to evolve these large complexes.
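The arithmetic behind that figure is easy to check directly. A back-of-envelope sketch in Python follows; the population and generation counts are illustrative numbers of my own, not the paper’s exact settings.

```python
# Probability that a random protein co-opts onto an existing complex:
# it must bind (5 percent) AND the result must be functional (3 percent).
p_bind = 0.05
p_functional = 0.03
p_cooption = p_bind * p_functional
print(p_cooption)  # 0.0015, i.e. 0.15 percent

# Illustrative numbers (not the paper's exact settings): a population of
# 100 organisms, each trying one new protein per generation, for 1000
# generations. Expected number of successful co-option events:
attempts = 100 * 1000
print(attempts * p_cooption)  # 150.0 expected successes
```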

But is this realistic? Whether or not Haarsma has irreducible complexity in mind, whether his model is workable depends on whether co-option really is this easy. Intelligent design proponents have argued the opposite: that co-option is actually rather difficult. When you have several well-matched parts that interact with each other to produce a function, simply adding a new part is not going to produce a brand-new functional system. Instead, creating a new function will require rearranging and adjusting the existing parts to create a new system with new functionality.

Pykaryotes simply assumes that co-option is easy. Other models, which assign functionality based on the actual interaction of parts, can claim to show co-option working on a real problem. Those problems may not correspond to real biological problems, but at least they are real; they may be harder or easier than biology’s, but they give some indication of how well co-option works in an actual problem space. Because Pykaryotes does not exhibit real functionality produced by the interaction of parts, it tells us nothing about how feasible co-option is for producing such functionality.

Thus far we have looked at a single complex and attempted to add a single protein to it. But in fact, the Pykaryote model works with collections of proteins. If an organism already has twenty proteins, then a newly arriving protein will bind, on average, to one of them (twenty candidates times the 5 percent binding probability). However, those proteins may also bind to each other. The resulting complexes may again bind to any of the other proteins, possibly producing larger complexes, which might in turn bind to still other proteins.

Intuitively, one would expect larger complexes to be rarer. However, there are many more ways to produce a large complex than a small one. As a result, given the relatively high level of binding and sufficient proteins, there will be more large complexes than small ones in a completely random collection of proteins. Given enough large complexes, some of them will be functional.
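A rough estimate of my own shows why. If we treat a complex of k proteins as forming whenever a chain of k - 1 bindings succeeds (a simplification that ignores assembly order and other subtleties), the expected number of functional complexes of each size in a random pool can be tallied directly.

```python
from math import comb

# Back-of-envelope estimate (my own, not from the paper). A k-protein
# complex is treated as forming via a chain of k - 1 successful bindings;
# assembly order and other subtleties are ignored.
p_bind, p_functional = 0.05, 0.03

def expected_functional_complexes(n_proteins, k):
    ways = comb(n_proteins, k)      # ways to choose k proteins from the pool
    p_assemble = p_bind ** (k - 1)  # all k - 1 bindings must succeed
    return ways * p_assemble * p_functional

for k in range(2, 6):
    print(k, round(expected_functional_complexes(120, k), 1))
# With a pool of 120 proteins, the expected counts rise with size:
# 2 -> 10.7, 3 -> 21.1, 4 -> 30.8, 5 -> 35.7
```

Under the default probabilities, functional five-protein complexes are actually expected to outnumber functional two-protein ones in a sufficiently large random pool.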

This means that in the Pykaryotes model, even a random collection of proteins is likely to contain large functional complexes. The problem is that for evolution to demonstrate anything, it has to do much better than random chance. Most evolution simulations seek to solve a problem far beyond the reach of random chance; the entire point of the exercise is to show that natural selection can do what random chance cannot. But in Pykaryotes, gene duplication and co-option manage only what random chance could have done anyway.

That’s evolution not doing better than random chance. That is, in short, not a success. It’s a failure.

Photo: Reticulated python, by Pomades [CC BY-SA 3.0], via Wikimedia Commons.

Winston Ewert

Senior Fellow, Senior Research Scientist, Software Engineer
Winston Ewert is a software engineer and intelligent design researcher. He received his PhD from Baylor University in electrical and computer engineering. He specializes in computer simulations of evolution, genomic design patterns, and information theory. A Google alum, he is a Senior Research Scientist at Biologic Institute and a Senior Fellow of the Bradley Center for Natural and Artificial Intelligence.


Tags: Computational Sciences, Research, Science