No ID Research? Let’s Help Out This Iowa State Student
Responding to students who attack intelligent design is tricky. You want to explain to them that they’re wrong, but sympathy is also called for, since they are likely less malicious than some in the professional world of academia and are, instead, merely misinformed. During my own college years I took many courses in which I had the opportunity to study evolution, and I spent a lot of personal time investigating the debate over ID and Darwinian evolution. So I don’t have a hard time feeling for students, whether they are for or against ID.
As I recently explained here, I appreciate the passion that students have for their beliefs and the struggles they undergo as they try to figure things out. So even when I encounter a student who has been badly misinformed by some professor or cranky anti-ID website, I try to have compassion. Most students are still very much a work in progress. (Of course, this can also be very true of people who are long past college!)
At the same time, students sometimes promote false ideas with great confidence in public venues — strongly believing that they actually know what they are talking about. So the record needs to be corrected.
With that in mind, let’s turn to a recent op-ed in the Iowa State Daily, “Heckle: Unintelligent design part 3: Misrepresentation of evolution,” by Michael Heckle, a columnist at the Iowa State University (ISU) campus newspaper who says he is studying journalism, media, and communication. From his Facebook profile, he looks like a nice enough guy — he likes Star Wars, Star Trek, South Park, pizza, and the Blue Angels — he could have been a friend of mine in college. He’s also a fan of Facebook groups like “Troll Atheist,” “God Doesn’t Matter,” “We Love F***ing Atheism,” “Imagine No Religion Conventions,” “The Devil,” “Atheist Uprising,” and similar groups. Well, to each his own. I had some college friends with tastes like that too, people I have remained close with throughout my life.
In any case, Mr. Heckle has a lot of false notions about ID. I don’t know where he got them. There are many possibilities. Indeed, there are a lot of intolerant, anti-ID faculty members at ISU. In 2005, 120 professors signed a statement calling for intellectual uniformity at ISU on the ID question. They urged that “all faculty members” on the campus “reject efforts to portray Intelligent Design as science” because of the “negative impact” of the fact that “Intelligent Design … has now established a presence … at Iowa State University.”
Though they denied it, the writers and signers of this petition sought to create a hostile work environment for Guillermo Gonzalez, who at the time was an untenured assistant professor of physics and astronomy at ISU. Gonzalez was denied tenure in 2007, and much evidence showed that he faced intolerance because of his support for ID.
Whatever the source of his ideas, Mr. Heckle wrongly claims that intelligent design is an argument for “God.” He denies ID’s actual claim — that we can make a positive case for intelligent causation based upon finding complex and specified information in nature. As for irreducible complexity, he wrongly believes that the Type 3 Secretory System could have been an evolutionary precursor to the bacterial flagellum, and he misunderstands how irreducible complexity is tested: by looking at the functionality of the final system, not by testing for trivial functions of its sub-parts. These are common misunderstandings promoted by professors who attack straw-man versions of intelligent design rather than the actual theory advocated by its proponents.
It’s telling when someone attacks an argument you haven’t actually made rather than the argument that you are making. We’ve dealt with those distortions many times in the past. But here is Mr. Heckle’s most egregiously inaccurate criticism:
So far, there has been no research done by intelligent design advocates that has led to any sort of scientific discovery.
You have to deny mountains of research and evidence to say that. Intelligent design advocates have done a great deal of research, leading to numerous scientific discoveries. Let’s help out this student by reviewing some prominent ones, amounting to only a portion of that overall research. (For a complete listing of pro-ID peer-reviewed publications, see: Peer-Reviewed Articles Supporting Intelligent Design.)
Biologic Institute and Biological Fine-Tuning
First, there’s the research of Biologic Institute. Directed by molecular biologist Douglas Axe, Biologic Institute conducts both bench experiments and theoretical research. Some of the areas of research include:
- Building computer models that test the ability of unguided mechanisms versus intelligent causes to produce new information.
- Investigating the ways that humans go about designing complex structures so scientists can recognize the hallmarks of design.
- Examining both physical and biological constraints required for life. This includes studying the properties of stars that make Earth-like planets possible and probing the requirements of amino acid sequences that produce functional proteins and molecular machines.
That last point is where Biologic has focused its research. Scientists at Biologic Institute are discovering that protein sequences are rich in complex and specified information, and finely tuned to perform their functions.
Axe has devoted much of his career to ID research. As a post-doc at the University of Cambridge, he performed mutational sensitivity tests on enzymes to measure the likelihood that a string of amino acids will generate a functional protein. In 2000 and 2004, he published this research in the Journal of Molecular Biology. He discovered that functional protein folds may be as rare as 1 in 10^77 amino acid sequences.[1] He describes the significant implications of those numbers:
I reported experimental data used to put a number on the rarity of sequences expected to form working enzymes. The reported figure is less than one in a trillion trillion trillion trillion trillion trillion. Again, yes, this finding does seem to call into question the adequacy of chance, and that certainly adds to the case for intelligent design.
To put the matter in perspective, Axe’s results indicate that the odds of unguided processes generating a functional protein fold are less than the odds of someone blindly firing an arrow into the Milky Way galaxy, and hitting one pre-selected atom.
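For a rough sense of scale (this comparison is not from Axe’s papers, and the galactic figure is only a commonly cited order-of-magnitude estimate of about $10^{68}$ atoms of ordinary matter in the Milky Way):

$$\frac{10^{-77}}{10^{-68}} = 10^{-9},$$

so a 1-in-$10^{77}$ chance is roughly a billion times smaller than the chance of blindly singling out one pre-selected atom in the galaxy.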
If functional protein sequences are rare, then natural selection will be unable to take proteins from one functional sequence to the next without getting stuck in maladaptive or non-beneficial stages. This suggests that the evolution of new proteins might require many mutations before any benefit is gained. This is an important discovery because it shows that in order to generate new proteins, some process is required that can “look ahead” and find the rare functional sequences that yield new proteins.
It also shows that proteins are “multi-mutation” features that require many nucleotides to be “just right” before yielding an advantage. Can such “multi-mutation” features arise by unguided processes? ID proponents are doing peer-reviewed research to address that very question.
Waiting for Multiple Mutations
In 2004, biochemist Michael Behe and physicist David Snoke published research in the journal Protein Science. They reported the results of computer simulations of the evolution of protein-protein interactions. Vital to virtually all cellular processes, these interactions require a specific “hand in glove” fit, where multiple amino acids must be properly ordered to allow the three-dimensional connection. The simulations showed that the Darwinian evolution of a simple bond between two proteins would be highly unlikely to arise in populations of multicellular organisms if it required two or more mutations to function. They concluded that “the mechanism of gene duplication and point mutation alone would be ineffective…because few multicellular species reach the required population sizes.”[2]
In 2008, Behe and Snoke’s would-be critics tried to refute them in the journal Genetics, but found that to obtain only two specific mutations via Darwinian evolution “for humans with a much smaller effective population size, this type of change would take > 100 million years.” The critics admitted this was “very unlikely to occur on a reasonable timescale.”[3]
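To build intuition for why waiting times balloon as more coordinated changes are required, here is a minimal toy simulation. It is not Behe and Snoke’s model or Durrett and Schmidt’s analysis; it is a bare-bones, Wright-Fisher-style sketch in which intermediate mutants are neutral, and the population size (200) and per-site mutation rate (0.005) are deliberately unrealistic toy values chosen only so the script finishes quickly:

```python
# Toy sketch: generations until one individual in a small population carries
# all k required (selectively neutral) mutations. Illustrative only; not the
# published models discussed in the text.
import random
from statistics import median

def wait_for_k_mutations(pop_size=200, mu=0.005, k=2, max_gens=200_000):
    """Return the generation at which some individual first carries all k
    target-site mutations; intermediates are neutral and simply drift."""
    pop = [(False,) * k for _ in range(pop_size)]
    for gen in range(1, max_gens + 1):
        new_pop = []
        for _ in range(pop_size):
            child = list(random.choice(pop))        # neutral reproduction
            for site in range(k):
                if not child[site] and random.random() < mu:
                    child[site] = True              # per-site mutation
            if all(child):
                return gen                          # all k sites mutated
            new_pop.append(tuple(child))
        pop = new_pop
    return max_gens                                 # cap (rarely reached here)

for k in (1, 2, 3):
    waits = [wait_for_k_mutations(k=k) for _ in range(5)]
    print(f"{k} required mutation(s): median wait ~ {median(waits):.0f} generations")
```

Even with these generous toy numbers the median wait typically grows steeply with each additional required mutation; with realistic population sizes and per-site mutation rates the growth is far more dramatic, which is the point the papers above quantify.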
In 2010, Axe published another peer-reviewed research paper that seemed to confirm Behe and Snoke’s results. He presented calculations modeling bacterial populations evolving a structure that required multiple mutations before yielding any benefit.[4]
Axe’s model made assumptions that were exceedingly generous in favor of Darwinian evolution. He assumed the existence of a huge population of asexually reproducing bacteria that could replicate quickly — perhaps nearly three times per day — over the course of billions of years. Bacteria’s rapid replication and enormous population sizes give them an exceptionally large mutation supply, yet even here molecular adaptations requiring more than six mutations to function would not arise in the history of the earth.
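For a rough sense of the scales involved, here is a crude back-of-the-envelope calculation. It is emphatically not Axe’s model (his analysis lets mutations accumulate sequentially and is far more careful), and every input is an order-of-magnitude assumption: a standing population of about 10^30 prokaryotes, the roughly three replications per day mentioned above, four billion years, and a per-site mutation rate on the order of 10^-9 per replication:

```python
# Crude back-of-envelope sketch (NOT Axe's model): how many replication events
# could Earth's bacteria ever supply, and how many of those events would be
# expected to produce d specific point mutations all at once?
# All inputs are rough order-of-magnitude assumptions.
POPULATION = 1e30        # assumed standing count of prokaryotes on Earth
REPL_PER_DAY = 3         # "nearly three times per day", as in the text
YEARS = 4e9              # assumed duration, in years
MU_PER_SITE = 1e-9       # assumed per-site mutation rate per replication

total_replications = POPULATION * REPL_PER_DAY * 365 * YEARS
print(f"total replication events ~ {total_replications:.1e}")

for d in range(1, 8):
    expected_hits = total_replications * MU_PER_SITE ** d
    print(f"d = {d} simultaneous specific mutations: expected occurrences ~ {expected_hits:.1e}")
```

Under this crude all-at-once assumption the expected number of occurrences drops below one at about five required mutations; Axe’s more permissive model, which lets neutral mutations accumulate one at a time, is what yields the six-mutation figure discussed here.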
Collectively, this research made important discoveries. First, it established a maximum limit on how many mutations could accumulate before conferring some advantage in multicellular organisms — and the answer here seems to be two mutations. Second, it established a maximum limit on how many mutations could accumulate before conferring some advantage in any kind of organism — and here the limit seems to be six mutations. To put it another way, if some trait in multicellular organisms requires more than two mutations before conferring an advantage, it could not arise by Darwinian evolution over the entire 4.5-billion-year history of the earth. If some trait requires more than six neutral mutations before giving an advantage, it could not arise via Darwinian processes in any kind of organism over the whole history of the earth. There is another element to Axe’s research: if the intermediate mutations are disadvantageous, the limit drops to two mutations for any kind of organism.
Empirical research by ID proponents shows that there are indeed many features that breach these limits. In 2011, Ann Gauger and Douglas Axe published a paper in BIO-Complexity, “The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway.”[5] They reported results of their laboratory experiments trying to convert one enzyme (Kbl2) to perform the function of a very similar enzyme (BioF2). Because these proteins are both members of the GABA-aminotransferase-like (GAT) family, and are believed to be very closely related, this is the sort of evolutionary conversion that evolutionists say ought to be easily accomplished under the standard co-option model. However, after trying multiple combinations of different mutations, they found otherwise:
We infer from the mutants examined that successful functional conversion would in this case require seven or more nucleotide substitutions.
This presents a serious problem for Darwinian evolution, since it exceeds the six-mutation limit established by Axe. Gauger and Axe concluded:
[E]volutionary innovations requiring that many changes would be extraordinarily rare, becoming probable only on timescales much longer than the age of life on earth. Considering that Kbl2 and BioF2 are judged to be close homologs by the usual similarity measures, this result and others like it challenge the conventional practice of inferring from similarity alone that transitions to new functions occurred by Darwinian evolution.
Their 2011 study thus provided a “disproof of concept” of the co-option model of evolution — a very important scientific discovery. However, they only looked at two proteins within a family of closely related proteins. What if other proteins in the family are more easily convertible? This research was expanded in a 2014 study, but before we get to that we must describe an important discovery made in 2010 by Gauger and biologist Ralph Seelke of the University of Wisconsin-Superior.[6] Their research team broke a gene in the bacterium E. coli required for synthesizing the amino acid tryptophan. When the gene was broken in just one place, random mutations were capable of “fixing” it. But even when only two mutations were required to restore function, Darwinian evolution seemed to get stuck, unable to regain full function. The reason why Darwinian evolution failed here is instructive.
Although one mutation restored partial function, the function was very slight. In the experiments, the gene was usually deleted before the second mutation could occur and restore full function. Essentially, it was more advantageous for the organism to delete a weakly functional gene than to continue to express it in the hope that it would “find” the mutations that fixed the gene and restored full function. This means that carrying a weakly functional gene is disadvantageous, and that it’s more advantageous to just get rid of a gene duplicate that isn’t contributing very much to the success of the organism. So this research made the important discovery that a fundamental step of the co-option model — duplicating a gene — very likely requires a deleterious mutation.
Now let’s return to the sequel to Axe and Gauger’s 2011 study. In 2014, in a landmark peer-reviewed paper published in BIO-Complexity, “Enzyme Families-Shared Evolutionary History or Shared Design? A Study of the GABA-Aminotransferase [GAT] Family,” Axe, Gauger, and biologist Mariclair Reeves studied additional proteins in the same family.[7] They showed that these proteins, too, are not amenable to an evolutionary conversion to perform the function of BioF2. They tested proteins that are closer to BioF2, or more distant from BioF2, than the enzyme in their prior study (Kbl2). Their research suggests at least four mutations would be required for this conversion. But some of these changes (such as gene duplication) would initially impose a disadvantage, which according to Axe’s 2010 research means that fewer mutations could accumulate on reasonable evolutionary timescales. In fact, the math shows that it would take some 10^15 years for the necessary mutations to arise to co-opt a protein to function like BioF2 — over 100,000 times longer than the age of the earth! They thus conclude:
Based on these results, we conclude that conversion to BioF2 function would require at least two changes in the starting gene and probably more, since most double mutations do not work for two promising starting genes. The most favorable recruitment scenario would therefore require three genetic changes after the duplication event: two to achieve low-level BioF2 activity and one to boost that activity by overexpression. But even this best case would require about 10^15 years in a natural population, making it unrealistic. Considering this along with the whole body of evidence on enzyme conversions, we think structural similarities among enzymes with distinct functions are better interpreted as supporting shared design principles than shared evolutionary histories.
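As a quick check on that comparison, under the standard estimate of a roughly 4.5-billion-year-old earth:

$$\frac{10^{15}\ \text{years}}{4.5\times 10^{9}\ \text{years}} \approx 2.2\times 10^{5},$$

i.e., more than 100,000 times the age of the earth, as stated above.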
These results are converging on the conclusion that many proteins and enzymes contain too much complex and specified information to be generated by unguided Darwinian processes on a reasonable evolutionary timescale. Some guided process that can “look ahead” and find the rare sequences that yield functional proteins is required. This is a major discovery, with implications not only for how we understand the origin of complex biological features, but also for how we design drugs and engineer solutions to diseases.
Losing Function
In 2010, Michael Behe published a peer-reviewed paper that added a nail to the coffin of neo-Darwinian evolution. Writing in the Quarterly Review of Biology, he argued that most adaptations at the molecular level “are due to the loss or modification of a pre-existing molecular function.”[8]
After reviewing a number of experimental studies of evolution in bacteria and viruses, Behe found that organisms are more likely to evolve by losing a biochemical function than by gaining one. Behe concluded that “the rate of appearance of an adaptive mutation that would arise from the diminishment or elimination of the activity of a protein is expected to be 100-1000 times the rate of appearance of an adaptive mutation that requires specific changes to a gene.”
To explain the meaning of this, let’s take a hypothetical order of insects, the Evolutionoptera, with 1 million species. Ecologists find that the extinction rate among Evolutionoptera is 1000 species per millennium, while the speciation rate (the rate at which new species arise) during the same period is 1 species. At these rates, every thousand years 1000 species of Evolutionoptera will die off while one new species develops — a net loss of 999 species per millennium. In a little over a million years (about 1,001 millennia) there should be no species of Evolutionoptera left on earth.
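The arithmetic behind that projection is simple division:

$$\frac{1{,}000{,}000\ \text{species}}{999\ \text{species lost per millennium}} \approx 1{,}001\ \text{millennia} \approx 1.0\times 10^{6}\ \text{years}.$$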
If Behe is correct, then molecular evolution faces a similar problem. If a loss- or modification-of-function adaptation is 100-1000 times more likely than a gain-of-function adaptation, then logic dictates that eventually an evolving population will run out of molecular functions to lose or modify.
Neo-Darwinian evolution cannot forever rely on examples of loss or modification-of-function mutations to explain molecular evolution. At some point, there must be a gain of function. Behe’s paper suggests that if Darwinian evolution is at work, it removes functions much faster than it creates them. This is a major discovery because it shows that a mechanism like Darwinian evolution could never be the cause of new complex features.
Something else must be generating the information for new molecular functions. Again, what process can generate the rare sequence-specific information we see in life?
The Evolutionary Informatics Lab
Another ID lab focuses on answering that precise question. As the website of the Evolutionary Informatics Lab puts it, “Evolutionary informatics … points to the need for an ultimate information source qua intelligent designer.”
The lab’s founders, William Dembski and Robert Marks, have some of the strongest credentials in the ID movement. With PhDs in both mathematics and philosophy, Dembski is one of the leading lights of ID. Marks is Distinguished Professor of Electrical and Computer Engineering at Baylor University and has over 250 scientific publications to his name, including many in the field of evolutionary computing.
The lab got off to a rough start in 2007 when Baylor administrators learned that Marks was doing ID-friendly research on the campus. A Baylor dean e-mailed Marks with the order that he “disconnect this [lab’s] web site immediately.”
Before the thought police were done, Baylor forced the Evolutionary Informatics Lab not just to remove its website from university servers, but also to return a five-figure grant. Universities aren’t known for turning down free money — but apparently the censors at Baylor preferred giving up $30,000 if the resulting research might support intelligent design.
Despite these setbacks, the lab has attracted graduate student researchers and to date has published more than a dozen peer-reviewed articles in science and engineering journals and conference proceedings (see the list below). In their papers, Dembski and Marks have developed a framework for studying evolutionary algorithms — computer programs that simulate populations of digital organisms and that, according to ID critics, show that Darwinian processes can create new information.
Dembski and Marks have quantitatively measured the amount of “active information” smuggled into an evolutionary simulation by its programmer to allow it to achieve its goal. Their analyses support “No Free Lunch” theorems — that is, the notion that without intelligent input there can be no gain in complex and specified information.
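To make the bookkeeping concrete, here is a toy calculation in the spirit of Dembski and Marks’s measure, applied to the familiar “WEASEL” target phrase that comes up repeatedly in the papers listed below. The definitions (endogenous information as −log2 p, exogenous information as −log2 q, active information as their difference) follow their published framework; treating the assisted search’s success probability as essentially 1 is a simplifying assumption made here for illustration:

```python
# Toy "active information" bookkeeping for a WEASEL-style search.
# Illustrative sketch only: q_assisted = 1.0 is an assumption, standing in for
# whatever success probability the assisted search actually has.
from math import log2

TARGET = "METHINKS IT IS LIKE A WEASEL"   # 28 characters
ALPHABET_SIZE = 27                        # 26 letters plus the space

# Probability that one blind, uniform guess produces the whole target phrase.
p_blind = (1 / ALPHABET_SIZE) ** len(TARGET)
endogenous_info = -log2(p_blind)          # difficulty of the unassisted search

q_assisted = 1.0                          # assume the assisted search essentially always succeeds
exogenous_info = -log2(q_assisted)        # 0 bits: the assisted search is "easy"

active_info = endogenous_info - exogenous_info
print(f"Endogenous information: {endogenous_info:.1f} bits")
print(f"Active information supplied by the search structure: {active_info:.1f} bits")
```

The roughly 133 bits reported here quantify how much of the search’s success is owed to structure its programmer built in rather than to blind trial and error, which is the quantity the lab’s papers track.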
Thus far, Dembski, Marks and their team have identified sources of active information in evolutionary algorithms — “Avida,” “Ev,” and other programs — that have been widely touted as refuting ID. Their research has discovered that these programs do not truly model blind and unguided Darwinian processes, but cheat because they were pre-programmed by their designers to reach their digital evolutionary goals. Their peer-reviewed technical papers include:
- Winston Ewert, William A. Dembski, Robert J. Marks II, “Measuring meaningful information in images: algorithmic specified complexity,” IET Computer Vision, Vol. 9 (6): 884-894 (December, 2015): This paper shows that algorithmic specified complexity is a useful measure of meaningful functional information by applying it to random, redundant, and meaningful computer images.
- Winston Ewert, W. A. Dembski and Robert J. Marks II, “Algorithmic Specified Complexity in the Game of Life,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, Vol. 45(4): 584-594 (April, 2015): This paper shows that algorithmic specified complexity is a useful measure of meaningful functional information by applying it to designed and non-designed features that appear in Conway’s Game of Life.
- Winston Ewert, “Complexity in Computer Simulations,” BIO-Complexity, Vol. 2014 (1): Computer scientist Winston Ewert reviews the literature claiming to evolve irreducible complexity through evolutionary computer simulations and finds that “Behe’s concept of irreducible complexity has not been falsified by computer models.”
- Winston Ewert, William A. Dembski and Robert J. Marks II, “On the Improbability of Algorithmically Specified Complexity,” Proceedings of the 2013 IEEE 45th Southeastern Symposium on Systems Theory (SSST), Baylor University, March 11, 2013, pp. 68-70: This paper develops algorithmic specified complexity (ASC) as an improved method of quantifying specification and detecting design.
- Winston Ewert, William A. Dembski, Robert J. Marks II, “Active Information in Metabiology,” BIO-Complexity, Vol. 2013 (4): The authors analyze “metabiology,” a gene-centered model of evolution developed by computer scientist and mathematician Gregory Chaitin, but find the program deviates from biological reality, requiring informational inputs donated by an intelligent source — called “active information” — and does not truly demonstrate that unguided processes can produce new information.
- Winston Ewert, W. A. Dembski and Robert J. Marks II, “Conservation of Information in Relative Search Performance,” Proceedings of the 2013 IEEE 45th Southeastern Symposium on Systems Theory, Baylor University, March 11, 2013, pp. 41-50: This paper analyzes “No Free Lunch” theorems and Dembski’s conservation-of-information arguments, and finds that searches like Darwinian evolution cannot, on average, outperform a random search.
- Winston Ewert, William A. Dembski, Ann K. Gauger, Robert J. Marks II, “Time and Information in Evolution,” BIO-Complexity, 2012 (4): This paper responds to a 2010 paper in Proceedings of the U.S. National Academy of Sciences titled “There’s plenty of time for evolution,” by Herbert S. Wilf and Warren J. Ewens. The paper finds biologically unrealistic simplifications in the model, which mean Wilf and Ewens’s “conclusion that there’s plenty of time for evolution is unwarranted.”
- Winston Ewert, W. A. Dembski and Robert J. Marks II, “Climbing the Steiner Tree — Sources of Active Information in a Genetic Algorithm for Solving the Euclidean Steiner Tree Problem,” BIO-Complexity, 2012 (1): In this paper, researchers at the Evolutionary Informatics Lab argue that intelligence is necessary to solve problems like the Steiner tree problem.
- George Montañez, Winston Ewert, W. A. Dembski and Robert J. Marks II, “A Vivisection of the ev Computer Organism: Identifying Sources of Active Information,” BIO-Complexity, 2010 (3): This paper shows that some cause other than Darwinian mechanisms is required to produce new information in Thomas Schneider’s “ev” program. The evolutionary simulation is rigged by an intelligent programmer to produce its outcomes.
- William A. Dembski and Robert J. Marks II, “The Search for a Search: Measuring the Information Cost of Higher Level Search,” Journal of Advanced Computational Intelligence and Intelligent Informatics, 14 (5):475-486 (2010): This paper argues that without information about a target, any blind search trying to find a non-trivial target is bound to fail.
- Winston Ewert, George Montañez, William Dembski and Robert J. Marks II, “Efficient Per Query Information Extraction from a Hamming Oracle,” 42nd South Eastern Symposium on System Theory, pp. 290-297 (March, 2010): This paper analyzes Richard Dawkins’s “METHINKSITISLIKEAWEASEL” evolutionary algorithm and finds that it starts with large amounts of active information — that is, information intelligently inserted by the programmer to aid the search.
- Winston Ewert, William Dembski and Robert J. Marks II, “Evolutionary Synthesis of Nand Logic: Dissecting a Digital Organism,” Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics, pp. 3047-3053 (Oct., 2009): This paper analyzes the computer simulation of evolution Avida. It shows that Avida’s programmers smuggle in “active information” to allow their simulation to find its evolutionary targets.
- William A. Dembski and Robert J. Marks II, “Bernoulli’s Principle of Insufficient Reason and Conservation of Information in Computer Search,” Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics, pp. 2647-2652 (Oct., 2009): This paper contends that in all searches — including Darwinian ones — information is conserved such that “on average no search outperforms any other.”
- William A. Dembski and Robert J. Marks II, “Conservation of Information in Search: Measuring the Cost of Success,” IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 39(5):1051-1061 (Sept., 2009): This article challenges the ability of Darwinian processes to create new functional genetic information. It finds that attempts to model Darwinian evolution via computer simulations, such as Richard Dawkins’s famous “METHINKSITISLIKEAWEASEL” exercise, start off with, as Dembski and Marks put it, “problem-specific information about the search target or the search-space structure.”
Collectively, this research rigorously demonstrates that intelligence, not unguided Darwinian mechanisms, is required to generate new information.
A Positive Case for Design
ID research reveals fine-tuning and high levels of complex and specified information (CSI) at many levels of nature — from the cosmic architecture of the universe down to complex biological features like proteins and molecular machines. But how does this make for a positive case for design?
ID uses a positive argument, based upon finding in nature the type of information and complexity that, in our experience, comes from intelligence. ID theorists begin by observing how intelligent agents act when they design things (e.g., intelligent agents generate high CSI). They use those observations to make positive predictions about what we should observe in nature if a structure was designed (e.g., designed objects will contain high CSI). Experiments and studies of nature can test those predictions (e.g., testing for high CSI). Stephen Meyer explains:
Our experience-based knowledge of information-flow confirms that systems with large amounts of specified complexity (especially codes and languages) invariably originate from an intelligent source — from a mind or personal agent.
Yet the past fifty years of biological research have found that life is fundamentally based upon:
- A vast amount of complex and specified information encoded in a biochemical language.
- A computer-like system of commands and codes that processes the information.
- Irreducibly complex molecular machines and multi-machine systems.
Where, in our experience, do language, complex and specified information, programming code, and machines come from? They have only one known source: intelligence.
A Broad Research Program
The discussion here represents just a portion of the ID research taking place. Other scientists around the world are publishing peer-reviewed pro-ID scientific papers and making scientific discoveries. Indeed, the ID movement has published dozens of peer-reviewed technical papers, and all technical research papers are expected to describe new results. By definition, new results are discoveries.
ID researchers have published their discoveries in a variety of relevant technical venues, including peer-reviewed scientific journals, peer-reviewed scientific books from mainstream university presses, trade-press books, peer-edited scientific anthologies, peer-edited scientific conference proceedings and peer-reviewed philosophy of science journals and books. Some of these scientific journals include Protein Science, Journal of Molecular Biology, Theoretical Biology and Medical Modelling, Journal of Advanced Computational Intelligence and Intelligent Informatics, Quarterly Review of Biology, Cell Biology International, The Open Cybernetics and Systemics Journal, Frontiers in Bioscience, Floriculture and Ornamental Biotechnology, Rivista di Biologia/Biology Forum, Baylor University Medical Center Proceedings, Life, Proceedings of the 2013 IEEE 45th Southeastern Symposium on Systems Theory, Physics of Life Reviews, Complexity, Frontiers in Genetics, Perspectives in Biology and Medicine, Annual Review of Genetics, and many others. At the same time, pro-ID scientists have presented their research at conferences worldwide in fields such as genetics, biochemistry, engineering, and computer science.
The best thing Mr. Heckle could do is to understand the actual arguments made by ID proponents and critique those arguments, rather than attacking straw men he has been taught by his professors or has read about on the Internet. If you’re a student interested in learning more about ID, a great place to start is the Student’s Guide to Intelligent Design.
References
[1.] Douglas D. Axe, “Extreme Functional Sensitivity to Conservative Amino Acid Changes on Enzyme Exteriors,” Journal of Molecular Biology, Vol. 301:585-595 (2000); Douglas D. Axe, “Estimating the Prevalence of Protein Sequences Adopting Functional Enzyme Folds,” Journal of Molecular Biology, Vol. 341:1295-1315 (2004).
[2.] Michael Behe and David Snoke, “Simulating Evolution by Gene Duplication of Protein Features That Require Multiple Amino Acid Residues,” Protein Science, 13: 2651-2664 (2004).
[3.] Rick Durrett and Deena Schmidt, “Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution,” Genetics, 180:1501-1509 (2008).
[4.] Douglas Axe, “The Limits of Complex Adaptation: An Analysis Based on a Simple Model of Structured Bacterial Populations,” BIO-Complexity, 2010 (4).
[5.] Ann Gauger and Douglas Axe, “The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway,” BIO-Complexity, 2011 (1): 1-17.
[6.] Ann Gauger, Stephanie Ebnet, Pamela F. Fahey, and Ralph Seelke, “Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness,” BIO-Complexity, 2010 (2): 1-9.
[7.] Mariclair A. Reeves, Ann K. Gauger, and Douglas D. Axe, “Enzyme Families-Shared Evolutionary History or Shared Design? A Study of the GABA-Aminotransferase Family,” BIO-Complexity, Vol. 2014 (4).
[8.] Michael J. Behe, “Experimental Evolution, Loss-of-Function Mutations, and ‘The First Rule of Adaptive Evolution,’” The Quarterly Review of Biology, Vol. 85(4):1-27 (December 2010).
Image: Marston Hall, Iowa State University, by Jamo2008 [GFDL or CC BY-SA 3.0], via Wikimedia Commons.