New Papers Explore the Utility of Active Information

Casey Luskin

William Dembski and Robert J. Marks developed the concept of active information to measure the extent to which a search function appears pre-programmed to find some target. Inspired by the theory of intelligent design, this metric has proved useful in exposing when genetic algorithms don’t truly model the power of Darwinian evolution, but rather “cheat” due to a programmer’s guidance, leading to a predetermined outcome. As explained here, active information works as follows:

Endogenous Information (I) represents the difficulty of a search in finding its target with no prior information about its location. Active Information (I+) is the amount of information smuggled in by intelligence to aid the search algorithm in finding its target. Exogenous Information (Is) then measures the difficulty the search still has in finding its target after the addition of Active Information. Thus, I+ = I – Is.
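In probability terms, this bookkeeping is just a difference of log-probabilities: the difficulty of a search is the negative log of its chance of success, and active information is the drop in difficulty once the assisting information is added. A minimal sketch (the numbers below are hypothetical, not drawn from the papers discussed here):

```python
import math

def search_difficulty(p_success: float) -> float:
    """Difficulty of a search in bits: -log2 of its success probability."""
    return -math.log2(p_success)

def active_information(p_blind: float, p_assisted: float) -> float:
    """Bits supplied to the search: the reduction in difficulty once the
    assisting information is added, i.e. log2(p_assisted / p_blind)."""
    return search_difficulty(p_blind) - search_difficulty(p_assisted)

# Hypothetical numbers: a blind search hits 1 target state in 2**20,
# while the assisted search succeeds half the time.
print(search_difficulty(2**-20))        # 20.0 bits of difficulty
print(active_information(2**-20, 0.5))  # 19.0 bits of active information
```

A useful sanity check on the formula: if the assistance contributes nothing (p_assisted equals p_blind), the active information is zero.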

Two new papers in the journal BIO-Complexity show how active information is useful in new areas, helping us to better understand evolution and its limits. 

“Measuring Active Information”

In recent years, proponents of non-Darwinian evolution have advanced ideas about “natural genetic engineering,” in which organisms can induce targeted, beneficial mutations in their own genomes in response to selection pressure. Last month in BIO-Complexity, computer scientist Jonathan Bartlett published an article, “Measuring Active Information in Biological Systems,” addressing an important related question: How can we determine whether a mutation is random and undirected, or whether it was directed? In answering this question, he finds new applications for the concept of active information.

Active information tells us how much knowledge a search function has embedded in it about the location of the target. In the context of studying the effects of mutations on an organism, Bartlett explains: “What active information measures is the alignment of the genome itself to the problem of finding viable genetic solutions to selection pressures.” Thus, in some cases a mutation may be completely “random,” meaning that it occurred due to mechanisms that were not preprogrammed to help the organism solve a problem. In other cases, a mutation may not be entirely “random,” meaning that preprogrammed mechanisms internal to the organism directed the mutation to provide some potential benefit. Bartlett explains that non-random, directed mutations are essentially a reflection of the presence of active information in a genome to produce beneficial mutations:

This is wholly compatible with Behe’s “First Rule of Adaptive Evolution,” which states that evolution will “break or blunt any functional coded element whose loss would yield a net fitness gain.” [16] The question that is posed by active information is a separate one. Does the genome contain information about what changes are likely to yield benefit? It may be that the most likely way to yield benefit is to blunt or break some particular system. If active information is present, then the blunting and breaking will be measurably tilted towards blunting and breaking systems that are likely to yield selection benefit by doing so.

The goal of active information is not to be a universal quantification of all aspects of information in biology, but rather to assess the narrow question of the information that cells contain that assists in their own evolution.

A Well-Defined System

Bartlett notes that because living organisms tend to optimize across many variables over different timescales, measuring the amount of information could be difficult. However, he explains that the well-defined system of the adaptive immune system provides an environment where active information measurements can be readily calculated. He uses this observation to produce a general model for calculating active information in genomic mutations:

The methodology described for the somatic hypermutation system can be generalized to any mutational system for which the following are reasonable parameters:

  • The cell reduces the mutation space to an area that still fully contains (or almost fully contains) the solution space.
  • The number of mutations that are required are small enough so that they can be reasonably thought of as the smallest mutation to accomplish the effect.
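To see why the first criterion matters, consider a toy calculation (the genome and region sizes below are hypothetical, chosen only for illustration; this is not Bartlett's somatic-hypermutation computation): if a cell restricts mutation to a small region that still contains the solution space, the search space shrinks, and the log2 of that reduction gives a rough measure of the active information involved.

```python
import math

def active_info_from_targeting(genome_size: int, targeted_region: int) -> float:
    """Bits gained by restricting mutation from the whole genome to a
    targeted region that still contains the solution space (a sketch)."""
    return math.log2(genome_size / targeted_region)

# Hypothetical numbers: a 4.6 Mb genome vs. a 1.5 kb targeted region.
print(round(active_info_from_targeting(4_600_000, 1_500), 1))  # 11.6 bits
```

The second criterion keeps this estimate honest: if many independent mutations were needed, the single log-ratio above would no longer capture the difficulty of the search.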

Lastly, Bartlett applies his method to an example offered by proponents of Darwinian evolution as a supposed demonstration of the power of random mutation and natural selection. The example is Richard Lenski’s well-known E. coli Long Term Evolution Experiment (LTEE) and the evolution of the Cit+ phenotype (the ability of E. coli to uptake and metabolize citrate). As Bartlett explains, the first time Lenski and his team observed the evolution of the Cit+ phenotype, it required 31,500 generations to appear. However, Van Hofwegen et al. (2016) witnessed the same trait arise in only about 12 generations and 30 days under strong selection pressure. Bartlett argues that the trait arose due to active information in the genome, which responds to selection and thereby predisposes E. coli to evolve such a trait. Bartlett finds:

E. coli contributes approximately 12.4 additional bits of information towards the search for the Cit+ mutation when under selection. This number is relative to the ordinary predisposition of E. coli to produce this mutation when not under selection, which has not been determined.
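To put 12.4 bits in perspective: each bit of active information doubles the probability of the target mutation, so 12.4 bits corresponds to a roughly 5,400-fold boost under selection. A quick check (the baseline probability below is a hypothetical placeholder, since the article notes the unselected predisposition has not been determined):

```python
import math

def probability_multiplier(bits: float) -> float:
    """How many times more likely an event becomes, given its active
    information in bits."""
    return 2 ** bits

def active_info_bits(p_baseline: float, p_selected: float) -> float:
    """Active information as the log2 ratio of the two probabilities."""
    return math.log2(p_selected / p_baseline)

# Bartlett's 12.4 bits implies a ~5,400-fold boost in the mutation's
# probability under selection (the 1e-9 baseline is hypothetical).
print(round(probability_multiplier(12.4)))       # 5405
print(active_info_bits(1e-9, 1e-9 * 2 ** 12.4))  # recovers the 12.4 bits
```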

Bartlett shows that random mutation alone does not generate such complex traits in E. coli, which indicates that classical Darwinian evolution is not the mechanism at work here. Instead, preprogrammed mechanisms are designed to allow an organism to adapt rapidly to increased selection pressures. Were these preprogrammed mechanisms intelligently designed? That’s a separate question for another day, but what Bartlett has shown is that Darwinism didn’t produce this feature; something far smarter did. Intelligent design ideas are bearing fruit in our understanding of how evolution works.

“Generalized Active Information”

A second paper in BIO-Complexity published just this week, “Generalized Active Information: Extensions To Unbounded Domains,” by Daniel Andres Diaz-Pachon and Robert J. Marks, further explores the utility of the concept of active information. They first respond to a criticism of active information made by Olle Häggström. The Swedish mathematician claimed that “there is absolutely no a priori reason to expect that the ‘blind forces of nature’ should produce a fitness landscape distributed [uniformly].” They reply by observing, “It is not that out-of-equilibrium explanations are not allowed, it is that they must be accounted for.” 

They then explain that active information can help us detect instances where probabilities depart from expected uniform distributions:

Active information can be viewed as a generalized instantiation of anomaly detection otherwise known as novelty filtering. The status quo of probabilistic uniformity is set and any significant deviation is flagged as novel. The degree of deviation from normalcy is measured by the active information. … [A]ctive information is the difference of the information for an event under equilibrium and nonequilibrium.
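This anomaly-detection reading can be sketched with a toy novelty filter (my own example, not from the paper; the function names and 3-bit threshold are arbitrary choices): fix a uniform baseline over the outcomes, measure each observed event's active information, and flag large deviations as novel.

```python
import math

def active_information(p_observed: float, n_outcomes: int) -> float:
    """I+ = log2(p_observed / p_uniform): the information gap between the
    uniform-equilibrium baseline and the observed probability."""
    return math.log2(p_observed * n_outcomes)

def is_novel(p_observed: float, n_outcomes: int, threshold_bits: float = 3.0) -> bool:
    """Toy novelty filter: flag outcomes that deviate from probabilistic
    uniformity by more than threshold_bits in either direction."""
    return abs(active_information(p_observed, n_outcomes)) > threshold_bits

# 1,024 equally likely outcomes: the uniform baseline carries 0 bits,
# while an outcome occurring with probability 0.5 carries
# log2(0.5 * 1024) = 9 bits, well past the 3-bit threshold.
print(is_novel(1 / 1024, 1024))  # False
print(is_novel(0.5, 1024))       # True
```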

As they observe, “Active information can also be seen as a statistical complexity measure.” That is because it meets criteria previously laid out by mathematicians for building such metrics, including the fact that “Active information determines the information gap between the search of a target by pure chance and the input of an expert/dumb programmer.” In light of these results, they predict that active information can be applied to build a useful model of population genetics.

Photo: From Richard Lenski’s terrific LTEE, by Brian Baer and Neerja Hajela [CC BY-SA 1.0], via Wikimedia Commons.