Probabilistic Programming and Conservation of Information

MIT scientists have advanced a new kind of computer language that enables “probabilistic programming,” Larry Hardesty writes for MIT News. The new algorithms allow a programmer to write in fifty lines of code what used to take thousands. Hardesty emphasizes that this advance came at a high cost, and that cost is just what William Dembski describes in the chapter on “Conservation of Information” in his newest book, Being as Communion.

Let’s say you want to write facial recognition software. This task, which humans perform so effortlessly, has long been a challenge for artificial intelligence. Traditional attempts have been deterministic, trying to specify all the variables in advance. Probabilistic programming, by contrast, iterates attempts at a match until an acceptable level of success is achieved. “It means canvassing lots of rival possibilities and selecting the one that seems most likely,” Hardesty explains. An MIT-developed language called Picture uses this method:

In a probabilistic programming language, the heavy lifting is done by the inference algorithm — the algorithm that continuously readjusts probabilities on the basis of new pieces of training data. In that respect, Kulkarni and his colleagues had the advantage of decades of machine-learning research. Built into Picture are several different inference algorithms that have fared well on computer-vision tasks. Time permitting, it can try all of them out on any given problem, to see which works best. [Emphasis added.]
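
To get a feel for what such an inference algorithm does, here is a toy sketch in Python. It is our illustration, not MIT’s code: the rival hypotheses and the training data are invented, but the loop shows the essential move of continuously readjusting probabilities as new data arrive.

```python
# A minimal sketch of probabilistic inference: maintain probabilities
# over rival hypotheses and readjust them as each new piece of
# training data arrives. Hypotheses and data are invented for
# illustration only.

# Three rival hypotheses about a coin's bias toward heads.
hypotheses = {"fair": 0.5, "heads-biased": 0.8, "tails-biased": 0.2}

# Start with no reason to prefer any hypothesis.
posterior = {name: 1.0 / len(hypotheses) for name in hypotheses}

# Observed training data: True = heads, False = tails.
observations = [True, True, False, True, True]

for heads in observations:
    # Reweight each hypothesis by how well it predicts the observation.
    for name, p_heads in hypotheses.items():
        likelihood = p_heads if heads else 1.0 - p_heads
        posterior[name] *= likelihood
    # Renormalize so the probabilities again sum to one.
    total = sum(posterior.values())
    posterior = {name: p / total for name, p in posterior.items()}

print(posterior)  # "heads-biased" now carries most of the probability.
```

Picture’s inference algorithms are far more sophisticated, but the direction of information flow is the same: probabilities are revised in light of data the programmer chose to supply.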

The problem fits exactly what Dembski describes in his discussion of the “search for a search” (S4S). In his previous technical book, No Free Lunch, Dembski proved from the “No Free Lunch” (NFL) theorems that no evolutionary algorithm is superior to blind search once all the costs of obtaining additional information are factored in. For instance, the cost of searching for a treasure map meets or exceeds the expected cost of digging around the island at random. The treasure hunter first has to pay the cost of finding the right treasure map: a search for a search. The cost of the S4S has to be cashed out either in information from some other source or in trial and error; thus, information is conserved.
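
Dembski’s technical work with Robert Marks makes this bookkeeping explicit. In their papers on conservation of information in search, if blind search hits the target with probability p and an assisted search hits it with probability q, the assistance is measured as “active information,” which must be paid for somewhere upstream:

```latex
% Endogenous information: the difficulty of the problem for blind search.
I_\Omega = -\log_2 p

% Exogenous information: the difficulty remaining under the assisted search.
I_S = -\log_2 q

% Active information: the improvement the assistance buys. Conservation of
% information says this quantity must be supplied from outside the search.
I_+ = I_\Omega - I_S = \log_2 \frac{q}{p}
```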

In the current case, the MIT computer scientists are also conducting a search for a search: finding a method for recognizing faces out of all possible algorithms that attempt the task. Notice that the researchers “had the advantage of decades of machine-learning research.” That prior work reduced their task to trying “several different inference algorithms that have fared well on computer-vision tasks.” In other words, they didn’t find Picture’s algorithms by blind search. They first had to pay the cost of all that prior information, just as the treasure hunter had to either receive or test information to find a reliable treasure map.
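
In code, a search for a search might look something like the following sketch. All the names here are hypothetical stand-ins, not Picture’s actual API. The thing to notice is that the candidate shortlist is itself information: it had to be assembled by decades of prior research before this loop could ever run.

```python
# A sketch of a "search for a search": try a shortlist of candidate
# algorithms on held-out tasks and keep the one that works best.
# The algorithms and scoring are placeholder stand-ins.

import random

def algorithm_a(task):  # stand-in for a real inference algorithm
    return random.random()

def algorithm_b(task):
    return random.random()

def algorithm_c(task):
    return random.random()

# This shortlist is not found by blind search; it encodes the prior
# knowledge of which algorithms "have fared well" on such tasks.
candidates = [algorithm_a, algorithm_b, algorithm_c]

def accuracy(algorithm, validation_tasks):
    """Score an algorithm by its average result on held-out tasks."""
    return sum(algorithm(t) for t in validation_tasks) / len(validation_tasks)

validation_tasks = ["face_1", "face_2", "face_3"]  # placeholder data
best = max(candidates, key=lambda alg: accuracy(alg, validation_tasks))
print("Selected:", best.__name__)
```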

The article might seem to suggest that you can still find a successful target for free by letting the machine learn the information:

Moreover, Kulkarni says, Picture is designed so that its inference algorithms can themselves benefit from machine learning, modifying themselves as they go to emphasize strategies that seem to lead to good results. “Using learning to improve inference will be task-specific, but probabilistic programming may alleviate re-writing code across different problems,” he says. “The code can be generic if the learning machinery is powerful enough to learn different strategies for different tasks.”

But do you see where the cost of information snuck in? The machine will never “learn” anything by blind search unless the intelligent programmer provides it with the right information about what constitutes success. Information must also be supplied from outside the machine, in both the hardware and the software. Conservation of Information is not violated.
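
A minimal sketch of what “learning to improve inference” could look like makes the point concrete. Everything here is invented for illustration, but note where the information enters: the score function, written by a programmer, is what defines a “good result” in the first place.

```python
# A sketch of strategy weights shifting toward whatever yields good
# results. The strategies and the success criterion are hypothetical.

import random

def score(output):
    # Programmer-supplied success criterion; without it the machine
    # has no notion of which strategy "works."
    return -abs(output - 42)

strategies = {
    "coarse": lambda: random.uniform(0, 100),
    "fine": lambda: random.uniform(30, 50),
}
weights = {name: 1.0 for name in strategies}

for _ in range(200):
    # Choose a strategy in proportion to its current weight.
    name = random.choices(list(weights), weights=list(weights.values()))[0]
    result = strategies[name]()
    # Emphasize strategies that seem to lead to good results.
    if score(result) > -5:
        weights[name] *= 1.05

print(weights)  # "fine" ends up favored, but only relative to score().
```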

How does this principle intersect with evolutionary theory? In both books, Dembski shows that no evolutionary algorithm improves on blind search, once the accounting is done. Darwinian biologists mistakenly think that natural selection is an algorithm that drives organisms to higher fitness. The mathematical models that attempt to show this, however, all violate Conservation of Information by inserting intelligently designed information into the model.

The well-known analogy to evolution from Richard Dawkins’s book The Blind Watchmaker is a case in point. Dawkins got a computer algorithm to write the Shakespearean line “methinks it is like a weasel” in a finite number of iterations, starting from a random sequence. What he failed to factor in was the information he himself provided: what the target was, and what constituted success. The same oversight plagues all evolutionary algorithms and models, such as Avida. Accounting for the extra information proves that no evolutionary algorithm is superior to blind search.
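
Dawkins’s “weasel” procedure is simple enough to sketch in a few lines of Python (this is a reconstruction of the published description, not his original code). The target string and the fitness measure, the very things that make the search succeed, are handed to the algorithm by the programmer.

```python
# Dawkins's "weasel" algorithm, reconstructed. Watch where the
# information comes from: TARGET and fitness() are supplied by the
# programmer, which is exactly the accounting Dembski insists on.

import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"   # programmer-supplied target
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate):
    """Programmer-supplied success measure: letters matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    """Copy the parent, randomly changing each character at the given rate."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else ch
        for ch in parent
    )

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET:
    # Breed offspring and keep whichever string is closest to the target.
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=fitness)
    generation += 1

print(f"Reached target in {generation} generations.")
```

Delete the TARGET constant and the fitness function and the loop degenerates into blind search over 27^28 (roughly 10^40) possible strings.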

Dembski writes:

The point to realize is that those whose job it is to find the right optimization procedure (e.g., operations research people) must, for a given search, try to choose one search strategy that works well to the exclusion of other search strategies that work less well. Such a choice of strategy identifies one possibility to the exclusion of others within a matrix of possibility (the matrix, in this instance, consisting of search strategies, or classes of different types of searches). It follows that such a choice of search strategy entails an input of information into the search in the exact actualize-exclude informational sense emphasized throughout this book. (Being as Communion, pp. 149-150)

What about biology? Don’t organisms succeed at their searches for fitness when they adapt to changing environments? Of course they do, but only because of the information programmed into them. It’s analogous to the cost of the hardware, software, and algorithms in a computer. Such information is explainable by a mind, but is exceedingly improbable by blind search.

MIT’s advance in probabilistic programming is commendable, but it illustrates the tremendous informational investment required to reduce programs to fifty lines of efficient code: decades of research, testing, and insight. It also cost a great deal of money:

To make machine-learning applications easier to build, computer scientists have begun developing so-called probabilistic programming languages, which let researchers mix and match machine-learning techniques that have worked well in other contexts. In 2013, the U.S. Defense Advanced Research Projects Agency, an incubator of cutting-edge technology, launched a four-year program to fund probabilistic-programming research.

When you hear reports about the success of evolutionary algorithms that appear to explain the spontaneous emergence of complex organisms, be sure to look for the information snuck in through the side door. Information always comes at a cost.

Being as Communion is the third of Dembski’s books, after The Design Inference and No Free Lunch, to present the theory of intelligent design with rigorous logic and math. It is also the shortest and the most accessible to the layperson.

The fundamental stuff of the universe is not matter and energy, Dembski argues, but information. If you’re interested in learning more about Conservation of Information, search, probability, and related concepts, check out this engaging book, which not only explains the principles clearly and cogently but also defends a startling new way of looking at reality.

Evolution News

Evolution News & Science Today (EN) provides original reporting and analysis about evolution, neuroscience, bioethics, intelligent design and other science-related issues, including breaking news about scientific research. It also covers the impact of science on culture and conflicts over free speech and academic freedom in science. Finally, it fact-checks and critiques media coverage of scientific issues.
