
You Can’t Ascribe Intelligence to an Unguided Process

There’s a new movement afoot among some evolutionary biologists to co-opt the term “intelligent design” for Darwin. It’s not the same as Dawkins’s famous line that “Biology is the study of complicated things that give the appearance of having been designed for a purpose.” In this newer view, it’s not just an appearance; it’s a reality. The process of evolution itself is learning how to get smarter as it goes.

Richard A. Watson at the University of Southampton proposes this idea at The Conversation in a piece titled, “Intelligent design without a creator? Why evolution may be smarter than we thought.” This is not your grandfather’s Darwinism:

Charles Darwin’s theory of evolution offers an explanation for why biological organisms seem so well designed to live on our planet. This process is typically described as “unintelligent” — based on random variations with no direction. But despite its success, some oppose this theory because they don’t believe living things can evolve in increments. Something as complex as the eye of an animal, they argue, must be the product of an intelligent creator.

I don’t think invoking a supernatural creator can ever be a scientifically useful explanation. But what about intelligence that isn’t supernatural? Our new results, based on computer modelling, link evolutionary processes to the principles of learning and intelligent problem solving — without involving any higher powers. This suggests that, although evolution may have started off blind, with a couple of billion years of experience it has got smarter. [Emphasis added.]

With its provocative title, the article mischaracterizes intelligent design as a supernatural explanation involving “higher powers.” In reality, ID is a scientific theory that only appeals to types of causes now known to be in operation, based on our uniform experience with causes that can produce complex specified information. Inferences to design can be made without any appeal to the “supernatural,” as in the case of inferring intelligence as the cause of Mt. Rushmore.

That said, Watson does seem to have looked into ID, or at least the scholarship of our colleague, the science historian Michael Flannery. Watson writes:

Alfred Russel Wallace (who suggested a theory of natural selection at the same time as Darwin) later used the term “intelligent evolution” to argue for divine intervention in the trajectory of evolutionary processes. If the formal link between learning and evolution continues to expand, the same term could become used to imply the opposite.

That’s an interesting connection, but as far as we’re aware, it wasn’t Wallace who introduced the term “intelligent evolution” in this context. Professor Flannery did so in the title of a book, seeking in a brief phrase to summarize Wallace’s thinking and what it implies. Still, it’s commendable that Watson appears to have done some extra reading to broaden his horizons.

What about the contention that evolution gets smarter over time? Watson’s argument begins with the analogy of neural networks that “learn” to make connections that lead to greater rewards. Those, needless to say, are designed (see “Designless Logic: Is a Neural Net a Budding Brain?”). Can he make the transition to mindless processes, or is this another case of Darwin comparing artificial selection to natural selection?

But what about evolution, can it get better at evolving over time? The idea is known as the evolution of evolvability. Evolvability, simply the ability to evolve, depends on appropriate variation, selection and heredity — Darwin’s cornerstones. Interestingly, all of these components can be altered by past evolution, meaning past evolution can change the way that future evolution operates.

The notion of evolvability has been around for some time, Watson notes. In fact, Michael Behe has given a more rigorous definition of it in an article here at Evolution News. What’s new in the notion of evolvability is the application of learning theory. Watson hopes this will give it a “much needed theoretical foundation.” In his research, he has worked to compare genes in regulatory networks with synapses in neural networks.

Our work shows that the evolution of regulatory connections between genes, which govern how genes are expressed in our cells, has the same learning capabilities as neural networks. In other words, gene networks evolve like neural networks learn. While connections in neural networks change in the direction that maximises rewards, natural selection changes genetic connections in the direction that increases fitness. The ability to learn is not itself something that needs to be designed — it is an inevitable product of random variation and selection when acting on connections.

The exciting implication of this is that evolution can evolve to get better at evolving in exactly the same way that a neural network can learn to be a better problem solver with experience. The intelligent bit is not explicit “thinking ahead” (or anything else un-Darwinian); it is the evolution of connections that allow it to solve new problems without looking ahead.
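The analogy Watson draws can be made concrete with a toy sketch. Everything below is our own illustration, not Watson’s actual model: a “genome” of connection weights is mutated at random, and a mutant is kept only when it raises an arbitrary fitness score. Selection on connections then behaves like a crude hill-climbing learner, which is exactly the parallel being drawn with reward-driven weight updates in neural networks:

```python
import random

# Toy illustration (our construction, with made-up numbers):
# random variation plus selection on "connection weights"
# behaves like hill-climbing toward higher fitness.

random.seed(0)

TARGET = [1.0, -0.5, 0.25, 0.8]  # an arbitrary "optimal" wiring

def fitness(weights):
    # Higher is better: negative squared distance from the target wiring.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

weights = [0.0] * 4                         # start "blind"
for step in range(2000):
    i = random.randrange(4)                 # random variation...
    mutant = list(weights)
    mutant[i] += random.gauss(0, 0.1)
    if fitness(mutant) > fitness(weights):  # ...filtered by selection
        weights = mutant                    # heredity: keep the better wiring

print(round(fitness(weights), 4))           # climbs toward 0, the optimum
```

Note what the sketch quietly presupposes: a fitness function, a mutation scheme, and a selection rule, all specified in advance by the programmer — the very point at issue in what follows.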

As an example of what he means, he discusses limbs. Random variation might change each limb separately, but if a regulatory network changed them all together, the next solution would be easier. Say, for instance, height would increase fitness. Having an upstream regulator change all four limbs together is an easier problem than changing them separately. This is how evolution could evolve to “learn” better ways to solve problems over time.
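The limb example can likewise be simulated as a toy search problem — again our own illustration with invented numbers, not taken from Watson’s research. Reaching a body plan in which all four limbs are longer takes fewer mutational steps when one upstream “regulator” lengthens them together than when each limb must be lengthened by a separate, independent mutation:

```python
import random

# Toy search comparison (our illustration, not from Watson's paper):
# one shared regulator vs. four independently mutating limbs.

random.seed(1)
TARGET = 10.0  # desired length for every limb

def steps_to_target(coupled):
    limbs = [0.0] * 4
    steps = 0
    while min(limbs) < TARGET:
        steps += 1
        if coupled:
            delta = random.gauss(0, 1)
            trial = [length + delta for length in limbs]  # one regulator moves all limbs
        else:
            i = random.randrange(4)
            trial = list(limbs)
            trial[i] += random.gauss(0, 1)                # each limb mutates separately
        if sum(trial) > sum(limbs):                       # keep the "fitter" (taller) variant
            limbs = trial
    return steps

def mean_steps(coupled, trials=200):
    return sum(steps_to_target(coupled) for _ in range(trials)) / trials

print(mean_steps(coupled=True), mean_steps(coupled=False))
```

On average the coupled search finishes in roughly a quarter of the steps, since every accepted mutation advances all four limbs at once. Of course, the coupling itself — the upstream regulator — is handed to the search for free, which is the objection pressed below.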

Watson is now ready to show how this kind of “intelligent design” is purely natural and requires no “divine intervention.”

So, when an evolutionary task we guessed would be difficult (such as producing the eye) turns out to be possible with incremental improvement, instead of concluding that dumb evolution was sufficient after all, we might recognise that evolution was very smart to have found building blocks that make the problem look so easy.

To recap, Watson says that evolution is “smarter than we thought.” It’s not a clunky, blind opportunist tinkering at random. It can learn. It can find easier ways to increase fitness, and therefore get better at evolving over time. No intervening intelligence is required; as evolution learns, the organism becomes more evolvable. The more difficult problems (like arriving at an eye) are bound to be solved.

Notice, however, that Watson’s “neural networks” are already designed entities. Once again, a Darwinian evolutionist has snuck information in the side door while talking about the magical power of material forces to produce rabbits from hats (see “Arrival of the Fittest: Natural Selection as an Incantation”). Gene regulatory networks, Stephen Meyer has shown, require more information to rewire. As for Watson’s example of synchronized limb evolution, that’s a post-hoc rationalization. If animals had four very different limbs, it’s likely he would be ready with a good story about how evolution “learned” to do that.

If “learning” were a law of nature for material phenomena, we would expect to find all kinds of nonliving systems acting similarly. Picture a flow of water on a mildly sloping plain. The water will find the easiest way down, and will “learn” to flow that way, carving a channel deeper over time. But does that increase its “fitness”? Are the limestone terraces we discussed here more fit than random limestone blocks? Is wind that “learns” to drop sand grains on a dune more fit than wind that scatters sand across a beach? Intuitively, something seems amiss. This kind of thinking would make Mars more fit because of its canyons and dunes.

Information is a concept unfortunately lacking in Watson’s proposal. Clearly, to build an animal from matter would require vast increases in information. The genome of a Cambrian body plan is vastly more information-rich than a primordial soup of amino acids. A human brain, we recently pointed out, has the memory capacity of the World Wide Web. Can material substances “learn” to build libraries of complex specified information? Only in a Darwinist’s dreams. Our uniform experience locates that ability in free-acting minds with real intelligence, not in material forces.

The only way Watson can tell his story is by personifying evolution, endowing it with learning ability. This is equivalent to calling a canyon smarter as it gets deeper, or the wind intelligent as it learns to pile sand higher. In all our experience, though, whenever we find a material entity employing complex specified information to act in an intelligent way (as in a computer or robot), we know that intelligence was its ultimate cause. It is a logical inference to ascribe an intelligent cause to intelligence in animals as well.

Unless they are willing to relegate their own intelligence to mindless material forces, advocates of the “evolvability” theory are well advised to avoid shooting themselves in the foot. How can Watson trust his own mind, if it is the product of mindless matter? Particles and forces are dumb. They do not act with goals, thinking through concepts to arrive at logical conclusions. They do not learn things. The concept of learning implies pre-existing intelligence.