Editor’s note: Evolution News is delighted to welcome Brendan Dixon as a new contributor. He is a software architect who joined Biologic Institute in 2006 to design and build the software for the Stylus project. He has worked for Microsoft, IBM, and Apple.
For centuries, the apparent perfection of [nature’s] designs was taken as self-evident proof of divine creation. Charles Darwin himself expressed amazement that natural selection could produce such variety and complexity. Even today, creationism and intelligent design thrive on intuitive incredulity that an unguided, unconscious process could produce such intricate contraptions.
We now know that intuition fails us, with feathers, eyes and all living things the product of an entirely natural process. But at the same time, current ways of thinking about evolution give a less-than-complete picture of how that works. Any process built purely on random changes has a lot of potential changes to try. So how does natural selection come up with such good solutions to the problem of survival so quickly, given population sizes and the number of generations available?
By way of an answer, the article highlights recent work showing a relationship between gene regulatory networks (GRNs) and computer-based neural networks. Both consist of “nodes” with weighted connections between them. As such, GRNs and neural networks are mathematically equivalent. The significance of this equivalence is that much recent success in Artificial Intelligence, as when Google’s AlphaGo beat the world’s number two ranked Go champion, relies heavily on neural networks and so-called “deep” learning. The article, and the paper it highlights, suggest that in much the same way neural networks can learn useful patterns, GRNs too could learn patterns that would then form the basis for rapid growth in complexity, perhaps given just slight tweaks to the network via mutation.
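To make the claimed equivalence concrete, here is a minimal sketch of my own (not code from the paper): a network of nodes joined by weighted connections, updated by summing weighted inputs and squashing the result. Read the nodes as genes and the weights as regulatory influences for a GRN, or as neurons and synaptic strengths for a neural network; the arithmetic is identical, which is all the equivalence amounts to.

```python
# A tiny weighted network: the same structure describes a GRN
# (nodes = genes, weights = regulatory influence) or a neural
# network (nodes = neurons, weights = synaptic strength).
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def step(state, weights):
    """One update: each node sums its weighted inputs and squashes."""
    n = len(state)
    return [sigmoid(sum(weights[i][j] * state[j] for j in range(n)))
            for i in range(n)]

# Three nodes; weights[i][j] is the influence of node j on node i.
weights = [[0.0, 2.0, -1.0],
           [1.5, 0.0, 0.5],
           [-2.0, 1.0, 0.0]]
state = [0.1, 0.9, 0.5]
for _ in range(10):
    state = step(state, weights)
print([round(s, 3) for s in state])
```

Whether those weights encode anything useful depends entirely on how they were set, which is the crux of the training question taken up below.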
In this way, we’re told, evolution “learns” as it goes along, all without the necessity of a designer. In fact, Richard Watson at the University of Southampton is quoted as saying, “The observation that evolutionary adaptations look like the product of intelligence isn’t evidence against Darwinian evolution — it’s exactly what you should expect.”
What should we make of this proposal? I find the equivalence between GRNs and neural networks both fascinating and not wholly surprising. All idealized networks look similar and share similar properties. Such networks have long been a staple in computer science. But, as always, the hard work is in the details. Those details often get left behind once researchers start playing with their computer models.
The researchers whose work is reported in the article successfully trained their network to reliably produce images, much like a GRN would produce a phenotype — or, more accurately, affect the phenotype, since almost all organisms contain multiple GRNs contributing to the whole. They found that, once they removed “selection pressure” (by which they mean that they stopped rejecting network weights that did not lead, in some way, to their target images), the networks remembered the images and could continue to produce them even after mild mutations.
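A toy version of that experiment, as I read it, can be sketched in a few lines. This is my own illustration, not the authors’ code, and the target pattern, margin, and mutation sizes are all assumptions chosen for simplicity: weights are “selected” toward a target bit pattern, selection is then switched off, and the output is checked under a mild mutation.

```python
# Toy sketch (my own, not the paper's code): select weights toward a
# target pattern, remove selection, then mutate mildly and check that
# the "phenotype" persists.
import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # assumed target "image"
MARGIN = 0.2                        # assumed robustness margin

def output(weights):
    # Threshold each weight to produce a binary "phenotype".
    return [1 if w > 0 else 0 for w in weights]

def mismatches(weights):
    # A bit counts as matched only when its weight clears the target
    # side by a margin, mimicking selection for robust expression.
    bad = 0
    for w, t in zip(weights, TARGET):
        ok = w > MARGIN if t == 1 else w < -MARGIN
        bad += not ok
    return bad

# Selection phase: keep a random tweak only if it does not increase
# the mismatch with the target.
weights = [random.uniform(-1, 1) for _ in TARGET]
trials = 0
while mismatches(weights) > 0 and trials < 50000:
    trials += 1
    i = random.randrange(len(weights))
    candidate = list(weights)
    candidate[i] += random.uniform(-0.5, 0.5)
    if mismatches(candidate) <= mismatches(weights):
        weights = candidate

# Selection removed: apply a mild mutation freely. Because every
# weight cleared the margin, a small perturbation cannot flip a bit.
mutated = [w + random.uniform(-0.05, 0.05) for w in weights]
print(output(mutated) == TARGET)
```

The persistence here is unremarkable once stated plainly: any weight pushed safely past the decision boundary tolerates perturbations smaller than its margin. The interesting empirical question is how much of the paper’s result reduces to this.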
I am willing to take their result as it stands. But I question their model as I question nearly all computer models: What parts of reality did it leave behind (since no model accurately reflects that which it models), and what impact do those parts have? These questions matter all the more in a non-linear system, such as a GRN or neural network, whose behavior can be inherently unpredictable and which often suffers from the “Butterfly Effect,” where small changes in initial conditions, or in what the model fails to capture, have massive effects.
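The Butterfly Effect is easy to demonstrate with a standard textbook example (my choice, not drawn from the paper): the logistic map, a one-line non-linear system that is chaotic at r = 4. Two trajectories whose starting points differ by one part in ten billion become unrecognizably different within a few dozen steps.

```python
# The "Butterfly Effect" in one line of arithmetic: the logistic map
# x -> r*x*(1-x) is chaotic at r = 4, so tiny differences in the
# starting point are amplified roughly twofold per step.
def trajectory(x, steps=50, r=4.0):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)  # initial conditions differ by 10^-10
print(a, b)  # after 50 steps the two trajectories have decorrelated
```

If a model of a GRN behaves at all like this, then whatever the model omits is not a rounding error; it can dominate the outcome.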
I also wonder about the superficial similarity between GRNs and trained neural networks from another angle: When we train a neural network we do so, as Google-owned DeepMind did for AlphaGo, by feeding a single neural network (or, in the case of AlphaGo, a connected set of neural networks) millions of data sets and guiding it along the way. GRNs, on the other hand, exist within an organism that lives in a single place at a single time exposed to a singular (roughly speaking) set of selection pressures. That is, GRNs, if trained at all, must be trained over time and space. There is no single instance of the GRN to receive the data, but, rather, many instances spread throughout the organisms and across the generations.
Worse, whatever is “learned” by an organism must be passed along to its offspring. Unless the paper’s authors subscribe to some form of Lamarckism, which is exceedingly unlikely, there is no clear inheritance path, and all the training value is lost.
The similarity between GRNs and neural networks is instructive on this point: Neural networks were invented in the 1940s, with the roots of the more modern form dating to the late 1980s. But those early attempts failed. Why? Because they lacked sufficient data to train the network. AlphaGo succeeded because DeepMind trained its networks on millions and millions of Go positions (more or less). Anything less than that and AlphaGo would have failed to win. Since GRNs are distributed across time and space, no one network can receive the data necessary to succeed. No matter how similar they might be to neural networks, without concentrated training they’ll learn and remember nothing on their own.
On the other hand, if the GRN were designed, then we’d expect to see it with the correctly weighted connections to properly do its job, even in the absence of training. And that is what we see.