
Some Possible Reasons for the Limited Success of Evolutionary Algorithms

Photo credit: Arif Riyanto via Unsplash.

Editor’s note: Dr. Yampolskiy is Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. In this series, he asks: “What Can and Can’t Darwin’s Algorithm Compute?” See the previous posts in the series here.

A number of possible explanations for “Why don’t we evolve software?” can be considered. I have tried to be as comprehensive as possible below, though I may have overlooked some plausible explanations.

Some Possible Explanations

  • Incompetent programmers. It is theoretically possible, but highly unlikely, that the thousands of scientists working on evolutionary computation have all failed to implement the Darwinian algorithm correctly (a minimal sketch of that algorithm appears after this list). 
  • Non-representative algorithms. Some56 have suggested that evolutionary algorithms do not accurately capture the theory of evolution, but of course that would imply that the theory itself is not specified in sufficient detail to make falsifiable predictions. If, on the other hand, more detailed specifications are available to GP believers, it is up to them to implement them as computer simulations for testing purposes. However, no such work is known to have succeeded, and the attempts that do exist have failed to evolve software. 
  • Inadequate fitness functions. A fitness function for a complex software product is difficult to specify and may be as complex as (or even more complex than) the software we want to evolve, since it has to consider all possible use cases and pass all unit tests (see the unit-test-based fitness sketch after this list). This may be the Achilles’ heel of GP, but it is equally an objection to the feasibility of programming in general, as both must convert a software specification into source code. If human programmers and biological evolution succeed under such constraints, so should Darwinian simulations.
  • The halting problem. Alan Turing proved61 that it is impossible to determine whether an arbitrary program halts, but this is also a problem for human programmers, and it can be addressed easily by placing time limits on candidate solutions, as the sketch after this list does. 
  • Program correctness. If we require evolved software to be provably correct, this presents a problem, as GP does not verify produced designs but only tests them against specific unit tests. Likewise, we cannot rely on automated software verification, as it is still an unsolved problem4 in the general case. This is not really a problem, though, since the majority of human-written software is never proven correct, and only a small portion of the software engineering process relies on formal specification and Test-Driven Development.
  • Inappropriate solutions. The literature on EA is full of examples62 of the surprising creativity of Darwinian algorithms, resulting in solutions that match the letter of the design specification but not its spirit. This is similar to human-produced software, which likewise offers numerous examples of failing the goals of the initial design.63
  • Insufficient complexity of the environment (not enough data, poor fitness functions). It is possible that the simulated environment is not complex enough to generate high-complexity outputs in evolutionary simulations. This does not seem correct, as the Internet presents a highly complex landscape in which many self-modifying computer viruses roam.64 Likewise, virtual worlds such as Second Life and many others present close approximations to the real world and are certainly more complex than the early Earth was. “A skeptic might insist that an abstract environment would be inadequate for the evolution …, believing instead that the virtual environment would need to closely resemble the actual biological environment in which our ancestors evolved. Creating a physically realistic virtual world would require a far greater investment of computational resources than the simulation of a simple toy world or abstract problem domain (whereas evolution had access to a physically realistic real world ‘for free’). In the limiting case, if complete microphysical accuracy were insisted upon, the computational requirements would balloon to utterly infeasible proportions.”39 Requiring more realistic environmental conditions may result in an increase in necessary computational resources, a problem addressed in the next bullet. Eiben and Smith suggest that evolvable hardware may provide complexity that is currently missing from software-only emulations: “The use of real hardware overcomes the principal deficiency of software models, which lack the richness of matter that is a source of challenges and opportunities not yet matched in artificial algorithms.”41
  • Insufficient resources (compute, memory). From the history of computer science, we know of many situations (speech recognition, NN training) where we had a correct algorithm but insufficient computational resources to run it to success. It is possible that we simply don’t have hardware powerful enough to emulate evolution. I will address this possibility in my next post.
  • Software design is not amenable to evolutionary methods. The space of software designs may be discrete, with no continuous path of incremental fitness improvements leading to the desired solutions. This is possible, but it implies that the original goals of GP are unattainable and misguided. In addition, since a clear mapping exists between programs as solutions to computational problems and animals as solutions to environmental problems, this would also imply that the current explanation for the origin of species is incorrect.65 
  • The Darwinian algorithm is incomplete or wrong. Lastly, we have to consider the possibility that the inspiration behind evolutionary computation, the Darwinian algorithm itself, is wrong or at least incomplete. If that were true, computer simulations of the algorithm would fail to produce results comparable to what we observe in nature, and a search for an alternative algorithm would need to take place. This would be an extraordinary claim, and it would require that we first discard all the other possible explanations on this list. 
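
To make the first bullet concrete, here is a minimal sketch of the Darwinian loop that evolutionary computation implements: evaluate fitness, select the fitter individuals, recombine, mutate, repeat. The bit-string genomes, the OneMax fitness function (count the 1-bits), and all parameter values are illustrative assumptions chosen for brevity, not a description of any particular GP system.

```python
import random

GENOME_LEN = 32
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 1 / GENOME_LEN

def fitness(genome):
    return sum(genome)  # OneMax: more 1-bits is fitter

def tournament(pop, k=3):
    # Selection: the fittest of k randomly chosen individuals wins.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Recombination: splice two parents at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome):
    # Variation: flip each bit with small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population = [mutate(crossover(tournament(population),
                                   tournament(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}")
```

Mainstream EA frameworks are elaborations of exactly this loop, which is why it is so unlikely that thousands of practitioners have all implemented it incorrectly; the question is why the loop, correctly implemented, still does not produce nontrivial software.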
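
The fitness-function and halting-problem bullets can be made concrete in the same spirit: score a candidate program by the fraction of unit tests it passes, and kill any evaluation that exceeds a time limit rather than trying to decide whether it halts. The squaring target, the UNIT_TESTS table, and the eval-based execution of candidates below are hypothetical stand-ins; a real GP system would execute an evolved program representation instead.

```python
import multiprocessing

# Hypothetical unit tests for the program we want to evolve (integer
# squaring): each pair maps an input to its expected output.
UNIT_TESTS = [(0, 0), (1, 1), (2, 4), (3, 9), (5, 25)]

def _run(candidate_src, x, queue):
    # Execute the candidate in a throwaway process so a non-halting or
    # crashing candidate cannot take down the evaluator.
    try:
        queue.put(eval(candidate_src, {"x": x}))
    except Exception:
        queue.put(None)

def fitness(candidate_src, timeout=0.5):
    """Fraction of unit tests the candidate passes; evaluations that
    exceed the time limit are killed and counted as failures (the
    practical workaround for the halting problem noted above)."""
    passed = 0
    for x, expected in UNIT_TESTS:
        queue = multiprocessing.Queue()
        proc = multiprocessing.Process(target=_run,
                                       args=(candidate_src, x, queue))
        proc.start()
        proc.join(timeout)
        if proc.is_alive():      # did not halt in time: terminate it
            proc.terminate()
            proc.join()
            continue
        if not queue.empty() and queue.get() == expected:
            passed += 1
    return passed / len(UNIT_TESTS)

if __name__ == "__main__":
    print(fitness("x * x"))   # 1.0 -- passes all five tests
    print(fitness("x + x"))   # 0.4 -- passes only (0, 0) and (2, 4)
```

Note that the unit tests already encode the full specification of the target program; for anything more complex than squaring, writing such a table exhaustively is exactly the difficulty the inadequate-fitness-functions bullet describes.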

Lessons from History

Perhaps we can learn some answers from similar historical conundrums. The earliest work on artificial neurons was done in 1943 by McCulloch and Pitts,66 and while research on Artificial Neural Networks (ANN) continued,67 until 2010 it would have been very logical to ask: “Why don’t artificial neural networks perform as well as natural ones?” Today, deep neural networks frequently outperform their human counterparts.68, 69 But it may still be helpful to answer this question about neural networks, to see how it was resolved. Stuhlmüller succinctly summarizes the answer given by Ghahramani: “Why does deep learning work now, but not 20 years ago, even though many of the core ideas were there? In one sentence: We have more data, more compute, better software engineering, and a few algorithmic innovations …”70

Tomorrow, “Neuroevolution Methods Show Significant Success.”

References:

  1. Turing, A., On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 1936. 2(42): p. 230-265.
  2. Lehman, J., et al., The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities. arXiv preprint arXiv:1803.03453, 2018.
  3. Yampolskiy, R.V. and M. Spellchecker, Artificial Intelligence Safety and Cybersecurity: a Timeline of AI Failures. arXiv preprint arXiv:1610.07997, 2016.
  4. Nachenberg, C., Computer virus-antivirus coevolution. Communications of the ACM, 1997. 40(1): p. 46-51.
  5. Yampolskiy, R.V., On the origin of synthetic life: attribution of output to a particular algorithm. Physica Scripta, 2016. 92(1): p. 013002.
  6. McCulloch, W.S. and W. Pitts, A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 1943. 5(4): p. 115-133.
  7. LeCun, Y., Y. Bengio, and G. Hinton, Deep learning. Nature, 2015. 521(7553): p. 436-444.
  8. Yampolskiy, R.V., Artificial Superintelligence: a Futuristic Approach. 2015: Chapman and Hall/CRC.
  9. Yampolskiy, R., Artificial Intelligence Safety and Security. 2018: CRC Press.
  10. Stuhlmüller, A., 50 things I learned at NIPS 2016, in NIPS. 2016: Available at: https://blog.ought.com/nips-2016-875bb8fadb8c.