
Neuroevolution Methods Show Significant Success

Image credit: mikemacmarketing / photo on flickr / CC BY (https://creativecommons.org/licenses/by/2.0).

Editor’s note: Dr. Yampolskiy is Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. In this series, concluding today, he asks: “What Can and Can’t Darwin’s Algorithm Compute?” See previous posts in the series here.

My analysis of the relevant literature shows that no one has succeeded at evolving non-trivial software from scratch. In other words, the Darwinian algorithm works in theory but does not work in practice when applied in the domain of software production. The reason we do not evolve software is that the space of working programs is very large and discrete. While hill-climbing-heuristic-based evolutionary computations are excellent at solving many optimization problems, they fail in domains with a non-continuous fitness landscape.71 This is also the reason we do not evolve complex life or novel engineering designs.
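To make the intuition concrete, here is a minimal sketch. It assumes a toy (1+1) evolutionary algorithm over bit strings and two illustrative fitness functions of my own choosing: a smooth landscape, where every improvement earns partial credit, and an all-or-nothing landscape of the kind that working programs present.

```python
import random

def evolve(fitness, genome_len=32, generations=2000):
    """Minimal (1+1) evolutionary algorithm over bit strings."""
    parent = [random.randint(0, 1) for _ in range(genome_len)]
    for _ in range(generations):
        # Flip each bit with probability 1/genome_len.
        child = [b ^ (random.random() < 1.0 / genome_len) for b in parent]
        if fitness(child) >= fitness(parent):  # keep non-worse mutants
            parent = child
    return fitness(parent)

# Smooth landscape: every additional correct bit raises fitness.
smooth = lambda bits: sum(bits)

# Program-like landscape: all or nothing, no credit for "almost correct".
target = [1] * 32
needle = lambda bits: 1 if bits == target else 0

print(evolve(smooth))  # reliably reaches 32: the gradient guides the search
print(evolve(needle))  # almost always 0: blind random search among 2^32 states
```

On the smooth landscape every mutation receives graded feedback, so selection has something to act on; on the needle-in-a-haystack landscape virtually every candidate scores zero, and the "evolution" degenerates into random search.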

We can conclude that 1) simulations of evolution do not produce comparably complex artifacts and 2) running evolutionary algorithms longer leads to progressively diminishing results.

Similar Conclusions

Others have come to similar conclusions: 

It seems reasonable to assume that the number of programs possible in a given language is so inconceivably large that genetic improvement could surely not hope to find solutions in the “genetic material” of the existing program. The test input space is also, in the words of Dijkstra, “so fantastically high” that surely sampling inputs could never be sufficient to capture static truths about computation.16
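A quick back-of-envelope calculation suggests why. The numbers below are illustrative assumptions (a 128-symbol alphabet and a 300-character program), not figures from the quoted source:

```python
# Illustrative assumption: programs 300 characters long over a 128-symbol
# alphabet (roughly printable ASCII). Neither number comes from the source.
candidates = 128 ** 300
print(f"~10^{len(str(candidates)) - 1} candidate strings")  # ~10^632

# For comparison: roughly 10^80 atoms in the observable universe. Exhaustive
# or random sampling of this space is hopeless, and almost none of these
# strings even parse, let alone pass a test suite.
```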

Again, this time from Dijkstra:

[C]omputing science is — and will always be — concerned with the interplay between mechanized and human symbol manipulation, usually referred to as “computing” and “programming” respectively. An immediate benefit of this insight is that it reveals “automatic programming” as a contradiction in terms.75

More specifically: 

Genetic algorithms do not scale well with complexity. That is, where the number of elements which are exposed to mutation is large there is often an exponential increase in search space size. This makes it extremely difficult to use the technique on problems such as designing an engine, a house or plane. In order to make such problems tractable to evolutionary search, they must be broken down into the simplest representation possible. Hence we typically see evolutionary algorithms encoding designs for fan blades instead of engines, building shapes instead of detailed construction plans, and airfoils instead of whole aircraft designs. The second problem of complexity is the issue of how to protect parts that have evolved to represent good solutions from further destructive mutation, particularly when their fitness assessment requires them to combine well with other parts.76
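The scaling point can be illustrated with simple arithmetic. The genome sizes below are assumptions chosen for illustration, not measurements of any real encoding:

```python
# Assumed encoding: one bit per design decision (illustrative numbers only).
whole_design = 2 ** 300        # an entire engine as a single genome
ten_modules = 10 * (2 ** 30)   # ten fan-blade-sized modules evolved separately

print(f"whole design: ~10^{len(str(whole_design)) - 1} states")  # ~10^90
print(f"ten modules:  ~10^{len(str(ten_modules)) - 1} states")   # ~10^10
# Decomposition collapses the search space by some 80 orders of magnitude,
# but only because a human supplied the decomposition and the interfaces
# between the parts -- exactly the point the quotation makes.
```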

Even John Koza himself acknowledges that it would be highly surprising if his approach could work: 

Anyone who has ever written and debugged a computer program and has experienced their brittle, highly non-linear, and perversely unforgiving nature will probably be understandably skeptical about the proposition that the biologically motivated process sketched above could possibly produce a useful computer program.15

My Challenge to the EA Community

I challenge the EA community to prove me wrong by producing an experiment that evolves non-trivial code from scratch and without human help. That would be the only way in which my findings could be shown to be incorrect.

On the positive side, the fact that it seems impossible to evolve complex code implies that we are unlikely to be able to directly evolve highly sophisticated artificially intelligent agents, which could present significant risks to our safety and security.77-83 Just imagine what would have happened if the very first time we ran a simulation of evolution on a computer, it had produced a superintelligent agent. I have shown that Programming as a problem is AI-Complete84: if GP could solve Programming, that would imply that GP = AGI (Artificial General Intelligence). But we see no experimental evidence for such a claim. In fact, it is more likely that once we have AGI, it could be used to create an intelligent fitness function for GP and so make it possible to evolve code. Genetic Programming will not be the cause of Artificial Intelligence, but a product of it.

On the Other Hand, There Is Neuroevolution

Conversely, neuroevolution methods for optimizing neural networks (weights, parameters, and deep learning architectures) show significant success in a number of domains and remain a strong contender for the creation of AGI.85 Karpathy suggests that neural networks represent the future of software engineering, since they allow difficult-to-write source code to be replaced by automated optimization based on code evaluation:

The “classical stack” of Software 1.0 is what we’re all familiar with — it is written in languages such as Python, C++, etc. It consists of explicit instructions to the computer written by a programmer. By writing each line of code, the programmer identifies a specific point in program space with some desirable behavior. … In contrast, Software 2.0 can be written in much more abstract, human unfriendly language, such as the weights of a neural network. No human is involved in writing this code because there are a lot of weights (typical networks might have millions)… Instead, our approach is to specify some goal on the behavior of a desirable program (e.g., “satisfy a dataset of input output pairs of examples”, or “win a game of Go”), write a rough skeleton of the code (e.g. a neural net architecture) that identifies a subset of program space to search, and use the computational resources at our disposal to search this space for a program that works. In the specific case of neural networks, we restrict the search to a continuous subset of the program space where the search process can be made (somewhat surprisingly) efficient with backpropagation and stochastic gradient descent. … It turns out that a large portion of real-world problems have the property that it is significantly easier to collect the data (or more generally, identify a desirable behavior) than to explicitly write the program. … Software 1.0 is code we write. Software 2.0 is code written by the optimization based on an evaluation criterion (such as “classify this training data correctly”). It is likely that any setting where the program is not obvious but one can repeatedly evaluate the performance of it (e.g. — did you classify some images correctly? do you win games of Go?) will be subject to this transition, because the optimization can find much better code than what a human can write.86
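A toy rendering of this idea, under assumptions of my own (a 17-weight sigmoid network as the “rough skeleton” and the XOR truth table as the “dataset of input output pairs”), might look like the following sketch:

```python
import numpy as np

# The "goal" is a dataset of input-output pairs (XOR); the "rough skeleton"
# is a 2-4-1 sigmoid network; the "search" is backpropagation plus gradient
# descent over the network's 17 weights.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically ~[0, 1, 1, 0]
```

The resulting “program” is just 17 floating-point numbers that no human wrote; it is the continuity of the weight space that makes the gradient search tractable.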

Explaining Hardness

Previously, a number of attempts have been made to understand what makes certain problems hard in the context of evolutionary algorithms.87 These efforts typically take inspiration from computational complexity theory, subdividing problems into GA-Easy (the search space grows linearly with the size of the input) and GA-Hard (the search space grows exponentially with the size of the input), with the problem encoded in the minimum chromosomal length.88, 89 Others, in their quest to explain hardness, have looked at the presence in the search space of deceptive attractors, which guide the search away from global maxima.90
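The standard textbook illustration of such deception is the “trap” function, sketched below (a generic construction for illustration, not drawn from reference 90 itself):

```python
# The classic "trap" function, a standard example of a deceptive landscape:
# fitness rewards zeros everywhere except at the all-ones global optimum.
def trap(bits):
    n, ones = len(bits), sum(bits)
    return n + 1 if ones == n else n - ones

print(trap([0] * 10), trap([1] * 5 + [0] * 5), trap([1] * 10))  # 10 5 11
# Every local improvement pulls the search toward all-zeros (fitness 10),
# the deceptive attractor; reaching the true optimum (fitness 11) requires
# flipping all ten bits at once, which mutation almost never does.
```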

I likewise suggest the existence of two broad classes of problems, based on whether we are optimizing an existing design or trying to discover a novel design. If a particular class of problems can be represented as an optimization problem with a smooth fitness landscape, it is likely to be solvable by EAs. However, most software engineering and design problems seem to have a discontinuous fitness landscape and so resist the evolutionary hill-climbing paradigm, with GP unable to outperform random search on such problems. Whether such design problems can be converted into optimization problems remains an open question, similar in difficulty and consequences to the famous P vs. NP91 problem.

While it seems unlikely that evolutionary algorithms will be able to solve design problems directly, it may be possible to use them to evolve advanced artificial intelligences (Darwin Machines92 in evolvable hardware93, 94) via neuroevolution. Such AIs might, in turn, be able to solve design problems. Stanley et al. likewise argue that neuroevolution may be a plausible path to strong artificial intelligence:

[B]ecause of its complexity, AGI itself is only possible to discover through an open-ended process that generates more and more complex brain-like structures indefinitely. Furthermore, open-endedness may require more than only neural networks to evolve — brains and bodies evolve together in nature, and so can morphologies evolve along with neural networks in artificial systems, providing a form of embodiment. In the long run, open-endedness could be the fuel for generating the architectures and learning algorithms that ultimately reach human-level intelligence.85
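For contrast with the genetic-programming case, here is a minimal neuroevolution sketch, under illustrative assumptions of my own (a tiny 1-8-1 tanh network and a toy target curve). The same mutate-and-select loop that fails on program text makes steady progress here, because the weight space it searches is continuous:

```python
import numpy as np

# Evolve the 25 weights of a 1-8-1 tanh network with a plain (1+lambda)
# evolution strategy -- no backprop, just mutation and selection.
rng = np.random.default_rng(1)
X = np.linspace(-2, 2, 64).reshape(-1, 1)
y = np.tanh(2 * X) + 0.5 * X                     # assumed toy behavior to match

def predict(w):
    W1, b1 = w[:8].reshape(1, 8), w[8:16]        # flat genome -> network weights
    W2, b2 = w[16:24].reshape(8, 1), w[24]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def loss(w):
    return float(np.mean((predict(w) - y) ** 2))

parent = rng.normal(size=25)
for _ in range(500):
    children = parent + 0.1 * rng.normal(size=(20, 25))  # 20 mutants per round
    best = min(children, key=loss)
    if loss(best) <= loss(parent):               # the same hill climbing that
        parent = best                            # stalls on program text...
print(f"final error: {loss(parent):.4f}")        # ...steadily shrinks here
```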

Any safety concerns we have about advanced artificial intelligence therefore remain, as an unfortunate side effect of research into neuroevolution. They are not eliminated by the fundamental limits of the Darwinian algorithm.

Acknowledgments

An earlier version of this series was published as “Why We Do Not Evolve Software? Analysis of Evolutionary Algorithms.” Evolutionary Bioinformatics. Volume 14: 1–11. 2018. © Roman V. Yampolskiy.

References

  71. Mishra, M., et al., A Study on the Limitations of Evolutionary Computation and other Bio-inspired Approaches for Integer Factorization. Procedia Computer Science, 2015. 62: p. 603-610.
  72. Hsu, F.-H., Behind Deep Blue: Building the Computer that Defeated the World Chess Champion. 2004: Princeton University Press.
  73. Silver, D., et al., Mastering the game of Go with deep neural networks and tree search. Nature, 2016. 529(7587): p. 484-489.
  74. Silver, D., et al., Mastering the game of Go without human knowledge. Nature, 2017. 550(7676): p. 354.
  75. Dijkstra, E.W., On the cruelty of really teaching computing science. Communications of the ACM, 1989. 32(12): p. 1398-1404.
  76. Mewada, S., P. Sharma, and S. Gautam, Exploration of Fuzzy System With Applications, in Handbook of Research on Promoting Business Process Improvement Through Inventory Control Techniques. 2018, IGI Global. p. 479-498.
  77. Sotala, K. and R.V. Yampolskiy, Responses to catastrophic AGI risk: a survey. Physica Scripta, 2015. 90(1): p. 018001.
  78. Babcock, J., J. Kramár, and R. Yampolskiy, The AGI containment problem, in International Conference on Artificial General Intelligence. 2016. Springer.
  79. Pistono, F. and R.V. Yampolskiy, Unethical Research: How to Create a Malevolent Artificial Intelligence, in 25th International Joint Conference on Artificial Intelligence (IJCAI-16), Ethics for Artificial Intelligence Workshop (AI-Ethics-2016). 2016.
  80. Majot, A.M. and R.V. Yampolskiy, AI safety engineering through introduction of self-reference into felicific calculus via artificial pain and pleasure, in IEEE International Symposium on Ethics in Science, Technology and Engineering. May 23-24, 2014. Chicago, IL: IEEE.
  81. Ramamoorthy, A. and R. Yampolskiy, Beyond MAD?: The Race for Artificial General Intelligence. ITU Journal: ICT Discoveries, 2017.
  82. Yampolskiy, R.V., Taxonomy of Pathways to Dangerous Artificial Intelligence, in AAAI Workshop: AI, Ethics, and Society. 2016.
  83. Yampolskiy, R. and J. Fox, Safety engineering for artificial general intelligence. Topoi, 2013. 32(2): p. 217-226.
  84. Yampolskiy, R.V., AI-Complete, AI-Hard, or AI-Easy – Classification of Problems in AI, in The 23rd Midwest Artificial Intelligence and Cognitive Science Conference. April 21-22, 2012: Cincinnati, OH, USA.
  85. Stanley, K.O., et al., Designing neural networks through neuroevolution. Nature Machine Intelligence, 2019. 1: p. 24-35.
  86. Karpathy, A., Software 2.0, in Medium. November 11, 2017: https://medium.com/@karpathy/software-2-0-a64152b37c35.
  87. Eiben, Á., P.-E. Raué, and Z. Ruttkay, GA-easy and GA-hard constraint satisfaction problems, in Constraint Processing. 1995, Springer. p. 267-283.
  88. Rylander, B. and J.A. Foster, GA Hard Problems, in GECCO. 2000.
  89. Rylander, B. and J. Foster, Genetic Algorithms, and Hardness, in WSEAS International Conference on Evolutionary Computation, Tenerife Playa, Canary Islands, Spain. 2001.
  90. Forrest, S. and M. Mitchell, What makes a problem hard for a genetic algorithm? Some anomalous results and their explanation. Machine Learning, 1993. 13(2-3): p. 285-319.
  91. Cook, S., The P versus NP problem. The Millennium Prize Problems, 2006: p. 87-104.
  92. de Garis, H., Evolvable hardware genetic programming of a Darwin machine, in Artificial Neural Nets and Genetic Algorithms. 1993. Springer.
  93. Higuchi, T., et al., Evolving hardware with genetic learning: A first step towards building a Darwin machine, in Proc. of the 2nd International Conference on Simulated Adaptive Behaviour. 1993. MIT Press.
  94. De Garis, H., et al., Building an artificial brain using an FPGA based CAM-Brain Machine. Applied Mathematics and Computation, 2000. 111(2-3): p. 163-192.