Editor’s note: Dr. Marks is Distinguished Professor of Electrical & Computer Engineering, Baylor University. He is an IEEE Fellow and an OSA Fellow. This article does not necessarily represent the views of and has not been reviewed or approved by Baylor University.
Recently, as noted already here at ENV, a computer program pretending to be a 13-year-old Ukrainian boy named Eugene Goostman was hailed as the first to pass the Turing test (TT). Eugene fooled many into believing they were having a web chat with a human.
The TT is meant to display artificial intelligence (AI). But there are two kinds of AI, Strong and Weak, and passing the TT requires only Weak AI. Strong AI seeks a manmade machine capable of displaying the intellectual abilities associated with humans. Passing the Lovelace test requires demonstration of Strong AI, e.g., writing a great novel or proving Fermat's last theorem without the creator of the machine setting up all the dominoes to knock down.
Alan Turing's test1 only demonstrates Weak AI. By the Church-Turing thesis, all modern computers are variations of the Turing machine. Passing the Turing test today is not a big deal. It means only that computers are getting faster — not smarter. Deep Blue beating Garry Kasparov at chess and the computer program Watson winning at Jeopardy! are examples of what computationally powerful computers can do.
Alan Turing hoped that the computer would someday display all the intellectual capabilities of humans. Turing lost his close friend Christopher Morcom to bovine tuberculosis while both were still in their teens. Turing lost his faith in Christianity, embraced atheism, and began a quest to show that human intelligence was materialistic.2 Turing investigated the science of computation, hoping to show someday that man was nothing more than a machine. Turing’s genius resonated with his motivation and he is today considered the undisputed father of modern computer science. His contributions are taught to all computer science students. But Turing’s larger goal to show his Turing machine was capable of matching man’s intellect has failed. Bringsjord, Bello and Ferrucci3 summarize the current state of the TT nicely:
[T]hough progress toward Turing’s dream is being made, it’s coming only on the strength of clever but shallow trickery. For example, the human creators of artificial agents that compete in present-day versions of [the] TT know all too well that they have merely tried to fool those people who interact with their agents into believing that these agents really have minds.
The statement, published in 2001, sounds as if it were written to describe the Eugene Goostman program of today.
The claim that a computer could demonstrate Strong AI was pronounced dead long before Turing proposed his test. René Descartes expressed doubts about AI in 1637:
[W]e can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.4
Roger Penrose, probably best known for sharing credit with Stephen Hawking for the Penrose-Hawking singularity theorems governing the physics of black hole formation, also does not believe computers will ever display Strong AI.5,6 Using Gödelian arguments, Penrose contends that the human capacity for creativity is not possible for a computer. Gödel's incompleteness results show that consistent formal systems built on foundational axiomatic rules, like the computer and its programs, have limits to what they can do. According to Penrose, humans surpass this limit with their ability to innovate and create. Penrose believes there must be a materialistic explanation and looks to quantum mechanics for an answer. (His conjecture does not concern so-called quantum computers, which have the same AI limitations as Turing machines.)
If Penrose is right and we find out how to make machinery akin to what is between our ears, strong AI might be possible. With current computers and current models of quantum computers, though, this is almost certainly not possible.
Here are a few other statements expressing doubt about the computer's ability to achieve Strong AI.
- "…no operation performed by a computer can create new information." Douglas G. Robertson
- "The [computing] machine does not create any new information, but it performs a very valuable transformation of known information." Leon Brillouin
- "Either mathematics is too big for the human mind or the human mind is more than a machine." Kurt Gödel
and, of course, my favorite:7
- "Computers are no more able to create information than iPods are capable of creating music." Robert J. Marks II
The limitations invoked by the law of conservation of information in computer programming have been a fundamental topic of investigation by Winston Ewert, William Dembski and me at the Evolutionary Informatics Lab. We have successfully and repeatedly debunked claims that computer programs simulating evolution are capable of generating information any greater than that intended by the programmer.8,9,10,11,12,13
The Lovelace Test and Intelligent Design
If the TT doesn't demonstrate Strong AI, what test does? Bringsjord, Bello and Ferrucci14 suggest the Lovelace test, named after Augusta Ada King, the Countess of Lovelace.15 The Countess was among those who believed computers will never be creative:
Computers can't create anything. For creation requires, minimally, originating something. But computers originate nothing; they merely do that which we order them, via programs, to do.
This brings us to the Lovelace test concerning creation of information by a machine (my paraphrase):
Strong AI will be demonstrated when a machine’s creativity is beyond the explanation of its creator.
Creativity should not be confused with surprise or the lack of an explanation facility. Some of our recent work in evolutionary development of swarming16,17 displays surprising behavior but, in retrospect, the results can be explained by examination of the computer program we wrote. Layered perceptron neural networks18 lack explanation facilities, but behave in the manner the programmer intended. The Lovelace test demands innovation and creativity beyond this level.19 In a Gödelian sense, Strong AI must create beyond the developmental level allowed by its foundational axioms.
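The distinction between surprise and creativity can be illustrated with a toy program (this is not the authors' swarm code, just a standard example): Wolfram's Rule 30 cellular automaton produces patterns that look surprising, yet every cell follows deterministically from the program text, so its behavior is fully explainable in retrospect and would not pass the Lovelace test.

```python
# Toy illustration: surprising-looking but fully explainable output.
# Rule 30 is a one-dimensional cellular automaton; each new cell is
# determined entirely by its three neighbors in the previous row.

def rule30_step(cells):
    """Apply one Rule 30 update to a row of 0/1 cells (zero boundaries)."""
    n = len(cells)
    nxt = [0] * n
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        center = cells[i]
        right = cells[i + 1] if i < n - 1 else 0
        # Rule 30: new cell = left XOR (center OR right)
        nxt[i] = left ^ (center | right)
    return nxt

row = [0] * 7
row[3] = 1  # single seed cell
for _ in range(3):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

The pattern that unfolds is chaotic enough to be used as a random number generator, yet nothing about it is creative in the Lovelace sense: every bit is traceable to the five lines of update logic the programmer wrote.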
Here's Bringsjord, Bello and Ferrucci's more formal definition of the Lovelace test (LT)20:
DefLT | Artificial agent A, designed by H, passes LT if and only if
- A outputs o;
- A's outputting o is not the result of a fluke hardware error, but rather the result of processes A can repeat;
- H (or someone who knows what H knows, and has H's resources — for example, the substitute for H might be a scientist who watched and assimilated what the designers and builders of A did every step along the way) cannot explain how A produced o.
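DefLT is a philosophical criterion, not an algorithm, but its logical structure can be made explicit. The sketch below is purely illustrative (the function and its boolean inputs are hypothetical, not from Bringsjord et al.): it encodes the three clauses as checks and shows that the decisive clause is the negated third one.

```python
# Illustrative encoding of DefLT's three clauses. The inputs are
# judgments a human evaluator would have to supply; nothing here
# actually measures creativity.

def passes_lovelace_test(agent_outputs_o: bool,
                         output_is_repeatable: bool,
                         designer_can_explain_o: bool) -> bool:
    """True iff agent A passes LT for output o, per DefLT's structure.

    Clause 1: A outputs o.
    Clause 2: o is not a fluke hardware error; A can repeat the process.
    Clause 3: H (or an equally informed observer) CANNOT explain how
              A produced o.
    """
    return (agent_outputs_o
            and output_is_repeatable
            and not designer_can_explain_o)

# A chatbot whose designer can fully trace its output fails LT,
# no matter how impressive the output looks.
print(passes_lovelace_test(True, True, designer_can_explain_o=True))
```

Note that the first two clauses only rule out flukes; the entire weight of the test rests on the third clause, which is why surprise alone (explainable after the fact) is not enough.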
When the Lovelace test is applied to humans as the agent A, Bringsjord et al.'s definition still refers to an intelligent designer H. Rigorously, science would need a complete understanding of the brain before humans could unequivocally claim to pass the Lovelace test. Yet human creativity beyond programming and experience is undeniable.
"Flash of Genius"
Do creativity and innovation exist beyond programming and experience? In his book on the psychology of invention, mathematician Jacques Hadamard21 describes his own creative mathematical thinking as wordless, sparked by mental images that reveal the entire solution to a problem.22 Penrose agrees.23 He says mathematical solutions can appear wordlessly in his mind; it may take days to work out the details even though the solution is clearly understood.
The great mathematician Carl Friedrich Gauss24 described a similar instance of his own creative experience:
Finally, two days ago, I succeeded not on account of my hard efforts, but by the grace of the Lord. Like a sudden flash of lightning, the riddle was solved. I am unable to say what was the conducting thread that connected what I previously knew with what made my success possible.
For a time, the U.S. Patent Office imposed something akin to the Lovelace test: a "flash of creative genius"25 was required for patentability.26 Regarding patents, the Supreme Court ruled in 1941:27
The new device [to be patented], however useful it may be, must reveal the flash of creative genius, not merely the skill of the calling. If it fails, it has not established its right to a private grant on the public domain.
A machine that exhibits the Supreme Court's "flash of creative genius" or, as Gauss called it, "a sudden flash of lightning" displays Strong AI. This "flash of genius" claimed by mathematicians Penrose, Gauss, and Hadamard is also experienced by creative artists such as composers and novelists. Innovation neither anticipated nor explainable by the creator of the machine is required. This is the foundation of the Lovelace test. Humans appear to pass the Lovelace test. Whether a manmade machine can do so is still an open question.
(1) Turing, A., "Computing machinery and intelligence," Mind LIX (236): 433-460 (October 1950).
(2) Paul Gray, "Computer Scientist: Alan Turing," Time Magazine, March 29, 1999.
(3) Bringsjord, Selmer, Paul Bello, and David Ferrucci. "Creativity, the Turing Test, and the (better) Lovelace test," Minds and Machines 11:3-27, 2001.
(4) René Descartes, Discourse on Method and Meditations on First Philosophy, New Haven & London: Yale University Press, pp. 34-35 (1996).
(5) Penrose, Roger. The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford: Oxford University Press, 1999.
(6) Penrose, Roger. Shadows of the Mind. Vol. 204. Oxford: Oxford University Press, 1994.
(7) Stephen C. Meyer, Signature in the Cell: DNA and the Evidence for Intelligent Design, HarperOne (2009), p. 292.
(8) William A. Dembski and Robert J. Marks II, "Conservation of Information in Search: Measuring the Cost of Success," IEEE Transactions on Systems, Man and Cybernetics A, Systems & Humans, Vol. 39, No. 5, September 2009, pp. 1051-1061.
(9) William A. Dembski and Robert J. Marks II, "The Search for a Search: Measuring the Information Cost of Higher Level Search," Journal of Advanced Computational Intelligence, Vol. 14, No. 5, 2010, pp. 475-486.
(10) William A. Dembski and R.J. Marks II, "Bernoulli's Principle of Insufficient Reason and Conservation of Information in Computer Search," Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics. San Antonio, TX, USA. October 2009, pp. 2647-2652.
(11) William A. Dembski and Robert J. Marks II, "Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information" in Bruce Gordon and William Dembski, editors, The Nature of Nature (Wilmington, Del.: ISI Books, 2011), pp. 360-399.
(12) William A. Dembski, Winston Ewert, Robert J. Marks II, "A General Theory of Information Cost Incurred by Successful Search," in Biological Information: New Perspectives, Cornell University, edited by R.J. Marks II, M.J. Behe, W.A. Dembski, B.L. Gordon, J.C. Sanford (World Scientific, Singapore, 2013), pp. 26-63.
(13) Winston Ewert, William A. Dembski and Robert J. Marks II, "Conservation of Information in Relative Search Performance," Proceedings of the 2013 IEEE 45th Southeastern Symposium on Systems Theory (SSST), Baylor University, March 11, 2013, pp. 41-50.
(14) Bringsjord, Selmer, Paul Bello, and David Ferrucci. "Creativity, the Turing Test, and the (better) Lovelace test," Minds and Machines 11:3-27, 2001.
(15) The Department of Defense's Ada programming language is named after Ada Lovelace, who is considered by some to be the first computer programmer.
(16) Winston Ewert, Robert J. Marks II, Benjamin B. Thompson, Albert Yu, "Evolutionary Inversion of Swarm Emergence Using Disjunctive Combs Control" IEEE Transactions on Systems, Man & Cybernetics: Systems, Vol. 43, No. 5, September 2013, pp. 1063-1076.
(17) Jon Roach, Winston Ewert, Robert J. Marks II and Benjamin B. Thompson, "Unexpected Emergent Behaviors From Elementary Swarms," Proceedings of the 2013 IEEE 45th Southeastern Symposium on Systems Theory (SSST), Baylor University, March 11, 2013, pp. 41-50.
(18) Russell D. Reed and R.J. Marks II, Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks, (MIT Press, Cambridge, MA, 1999).
(19) If you want a fun online application of impressive Weak AI, go to SCIgen and enter your name along with the names of some friends. Hit Generate and then PDF. SCIgen will write a random paper for you, including figures and phony references, listing you as coauthor. For non-experts, the result reads intelligently and might pass the TT, but not the Lovelace test. The program is doing exactly what the programmers intended.
(20) Bringsjord, Selmer, Paul Bello, and David Ferrucci. "Creativity, the Turing Test, and the (better) Lovelace test," Minds and Machines 11: 3-27, 2001.
(21) Engineers will recognize the Hadamard transform.
(22) Hadamard, Jacques, An Essay on the Psychology of Invention in the Mathematical Field, New York: Dover Publications (1954).
(23) Penrose, Roger. The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford University Press (1999).
(24) Gauss’s namesakes are legion. They include (a) the Gauss (metric unit of magnetic field); (b) Gaussian elimination (solving simultaneous linear equations); (c) Gauss’s Law for magnetism (one of Maxwell’s equations); (d) Gaussian noise, and many more.
(25) Flash of Genius was the title of a book later made into a movie about the patent dispute concerning the invention of the intermittent windshield wiper.
(26) Those who follow patent law will not be surprised that the policy was eventually rejected by Congress in 1952.
(27) United States Supreme Court, Cuno Engineering Corp. v. Automatic Devices Corp., 314 U.S. 84 (1941).
Image: Alan Turing Memorial, Manchester, England; Bernt Rostad/Flickr.