
For AI to Be Creative, Here’s What It Would Take

Photo credit: Micha L. Rieser, via Wikimedia Commons.

Editor’s note: We are delighted to present an excerpt from Chapter 2 of the new book Non-Computable You: What You Do that Artificial Intelligence Never Will, by computer engineer Robert J. Marks, director of Discovery Institute’s Bradley Center for Natural and Artificial Intelligence.

Selmer Bringsjord and his colleagues have proposed the Lovelace test as a substitute for the flawed Turing test. The test is named after Lady Ada Lovelace (1815-1852).

Bringsjord defined software creativity as passing the Lovelace test: a program is creative if it does something that can be explained neither by its programmer nor by an expert in computer code. Computer programs often generate unexpected and surprising results. But the question is whether the computer creates a result that the programmer, looking back, cannot explain.

When it comes to assessing creativity (and therefore consciousness and humanness), the Lovelace test is a much better test than the Turing test. If an AI truly produces something surprising that its programmers cannot explain, the Lovelace test will have been passed, and we might in fact be looking at creativity. So far, however, no AI has passed it. There have been many cases where a machine looked as if it were creative, but on closer inspection the appearance of creativity fades.

Here are a couple of examples.

AlphaGo

A computer program named AlphaGo was taught to play GO, the most difficult of all popular board games. AlphaGo was a monumental contribution to machine intelligence. AI had already mastered tic-tac-toe, then the more complicated game of checkers, and then the still more complicated game of chess. Conquest of GO remained an unmet goal of AI until AlphaGo finally achieved it.

In a match against (human) world champion Lee Sedol in 2016, AlphaGo made a surprising move, now famous as move 37 of the second game. Those who understood the game described the move as ingenious and unlike anything a human would ever do.

Were we seeing the human attribute of creativity in AlphaGo beyond the intent of the programmers? Does this act pass the Lovelace test? 

The programmers of AlphaGo claim that they did not anticipate the unconventional move. This is probably true. But AlphaGo was trained by its programmers to do one thing: play GO, a board game with fixed rules in a static, never-changing arena. And that’s what the AI did, and did well. It applied programmed rules within a narrow, rule-bound game.

So, no. The Lovelace test was not passed. If the AlphaGo AI were to perform a task not programmed, like beating all comers at the simple game of Parcheesi (pictured above), the Lovelace test would be passed. But as it stands, AlphaGo is not creative. It can only perform the task it was trained for, namely playing GO. If asked, AlphaGo is unable to even explain the rules of GO.

This said, AI can appear smart when it generates a surprising result. But surprise does not equate to creativity. When a computer program is asked to search through a billion designs to find the best, the result can be a surprise. But that isn’t creativity. The computer program has done exactly what it was programmed to do.
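As a concrete illustration, here is a minimal sketch (a hypothetical example, not code from any project described here): a brute-force search over a million candidate “designs” whose winner may surprise us, even though it is fully determined by the scoring rule we wrote.

```python
def score(design: int) -> int:
    # Hypothetical figure of merit, standing in for a real evaluation.
    return (design * 2_654_435_761) % 1_000_003

# Deterministic search over a million hypothetical designs.
best = max(range(1_000_000), key=score)
print(best)  # unanticipated, perhaps, but fully determined by the code above
```

The winning design is hard to guess in advance, but the program has only computed what it was written to compute.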

The Sacrificial Dweeb

Here’s another example from my personal experience. The Office of Naval Research contracted Ben Thompson, of Penn State’s Applied Research Lab, and me to evolve swarm behavior. Simple swarm rules can result in unexpected swarm behavior like stacking Skittles. Given simple rules, finding the corresponding emergent behavior is easy: just run a simulation. But the inverse design problem is harder. If you want a swarm to perform some task, what simple rules should the swarm bugs follow? To solve this problem, we applied an evolutionary computing AI, which searched through thousands of possible rule sets to find the one whose emergent behavior came closest to the desired performance.
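The flavor of that search is easy to sketch. Here is a minimal, hypothetical version in Python: the rule encoding, mutation scheme, population size, and fitness function are all stand-ins, not the code we actually wrote for the project.

```python
import random

RULES_PER_SET = 8   # hypothetical number of parameters per rule set
POP_SIZE = 50       # candidate rule sets per generation

def fitness(rules: list[float]) -> float:
    """Placeholder: simulate the swarm under these rules and score how
    closely the emergent behavior matches the desired performance."""
    return -sum((r - 0.5) ** 2 for r in rules)  # stand-in objective

def mutate(rules: list[float]) -> list[float]:
    """Perturb each parameter slightly to explore nearby rule sets."""
    return [r + random.gauss(0.0, 0.1) for r in rules]

population = [[random.random() for _ in range(RULES_PER_SET)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    # Selection: keep the better-scoring half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Variation: refill the population with mutated copies of survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print(max(population, key=fitness))  # best rule set the search found
```

Nothing in this loop invents anything; it only keeps and perturbs whatever scores well on the fitness function we supplied.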

One problem we looked at involved a predator-prey swarm. All the action took place in a closed square virtual room. Predators, called bullies, ran around chasing prey, called dweebs. Bullies captured dweebs and killed them. We wondered what behavior would emerge if the goal was to maximize the survival time of the dweeb swarm, measured as the time until the last dweeb was killed.
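That fitness measure is easy to sketch as well. In the hypothetical simplification below, bullies chase the nearest dweeb at unit speed and the dweebs’ evolved movement rules are elided; fitness is simply the number of simulation steps until the last dweeb is caught.

```python
import math

CATCH_RADIUS = 1.0  # a bully this close to a dweeb kills it

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def survival_time(bullies, dweebs, max_steps=10_000):
    """Fitness: number of simulation steps until the last dweeb is killed."""
    bullies, dweebs = list(bullies), list(dweebs)
    for step in range(max_steps):
        if not dweebs:
            return step  # the last dweeb has just been killed
        # Each bully moves one step toward the nearest surviving dweeb.
        for i, b in enumerate(bullies):
            target = min(dweebs, key=lambda d: dist(b, d))
            dx, dy = target[0] - b[0], target[1] - b[1]
            n = math.hypot(dx, dy) or 1.0
            bullies[i] = (b[0] + dx / n, b[1] + dy / n)
        # (The dweebs' candidate movement rules would be applied here.)
        # Remove any dweeb within catch range of a bully.
        dweebs = [d for d in dweebs
                  if all(dist(b, d) > CATCH_RADIUS for b in bullies)]
    return max_steps

# Example: one bully, two stationary dweebs.
print(survival_time(bullies=[(0.0, 0.0)], dweebs=[(50.0, 50.0), (90.0, 10.0)]))
```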

After running the evolutionary search, we were surprised by the result: the dweebs sacrificed themselves in order to maximize the overall life of the swarm.

This is what we saw: A single dweeb captured the attention of all the bullies, who chased it in circles around the room. Around and around they went, adding seconds to the overall life of the swarm. During the chase, all the other dweebs huddled in a corner of the room, shaking with what appeared to be fear. Eventually the pursuing bullies killed the sacrificial dweeb, and pandemonium broke out as the surviving dweebs scattered in fear. Then another sacrificial dweeb emerged, and the process repeated: the new sacrificial dweeb kept the bullies running in circles while the remaining dweebs cowered in a corner.

The sacrificial dweeb result was unexpected, a complete surprise. There was nothing written in the evolutionary computer code explicitly calling for these sacrificial dweebs. Is this an example of AI doing something we had not programmed it to do? Did it pass the Lovelace test? 

Absolutely Not

We had programmed the computer to sort through millions of strategies that would maximize the life of the dweeb swarm, and that’s what the computer did. It evaluated options and chose the best one. The result was a surprise, but it does not pass the Lovelace test for creativity. The program did exactly what it was written to do. And the seemingly frightened dweebs were not, in reality, shaking with fear; humans tend to project human emotions onto non-sentient things. The dweebs were simply making rapid adjustments to stay as far as possible from the nearest bully. That is what they were programmed to do.
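The rule the dweebs were executing is simple to state. Here is a minimal sketch of that kind of flee-the-nearest-bully update, with simplified geometry and hypothetical parameters rather than the evolved ones:

```python
import math

def dweeb_step(dweeb, bullies, speed=1.0, room=100.0):
    """Move directly away from the nearest bully, clamped to the room walls.
    A simplified stand-in for the evolved rule described above."""
    nearest = min(bullies, key=lambda b: math.hypot(b[0] - dweeb[0],
                                                    b[1] - dweeb[1]))
    dx, dy = dweeb[0] - nearest[0], dweeb[1] - nearest[1]
    n = math.hypot(dx, dy) or 1.0  # avoid division by zero if co-located
    x = min(room, max(0.0, dweeb[0] + speed * dx / n))
    y = min(room, max(0.0, dweeb[1] + speed * dy / n))
    return (x, y)
```

When the identity of the nearest bully changes from step to step, the flee direction changes with it, so a dweeb pinned against a wall or corner makes rapid little adjustments in place. That jitter is all the “trembling” ever was.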

If the sacrificial dweeb action and the unexpected GO move against Lee Sedol do not pass the Lovelace test, what would? The answer: anything outside what the code was programmed to do.

Here’s an example from the predator-prey swarm. The Lovelace test would be passed if some dweebs became aggressive and started attacking and killing lone bullies, a potential action we did not program into the suite of possible strategies. But that didn’t happen, and because the ability of a dweeb to kill a bully is not written into the code, it never will.

Likewise, without additional programming AlphaGo will never engage opponent Lee Sedol in trash talk or psychoanalyze Sedol to get a game edge. Either of those things would be sufficiently creative to pass the Lovelace test. But remember: the AlphaGo software as written could not even provide an explanation of its own programmed behavior, the game of GO.