
Can Artificial Intelligence Be Creative?

Image: Lady Ada Lovelace (1815–1852), via Wikimedia Commons.

Editor’s note: We are delighted to present an excerpt from Chapter 2 of the new book Non-Computable You: What You Do that Artificial Intelligence Never Will, by computer engineer Robert J. Marks, director of Discovery Institute’s Bradley Center for Natural and Artificial Intelligence.

Some have claimed AI is creative. But “creativity” is a fuzzy term. To talk fruitfully about creativity, the term must be defined so that everyone is talking about the same thing and no one is bending the meaning to fit their purpose. Let’s explore what creativity is, and it will become clear that, properly defined, AI is no more creative than a pencil.

Creativity = Originating Something New

Lady Ada Lovelace (1815–1852), daughter of the poet George Gordon, Lord Byron, was the first computer programmer, writing algorithms for a machine that was planned but never built. She also was quite possibly the first to note that computers will not be creative — that is, they cannot create something new. She wrote in 1842 that the computer “has no pretensions whatever to originate anything. It can do [only] whatever we know how to order it to perform.”

Alan Turing disagreed. Turing is often called the father of computer science, having established the idea for modern computers in the 1930s. Turing argued that we can’t even be sure that humans create, because humans do “nothing new under the sun” — but they do surprise us. Likewise, he said, “Machines take me by surprise with great frequency.” So perhaps, he argued, it is the element of surprise that’s relevant, not the ability to originate something new.

Machines can surprise us if they're programmed by humans to surprise us, or if the programmer has made a mistake that produces an unexpected outcome. Often, though, surprise results from the successful implementation of a computer search that explores myriad candidate solutions to a problem. The solution the computer chooses can be unexpected. The computer code that searches among the candidate solutions, though, is not creative. The creativity credit belongs to the programmer who chose the set of solutions to be explored. Examples include computer searches for the best move in the game of Go and for simulated swarms; both results are surprising and unexpected, but the computer code contributes no creativity.
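
To make the point concrete, here is a minimal sketch in Python (ours, for illustration only; it is not an example from the book). The candidate space and the scoring rule are hypothetical, and notice that both are supplied by the programmer, so any "surprise" in the winning candidate was already latent in those human choices.

```python
import random

# Illustrative sketch: the programmer, not the code, supplies both the
# space of candidate solutions and the scoring rule. Any "surprising"
# result the search returns was implicit in those human choices.

def score(candidate):
    # Hypothetical objective chosen by the programmer.
    return -abs(sum(candidate) - 42)

def search(num_candidates=100_000, length=5):
    best, best_score = None, float("-inf")
    for _ in range(num_candidates):
        # The candidate space (length-5 vectors of integers 0-20)
        # is fixed in advance by the programmer.
        candidate = [random.randint(0, 20) for _ in range(length)]
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

if __name__ == "__main__":
    solution, value = search()
    # The particular solution may surprise us, but it comes entirely
    # from the search space and objective we defined.
    print(solution, value)
```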

The Flawed Turing Test

Alan Turing, an atheist, wanted to show that we are machines and that computers could be creative. Turing equated intelligence with problem solving, did not consider questions of consciousness and emotion, and referred to people as “human computers.” Turing’s version of the “imitation game” was proposed to show that computers could duplicate the conversational human. This is why the biographical movie starring Benedict Cumberbatch as Turing was titled The Imitation Game.

How can computers imitate humans, according to Turing? The imitation game (which came to be called the Turing test) simply asks whether, in a conversational exchange using text (that is, an exchange in which the participants are hidden from each other), a sufficiently sophisticated computer can be distinguished from a human. If a questioner gets lucid, human-sounding answers from the computer and believes the computer is in fact a human typing in answers from another room, then the test has been passed. (Incidentally, the converse of the Turing test is easy. Simply ask your hidden conversation partner to calculate the cube root of 12 out to 10 significant figures. If the answer comes back almost immediately, you are talking to a computer.)
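
The converse check is trivial to script. The snippet below is an illustration of ours, not anything Turing proposed: a computer produces the cube root of 12 to 10 significant figures instantly, while a human conversational partner almost certainly cannot.

```python
# Illustration only: a computer answers this instantly; a human at a
# keyboard almost certainly cannot.
cube_root = 12 ** (1 / 3)
print(f"{cube_root:.10g}")  # 10 significant figures: 2.289428485
```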

There are those who claim that today’s chatbots have passed the Turing test. Most famously, a computer program known as “Eugene Goostman” purportedly passed. One news subtitle announcing the claim reads, “‘Eugene Goostman’ fools 33% of interrogators into thinking it is human, in what is seen as a milestone in artificial intelligence.”

But those taking a test often cheat to tweak the outcome. That’s what Goostman’s programmers did. Goostman supposedly was a Ukrainian, which meant English was not his first language, so his poor grammar could be excused. He was supposedly a teenager, so any lack of depth of understanding could be chalked up to his naïve intellect. Likewise, his tendency to be silly and deflect questions could be considered typical teenage behavior. In short, Goostman’s software character was gamed to pass the Turing test.

Here are a couple of examples of exchanges between Goostman and his questioner. Goostman is first asked a question posed in the first line of the lyrics of the song “Close to You,” popularized by the Carpenters:

Questioner: Why do birds suddenly appear?

Goostman: Just because 2 plus 2 is 5! By the way, what’s your occupation? I mean — could you tell me about your work?

We see here an often-used cheat in chatbots. If an answer is not known, the chatbot responds with an unrelated question to derail the conversation’s direction. 
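
For readers curious what that cheat looks like under the hood, here is a minimal hypothetical sketch in Python. It is not Goostman’s actual code; it simply illustrates the trick: when no scripted answer matches, the bot changes the subject with a canned question rather than admit it does not understand.

```python
import random

# Hypothetical sketch of the deflection cheat -- not Goostman's actual code.
# When the bot has no scripted answer, it derails the conversation with a
# canned question instead of admitting it does not understand.

CANNED_ANSWERS = {
    "how are you": "I'm doing fine, thanks! Hope you are too.",
}

DEFLECTIONS = [
    "By the way, what's your occupation?",
    "Could you tell me about the place where you live?",
]

def reply(user_input: str) -> str:
    key = user_input.lower().strip(" ?!.")
    if key in CANNED_ANSWERS:
        return CANNED_ANSWERS[key]
    # No understanding here -- just change the topic.
    return random.choice(DEFLECTIONS)

print(reply("How are you?"))
print(reply("Why do birds suddenly appear?"))  # triggers a deflection
```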

Here’s another exchange with Eugene Goostman:

Questioner: It is ok, I get sick of sick people. How is your stomach feeling today? Is it upset maybe?

Goostman: I think you can’t smile at all. I bet you work in a funeral agency.

As you can see, Goostman’s answers here are elusively non-responsive.  

Gamed by Programmers

Selmer Bringsjord correctly notes that the Turing test is gamed by programmers. “Gamed” here is a polite word for evasive cheating. As Bringsjord writes, “Though progress toward Turing’s dream is being made, it’s coming only on the strength of clever but shallow trickery.”

When gaming the system, chatbots can deflect detection by answering questions with other questions, giving evasive answers, or admitting ignorance. They display a general intellectual shallowness when it comes to creativity and depth of understanding.

Goostman answered questions with questions like, “By the way, what’s your occupation?” He also tried to change topics with conversational whiplash responses like “I bet you work in a funeral agency.” These are examples of the “clever but shallow trickery” Bringsjord criticized.

What, then, do Turing tests prove? Only that clever programmers can trick gullible or uninitiated people into believing they’re interacting with a human. Mistaking something for human does not make it human. Programming to shallowly mimic thought is not the same thing as thinking. Rambling randomness (such as the change-of-topic questions Goostman spit out) does not display creativity. 

“I propose to consider the question, ‘Can machines think?’” Turing said. Ironically, Turing not only failed in his attempt to show that machines can be conversationally creative, but also developed computer science that shows humans are non-computable.