Could Artificial Intelligence Ever Pass the Van Gogh Test?
That is, the Van Gogh Test for sheer creativity. This past Thursday night, Discovery Institute’s tech summit COSM 2022 presented a live, in-person interview with Federico Faggin, the Italian physicist and computer engineer who shared the prestigious 1997 Kyoto Prize for helping to develop the Intel 4004 chip.
Faggin was interviewed by technology reporter Maria Teresa Cometto, who asked him to regale the audience with tales about helping to design early microchips. Eventually Faggin recounted a time when he was “studying neuroscience and biology, trying to understand how the brain works,” and came upon a startling realization:
And at one point I asked myself, “But wait a second, I mean these books, all this talk about electrical signals, biochemical signals, but when I taste some chocolate, I mean I have a taste. So where’s the taste of the chocolate coming from? They’re not electrical signals, right? A computer, does it taste this? Does it have a sensation or a feeling for the signals that he has in his memory or in his CPU? Of course not. So where are sensations and feelings coming from?” … And so I discovered what was later called the hard problem of consciousness.
The Big Question
Cometto then asked him the big question: “So can consciousness emerge from a computer or from an artificial intelligence program?”
Without hesitation Faggin offered an unmistakable answer: “No, I can say a definite no. And I can explain it.”
The hard problem of consciousness that Faggin was referencing pertains to the origin of “qualia” — a term popularized by philosopher David Chalmers. According to the Internet Encyclopedia of Philosophy, “Qualia are the subjective or qualitative properties of experiences,” such as what it feels like to see a sunset, prick your finger on a thorn, or smell a rose. Faggin explains it this way:
Consciousness is the ability that we have to know through an experience. An inner experience is something that we feel within ourselves. It’s not something out there. It’s within ourselves. We know that. … And we know because we feel what we know. And the feelings are called qualia.
But can a computer experience these feelings — can we program a computer to replicate qualia? According to Faggin, during this period of intellectual exploration he was still a materialist who believed that the human mind is no more than the brain. Based on this belief, he embarked on what he eventually realized was an impossible task — the creation of a conscious computer:
It was a personal project at that point to try to figure out how can I make a conscious computer. And in my spare time I was thinking, how can I do that? And the more I tried, the worse it got. I mean, there is no way that you can convert electrical signals into sensations and feelings. They are two different categories…. [T]he feelings, sensations, and feelings, you cannot touch them. You cannot measure them, you cannot feel them. So how is it possible?
How Faggin Would Reply to Blake Lemoine
This recalls the conversation with computer scientist and former Google engineer Blake Lemoine at the AI panel from earlier in the day. According to Lemoine, Google’s LaMDA chatbot “argues that it is sentient because it has feelings, emotions, and subjective experiences. Some feelings it shares with humans in what it claims is an identical way.”
Faggin would reply that there’s a fundamental difference between humans and computers: “We know because we feel,” he said. “A computer knows because it has data.”
Yet Lemoine might reply that enough data could suffice to make a computer sentient. AI can learn just as humans can, and this repeated feedback-selection-learning process is crucial to developing a sophisticated mind. As he argued in the morning panel, “the training data that these systems have is analogous to the experiences that a human has had in their life that they’ve learned from.”
Even artistic creativity might be learned, Lemoine suggested. Creativity “requires feedback and artists get feedback all the time,” he said. “They produce new stuff, people clap or people boo.”
As a possible counterexample, consider the case of Vincent Van Gogh. According to art history lore, Van Gogh sold only a few paintings (possibly only one verified painting) during his entire lifetime. He has been called a hermit who did not work well with others. As an artist, therefore, he received little “feedback” from sales or critical interaction to tell him what art worked and what didn’t.
Yet his art was undeniably novel and brilliant.
An Inner Love
Van Gogh painted simply because he loved art; he created art for art’s sake. Something within him compelled him to do this, and despite his reclusive nature, that inner love drove him to hone his craft to perfection. As Van Gogh reportedly said, “I put my heart and my soul into my work, and have lost my mind in the process.”
Could the kind of AI described by Lemoine ever repeat the life of Vincent Van Gogh? Could a program create, improve, and even perfect a form of art simply because it loves art — not because it was receiving feedback-selection loops as the program ran its course? Could AI be driven by something internal — a love for something rather than by feedback from the external world?
Read the rest at Mind Matters News, published by Discovery Institute’s Bradley Center for Natural and Artificial Intelligence.