Recently a Google engineer named Blake Lemoine made news by claiming that a chatbot he had been testing was sentient and spiritual, and that it should have all the rights people have. Lemoine claimed the chatbot (named LaMDA, which stands for Language Model for Dialogue Applications) meditates, believes itself to have a soul, has emotions such as fear, and enjoys reading. According to Lemoine, Google should treat it as an employee rather than as property and should ask its consent before using it in future research.
“I know a person when I talk to it,” Lemoine said, and he provided a transcript of conversations he’d had with LaMDA on a wide range of topics.
Not for Being Delusional
Many experts, from psychologists to tech gurus, disagreed with Lemoine's assessment. A Google spokesperson said that ethicists and tech experts investigated LaMDA and concluded that "the evidence does not support [Lemoine's] claims." Lemoine was placed on leave, not for being delusional and thinking the chatbot had come to life, but for violating confidentiality agreements.
Yes, Lemoine’s chatbot can chat — that’s what chatbots are programmed to do. Nitasha Tiku of the Washington Post explains, “Today’s large neural networks produce captivating results that feel close to human speech and creativity because of advancements in architecture, technique, and volume of data. But the models rely on pattern recognition — not wit, candor or intent.”
In other words, artificial intelligence such as chatbots can spit out human-like conversation, but only because it has been built by humans and trained to reproduce the patterns of human writing. Users may engage with chatbots and feel like there's a mind, a personality, a living being behind the words, but that's only an illusion created by other people.
Nothing but Algorithms
As Robert J. Marks explains in his new book Non-Computable You, all AI is made up of math — algorithms. So, while AI might mimic human conversation, it doesn’t really converse. Getting a satisfactory answer depends on how the questions are asked; if a question isn’t phrased in a way the AI can process, the answer it gives will be evasive or otherwise rely on cheap trickery. In short, if you’re lonely, AI simply will not be a satisfactory substitute for human companionship.
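Marks's point that AI is "made up of math" can be made concrete with a toy sketch. The following is a hypothetical, minimal pattern-matching chatbot in the spirit of classic programs such as ELIZA; modern systems are vastly larger, but the example shows how convincing-sounding replies can come from nothing more than rules rearranging the user's own words, with no understanding behind them. All names and rules here are illustrative, not taken from any real system.

```python
import re

# A few ELIZA-style rules: (pattern, response template).
# Purely illustrative; the point is that each "reply" is just a
# canned reshuffle of whatever the pattern captured.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\byou\b", re.IGNORECASE),
     "We were discussing you, not me."),
]

def reply(text: str) -> str:
    """Return a response by pattern matching; no comprehension involved."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    # Fallback when nothing matches: a question is not phrased in a way
    # the rules can process, so the program falls back on evasion.
    return "Tell me more."

print(reply("I feel lonely sometimes"))  # -> Why do you feel lonely sometimes?
print(reply("What do you think?"))       # -> We were discussing you, not me.
```

Note that the fallback line is exactly the "evasive" behavior described above: when no rule fires, the program deflects rather than answers, because there is nothing behind the rules to answer with.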
This raises the interesting question of what it means to be human. Philosophers have approached this topic from various angles. Among other things, humans are sentient, which means we experience emotions; AI does not. Humans have consciousness, which is surprisingly difficult to define, but which AI clearly doesn’t have. Humans have understanding and not just factual knowledge; humans have common sense and the ability to deal with ambiguities; humans are creative. AI meets none of these criteria.
Not Human, Not Now or Ever
Ethicist Wesley J. Smith gives five reasons why artificial intelligence isn’t human:
- It isn’t alive; “inanimate objects are different in kind from living organisms.”
- It doesn’t think; “human thinking is fundamentally different from computer processing.”
- It doesn’t feel; feelings are “emotional states we experience as apprehended through bodily sensations” such as fear caused by an adrenaline rush.
- It’s amoral; humans have free will and thus are moral agents, whereas AI can only follow rules it’s programmed to follow.
- It’s soulless; AI is purely mechanistic, without a mysterious, immaterial, spiritual dimension.
It boils down to this: AI can process lots of data. AI can be programmed to mimic human interactions. But AI is not human — nor will it ever be.
Cross-posted from Salvo Magazine with permission of the author. Robert J. Marks spoke at the recent Dallas Conference on Science and Faith on the question, "Will Thinking Machines Replace Humans?"