If you ever get hit by a self-driving car, it may be my fault. It will not be my fault alone, but I may have contributed to your demise. I’m sorry about that.
My contribution to your unfortunate death arrived through playing a round or two of MIT’s Moral Machine. MIT is using the machine to “crowdsource” opinions on how self-driving cars should respond to possible moral dilemmas.
The Moral Machine is an interactive version of the Trolley Problem, a thought experiment first introduced in 1967: an artificial moral conundrum meant to tease out values. The Trolley Problem begins with a runaway trolley racing toward you. Five people lie tied to the tracks. You have access to a lever that will route the trolley onto another track, but, just before you flip the lever, you notice someone is tied to the side track as well. This creates the conundrum: Should you do nothing and let five people die, or act and intentionally kill one person? Either way, someone dies through your action or inaction.

The Moral Machine expands the variables of the Trolley Problem but keeps the basic binary choice and outcome. It presents to you, as a disengaged observer, 13 scenarios a self-driving car may face. In each, someone, or several someones, will get hurt, possibly killed. The victims may be in the car or crossing the road in front of it. They may be old or young, honest or criminal, crossing with the right-of-way or jaywalking, human or animal. The only guarantee is that someone will get hurt, probably killed. And each scenario presents just two choices.
The problems the Moral Machine presents are caricatures of real moral problems. The program reduces everything to a question of who gets hurt. There are no shades of gray or degrees of hurt. It is, as is so often the case with computers, simply black or white, on or off. None of the details that make true moral decisions hard and interesting remain: Can the car safely swerve to the side of the road? Could the car drag along the side barrier in lieu of brakes? Can the car make a hard turn, creating a spin, to lose momentum? Could the car sound its horn so everyone flees? Is there an emergency brake? Can the engine be used to brake the car? Can the car be forcibly put into reverse? And so on. Real moral decisions, even difficult ones, contain a bundle of conditions, most unique to that moment, that complicate the choice. Whatever it is the Moral Machine presents, it is not representative of the moral choices we make each day. It is not even close to those we make while driving. As Russell Brandom comments:
The test is premised on indifference to death. You’re driving the car and slowing down is clearly not an option, so from the outset we know that someone’s going to get it. The question is just how technology can allocate that indifference as efficiently as possible. That’s a bad deal, and it has nothing to do with the way moral choices actually work. I am not generally concerned about the moral agency of self-driving cars — just avoiding collisions gets you pretty far — but this test creeped me out. If this is our best approximation of moral logic, maybe we’re not ready to automate these decisions at all.
To solve a problem with a computer, we must encode it in terms the computer can manage. That means reducing shades of gray to a selection of discrete choices. MIT’s researchers replaced those shades of gray with a binary choice between gruesome outcomes. If we’re encoding music or video, summing a column of numbers, tracking our heart rates, or trying to detect cats in photographs, that loss of information, the shades of gray not captured in the choices, rarely matters. But in moral choices, it is the immediate circumstances, the thousand little details we weigh holistically with our minds, that do matter. Only a strict utilitarian, someone who believes we can weigh and measure each life, placing them in neat order of who should die first and last, would think otherwise.
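The reduction described above can be sketched in a few lines of Python. The shade values and the 0.5 threshold here are purely illustrative assumptions, not anything drawn from the Moral Machine itself:

```python
def encode_binary(shade: float) -> int:
    """Collapse a continuous 'shade of gray' in [0.0, 1.0] to one bit."""
    return 1 if shade >= 0.5 else 0

# Four situations that differ by degree before encoding...
shades = [0.05, 0.49, 0.51, 0.95]

# ...become two indistinguishable pairs after encoding.
encoded = [encode_binary(s) for s in shades]
print(encoded)  # [0, 0, 1, 1]
```

After encoding, 0.49 and 0.51, two nearly identical situations, land on opposite sides of the choice, while 0.51 and 0.95, two very different situations, become indistinguishable. That is the information loss the paragraph describes.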
But even if we accept those limitations in exchange for the benefits increased automation may bring, the Moral Machine has another problem: Whose judgement matters? Crowdsourcing provides a tally of popular opinion, not a moral judgement.
Artificially intelligent machines, including self-driving cars, lack intelligence. They also lack any sense of morality. It’s on or off. Do or don’t. Hit or miss. Live or die. And whatever they “choose,” the machines will not — with all due respect to Hollywood’s ill-informed depictions — feel bad or good or anything at all. Machines do not “choose,” they do not feel, they do not care, and they cannot make moral choices. We need to stop pretending that humans are computers or that computers can replace humans. Humans make moral judgements; computers do what they’re told, even if it means someone dies.