No! Bad train! Bad! | Photo: Louise Docker, CC BY 2.0
Every time the conversation steers toward self-driving cars, if it goes on long enough, someone brings up the trolley problem. Here’s one variant, from the article “Why Self-Driving Cars Must Be Programmed to Kill”:
> Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?
And as far as I can tell, the answer is simple: screw utilitarianism, because no one will ever buy a utilitarian car. The rules self-driving cars follow must be in line with the existing social contract that defines our expectations of cars, pedestrians, and roads.
See, whenever this question is asked, it’s framed as killing the occupant to save a crowd, or killing a crowd to save the occupant, as though it were a simple matter of ten lives versus one. But that framing completely ignores the system already in place: expectations.
The question above, in all the discussions I’ve seen of it so far, consistently fails to ask the most important question: are the people in the road supposed to be there? We think absolutely nothing of a train running over someone foolish or unfortunate enough to step in front of it, and a track is — when it comes down to it — little more than a crude form of automation.
Imagine for a moment a train that derails to avoid someone on the tracks — it’s absurd, but why?
Imagine getting on a train that you knew had the potential to derail itself and kill you to avoid killing someone else. Again, it’s absurd — but why?
The reason is that “staying on the tracks” is a part of the contract we form with the train. Trains go on tracks — that’s what they do. We would like a train to avoid hitting things if at all possible, certainly, but we take as an unchangeable variable that the course the train takes is set. That’s the social contract.
Now replace tracks with roads.
Whether the car swerves or not is all about the social contract we form with the road, and therefore the answer to the above question is dependent on the situation. Here’s the TL;DR:
Cars belong on roads, and people do not.

- If a person in the car is following the rules of the social contract that governs our expectations of cars, roads, and pedestrians, they have an expectation of safety that pedestrians in a road (and not in a crosswalk) do not.
- The car must therefore be programmed to stop as quickly as possible to avoid the collision, but not to penalize the person obeying the social contract of roads.
- Cars must also be programmed not to leave the road to avoid an accident unless it can be guaranteed that no pedestrians are on the sidewalk. Cars do not belong on sidewalks; there, pedestrians have the ultimate right of way.
If we went with the utilitarian view, maybe more lives would be saved, but people would die who did nothing counter to the social expectations that underlie the very concept of cars. Imagine a car killing a pedestrian where pedestrians have the right of way, in order to avoid three idiots who wandered into the road where the car has the right of way. It robs the pedestrian of agency in a way that undermines trust in the technology. It wouldn’t stop at people refusing to buy self-driving cars; there would be calls from society writ large to ban them. Why? Because the cars couldn’t be trusted to behave the way cars should, and they would rank the category of “people who did nothing wrong” somehow below the category of “people who should have known better.”
And that’s a recipe for disaster.
Thanks for reading! Except for the very *very* occasional tip (we take Venmo now!), I only get paid in my own (and your) enthusiasm, so please like This Week In Tomorrow on Facebook, follow me on Twitter @TWITomorrow, and tell your friends about the site!
If you like our posts and want to support our site, please share it with others, on Facebook, Twitter, Reddit — anywhere you think people might want to read what we’ve written. Thanks so much for reading, and have a great week.
Richard Ford Burley is a human, writer, and doctoral candidate at Boston College, as well as an editor at Ledger, the first academic journal devoted to Bitcoin and other cryptocurrencies. In his spare time he writes about science, skepticism, feminism, and futurism here at This Week In Tomorrow.