I was lucky enough to stumble upon this gem today:
You see, I’ve always loved the Trolley Dilemma (do you pull the lever and make the train kill one person if it’s already careening towards five?). Not ‘loved’ loved, but I enjoy thinking about right and wrong and how it all fits together.
The modern driver-less car creates a real-life trolley dilemma. The cars must be programmed to make decisions precisely like the one-vs-five and much worse. It’s delightfully ghoulish and fascinating and insightful all at once. Apparently I’m heavily biased towards humans over dogs (that I knew), abiding by the law (I’m actually not a huge law-abiding kinda guy, I’m just lucky that NZ law and my ethics match up), and bigger people over slim (did not see that coming, considering I myself am a scarecrow).
But what really gets me about driver-less ethics is that it touches on so many biases. Automatic cars have been shown to be far safer than human drivers (statistically at least), yet any crash with a machine at the wheel wounds the public perception of them far more than any given human crash leads people to believe humans are unsafe drivers. I confess I am just as guilty of this myself: the thought of a driver-less car crashing into me feels almost morally reprehensible, even though I know it’s a combination of the naturalistic and responsibility fallacies (i.e. believing what is natural is right, cos cars, you know, so natural, and preferring wrong-doing that has someone you can target with blame).
The fallacy I really enjoy rolling through my head, though I don’t know if there is a specific name for it, is the kind of ‘hands-off’ ideal (not hands off the wheel), where we prefer not to make judgement calls about this sort of thing, like who driver-less cars should crash into, even though such a call is and must be made. It’s kinda like people who trust their government to conduct military and clandestine operations.
Try out the Moral Machine test yourself, and don’t hesitate to leave your thoughts; I’d love to hear more moral conundrums.