Red Herrings, Flying Figs, and Self-Driving Cars: Why I Don't Care If My Autonomous Vehicle Saves a School Bus

By | 2 January, 2016

It’s all the rage lately, the “morality” issue with self-driving cars, or autonomous vehicles: what should my self-driving car do if it sees that an accident is inevitable, and somebody has to die?

The Scenario: The Car Crash That Can’t Be Avoided

Suppose I am the sole occupant in my smart car, cruising along happily on the highway. It’s driving, I’m reading.

Perhaps I’m a nice enough guy, but sadly dying of an incurable cancer, with two months to live. I’m on my way to hospice.

In my lane directly ahead of me is a school bus full of angelic, talented young people, just starting out on the journey of life. They have so much to offer, and so many good years to offer it.

Just behind me is a car speeding out of control. It’s going to rear-end me in a few seconds. The laws of physics dictate that a horrendous collision is inevitable.

My beautiful autonomous vehicle does a quick calculation, and it determines that there is just enough time for my car to switch lanes. I’m safe!

Of course, this means that the school bus will get demolished instead.

One final detail: the car software knows that if I just stay in my lane and get hit, the school bus will be spared.

The Red Herring: Me or Them?

Maybe I should give it up for the kids? After all, I don’t have much longer to live, and they have their entire young lives ahead of them. Should I do the “right” thing, and stay in my lane?

Most people would say yes, but here’s the rub – it’s not my decision. There’s no time. It’s up to the car’s software.

Why I Don’t Give a Flying Fig

A lot of people are spinning their wheels on this one. Why is this an irrelevant or even dangerous question?

Well, here’s an interesting fact to chew on: an average of 90 people die every day in the United States alone in car crashes. According to the World Health Organization, there are on average more than 3400 deaths every day on the roads worldwide.

Autonomous/self-driving vehicles would prevent 93% of these deaths, according to Lloyd’s. That translates to saving 83 lives per day in the US, and roughly 3160 per day worldwide.

Which means that almost every minute we delay the implementation of self-driving technology by debating this silly red herring, one more person dies. And the injury rate is one serious, life-changing injury every 2½ to 5 seconds.
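The arithmetic behind those figures is easy to check. Here is a quick back-of-envelope sketch, using only the numbers quoted above (the 90/day and 3400/day fatality rates, and Lloyd's 93% estimate):

```python
# Back-of-envelope check of the fatality figures quoted in the post.
us_deaths_per_day = 90          # US average, as quoted
world_deaths_per_day = 3400     # WHO worldwide figure, as quoted
prevented_fraction = 0.93       # Lloyd's estimate, as quoted

us_saved = us_deaths_per_day * prevented_fraction        # 83.7, quoted as 83/day
world_saved = world_deaths_per_day * prevented_fraction  # ~3162, quoted as ~3160/day

# How often, on average, someone dies on the world's roads:
seconds_per_death = 86400 / world_deaths_per_day         # ~25 seconds

print(us_saved, world_saved, seconds_per_death)
```

At one death roughly every 25 seconds worldwide, the claim that "one more person dies" for almost every minute of delay is, if anything, an understatement.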

So that’s why I don’t give a flying fig, and you shouldn’t either. While these theoreticians debate how many angels can dance on the head of a pin, real people are suffering and dying every second of every day.

And that’s why I take this hard line: anyone in government, industry, academia, or anywhere else who needlessly delays the implementation of self-driving/autonomous technology by even one minute is, in effect, an accessory to murder. Strong stuff, but tell me why I’m wrong.


2 thoughts on “Red Herrings, Flying Figs, and Self-Driving Cars: Why I Don't Care If My Autonomous Vehicle Saves a School Bus”

  1. ed helldane

    Nice, “accessory to murder”. But automatic drivers, AI Drivers, shouldn’t be programmed with that kind of logic, not by default. They should just be programmed to avoid accidents, or avoid killing people themselves.

    That means, in the above scenario, the AI Driver would have moved your car out of the way without hesitation… without you even noticing, because you were otherwise preoccupied. And yes, those other people would maybe have died.

    But in a similar situation, if your AI Driver was speeding down the road and a car pulled out in front of it, it should prefer to sideswipe the car a lane over, rather than hit the car pulling out head-on. Maybe the rider in the car your AI Driver sideswipes dies, and you and the driver of the car ‘that was in the wrong’ live. This is still the best choice because it minimizes the chance that there will be a ‘serious’ injury to any of the cars’ occupants.

    And that’s really the best we can do: try and avoid danger, try and avoid harm, try and avoid death… by the numbers.

    In other situations, your AI Driver should always try to avoid collisions with lesser vehicles, bicycles, or pedestrians… just as you should when you are driving. Because you are the one assuming the risk and responsibility involved with choosing to drive or ride in a heavy, fast-moving death machine. You can do the most harm, therefore you have the greatest responsibility.

    Put driving on a scale with things that are dramatically more severe, like being the person who fires nuclear weapons, and you suddenly start seeing the picture clearly. They get to use really dangerous things, and if they mess up, people are going to die. The same is true for every automobile driver; the risks and dangers are just less severe. But they still transfer over to AI Drivers.

    And anybody with a license should know they are obligated to avoid accidents and protect others from being harmed by their vehicle. Whether or not they want to use their vehicle to protect others, as a shield, is a personal choice. I’ve known people to do it. I would do it. I’d program my AI Driver to do it. But I wouldn’t buy, or ride in, a car that was programmed to do it against my will.

    And I think the people designing the things are smart enough to know this.

  2. tufur

    LMAO….. autonomous cars are nothing more than GPS-guided bombs. They are the perfect terrorist weapon. Even kids could get into the act on their way home from school. Just toss an aluminum dispersal bomb over the overpass and watch the self-driving cars react to a wall of aluminum flak. Worried about that self-driving car behind you? Just release a concentrated cloud of glitter from your tailpipe and poof, it is gone. 8D
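The “by the numbers” rule described in the first comment, choosing whichever available maneuver has the lowest estimated probability of serious injury to anyone, can be sketched in a few lines. The maneuvers and probabilities below are invented for illustration; they are not real risk data:

```python
# A rough sketch of the commenter's "by the numbers" rule: among the
# maneuvers physically available, choose the one with the lowest
# estimated probability of serious injury. All numbers are invented.
maneuvers = {
    "brake_and_hit_head_on": 0.60,
    "sideswipe_car_one_lane_over": 0.25,
    "swerve_onto_shoulder": 0.40,
}

# Pick the maneuver with the minimum estimated serious-injury probability.
choice = min(maneuvers, key=maneuvers.get)
print(choice)  # -> sideswipe_car_one_lane_over
```

Under these made-up numbers the rule picks the sideswipe, exactly as the commenter argues; the hard part in practice is estimating those probabilities, not choosing the minimum.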
