The Ethical Dilemma of Self-Driving Cars

Based on this – go watch, I’ll wait:

Back already?

There are several flaws in the premise:

  • A large object falls off the vehicle in front of you: whatever the size and mass of the object, it’s not going to come to an instantaneous stop. Use the brakes and scrub off some speed the moment the threat is detected, just as you would if you were driving yourself. I’d argue that regardless of the nature of the unsecured load, the brakes will stop your self-driving car long before the load has come to rest.
  • Arguing that “you cannot stop” is also entirely flawed. You don’t know that you can’t stop until you’ve tried. Have you tried? Try. Scrub off some speed. If you’re so close that the laws of physics prevent you from stopping in time, then you’re too close: use the brakes and increase your following distance. Even if you -do- hit the object in front of you, doing it at a lower relative speed is better than doing it at a higher one.
  • Swerving? Really? Is that what they teach in Drivers’ Ed. today? Swerving to avoid a hazard uses up valuable tire traction that is better spent slowing the vehicle. Use the brakes.
  • Suggesting that your self-driving car – or self-driving network, even – might have a way to discern the contents, condition, ages, health, or any other aspect of the occupants of nearby vehicles is also flawed. Yes, I can comprehend a technical sequence of events that might lead to such compute capabilities in the future (that, actually, would be the bigger ethical dilemma). But your self-driving car doesn’t need to know any of these things. The only thing it needs to know – other than whether it’s upright and on course – is whether it’s traveling too fast for the current road conditions and nearby objects. If it is, then apply a bit of brake.
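The “brake, don’t swerve” argument above can be made quantitative with a back-of-the-envelope sketch. This is a constant-deceleration model with assumed numbers (the 7 m/s² figure is a rough guess for hard braking on dry asphalt, not a measured value), not anyone’s actual autonomy stack:

```python
import math

def impact_speed(v0: float, gap: float, decel: float = 7.0) -> float:
    """Speed (m/s) at which the car reaches an obstacle `gap` metres ahead,
    braking at a constant `decel` (m/s^2) from an initial speed `v0` (m/s).

    Uses v^2 = v0^2 - 2*a*d; returns 0.0 if the car stops before the gap
    closes. All numbers here are illustrative assumptions.
    """
    v_sq = v0 ** 2 - 2 * decel * gap
    return math.sqrt(v_sq) if v_sq > 0 else 0.0

# At ~108 km/h (30 m/s) with a 40 m gap, braking still matters:
# you arrive at roughly 18 m/s instead of 30 m/s -- kinetic energy
# scales with v^2, so that's well under half the original energy.
print(round(impact_speed(30.0, 40.0), 1))

# With a 65 m following distance at the same speed, the car stops
# before the gap closes at all.
print(impact_speed(30.0, 65.0))
```

The point the numbers make: even when a full stop is physically impossible, scrubbing speed sharply reduces the energy of the collision, and a modest increase in following distance eliminates it entirely.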

So, to answer the ethical dilemma of self-driving cars: apply the brakes.