Just last week I discussed how an issue cropping up in the news (doctors involved in executions) was similar to an issue in my novel Life First. Well, it crops up again this week.
This short but fascinating article gives a quick overview of the decisions programmers will have to make when coding self-driving cars. If an emergency arises, should the car save you or save others? For example, if your auto-drive car calculates that you will kill twenty pedestrians if you swerve one direction, while you will be slaughtered by an oncoming Mack truck if you swerve in the other, it might just decide it would rather have you bite the dust than those twenty people on the sidewalk.
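If you're curious what that cold calculation might look like, here's a toy sketch in Python. To be clear, this isn't anyone's actual car code; the function and field names (choose_maneuver, estimated_deaths) are invented for illustration, and it assumes the crudest possible rule: pick whatever kills the fewest people.

```python
# Toy sketch only: a made-up "minimize total casualties" rule, not real car software.

def choose_maneuver(options):
    """Return the maneuver with the fewest estimated deaths."""
    # The needs of the many outweigh the needs of the few.
    return min(options, key=lambda option: option["estimated_deaths"])

# The scenario from the article: mow down the sidewalk, or meet the Mack truck.
options = [
    {"name": "swerve toward sidewalk", "estimated_deaths": 20, "driver_survives": True},
    {"name": "swerve into truck", "estimated_deaths": 1, "driver_survives": False},
]

print(choose_maneuver(options)["name"])  # prints "swerve into truck" -- sayonara, driver
```

Notice the sketch never even looks at driver_survives. That's exactly the part programmers (and the rest of us) have to wrestle with.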
The car has all the Vulcan logic of Mr. Spock in Wrath of Khan.* But, do people really want cars making those decisions, or do we want our own human preservation instinct to kick in in these situations? It’s a question programmers have to ask (AND answer, which is the hard part. Asking is easy. I just did that, and wow, was it really, really easy. I can ask other questions, too, if you like. But you’d probably think I was digressing, so I won’t ask other questions right now.)
Since I’m not digressing, let’s talk about how this relates to Life First. In the novel, the society of the future is all about preserving life as a whole. They’ve faced the edge of extinction and lived to tell the tale. So, their belief system would be very much like that of Mr. Spock, who said, “The needs of the many outweigh the needs of the few.” (For the people who bear the shame of never having seen Star Trek II: The Wrath of Khan, Mr. Spock gets radiation poisoning when he fixes the warp core, saving all the lives aboard the ship. As an aside, you don’t have to wear that shame forever. Go watch the movie.)
So, in the Federation of Surviving States, where Life First takes place, it is quite clear what they’d program the cars to do. If more lives would be saved by you dying, then it’s time for you to say, “Sayonara.” 🙁
Gee, that feels like a downer (not the saving lives part; the dying part). However, we’re not quite to the point where automated cars are in every garage. At the moment, only Google is doing the autopilot car thing. To end this post, here are a couple of questions (since we’ve already established it’s easy for me to ask questions): Would you ride in a car programmed to kill you if that’s what saved more lives? Do you think programmers should make these choices, or should they punt and offer a manual override, so you have to make a decision right as you’re soiling your pants because your car is hurtling toward doom at 60 mph?
Hmmm, I still think I want the option of being incredibly selfish and thinking of my own skin first. I also don’t think I would like a car being smarter than me. I already have grandchildren in pre-school who can outsmart me on i-pads, i-phones and lots of other i-stuff. That seems unfair; I should have the advantage of being older, wiser and more experienced. It would be adding insult to injury to make my car smarter. And then have it kill me.
I agree with you, and I think a lot of people want to make that choice themselves, especially since, as Dale pointed out, it was the “smart” car’s crazy driving that got you into this situation in the first place. If the car is hurtling you toward peril, perhaps it’s hit its limit of good ideas and a human brain needs to take over.
I agree, the car should hand over the controls to the driver. But, I think the car should be smart enough to not be in that dilemma in the first place.
Oh, that’s a great answer, Dale. That smart car was already showing it wasn’t working optimally if it got into that situation to begin with. So, definitely the driver needs to step in. 🙂
You both have such great answers. I think I’m torn on what to do because, from a public safety standpoint, I’d want to save many lives. I have not been in a lot of life-or-death driving situations, so I don’t know that I’d be better equipped to make a decision than a well-programmed machine.
However, what if it’s not a well-programmed machine? What if it’s on the fritz today or has mistaken a flock of wild turkeys for people (sorry, I’m a writer; my what-ifs can get a little nuts)? I hate to leave all that decision making about my life in the hands of a machine I hope is working correctly today.
Ultimately, I think I like the idea of a manual override. The question is whether such a thing would be practical (would you have enough time to grab hold and take control? The stuff that causes serious car wrecks usually happens in a few seconds). Or maybe we need an ejector seat with a parachute, like fighter pilots have. Though ejecting all the passengers might be cost prohibitive and lead to midair collisions. Oh my! I guess we’ll have to leave this one to the designers to solve (and not buy an autopilot car if we don’t like the solution).
The possibilities are endless, aren’t they? And those only lead to more questions. That’s what makes us writers, I think.
It is a sticky question. But I think I’m with Charlie. There is often another option that won’t be programmed in.
But I’m a person who can think beyond my gut when asked these questions. The answers are not automatic. And if the choice is my life or more than one other life, what right have I to make myself more important than they are?
You don’t ask easy questions, do you? Well, give me a car that lets me make the final decision – there’s always a plan B, you know.