
When it comes to artificial intelligence, can a machine make the ethical choices that humans themselves struggle with?
Self-driving cars are already among us, and so far they have been involved in proportionally fewer accidents than human-driven vehicles. Even so, they aren’t immune to accidents. That raises an ethical dilemma: what should a self-driving car do if it detects an unavoidable collision? Below we look at several scenarios that illustrate the murky ethics of a self-driving car.
- The Laws of Robotics – Let’s set this up: Isaac Asimov, professor and famous sci-fi author, introduced four laws that robots must follow in order to keep humanity safe: a robot may not injure a human being; a robot must obey the orders of a human; a robot must protect its own existence; and a robot may not allow humanity to come to harm. In addition, each law yields to the law(s) that came before it. (Ex: a robot cannot harm a human being, even if another human orders it to.) With laws like these in place, we can be reasonably confident in the artificial intelligence guiding self-driving cars.
- Ethics Problem #1: The Tunnel Problem – Let’s say you’re speeding along in your new self-driving car on your way home from work. There’s a tunnel up ahead, and out of nowhere, a child crosses the road, trips, and breaks an ankle, leaving them stranded in your path. Your car calculates that there are only two options: 1) stay the course and kill the child, or 2) swerve, saving the child but crashing and killing you. When asked, 64 percent of people said they would rather the car hit the child, while 36 percent would sacrifice their own lives for another’s. It’s up to today’s brightest engineers and ethicists to determine how the vehicle should be programmed.
- Ethics Problem #2: The Trolley Problem – In this hypothetical, five people are tied up on the train tracks while a train barrels toward them at full speed, its brakes broken. The train can switch to another track, but that track has two people tied up on it. However unlikely the setup, the issue of killing the few or the many still stands. What should it do? When surveyed, 99 percent of people said they would rather the train kill the two people, since fewer deaths are perceived as the “better” outcome. The developers behind self-driving cars say they can program their “smart” cars to make the best decisions in these scenarios. A minimal sketch of this kind of count-the-casualties logic appears after this list.
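To make the logic behind these dilemmas concrete, here is a minimal, hypothetical sketch of a purely utilitarian decision rule: pick whichever option is predicted to harm the fewest people. The `Option` class and the scenario values are illustrative assumptions, not how any real autonomous vehicle is programmed. Notice that this rule resolves the trolley case but cannot resolve the tunnel case, where the casualty count is one either way.

```python
# Hypothetical sketch: a purely utilitarian "minimize predicted deaths" rule.
# Names, structure, and numbers are illustrative; real driving systems are far more complex.

from dataclasses import dataclass

@dataclass
class Option:
    name: str             # e.g. "stay on course" or "swerve"
    expected_deaths: int  # casualties the car predicts for this option

def choose_option(options: list[Option]) -> Option:
    """Pick the option with the fewest predicted deaths (ties go to the first listed)."""
    return min(options, key=lambda o: o.expected_deaths)

# Trolley-style scenario: five people ahead vs. two on the side track.
trolley = [Option("stay on track", 5), Option("switch tracks", 2)]
print(choose_option(trolley).name)   # -> "switch tracks"

# Tunnel-style scenario: one predicted death either way, so counting casualties
# alone cannot decide between the child and the passenger.
tunnel = [Option("hit the child", 1), Option("swerve into the wall", 1)]
print(choose_option(tunnel).name)    # -> "hit the child" (only because it was listed first)
```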
While self-driving cars are predicted to become common by 2025 and to sharply reduce accidents, they won’t be immune to them, and some of those accidents will raise questions about the ethics of these so-called “smart” cars. Until that day comes, if you have any questions about insuring your human-driven vehicle, contact Gee Schussler Insurance Agency in Orland Park, Illinois. Our goal is to provide you with the auto insurance you need to stay safe on the road, with or without autonomous cars.