A Self-Driving Car’s Choice: Who Lives and Who Dies?

Almost 50 percent of all accidents take place on highways. (Image: JoshuaWoroniecki via Pixabay)

When faced with a choice between crashing into a young person or into a group of adults, what will an autonomous vehicle choose? And who is ultimately responsible for that choice, the driver or the car manufacturer? Who provides the ethical framework on which the self-driving car bases its decisions, and on what basis do they prepare it? With self-driving cars becoming more prevalent, these are a few of the questions we will try to answer in this article.

More than 30,000 people are killed in traffic accidents each year in the U.S. alone, and more than 2 million are injured. Almost 95 percent of accidents are attributed to driver error or misjudgment.

Alternatives

Suppose you were driving a car and were forced to crash into one object in order to avoid a worse alternative. That would be treated as a sudden reaction, which it was, and not a deliberate act born of malice. But if a car manufacturer programmed a self-driving car to make that same decision under similar circumstances, it would amount to premeditated murder. For example: “In this situation, crash into this vehicle.”

Self-driving cars are designed to reduce accidents by removing unpredictable human error from the equation. This is the main premise behind developing the technology, with the luxury aspect being secondary. Self-driving cars do not get drunk, tired, or angry.

Principles

When programmers encode a basic rule for self-driving vehicles such as “minimize harm,” that rule by itself is open to many varied interpretations depending on the situation, the people involved, and the value placed on a life. Take, for example, a choice between a truck and a bike. If you program the car to crash into the truck because that will most likely result in fewer injuries, what if there is a baby inside? Or several babies?

Who decides the moral principles of a self-driving car? (Image: dominickvietor via Pixabay)

If you program the car to crash into the biker instead, the consequences for the biker will, predictably, be far more severe. So do you let the car’s program make such a decision from the single principle “minimize harm,” or do you try to feed a million possible scenarios into it?
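To see why “minimize harm” is so hard to pin down, here is a minimal, purely illustrative sketch of what such a rule might look like in code. Every name and every harm score below is a hypothetical assumption made for the sake of the example, not the logic of any real manufacturer.

```python
from dataclasses import dataclass

@dataclass
class CrashOption:
    """One possible maneuver and a guessed 'harm score' for it."""
    description: str
    estimated_harm: float  # hypothetical units; real systems have no such clean number

def minimize_harm(options: list[CrashOption]) -> CrashOption:
    """Naive rule: pick the option with the lowest estimated harm."""
    return min(options, key=lambda o: o.estimated_harm)

# Hypothetical scenario from the article: crash into the truck or the biker.
options = [
    CrashOption("crash into the truck", estimated_harm=2.0),
    CrashOption("crash into the biker", estimated_harm=8.0),
]
print(minimize_harm(options).description)  # prints "crash into the truck"
# But who assigned those scores? What if the truck is carrying a baby?
# The numbers quietly encode a moral judgment the code cannot make on its own.
```

The rule itself is trivial to write; the hard part, and the ethical part, is deciding who assigns those numbers and on what grounds.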

Saving your own life first is the instinctive reaction of any human being. In this case, what decision does the car make: does it save your life or the lives of others, and whose life does it consider more important? Would you buy a car that prefers saving other people’s lives over your own?

As it stands, there is no regulatory body to decide the principles on which to base the programming of a self-driving car. If a regulatory body were set up, who would decide those principles: government policymakers, the manufacturer, or a consensus of the population? The fact is, no one knows.

Self-driving car ethics

The manufacturer wants to put as many vehicles on the road as possible, and the government wants revenue in the form of taxes. That is where their respective interests lie. So what happens to Joe Public?

Come to think of it — what is right and what is wrong? This has been the burning question since time immemorial. In Star Trek, Commander Spock says: “Logic dictates that the needs of the many outweigh the needs of the few.”

MIT built a “Moral Machine,” which presents moral dilemmas and tests us by asking which choice we consider the lesser of two evils. It works like a game; you can try it out online.

Ethics and morality have never been identical across cultures. So which culture’s standards do we choose? In the end, it comes down to your beliefs and your perception of what is right.

As it stands, there is no regulatory body to decide the principles on which to base the programming of a self-driving car. (Image: Proulain via Pixabay)

Lab rats

The argument in favor of launching self-driving vehicles now stems from the fact that mankind, left to its own devices, does not seem to do well behind the wheel. So does this mean we hand all control of our fate over to the machines? Where will that leave us?

With fewer responsibilities and far more dependencies, does this make for a better human society? The counter-argument is that we should not give up control over things we can do ourselves, because doing so gradually erodes our ability to accomplish them, until, in the end, we are left with a population that needs help getting out of bed.

The history of mankind is replete with mistake after mistake. It seems we never learn and are eager to repeat our follies. Do we stop deciding once and for all and hand that job over to AI machines? Will that make things better, or will it be our biggest mistake yet?


Armin Auctor is an author who has been writing for more than a decade, with his main focus on lifestyle, personal development, and ethical subjects such as the persecution of minorities in China and human rights.
