The Ethical Issues with Driverless Cars
Driverless cars are designed to help prevent the crashes and accidents caused by human error. But what happens when a crash is unavoidable and a driverless car must make a decision in which every option puts at least one life at risk?
This is a question that MIT Media Lab researchers have been investigating, and that the BBC has also reported on. (https://www.bbc.co.uk/news/technology-45991093)
This situation raises many issues, and it is a question that has been debated for some time. So many factors must be considered, yet there can never be an ideal solution. The challenge is finding the best of the worst options. But is it actually possible to decide in advance who should die if this situation were to arise?
Currently, when a human driver is put in this situation, they have only seconds to react, with no time for real thought, let alone for weighing moral reasoning. In that instant a driver might be involuntarily selfless, saving a stranger over themselves without any other factors coming into play. Equally, their instinct might be to save themselves. This split-second decision can be neither predicted nor controlled.
While it is an impossible decision for a driver to have to make, is it even harder when you are deciding in advance what should happen in every scenario, with time to think it through? Of the many factors in the debate, the most prominent seems to be the number of people involved. If a car must choose between hitting a wall, killing the four people in the car, or swerving and hitting a single pedestrian, who should it save? Should saving three extra lives by hitting the pedestrian be counted as an advantage? It may seem that the more lives saved, the better. On the other hand, the pedestrian's life would never have been at risk in the first place had the car not entered the situation.