With the increasing use of autonomous vehicles, we have to consider the ethical and moral questions of how they should respond in different situations. Most readers of this post will likely have heard of the trolley problem. It asks, roughly: if you knew a runaway train was going to run over and kill X people, should you push a fat man off a bridge to stop the train and thereby save more lives? Is it justifiable because it's for the greater good of the population? Other versions of the question ask whether you should divert the train to another track where it would kill only one person. These questions were once simply thought experiments meant to enliven discussions of morality and ethics, but today they are real; situations like these will actually occur. Should the car veer off the road to avoid a head-on collision with a school bus but assuredly kill a pedestrian, or should it hit the bus?
If a car is programmed to respond in ways set by its programmers but has to make decisions on its own based on probability and likelihood, is the car making a moral decision? Is that decision the car's own? Is it the decision of the programmer? What if the decision was the result of 100 programmers each focusing on a different part of the problem? At what point would the car be making its own decision? Who gets to decide how the cars should be programmed? Will different religious or belief groups want different decisions made?
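To make that point concrete, here is a minimal, purely hypothetical sketch of the kind of logic a programmer might write: each possible maneuver gets an estimated probability and a severity weight, and the car simply picks whichever option minimizes expected harm. The option names, probabilities, and weights below are invented for illustration and are not how any real vehicle is programmed; the point is that the "decision" falls out of numbers someone chose upstream.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    p_harm: float    # estimated probability that harm occurs
    severity: float  # severity weight assigned by the programmers

def expected_harm(option: Option) -> float:
    # Expected harm is simply probability times assigned severity.
    return option.p_harm * option.severity

# Hypothetical numbers for the school-bus scenario above.
options = [
    Option("swerve_off_road", p_harm=0.9, severity=1.0),  # likely hits the pedestrian
    Option("stay_in_lane",    p_harm=0.3, severity=5.0),  # possible bus collision
]

# The car "chooses" whichever option minimizes expected harm; every moral
# judgment is hidden inside the severity weights someone set in advance.
choice = min(options, key=expected_harm)
print(choice.name)
```

Seen this way, the question of whose decision it is becomes sharper: the car only evaluates the formula, while the weights encode the values of whoever wrote or approved them.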