Sütfeld and Pipa found that a simple “value-of-life” model, which in this study assigned a single value to each object, worked best. It found a moral middle ground among the decisions made by the study participants, providing an average value for each obstacle.
“In principle, other factors, such as different probabilities of injury or death, could also be included in the model, but that was not within the scope of this study,” Sütfeld explained.
“Neural networks are still mostly black boxes for us,” said Sütfeld. “We can see what we put into them and we can see what comes out, but we cannot really grasp what happens in between.”
A less complex algorithm can be almost as accurate as a neural network while offering far more transparency. In a “value-of-life” model, for example, scientists know the value assigned to each object and how the algorithm arrives at a decision. Such models can also factor in other variables, such as whether someone ran into the street or made an illegal turn.
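The transparency of such a model is easy to see in code. The sketch below is a minimal illustration of the general idea, not the study's actual model: the object names and numeric values are invented assumptions, and the real model's values were fitted to participants' decisions.

```python
# Illustrative "value-of-life" decision rule. All object names and
# values here are hypothetical, not the study's fitted parameters.
OBJECT_VALUES = {
    "adult": 1.0,
    "child": 1.0,
    "dog": 0.3,
    "trash_can": 0.0,
}

def choose_lane(lanes):
    """Pick the lane whose obstacles carry the lowest total assigned value.

    `lanes` maps a lane name to the list of objects in that lane.
    """
    def lane_cost(obstacles):
        return sum(OBJECT_VALUES[o] for o in obstacles)
    return min(lanes, key=lambda name: lane_cost(lanes[name]))

# Example dilemma: swerving left hits a dog, staying right hits an adult.
decision = choose_lane({"left": ["dog"], "right": ["adult"]})
print(decision)  # → left
```

Unlike a neural network, every step here is inspectable: the assigned values are listed explicitly, and the decision is simply the lane with the lower total.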
“This kind of transparency may be very important when it comes to public acceptance of these models,” Sütfeld pointed out.
Although the guidelines are moving in the right direction, Sütfeld said that there is some disconnect between what the committee thinks is morally correct and how a person would actually behave. For example, one guideline states that computer algorithms cannot factor in age as a way to classify whether a potential victim is expendable.
“Do we want them to behave as humans would, or adhere to categorical rules?” asked Sütfeld.
“What we can say for now,” Sütfeld noted, “is that VR is in our opinion a logical starting point and a viable solution that should be considered.”