If I never heard the trolley problem referenced in a discussion of autonomous vehicles again, I would be excessively happy. But I’m from the Midwest, so I’m ok with being miserable. I also understand the irony in writing this piece.
I’m not going to explain the basics of the trolley problem. No one needs another horrid rehash of it. It has been done to death.
But that says something.
The trolley problem is one of the go-to thought experiments in philosophy. We are, weirdly, fascinated by no-win situations where others die and we must make the decision. And one of the reasons for this is that it probes our interior life, forcing us to confront our decision-making capabilities and our moral justifications.
The trolley problem is a thoroughly embodied experiment. Indeed, just how embodied the problem is leads to different outcomes. Are you pulling a handle to change the tracks or pushing an obese person off a bridge to stop the trolley? The extent to which physical force is involved in making the choice matters. Pushing someone onto the tracks doesn’t get the kind of acceptance that flipping a lever does.
And so the variations on this theme continue. Countless discussions and studies and retorts and rejoinders. Clearly it has been a boon; think of all of the philosophy tenures and job offers predicated on this experiment. But the entire point is to test the incompatibility of the potential answers. The experiment is structured to allow only two solutions, both of which lead to the loss of life. In turn, those solutions have led philosophers and psychologists to probe the concepts of intention and harm within the various schools of philosophical thought.
So what about autonomous vehicles and this experiment? For me, two broad problems exist: one involves context, the other moralizing.
So first, some words about context. When was the last time you had to choose, as Bill Ford suggests, between hitting a grandma or a baby? In fact, when was the last time that anything approximating the trolley problem came up?
The actual, real-life problems are far more mundane and, sadly, avoidable. Last year, 35,000 lives were lost on US roads, and 94 percent of those deaths are attributable to human error. Almost a third are due to alcohol alone. Collisions injured another 2.44 million people, adding massive healthcare costs and lost productivity.
Does discussing the trolley problem really advance the ball for autonomous vehicles? Does it open up a class of questions that have to be confronted?
Philosophy professor Patrick Lin thinks there is merit. “Not a lot of engineers appreciate or grasp the problem of programming a car ethically, as opposed to programming it to strictly obey the law.” Funny thing, Sweden has been working on exactly that problem and has drastically cut their roads deaths. Stricter law enforcement along with better road design and better education of drivers have all combined to give Sweden the safest roads in the world.
While I do think there is merit in thinking through the ethical implications of autonomous systems, focusing on the trolley problem is the easy way to do it. Instead of this problem, more discussion should surround negligence and liability standards.
Or maybe municipalities should have stricter enforcement of laws. Why should an autonomous system have to conform to the crowd when the crowd is breaking the law en masse? Perhaps there should be more focus on safety and more training for drivers.
But that leads me into my main gripe.
Unless I am missing something here, autonomous vehicles aren’t going to be morally interiorized agents. I’m not sure I would like to ride in a car stricken with the moral ennui of a third-year philosophy student ruminating over the trolley problem. I have a feeling it would change the radio station to one that plays grunge, and the annoyance of that would just be too much.
Applying the trolley problem to autonomous vehicles pushes us to believe that the technology will be embedded with our own ethical proclivities. And yet, that already happens, as James Moor explains:
Planes are constructed with warning devices to alert pilots when they’re near the ground or when another plane is approaching on a collision path. Automatic teller machines [cashpoints] must give out the right amount of money. These machines check the availability of funds and often limit the amount that can be withdrawn on a daily basis. These agents have designed reflexes for situations which require monitoring to ensure security. Implicit ethical agents have a kind of built-in virtue—not built-in by habit but by specific hardware or programming.
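Moor’s point about built-in virtue can be made concrete. The withdrawal limits he describes might be sketched as a hard constraint in code, where the “ethics” is never reasoned about at runtime but simply enforced. This is only an illustrative toy, assuming a hypothetical `authorize_withdrawal` check and a made-up daily cap, not any real ATM’s logic:

```python
# A minimal sketch of an "implicit ethical agent" in Moor's sense:
# the virtue is built in as a fixed programmed constraint, not deliberated.
# All names and limits here are hypothetical illustrations.

DAILY_LIMIT = 500  # hypothetical per-day withdrawal cap


def authorize_withdrawal(balance, withdrawn_today, amount):
    """Approve a withdrawal only if funds exist and the daily cap holds."""
    if amount <= 0:
        return False  # reject nonsense requests outright
    if amount > balance:
        return False  # never dispense money the account does not hold
    if withdrawn_today + amount > DAILY_LIMIT:
        return False  # the "built-in virtue": a hard-coded limit
    return True
```

The machine never weighs whether dispensing the money would be right; the designers settled that question in advance and fixed it in the program.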
To me, the application of the trolley problem to autonomous vehicles highlights a bigger problem that we have yet to properly consider. Autonomous agents are beginning to have significant consequences, and yet it is impossible to decipher intent. And the role of intent is a big feature of the trolley problem experiments. We don’t really know what is going on inside the box, and that isn’t comforting.
Notice what is suggested in this piece from The Atlantic:
Human drivers may be forgiven for making an instinctive but nonetheless bad split-second decision, such as swerving into incoming traffic rather than the other way into a field. But programmers and designers of automated cars don’t have that luxury, since they do have the time to get it right and therefore bear more responsibility for bad outcomes.
Some version of this view continually pops up. A programmer who designs an algorithm for a self-driving car bears more responsibility than a driver in a similar situation. Why is that, exactly? Yes, we all make mistakes, but generally people still maintain good intent. We know that people are flawed, but they don’t mean to harm. An autonomous vehicle, on the other hand, cannot maintain any intent. Thinking through the trolley problem grafts that moral calculation onto autonomous vehicles, regardless of how useful it might be.
The trolley problem is an ethical blankie, a token of childhood in a wildly changing world.
Beginning with work by Heider and Simmel in 1944, countless studies have found that simple two-dimensional geometric shapes, such as triangles and discs, evoke attributions of beliefs, desires, emotional states, and even genders. Humans are quick to ascribe agency to objects and we are rapidly creating a population of digital and robotic agents.
Autonomous vehicles are just the first generation. We are entering an age of technoanimism.
Animism often conjures up regressive images of a “primitive religion” in which “trees, mountains, rivers and other natural formations possess an animating power or spirit.” But animism should be understood as a knowledge system, standing in contrast to the traditional subject-object distinction. Botanists might collect a specimen to categorize it, sort it, and place it within a larger system of knowledge. The botanists are the subject and the trees are the objects, but that isn’t the only way of understanding. Instead, animists will “talk with trees” to understand the trees’ relationships in their world, expecting a response and responding in kind. Rather than mere objects, trees are understood as subjects in their own right. Anthropologist Nurit Bird-David put it like this: against the understanding of “I think, therefore I am” stand “I relate, therefore I am” and “I know as I relate.”
How we relate to these new technological agents is a far more important discussion than a new application of the trolley problem. I don’t ask for much, so make me happy by not talking about the trolley problem again. Let’s move on to something more substantive.