The Trolley Problem

I recently re-watched The Good Place (great show on Netflix) and wanted to weigh in on the ethical questions raised in the episode "The Trolley Problem."

Brief introduction: picture a trolley rolling down its tracks. Just up ahead, five people are tied to the tracks, sure to be killed by the trolley unless its path is altered. Good news: you have the power to switch it to another track by pulling a lever. Bad news: there's one person on that track. What do you do?

The common answer is to pull the lever: kill the one guy, save five. It's a net positive. That's Utilitarianism. It's clean and simple, and it's the general consensus of what's morally right in this instance.

The problem gets trickier when we change some conditions. Is it OK, for example, to divert the trolley not by pulling a lever, but by pushing someone onto the tracks to jar it off its path?

The consensus flips. We say “no.”

Even though the math is the same: kill one guy, save five. Net positive.

Or is the math the same?

In the first example, we perform a benign act: we pull a lever. Nothing evil there. Then that neutral action has two outcomes: five people live, one person dies. We can easily add up those outcomes to find that we’re plus four on the day. We can then make the “right” decision.

But in the second example, the act isn't benign. The act itself is evil. We’re not just flipping a switch, we’re directly killing a guy. So now we're trying to weigh a good outcome against an evil act. Apples to oranges. And the majority of us seem to subscribe to the idea that an evil act is not justified by a good outcome. That’s Deontology: right is right, wrong is wrong.

Phooey.

If I put a million people on the track, we’re pushing the guy even if it is inherently wrong, right? Consequentialism (weighing net results) wins eventually.

So it isn't that the act always trumps the outcome. The act is just weighted differently in our math equation, so we have to save a whole lot of people to justify killing one.

But why? In both examples, one guy dies. The only difference is when he dies, and how far removed we are from that evil.

Here’s my thought: we see evil as this dark, awful thing that will stain us if we’re too close.

When pushing the guy, we get evil all over us. It smells. We don't know how to get it out. We're tainted.

When pulling the lever, evil occurs wayyy down the line. We're out of the evil-splash-zone. We stay clean. It’s kind of like piloting a killer-drone using an Xbox controller, where pressing coloured buttons doesn’t feel so terrible. (If you’re wincing a bit—good.)

Back to trolleys: I think the discrepancy in the examples is that we lie to ourselves about what happened when we pulled the lever.

Let's try this: take away the five people tied to the track. Now what's worse: pushing a guy in front of a trolley or guiding the trolley into a guy?

They're clearly both terrible! In both instances a man dies directly as a result of our actions. We one-hundred-percent killed him in both scenarios. Extending our arms in a pushing motion had the exact same effect as grabbing and moving a lever.

So how neutral is flipping the lever, really? Does it just feel neutral because we're far enough away from the evil outcome? Is it OK because both the evil outcome and the good outcome occur simultaneously? Are we fooling ourselves by saying he’s just a casualty of the situation?

Or, in both examples, do we save five people by doing something evil? Do they differ only in that pushing a guy is tougher to stomach?

I say yes to those last two questions. I stick with a consequentialist view and say we support equally those who flip the lever and those who push the guy. (Though maybe we keep a close eye on the second group—who knows what else they're capable of.)

Now, I'm aware of the dangers here. The trolley is a very simple, isolated problem.

What happens if I start applying qualities to the people? Maybe the one person on the track is a child; the five are in their fifties.

What if there's only a 50% chance the trolley will be derailed after you push the guy?

What if the one guy on track two dies a more painful death?

What if we don't know where the trolley will go if derailed—or where that second track leads?

As our math gets more complicated, it gets harder to do the evil thing in the name of good. We'll naturally exercise more caution, and skew towards inaction if we're unsure—especially if the stakes are high.
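For what it's worth, the consequentialist arithmetic under uncertainty is just an expected-value calculation. Here's a toy sketch in Python—counting only lives, with the probabilities entirely made up—just to show how fast "plus four" erodes once we're not sure the push will work:

```python
# Toy expected-value math for the trolley variants.
# Only lives are counted; the probabilities are made up for illustration.

def expected_lives_saved(lives_at_risk, lives_sacrificed, p_success):
    """Expected net lives saved by acting, compared to doing nothing.

    If the act succeeds (probability p_success), the five are spared
    but the one dies. If it fails, everyone on the main track dies
    anyway, plus the one we sacrificed.
    """
    if_success = lives_at_risk - lives_sacrificed
    if_failure = -lives_sacrificed
    return p_success * if_success + (1 - p_success) * if_failure

# Pulling the lever: certain to work. Net +4.
print(expected_lives_saved(5, 1, 1.0))   # 4.0

# Pushing the guy, with only a 50% chance of derailing the trolley:
print(expected_lives_saved(5, 1, 0.5))   # 1.5
```

At a 50% success rate the expected payoff drops from +4 to +1.5, and every extra unknown shaves it down further—which is exactly why the evil-act-for-good-outcome trade gets harder to stomach as the math gets murkier.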

Seems wise.

But: how often will we be sure enough of the outcomes to say, "fuck yeah, consequentialism has me covered on this one!"?

So do we ignore it completely?

If we don’t do the evil thing out of fear of unintended consequences, does the math not bring us to negative four? Should we just make consequentialist decisions based on the information provided and say the rest was out of our control? Should there simply be a standard for how much information is enough to justify evil?

Or is this where we bring back deontology? If we can't be sure—maybe we just do the right thing because it's right? But wait, did we kill four people because we were scared or lacked confidence? Is the inaction—the conscious decision not to flip the switch and let five people die—just as evil as the action?

I think I just opened a new can of worms that I don’t want to go into right now. I don't even know what schools of philosophy to look up yet.

For now, just know that if you pulled the lever, you killed a guy. Don’t kid yourself.

But you still did the right thing.

Maybe.