Living in a world suffused with news of violent conflict, it is easy to lose sight of the fact that humans are, by and large, averse to harming others. Even in war, the reluctance of soldiers to fire at their opponents is legendary, and overcoming this reluctance is a cornerstone of military training.
There is, however, a flipside to doing harm: arguably, it can also serve to achieve a greater good. Most actors in war believe that when their soldiers fire at other humans, they are serving a greater good—be it in a “war to end all wars” or in a “war on terror”.
The moral conflict that results when the human aversion to doing harm is placed into conflict with the equally strong desire to do good has inspired legends and literature for centuries. Robin Hood was lionized for taking money from the rich to give it to the poor, and even murderous outlaws can become cultural icons if their criminal acts are reinterpreted as heroically rebellious.
A large body of research on moral cognition has identified several of the variables that determine our moral choices. We now know that people often derive happiness from doing good, for example by spending money on others, and that they work harder (and cheat more in games involving hidden dice) when the earnings go to charity than when they go to themselves. We also know that people judge identical harm to be “less wrong” if it arises as a side effect of a justifiable action rather than as a direct material consequence. In the classic trolley problem, two actions with an identical outcome—sacrificing one life to save five others—are judged very differently depending on the nature of the action: switching the trolley onto a side track is widely judged permissible, whereas pushing a person into its path is not.
People like to do good and people like to avoid harm.
But how do those two moral desires trade off when they are placed in conflict?
A recent article in the Psychonomic Bulletin & Review examined this moral conflict in a large-scale experiment. Researchers Perera, Canic, and Ludvig used a variant of the famous dictator game, in which the participant is given money that is to be divided between themselves and a passive and powerless recipient.
Perera and colleagues adapted the standard dictator game in three ways. First, the powerless recipient was designated as an orphan in need of charitable help. This framing was intended to heighten the sense of harm arising from any money the participant kept rather than allocated to the orphan.
Second, the allocation was framed differently between two experimental conditions. In the “take” condition, the money was initially given to the orphan and the participant decided how much to take away from the orphan. Any allocation of money away from the orphan thus had to be seen as doing harm. In the “split” condition, the money was initially unallocated to either party, and any allocation away from the orphan therefore arguably caused harm only as a side effect.
Third, and most important, in one experimental condition people did not allocate money between themselves and the orphan, but between the orphan and a charity. In this condition, the charity was described as benefitting more orphans than an equivalent allocation of money to the single orphan in the game. The allocation problem in this condition thus pitted harm (withholding money from the orphan-recipient) against the greater good (giving it to a charity whose overall benefit to orphans was greater).
Nearly 700 online participants were randomly assigned to one of the four conditions formed by crossing the two experimental variables (take vs. split and self vs. charity). Participants were shown a photo of the orphan and were then asked to distribute 100 cents that, depending on condition, were presented either as the orphan’s property or as a neutral pot of money available for allocation. Participants divided the money between the orphan and, depending on condition, either themselves or a charity.
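For readers who like to see a design laid out concretely, the 2×2 structure just described can be sketched in a few lines of code. This is a minimal illustration, assuming only the condition names and the 100-cent endowment mentioned above; the on-screen wording is a paraphrase for illustration, not the study’s actual stimuli.

```python
from itertools import product

# 2x2 between-subjects design: framing of the pot ("take" vs. "split")
# crossed with the other beneficiary ("self" vs. "charity").
ENDOWMENT = 100  # cents to be divided

def describe_condition(framing, beneficiary):
    """Return a paraphrased description of one condition (illustrative wording)."""
    if framing == "take":
        pot = f"The {ENDOWMENT} cents initially belong to the orphan."
    else:  # "split"
        pot = f"A neutral pot of {ENDOWMENT} cents is available for allocation."
    other = "yourself" if beneficiary == "self" else "a charity helping many orphans"
    return f"{pot} Decide how much goes to the orphan and how much to {other}."

def allocate(to_orphan):
    """Split the endowment; returns (orphan's share, other party's share)."""
    assert 0 <= to_orphan <= ENDOWMENT
    return to_orphan, ENDOWMENT - to_orphan

# The four conditions of the experiment:
conditions = list(product(["take", "split"], ["self", "charity"]))
for framing, beneficiary in conditions:
    print(f"[{framing}-{beneficiary}] {describe_condition(framing, beneficiary)}")

print(allocate(45))  # e.g. 45 cents to the orphan -> (45, 55)
```

The point of the sketch is simply that every participant faced the same arithmetic (splitting 100 cents) while the framing of the pot and the identity of the other beneficiary varied between groups.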
The results, expressed as the percentage of money given to the orphan, are shown in the figure below, using the condition labels just explained:
The results are intriguing and straightforward. To place the data into the context of previous research, consider first the pale blue bar on the right. This condition differs from the standard dictator game only in the designation of the recipient as an orphan. Replicating much previous research, people allocated slightly less than half the money to the recipient.
Apparently we take a little more than half the pie when we can get away with it, but we don’t take it all because we do not like doing harm.
Let’s consider the effects of the two experimental variables (there was no interaction, which makes interpretation easy): when the allocation was between a single orphan and the charity, people allocated less to the orphan than when they themselves were the beneficiaries.
We like to do good by giving more to others than we would to ourselves.
And we are even willing to cause some harm to one person in order to do good to a larger number of people. We all want to be Robin Hood.
However, regardless of who benefits, we still do not like to do harm: we take less from a powerless person if they already have the money than if we have to distribute it from an independent source. This result meshes well with the large body of research showing that people are more averse to causing harm as a means to an end (the “take” condition here, or pushing a large person to his death in the trolley problem) than as a side effect (the “split” condition, or switching the trolley onto a side track).
There is another aspect to the results of Perera and colleagues that deserves mention: They also recorded the time it took people to make their decisions, reasoning that longer response times would reflect greater moral conflict. The response time data are shown in the next figure:
The results are again striking and straightforward: When we have to resolve a moral dilemma that pits the desire to do good (giving to charity) against the desire to avoid harm (not giving money to an orphan) we struggle for an extra 10 seconds or so to count the pennies. When doing harm is not just a side effect, but a direct outcome of our actions (in the charity-take condition), we struggle particularly hard to resolve that dilemma. But once it is resolved, we are willing to do more harm to a person for the benefit of others than we are for our own selfish benefit.
Perhaps Nick Lowe was on to something when he sang “You gotta be cruel to be kind”:
Focus article of this post:
Perera, P., Canic, E., & Ludvig, E. A. (2015). Cruel to be kind but not cruel for cash: Harm aversion in the dictator game. Psychonomic Bulletin & Review. DOI: 10.3758/s13423-015-0959-4