We are social beings. As much as we admire the “rugged individualist”, from the Lone Ranger to Crocodile Dundee, in reality most of us are sensitive to the behavior of others around us. We conform to “social norms” on a myriad of occasions: when queueing for the bus, when participating in recycling because our neighbors do, when we do (or do not) cheat on our taxes based on what our compatriots do, and even when we adjust our opinions on scientific issues based on what the majority of scientists think.
Conforming to norms makes perfect sense in many circumstances: Although Kevin Kline thought it appropriate to drive through London on the right-hand side of the road in A Fish Called Wanda, most of us value our lives sufficiently to drive on the left in the U.K. or Australia.
But what if social norms are provided not by fellow humans but by nonhuman agents, such as computers? When Stanley Kubrick’s famous HAL orders us not to disconnect his cognitive circuitry in 2001: A Space Odyssey, do we conform?
A recent article by Ulrich Weger and colleagues in Psychonomic Bulletin & Review tackled this question. The authors argue that “Real-life encounters with face-to-face contact are on the decline.” Indeed, many of us increasingly interact with each other electronically, for example via Facebook, and many people use video games to interact not with actual people but “avatars” and other artificial non-human entities.
In two experiments, Weger and colleagues showed that people are quite sensitive to encounters in a virtual world. In the experimental condition, participants played an immersive video game in which they travelled through a virtual landscape and, through the eyes of an avatar, manipulated the environment at will.
Following this game, participants went through 30 trials of an ostensibly unrelated “job-selection task”, in which they had to choose between two candidates for a position. Prior to registering their choice, participants were presented with the recommendations of two computer-based algorithms, under the cover story that those algorithms were “under development to help select future applicants more easily.”
The computer-generated recommendations were programmed to give the objectively “wrong” answer on nearly half the trials—that is, the algorithms selected the job candidate with the lower score on two dimensions of competence.
What would people do when confronted with two wrong recommendations by computer algorithms? Would they conform, or would they stick to their own judgment?
Weger and colleagues found that people frequently conformed to the wrong recommendations of the computer algorithms. Crucially, they did so more often after they had played an immersive video game than in a control condition in which participants merely surfed the internet or watched other participants play the immersive game.
Apparently, seeing and interacting with the world through the eyes of an avatar raises our appreciation of computers sufficiently so that we trust them more in an unrelated task a short while later—even if the computers are identifiably wrong.
Why would people do this? If your ATM tells you that your checking account has a balance of $8,000,000, would you really plan that long-delayed holiday to Suriname? Or would you think that this might constitute an error?
Weger and colleagues suggested that one reason for the observed conformity might be people’s desire to be accurate. The experimental tasks were ultimately arithmetic in nature—the “correct” choice for the job was the candidate whose combined score on the two dimensions was highest. Computers are often seen as the undisputed experts in arithmetic: We don’t expect a computer to return “5” when asked to add “2+2”. And if a computer does return 5, then it may be tempting to think that it somehow knows better.
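The scoring rule just described is simple enough to make concrete. The sketch below is purely illustrative, not code from the study: the candidate names and scores are invented, and the rule (pick whoever has the higher sum of the two competence scores) is my reading of the task as summarized above.

```python
# Illustrative sketch of the job-selection rule described above.
# The objectively "correct" candidate is the one whose two competence
# scores add up to the higher total. Names and numbers are hypothetical.

def correct_choice(candidate_a, candidate_b):
    """Return the candidate with the higher combined competence score."""
    total_a = sum(candidate_a["scores"])
    total_b = sum(candidate_b["scores"])
    return candidate_a if total_a >= total_b else candidate_b

a = {"name": "Candidate A", "scores": (7, 6)}  # combined score: 13
b = {"name": "Candidate B", "scores": (8, 4)}  # combined score: 12

print(correct_choice(a, b)["name"])  # prints "Candidate A"
```

A faulty recommendation in the experiment is then simply the output of an "algorithm" that names the other candidate, the one with the lower total.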
An alternative explanation can be found in people’s desire to be part of a group and to gain that group’s approval. We may conform to our friends’ wish to go square dancing because we seek their approval. Weger and colleagues suggested that this desire for affiliation may also have been at work in their studies, in particular because the extent of conformity differed between conditions. On this account, exploring a virtual world as an avatar may make us seek a form of approval from otherwise soulless computer programs that we would not normally express.
This conclusion may appear somewhat dystopian. Indeed, the researchers believe there is a need to reflect on gaming practices and on the consequences of what happens when people enter the artificiality of a virtual world—especially given that young people on average spend more than an hour a day gaming.
“Parents, educators, and players will need to take these consequences into consideration and take appropriate countermeasures,” says Weger. “For instance, at the very least it would be appropriate to reflect on what it really means to be human. We need to examine how this humanness can be educated and strengthened when it is shifted towards a more robot-like nature during virtual journeys as an avatar.”
Someone has to stand up to HAL, after all, should the need arise.