The dark side of easy questions: Early confidence can sway jurors

“During the summer of 1979, Bernard Pagano, a Catholic priest, was arrested and put on trial in Delaware for a series of armed robberies. Seven eyewitnesses, ranging from clerks to bystanders, positively identified Father Pagano as the ‘gentleman bandit,’ whose well-tailored appearance and courteous manners always belied his felonious purpose. As the trial was nearing an end, the real ‘gentleman bandit,’ fourteen years younger than Father Pagano and hardly a look-alike, turned himself in.” This account of mistaken eyewitness identification is, alas, hardly unique: The list of people who have been wrongly convicted based on faulty eyewitness identification is long and tragic.

Unlike Father Pagano, many others have served years, if not decades, in prison before being exonerated by DNA technology that became available long after their conviction. There have been at least 330 post-conviction DNA exonerations in the United States, after an average of 14 years served in prison. In 70% of those cases, mistaken eyewitness identification was a factor in the wrongful conviction.

There is little that’s more powerful in a courtroom than a witness who points a finger at the defendant and says “he did it” with great confidence. Unfortunately, a large body of evidence tells us that eyewitnesses—no matter how honest and confident—are not always reliable.

For example, we know that subtle changes in the wording of questions can elicit strikingly different testimony: Ask a person who has just watched a video of a car accident how fast the cars were travelling when they “contacted” each other, and their speed estimates are considerably lower than those given after an identical video and an identical question in which the word “contacted” has been replaced by “smashed.”

But what if questions cause trouble even without being suggestive in one way or another? Could simply changing the order of questions affect how eyewitnesses appraise their own memory, and could that in turn affect how jurors appraise those witnesses?

A recent article in the Psychonomic Bulletin & Review examined this seemingly innocuous question. Researchers Michael and Garry motivated their research by previous findings involving trivia questions. Across a series of studies, it has been repeatedly shown that participants who answered trivia questions from the easiest to the most difficult believed they got more questions right than participants who answered the questions in the reverse order. In fact, accuracy was the same regardless of question order; what differed was only people’s perception of their own abilities. You think you know more if you first respond to “What is the capital of France?” and then to “What is the capital of Timor-Leste?” rather than the other way round, even if your actual recall of Dili is the same in both cases.

In the witness context, confidence in one’s own memory may translate into confidence in the courtroom—and it is well known that jurors are particularly swayed by confident eyewitnesses.

And this is exactly what Michael and Garry found across six experiments.

The main stimuli in their experiments were video recordings of (staged) crimes, which were followed by a surprise memory test after some delay. For example, participants might watch a video of a tradesman who stole items from an unoccupied house he was working in. The crucial manipulation was the order in which the 30 questions of the surprise memory test were presented. In one condition, the first questions were those that had elicited the highest confidence ratings among participants on a separate pre-test. For example, the first question might be “What did the tradesman eat while he was in the kitchen?” The questions gradually became more difficult, until the last question was presented, which might be “How many toothbrushes were in the bathroom?” In another condition, the order of questions was reversed.

The figure below shows that this manipulation was successful: The confidence with which participants rated their response to the 30 items went up or down, depending on question order.

Figure 1 (top panel) in the featured article.

Accuracy of responding showed a similar pattern, but it is of lesser interest here. Of greatest interest are people’s own retrospective estimates of their overall memory accuracy and their overall confidence.

Those data are reported in the next figure, which shows that in both conditions people underestimated their performance: They actually got about 20 questions correct irrespective of question order (darker bars in the top panel), but people’s estimates were below that in both conditions (lighter bars). The extent of that pessimism differed between conditions, exactly as expected. This is revealed in the bottom panel, which shows the difference scores between actual and estimated performance—it is clear that when people started out with easy questions (“high-to-low confidence”), they were less pessimistic than when they had to answer some tough questions first (“low-to-high confidence”).

Figure 2 in the featured article.

The above pattern was mirrored in the final measure, namely people’s overall confidence in their performance at the end of the experiment. Participants were less confident if they started out with low-confidence questions than with high-confidence questions. Michael and Garry replicated this basic result several times with quite large samples. We may therefore have some confidence in this pattern.

It is important to bear in mind that the conditions differed only with respect to the order of questions: the actual questions were identical overall.

So far, so good. Question order matters to a person’s confidence. But what about the more pressing question of how this might play out in the judicial system?

In three further experiments, Michael and Garry therefore presented participants not with the video of a crime, but with the average responses of people who had watched the video in the first set of experiments. In other words, participants now acted as “jurors” who were presented with “witness” reports of a remote event, including the confidence ratings of the “witness” to each question. After studying the “witness” reports, participants rated their confidence in the accuracy of the witness, and estimated the number of questions that the witness had answered correctly.

You may be able to guess the main result: “jurors” thought that witnesses in the high-to-low confidence condition answered more questions correctly than witnesses in the low-to-high condition. This was not a small effect: the estimates of the number of correctly answered questions differed by more than 10%.

Finally, jurors themselves were considerably more confident in the accuracy of a witness if that witness had started out highly confident than if the witness had initially not been very confident of their responses.

Michael and Garry suggest that their study “paints a worrying picture of the malleability of beliefs about memory accuracy.” Without using any type of suggestion or subtle change in wording, the researchers significantly altered both witnesses’ and jurors’ confidence merely by manipulating the order in which participants answered questions.

Perhaps this is why in TV courtroom dramas, witnesses are frequently asked to state their name and address first, before being asked about more specific details of an event.

Author

  • Stephan Lewandowsky

    Stephan Lewandowsky's research examines memory, decision making, and knowledge structures, with a particular emphasis on how people update information in memory. He has also contributed nearly 50 opinion pieces to the global media on issues related to climate change "skepticism" and the coverage of science in the media.


