Kim and Drew want to go out for the evening. Kim wants to attend a symphonic orchestra, but Drew wants to attend a gymnastics competition. While they do not agree on what event to attend, both prefer to do something together rather than alone. The situation constitutes an example of an experimental game, a set of methodologies widely used in psychology, economics, and other disciplines to model a range of phenomena from market competition to social interaction to biological evolution. Specifically, it constitutes an example of a social dilemma popularly known as the “battle of the sexes”. The table below shows the payoff matrix of the battle of the sexes.
|                  | Drew: Orchestra | Drew: Gymnastics |
|------------------|-----------------|------------------|
| Kim: Orchestra   | 2, 1            | 0, 0             |
| Kim: Gymnastics  | 0, 0            | 1, 2             |
The number pairs in the table represent the consequences for Kim and Drew depending on the combination of decisions. In each pair, the first number represents the consequence for Kim and the second the consequence for Drew; higher numbers represent more desirable outcomes. Kim would prefer to go to the orchestra, so if they attended this event together, the payoff would be 2 for Kim and 1 for Drew. Drew would prefer the gymnastics competition, so if they attended this event together, the payoff would be 2 for Drew and 1 for Kim. And if they each attend their preferred event alone, the payoff would be 0 for both. Thus, if they agreed on where to go, they would both be better off. I would personally recommend attending a Cirque du Soleil show to enjoy a combination of music and acrobatic stunts, thereby removing the conflict of preferences.
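To make the structure of the game concrete, here is a minimal Python sketch (the variable names and the equilibrium check are mine, not part of the original example) that encodes the payoff matrix above and reports which outcomes are pure-strategy Nash equilibria, that is, outcomes from which neither person can do better by unilaterally switching events.

```python
# Payoff matrix for the battle of the sexes described above.
# Each entry is (Kim's payoff, Drew's payoff).
payoffs = {
    ("Orchestra", "Orchestra"):   (2, 1),
    ("Orchestra", "Gymnastics"):  (0, 0),
    ("Gymnastics", "Orchestra"):  (0, 0),
    ("Gymnastics", "Gymnastics"): (1, 2),
}
choices = ["Orchestra", "Gymnastics"]

def is_nash_equilibrium(kim_choice, drew_choice):
    """True if neither player can get a strictly higher payoff by
    unilaterally switching to the other event."""
    kim_payoff, drew_payoff = payoffs[(kim_choice, drew_choice)]
    kim_can_improve = any(payoffs[(alt, drew_choice)][0] > kim_payoff
                          for alt in choices)
    drew_can_improve = any(payoffs[(kim_choice, alt)][1] > drew_payoff
                           for alt in choices)
    return not (kim_can_improve or drew_can_improve)

for kim_choice in choices:
    for drew_choice in choices:
        if is_nash_equilibrium(kim_choice, drew_choice):
            print(kim_choice, drew_choice, payoffs[(kim_choice, drew_choice)])
```

Running this prints the two "attend together" outcomes, (Orchestra, Orchestra) and (Gymnastics, Gymnastics): both are equilibria, so the disagreement is only about which of the two to coordinate on.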
Similar scenarios are used by researchers in psychology and economics to better understand human behavior in various contexts. In some of these experiments, deception is used: for example, participants may be told that they will interact with other participants when they are in fact interacting with the researchers’ confederates or with preprogrammed algorithms. Deception is also used when details are omitted to avoid shaping participants’ expectations in ways that could produce unnatural behavioral patterns.
Experimental psychologists and economists disagree on whether deception should be used in this line of research. In psychological research, there is more tolerance for these manipulations, as they allow for tighter experimental control and greater methodological flexibility. In economics research, by contrast, deception is discouraged and in some cases banned outright. The rationale is that such bans protect a common good shared by everyone who uses a participant pool: participants do not alter their behavior as a result of having taken part in prior deceptive experiments.
Researchers in psychology and economics share a common interest: to better understand human behavior in economic contexts. Their differing views on the use of deception, however, can preclude working together, because if deception is allowed, data quality may suffer.
Krasnow, Howard, and Eisenbruch (2019) tested whether suspicion of deception, or a history of being deceived in research contexts, meaningfully changes behavioral patterns.
The research is described in an article published in the Psychonomic Society journal Behavior Research Methods. In two studies, the authors examined 1) the opinions of researchers in the fields of psychology and economics about the use of deception, and 2) whether past experiences with deception make participants more suspicious, and whether being suspicious affects behavior.
In the first study, they surveyed more than 500 researchers from psychology and economics departments, using open-ended questions, multiple-choice items, and 7-point Likert-type items. One question, for example, was “Should deceptive practices be banned in general science journals?”, with responses ranging from 1, meaning “absolutely not”, to 7, meaning “absolutely”.
As shown in the figure below, economists considered banning deception in participant pools and journals to be important, whereas psychologists considered such bans to be harmful.
Opinions also differed on methodological rigor, effectiveness for achieving research goals, and the potential damage to future studies. A higher proportion of economists considered studies using deception to be less rigorous, ineffective, and likely to damage the field, whereas psychologists were less likely to endorse such views and sometimes considered studies involving deception to be as rigorous as, or more rigorous than, those without it. These differing opinions are in line with the policies of the two fields’ respective participant pools and academic journals.
In the second study, Krasnow and colleagues recruited more than 600 participants from different participant pools, including psychology, economics, and online-based ones. Participants completed several tasks, including the Ultimatum Game, the Dictator Game, and a welfare trade-off task. Participants also answered questions designed to detect spontaneous suspicion of being deceived, as well as questions about previous experiences with deception, whether they were suspicious during the experiment, and, if so, whether and how that suspicion influenced their behavior. Note that no deception was used at any point in the study.
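For readers unfamiliar with these tasks, here is a brief illustrative sketch of a single Ultimatum Game round (the function and the amounts are hypothetical, not the authors’ implementation): one player proposes how to split an endowment, and the other either accepts the split or rejects it, in which case neither player gets anything.

```python
def ultimatum_round(endowment, offer, responder_accepts):
    """One Ultimatum Game round: the proposer offers `offer` out of
    `endowment`; acceptance pays out the split, rejection pays nothing."""
    if not 0 <= offer <= endowment:
        raise ValueError("offer must be between 0 and the endowment")
    if responder_accepts:
        return endowment - offer, offer   # (proposer payoff, responder payoff)
    return 0, 0

# Illustrative example with a 10-unit endowment and an offer of 3.
print(ultimatum_round(10, 3, responder_accepts=True))   # -> (7, 3)
print(ultimatum_round(10, 3, responder_accepts=False))  # -> (0, 0)
```

The Dictator Game removes the responder’s veto (the proposer’s split is simply imposed), which is part of what makes responses in these tasks informative about fairness and welfare trade-offs.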
Previous deception experience did not predict suspicion: it made participants neither more likely to be spontaneously suspicious nor more likely to declare suspicion when explicitly asked about it. Spontaneous suspicion rates also did not differ between participant pools that allow deceptive designs (psychology and online pools) and those that do not (economics pools). When asked directly, however, participants recruited from psychology pools were more likely to express suspicion than those recruited from economics pools, who in turn were more likely to express suspicion than those recruited from online pools (even though online pools usually do allow deception).
To assess whether suspicion altered behavior, Krasnow and colleagues compared performance on the different tasks between suspicious and credulous participants using Cohen’s d (the standardized mean difference in units of standard deviation). As shown in the figure below, all of the confidence intervals for the Cohen’s d values included 0, which is consistent with a lack of effects in all comparisons.
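As a rough sketch of this kind of comparison (using simulated data and a hypothetical helper function, not the study’s data or code), the following computes Cohen’s d for two independent groups along with an approximate 95% confidence interval based on the usual large-sample standard error.

```python
import numpy as np

def cohens_d_with_ci(group_a, group_b, z=1.96):
    """Cohen's d with pooled SD for two independent samples, plus an
    approximate 95% CI from the large-sample standard error of d."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    d = (a.mean() - b.mean()) / np.sqrt(pooled_var)
    se = np.sqrt((na + nb) / (na * nb) + d ** 2 / (2 * (na + nb)))
    return d, (d - z * se, d + z * se)

# Simulated example: both groups drawn from the same distribution, so the
# CI for d should typically include 0, mirroring the pattern in the figure.
rng = np.random.default_rng(0)
suspicious = rng.normal(5.0, 2.0, size=120)
credulous = rng.normal(5.0, 2.0, size=140)
print(cohens_d_with_ci(suspicious, credulous))
```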
Although disaggregating the results by combinations of tasks and samples produced occasional significant results (as would be expected by chance alone), follow-up Bayesian analyses indicated that the hypothesis that suspicion has no effect on behavior is over 30 times more likely than the hypothesis that it does.
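To illustrate the logic of that comparison, here is a minimal sketch using the BIC approximation to the Bayes factor, pitting a “no group difference” model against a “group difference” model on simulated data. This is only an illustration of the approach, not the authors’ actual analysis, and the resulting value will not match theirs.

```python
import numpy as np

def bic_gaussian(residuals, n_obs, n_params):
    """BIC (up to a constant) for a Gaussian model, given its residuals."""
    rss = np.sum(residuals ** 2)
    return n_obs * np.log(rss / n_obs) + n_params * np.log(n_obs)

def bf01_bic(group_a, group_b):
    """Approximate Bayes factor for 'no difference' (H0) over 'difference'
    (H1), via the BIC approximation BF01 ~= exp((BIC_1 - BIC_0) / 2)."""
    y = np.concatenate([group_a, group_b])
    n = len(y)
    # H0: one common mean (parameters: mean, variance).
    bic0 = bic_gaussian(y - y.mean(), n, n_params=2)
    # H1: separate group means (parameters: two means, variance).
    resid1 = np.concatenate([group_a - np.mean(group_a),
                             group_b - np.mean(group_b)])
    bic1 = bic_gaussian(resid1, n, n_params=3)
    return np.exp((bic1 - bic0) / 2)

# Simulated data with no true difference: BF01 > 1 favors the null,
# which is the direction of evidence reported in the study.
rng = np.random.default_rng(1)
print(bf01_bic(rng.normal(0, 1, 200), rng.normal(0, 1, 200)))
```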
Although it may be a bit early to advocate for lifting deception bans, this study provides empirical data for researchers concerned about the potential effects of methodologies that include deception. Psychologists and economists agree on the need for high-quality data, and both (along with the scientific community in general) will be better off basing research policies on empirical evidence whenever such evidence can be obtained. Just as Kim and Drew can have a great time by aligning their interests, so too can researchers in different fields.
Psychonomic Society’s article:
Krasnow, M. M., Howard, R. M., & Eisenbruch, A. B. (2019). The importance of being honest? Evidence that deception may not pollute social science subject pools after all. Behavior Research Methods. https://doi.org/10.3758/s13428-019-01309-y