The Progress of Understanding Explanations

The word that came to mind as I read the collection of articles in the special issue of the Psychonomic Bulletin & Review dedicated to the processes of explanation was “progress.” The nature of explanation has of course been a core concern of cognitive scientists ever since there have been cognitive scientists. Yet, with a few important exceptions, I think it is fair to say that it hasn’t always been a core concern of experimental cognitive research, one on a par with, say, decision making, problem solving, or category learning.

What this collection makes clear is that this situation has now changed: numerous empirical scientists are interested in the phenomenon. And the outcome that usually follows the advent of experimental techniques has happened here: The phenomenon turns out to be deliciously more complicated—and interesting!—than previously thought. To this reader the field appears to be on the steep part of the learning curve when it comes to understanding explanation. Progress is happening, and rapidly.

For instance, consider the role that simplicity plays in judging the quality of an explanation. Many past investigators have argued that, all else being equal, a simple explanation is considered better (is perhaps more convincing, more persuasive, and so forth) than a complex one. In this volume, Walker, Bonawitz, and Lombrozo present one way that simplicity may be operationalized and demonstrate that simplicity is indeed preferred. Suppose Plants A and B are both sick. Plant A received too little sun. Plant B received too little water. Both plants are planted in soil of an unusual color. Citing their own previous findings, the authors note that both children and adults tend to prefer explanations of two effects (two sick plants) that appeal to a single, common cause (unusual soil) over ones that appeal to two causes (little sun and little water), one for each effect. Walker and colleagues asked a new and interesting question, namely, how this preference for simplicity is itself affected by whether children were prompted to “explain” what they saw (e.g., “Why do you think these [plants] are sick?”). Interestingly, 5-year-old children asked to explain exhibited a larger simplicity bias than those who were not. (Neither 4- nor 6-year-olds exhibited this effect, which the authors attribute to the former group not understanding the task and to the possibility that the little-sun and little-water causes were especially salient to the latter group on the basis of prior knowledge.)

So, simple explanations are better, right? As is often the case, things are not so simple. In another article in this issue, Zemla, Sloman, Bechlivanidis, and Lagnado asked whether the factors identified as contributing to explanation quality—including simplicity—would hold up for naturalistic explanations of real-world phenomena. The phenomena and explanations were taken from Reddit’s Explain Like I’m Five and included questions such as why college tuition has increased or why doctors, despite their training, sometimes contract Ebola from their patients. Subjects rated the accompanying explanations on their quality and 20 other variables, including complexity (the reverse of simplicity). Surprisingly, and unlike in the work of Walker and colleagues described above, complexity was positively correlated with explanation quality! Zemla and colleagues observed that an explanation might be viewed as complex both because it presents multiple mechanisms that lead to the effect and because it is very detailed. In fact, follow-up analyses revealed that both of these senses of complexity contributed to explanation quality. A number of other interesting variables also contributed, including internal coherence (how well the parts of an explanation fit together) and perceived expertise (whether subjects believed the explanation was written by an expert).

So are simple explanations good or bad? The two studies differed in numerous ways (scenarios from the lab vs. the real world, subjects who were children vs. adults, etc.), any one of which could be the factor responsible for the different findings. Zemla et al. themselves raise the intriguing possibility that the key difference may concern whether the mechanism(s) cited in an explanation operate probabilistically or deterministically. Complexity in the form of multiple probabilistic mechanisms may strengthen an explanation, for the simple reason that the additional mechanisms raise the probability of the effect. But when mechanisms are deterministic (that is, individually sufficient, yielding the effect with probability 1), adding mechanisms has no effect on the probability of the effect. In these circumstances, the preference for simplicity emerges. But whatever the correct explanation, it has become clear that simplicity is not as simple as previously thought. Recognizing this fact is progress.
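A simple illustration of the arithmetic (my own, not one the authors give): suppose two independent probabilistic causes each produce the effect with probability p. An explanation citing both yields a probability of 1 − (1 − p)² = 2p − p², which exceeds p whenever 0 < p < 1, so the more complex explanation really does make the effect more likely. If each cause is instead deterministic (p = 1), the effect occurs with probability 1 whether one cause or two is cited, and the additional mechanism adds nothing.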

Progress is also being made at identifying the mental processes by which people settle upon an explanation. Sometimes what needs explaining is a set of multiple, sequentially observed data points. Past research has shown that evidence appearing earlier in a sequence carries relatively greater weight in determining which explanation is ultimately preferred, a phenomenon some have interpreted as indicating that initially formed hypotheses bias the interpretation of subsequently observed evidence. But whereas such conclusions have until now been inferred from subjects’ final judgments, in this volume Scholz, Krems, and Jahn explore the potential of the exciting method of memory indexing to provide information about the mental processes by which explanations emerge over time. Memory indexing interprets eye movements as reflecting reasoning processes in the special case where conceptual material has previously been associated with spatial locations on a computer screen. In this study, subjects were first instructed on various types of medical symptoms (experiencing some sort of pain, or skin ailment, or respiratory problem, etc.), and then on which of those symptom types were a consequence of exposure to each of four different chemicals. In turn, those four chemicals were associated with the four quadrants of the computer screen.

In a test phase, subjects listened to a list of symptoms (e.g., “sting,” an example of pain; “rash,” an example of a skin ailment, etc.) and were asked to identify their cause. Subjects’ eye movements over a virtually blank screen were interpreted as reflecting which hypothesis (chemical) they were mentally processing in response to each symptom. Some of the results were perhaps not too surprising: Upon hearing a symptom, subjects tended to look at the screen quadrant associated with the chemical that that symptom implicated. But, interestingly, when a symptom was associated with two chemicals, subjects looked longer at the screen quadrant associated with the chemical that had been more strongly implicated by the previous symptoms. The authors interpret this finding as an instance of integrated probability matching, in which reasoners attempt to integrate new information with a current hypothesis. Of course, this sort of biased processing provides one account of why evidence has a greater influence on an ultimate decision when it is presented earlier rather than later. Scholz and colleagues also suggest that memory indexing can even provide information about when a reasoner replaces the leading hypothesis with a new one!

The authors acknowledge that there is still much that is uncertain about what this method reveals about explanation, such as what proportion of the eye fixations is due to retrieving the hypothesis directly associated with a symptom versus the cognitive operations that integrate that symptom with the current hypothesis. Yet although numerous computational models have been proposed as accounts of evidence-accumulation processes, direct empirical evidence regarding the operation of those processes is harder to come by. That we have a new source of such evidence is progress.

Finally, much of the experimental study of explanation has been energized by the finding that engaging in explanation when acquiring new material often yields learning benefits. Yet one sign that the scientific understanding of an empirical phenomenon is advancing is the identification of conditions under which the phenomenon should be absent. In this volume, Rittle-Johnson and Loehr in fact identify a number of important conditions of just this type. First, because explanation tends to induce learners to search for general patterns, it may cause them to overlook exceptions, with the result that explanation may be detrimental in domains where exceptions are common (the authors cite grammatical rules as an example). Second, although explaining conceptual material is often beneficial, it can be harmful when what is being explained is a reasoner’s own, incorrect pre-existing ideas (because it renders the reasoner less open to new information). Third, there are different types of explanations, and prompting learners for one type (e.g., in the domain of mathematical problem solving, why one performs certain operations) can come at the expense of other sorts of knowledge (procedural knowledge regarding how one performs the operations). Fourth, explaining does not always yield learning advantages relative to other instructional techniques. Learners whose attempts to explain fail (they are unable to generate an explanation of any kind) may benefit more from direct instruction, for example.

In a similar spirit, Soares and Storm in this volume investigate the role of explaining in inducing forgetting. They presented undergraduate subjects with arguments (e.g., that universities should require their students to spend a semester studying abroad) and then asked them to generate arguments of their own (either in favor of or opposed to the initial argument). Intriguingly, they found that generating an argument resulted in subjects remembering less about the initial argument (relative to a suitable control condition). One caveat was that this effect disappeared in the special case in which the subject’s own argument was highly related to the original one, suggesting that the integration of the two arguments promoted (or at least did not harm) memory. Nevertheless, the clear message from both Rittle-Johnson and Loehr and Soares and Storm is that when it comes to explanation, just as is the case for most good things in life, moderation is sometimes called for. Given the real-world importance of the phenomena investigated, recognizing this is most definitely progress.

One will find many other fascinating topics in this heterogeneous collection, such as the role of analogies and metaphors in generating explanations, the nature of the explanations generated for magical and extraordinary events, and how explanation quality varies with the sort of inferences it will ultimately need to support. But taken together, these articles reflect a welcome development—that explanation is taking its rightful place alongside other top-tier cognitive phenomena such as decision making, problem solving, and category learning.

