It’s Tricky to Build an Explanation Machine – Let’s Fix That

What’s stopping scientists from building a machine that provides sensible explanations? Let’s be clear: what we need is a machine that explains simple matters, not free will or the plot of Inception. For instance, how would you respond if I asked you why apples don’t grow underground? Perhaps you’d say, “Because apples are a type of fruit.” Or you might say, “Because apples need sunlight to grow.” Whatever your answer, the question is not some difficult mystery in need of sleuthing. It seems more mentally demanding to book a flight or purchase groceries – both of which require planning and deliberation – than to explain the fact about apples. Given how easily some explanations come to mind, it ought to be straightforward to develop a computer algorithm that roughly mimics human explanations.

It’s not.

Here are a few examples of the sorts of explanations that computers can’t generate (and some of my friends’ responses in parentheses):

Why are trains bigger than cars? (Because they carry more stuff.)

Why don’t mice have wings? (Because they wouldn’t be mice.)

Why are tennis balls round? (Because they need to bounce.)

These kinds of questions are simple for humans, but difficult for a search engine to answer. Of course, we shouldn’t be too critical of search engines; their job is to aggregate information, and it’s possible that those questions are relatively novel. But the questions don’t flummox human reasoners, and it’s the domain of cognitive scientists to understand why human background knowledge is flexible enough to yield explanations for novel questions.

As cognitive scientists, we’re responsible for describing what a reasonable explanation should look like, and why some explanations are almost always dissatisfying (e.g., “That’s just the way it is”). So what does cognition research have to say about explanatory reasoning?

The generation of explanations

It’s early days yet in the empirical study of explanations. So far, researchers have discovered many systematic structural preferences for certain explanations over others, which I won’t review here (I have a summary forthcoming in the Stevens’ Handbook of Experimental Psychology and Cognitive Neuroscience). A dominant way of conducting explanatory reasoning research has been to pit Explanation A against Explanation B in a study, and then ask participants to choose which of the two they like more.

This general methodology will doubtless help catalog and discover novel preferences, but in daily life, people seldom have access to a set of explanations to evaluate. Instead, they need to generate an explanation, and relatively few studies have focused on that generative process.

If the papers in the latest Special Issue of Psychonomic Bulletin & Review are any indication, several researchers have begun directly theorizing about how people generate explanations. The papers present exciting new advances in how to characterize and explore the generative aspects of explanatory inference. Let me try to describe some of those recent ideas by revisiting the simple mysteries I opened with.

Generation by comparison, or, “Why are trains bigger than cars?”

To answer this question, my friend needed to know a little about trains, a little about cars, and a little about how they relate. And she had to compare and contrast all that information. In their article in the special issue, Christian Hoyos and Dedre Gentner argue that reasoners use comparison to discover properties on which to base novel explanations. They investigated the idea that comparisons between one class and another, such as trains and cars, can yield a set of concepts about what’s similar or different about them. What’s similar about trains and cars? For one thing, they both carry items and people. What’s different? Trains carry more items and more people than cars do. So they have to be bigger. Simple, right?

That was the authors’ main point: comparison is a relatively rapid process. The ability to generate explanations comes about early in life, and as Hoyos and Gentner reasoned, young children should be able to construct explanations by using comparisons between two concepts to highlight their similarities and differences. Those similarities and differences can then serve as the building blocks for the explanations they generate. To investigate this process, the authors studied six-year-olds by asking them to explain what made certain toy towers structurally “strong.” In the relevant conditions in their study, children evaluated two towers against one another, one that included a support brace and one that did not. When those two towers were easy to compare to one another (i.e., two structurally similar towers), children based their explanations on the element that differed between the towers. The authors argued that the ability to compare and contrast the towers helped the children isolate the components that contributed to a tower’s strength.

What’s important about this work is less the specific result than the more general solution Hoyos and Gentner present. They argue people might form an initial explanation by detecting salient structural similarities to isolate properties that can then form the basis of an explanation. The process needs to be constrained, of course, because any two objects have an infinite number of similarities and dissimilarities. But provided the appropriate constraints, Hoyos and Gentner’s overarching idea is eminently computable, and approximations of it might be integrated into existing artificial intelligence systems.
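To give a feel for what such an approximation might look like, here is a minimal Python sketch of comparison-based explanation generation. The feature dictionaries, the compare and explain_why functions, and the selection heuristic are all illustrative assumptions of mine, not Hoyos and Gentner’s model.

```python
# A toy sketch of comparison-based explanation generation, loosely inspired by
# Hoyos and Gentner's proposal. The feature representations and the selection
# heuristic below are illustrative assumptions, not their actual model.

def compare(features_a, features_b):
    """Split two feature sets into shared properties and differing ones."""
    shared = {p: v for p, v in features_a.items() if features_b.get(p) == v}
    differing = {p: (features_a[p], features_b[p])
                 for p in features_a
                 if p in features_b and features_b[p] != features_a[p]}
    return shared, differing

def explain_why(question_property, name_a, features_a, name_b, features_b):
    """Base a candidate explanation on a property that differs between the concepts."""
    _, differing = compare(features_a, features_b)
    # Heuristic: pick some other differing property and treat it as the reason.
    # A real account would need structural alignment and causal constraints,
    # not just feature matching.
    for prop, (val_a, val_b) in differing.items():
        if prop != question_property:
            return (f"{name_a}s are {features_a[question_property]} because they "
                    f"{val_a}, whereas {name_b}s {val_b}.")
    return "No explanatory difference found."

train = {"size": "bigger", "cargo": "carry many people and goods"}
car = {"size": "smaller", "cargo": "carry only a few people"}

print(explain_why("size", "train", train, "car", car))
# trains are bigger because they carry many people and goods, whereas cars carry only a few people.
```

Even this crude version reproduces my friend’s answer about trains and cars, though only because I hand-picked the features; the hard scientific work is in saying where those features and constraints come from.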

Generation by conceptualization, or, “Why don’t mice have wings?”

 “Obviously, mice can’t have wings. Mice are mice, they’re not supposed to have wings. A mouse with wings isn’t a mouse, it’s like, a bat or something.”

My colleague’s justification exemplifies a special kind of explanation known as formal explanation. You know an explanation is formal when it references the kind of thing that participates in the explanation – in this case, mice. An explanation isn’t formal when it refers to other sorts of information, such as what it’s made of (its material composition), what it’s used for (its teleology), or what brought it about (its root cause). In other words, this explanation is formal:

That car has four wheels because it’s a car.

but this one is not:

That car has four wheels so it can move quickly.

Sandeep Prasada pioneered the study of formal explanations. In a series of studies, including collaborative work we did together, he and his colleagues discovered that formal explanations can be used as a window onto our deep background knowledge, that is, our underlying conceptual framework. That’s because formal explanations only make sense for certain types of conceptual links between kinds and properties. Which of these two statements makes more sense?

That car has four wheels because it’s a car.

That car has a radio because it’s a car.

The first statement strikes me – and it struck the participants in a study of ours – as more sensible than the second. Having a radio seems less important to the concept of a car than having four wheels. A car without four wheels is broken in some way, or it’s not even a car. But a car without a radio isn’t broken or bereft of its car-ness. In other words, there exists a privileged link between certain kinds (cars) and certain properties (having four wheels), and as Prasada showed, formal explanations seem reasonable when they reference that privileged link. He refers to the link as a principled connection.

In his recent paper in the Special Issue, Prasada extends his theory of principled connections to show how they’re useful in building many different kinds of formal explanations. Principled connections establish links between a kind and information about what members of the kind are made of, where they come from, and how they behave. And so, under Prasada’s new analysis, all of these statements are viable formal explanations:

That has a tail because it’s a mouse.

That came from another mouse because it’s a mouse.

That eats cheese because it’s a mouse.

Mice have tails, they come from other mice, and they eat cheese. This background knowledge – and its conceptual organization – helps generate sensible formal explanations while prohibiting others.

In sum, Prasada’s elaborated theory presents a novel way of generating formal explanations that depends on how concepts are organized and how that background knowledge is accessed. Formal explanations are not cop-outs – it’s not as if my colleague said, “That’s just the way it is,” when I asked him why mice don’t have wings. Instead, his response reinforced the idea that there is some property other than wings (presumably legs) that is privileged. No modern artificial intelligence system that I’m aware of incorporates those privileged links into its processing, but on Prasada’s new account, those conceptual distinctions should be taken seriously. Perhaps they can even provide a guide for how to organize and implement new computational semantic networks.
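To illustrate what organizing such a network might involve, here is a speculative Python sketch in which kind–property links are tagged as principled or merely statistical. The KNOWLEDGE structure, the tags, the example entries, and the acceptance rule are assumptions of mine for illustration; they are not Prasada’s formalism.

```python
# A speculative sketch of a semantic network whose kind-property links are tagged
# as "principled" (in Prasada's sense) or merely statistical. The tags, entries,
# and acceptance rule are my own illustrative assumptions, not Prasada's formalism.

KNOWLEDGE = {
    "mouse": {
        "has a tail": "principled",             # part of what it is to be a mouse
        "came from another mouse": "principled",
        "eats cheese": "principled",            # behavior tied to the kind, per the examples above
        "lives in an attic": "statistical",     # true of some mice, but incidental
    },
    "car": {
        "has four wheels": "principled",
        "has a radio": "statistical",
    },
}

def formal_explanation_sensible(kind, prop):
    """A formal explanation ("X has P because it's a K") should only sound sensible
    when the kind-property link is principled, not merely statistical."""
    return KNOWLEDGE.get(kind, {}).get(prop) == "principled"

for kind, prop in [("car", "has four wheels"), ("car", "has a radio"),
                   ("mouse", "has a tail")]:
    verdict = "sensible" if formal_explanation_sensible(kind, prop) else "odd"
    print(f"'That {prop} because it's a {kind}' sounds {verdict}.")
```

The toy network simply hard-codes which connections are principled; the interesting empirical and computational question is how a system could learn that distinction rather than having it stipulated.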

Generation by inherence, or, “Why are tennis balls round?”

My friend answered this question by explaining that tennis balls needed to bounce. Flat or square things don’t bounce, so tennis balls are not flat or square. Of course, there are plenty of other decent explanations for why tennis balls are round. For instance:

Tennis balls are round because they were manufactured that way.

Tennis balls are round because of International Tennis Federation regulations.

Tennis balls are round because they were derived from handball sports.

These are all accurate explanations, but they don’t seem as compelling as my friend’s response. Why not? A central difference is that they concern extrinsic entities, which are external to the ball itself, such as manufacturing systems or international regulatory bodies. In contrast, my friend’s explanation concerned bounciness, a property inherent to tennis balls.

Andrei Cimpian, who edited and summarized the new Special Issue of PBR, has shown in many studies that people are biased towards basing explanations on inherent properties. Inherent properties are fast to retrieve and don’t require further deliberation, and Cimpian and his colleagues developed an account that places explanatory reasoning within a dual-process framework: reasoners can construct explanations rapidly by accessing inherent properties from memory. They then use those properties to flesh out the explanation. My friend certainly did this, and it yielded a reasonable characterization of the roundness of tennis balls.

But, as Cimpian argues, the inherence bias can yield profound errors in explanatory reasoning. One example he cites concerns how people respond to the question: “Why do Americans drink orange juice for breakfast?” A common response is to cite some inherent property of orange juice, e.g., tanginess. Reasoners then elaborate on the idea by proposing that the tanginess helps you wake up in the morning. In fact, a better explanation for why Americans drink orange juice traces back to the citrus lobby in the early 20th century. The latter explanation is extrinsic, more complex, and it demands specific background knowledge, so reasoners rarely generate it. With this idea, Cimpian opens up the door to studying systematic errors in explanatory reasoning and remedies for overcoming them.
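As a rough illustration of that dual-process idea, the toy Python sketch below reaches first for whatever inherent property is quickest to retrieve and produces the extrinsic explanation only when it is forced to deliberate. The two knowledge stores and the deliberate flag are assumptions of mine, not an implementation of Cimpian’s account.

```python
# A toy illustration of the inherence heuristic within a dual-process framing.
# The two knowledge stores and the "deliberate" switch are my own assumptions,
# not an implementation of Cimpian's account.

INHERENT = {   # fast, easily retrieved properties of the thing itself
    "orange juice": "is tangy and helps you wake up",
    "tennis balls": "need to bounce",
}

EXTRINSIC = {  # slower, knowledge-heavy facts about external causes
    "orange juice": "was promoted as a breakfast drink by the citrus lobby in the early 20th century",
    "tennis balls": "are required to be spherical by International Tennis Federation regulations",
}

def explain(topic, deliberate=False):
    """Reach first for an inherent property; produce the extrinsic explanation
    only when the reasoner deliberates and has the relevant background knowledge."""
    if deliberate and topic in EXTRINSIC:
        return f"{topic.capitalize()} {EXTRINSIC[topic]}."
    return f"{topic.capitalize()} {INHERENT[topic]}."   # the inherence shortcut

print(explain("orange juice"))                   # fast: cites tanginess
print(explain("orange juice", deliberate=True))  # slow: cites the citrus lobby
```

The default path never consults the extrinsic store at all, which is one way to caricature why the lobbying explanation so rarely comes to mind.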

Where next?

Hoyos and Gentner describe how comparisons help make properties salient so that they can form the basis of explanations. Prasada and his colleagues argue that explanatory reasoning depends on an accurate characterization of how people represent conceptual information about kinds and their properties. And Cimpian’s work situates explanatory reasoning within a framework that distinguishes rapid, intuitive processing from slower, deliberative processing. Each of these ideas represents a new breakthrough in the study of how explanations are constructed.

As the ideas grow more concrete, my hope is that they can be fleshed out into working computational models. Explanation machines don’t exist yet, but new theories of how explanations are generated can help fix that.

