Conversations in Milan, Rome, or Madrid seem ever so much more animated and exciting than those polite chats over a tea cozy in Oxford, London, or Wetwang (Yorkshire). At least in part, this may reflect the greater physical vigor that denizens of the Mediterranean exhibit during their speech. As the New York Times put it: “when Italians chat, hands and fingers do the talking.”
Undoubtedly.
[youtube https://www.youtube.com/watch?v=N7uG6J5fp3Y]
But do gestures have an explanatory role that goes beyond punctuation, emphasis, and exclamation?
One of the articles in the first issue of the Psychonomic Society’s newest journal, Cognitive Research: Principles and Implications, investigated this issue. Researchers Seokmin Kang and Barbara Tversky asked whether gestures can contribute to the understanding of the actions of a dynamic system.
We are surrounded by dynamic systems: from the stock market to electoral campaigns to the global climate, dynamic systems play a large—and sometimes distressingly inescapable—role in our lives. Even though the components of dynamic systems can often be identified with relative ease (for example, all shares being traded on the stock market, plus the various brokers), how the system will play out in time is frequently difficult to understand. Thus, undergraduate students can readily identify the components of a bicycle pump, but they have difficulty understanding its behavior.
This is because, as Kang and Tversky put it, “understanding the behavior of dynamic systems entails comprehending the temporal sequence of the actions of the parts of the system, the nature of the actions, the changes that result, and the causal dependencies between the actions and the changes.”
It follows that gestures may provide a natural avenue to effectively explain a dynamic system. Gestures are actions, after all, and unlike graphical devices such as arrows, which can have multiple meanings and interpretations, there is little ambiguity in the direction, speed, and amplitude of a limb’s movement.
Accordingly, there is much research showing that gestures carry information that is absent from speech, and that they can facilitate a number of cognitive tasks, from word learning to sentence memory and math.
Kang and Tversky chose to examine the importance of gestures in explaining a common but not immediately transparent system, namely the conventional four-stroke engine. Participants were presented with an explanation of an engine that was accompanied by one of two types of videos: In the action-gesture video, a speaker’s gestures portrayed the actions of each part of the system (in case you are wondering, they included opening, closing, expelling, exploding, igniting, compressing, reducing, letting in, rotating, descending, going in, going up, and going out, and that may not be an exhaustive list). A brief clip from the action-gesture video is available below:
[youtube https://www.youtube.com/watch?v=fJpEUb7ZuHA]
In the structure-gesture video, an identical number of gestures was used, but they portrayed the parts of the engine rather than their actions (e.g., spark plug, exhaust valve, and so on). A short clip of this video can be found below:
[youtube https://www.youtube.com/watch?v=yOATKS-1poM]
Participants’ subsequent understanding of the engine was measured in a number of ways. The measures of greatest interest were participant-generated visual explanations (diagrams), followed by videotaped oral explanations intended to explain the workings of the engine to a novice.
In summary, participants watched a video of a person explaining the engine using either action or structure gestures; they then created a diagram, followed by a videotaped explanation in their own words. The diagrams were analyzed for visualization of action and structure. The gestures and words in the participant-generated videos were analyzed and coded as representing action (“…that’s a rotation”) or structure (“…this is a valve”).
The figure below shows the components identified in participants’ drawings as a function of the type of gesture presented in the initial stimulus video.
Participants who watched action gestures incorporated more action into their diagrams than people who watched the structure-gesture video. Conversely, people who were exposed to structure gestures used more lines to label parts in their diagrams. Overall, the action gestures were followed by significantly more complete diagrams, defined as those that included all four strokes of the engine’s cycle, than the structure gestures, suggesting that the action gestures had engendered a better understanding of the engine’s workings that was then reflected in the diagrams.
Watching action gestures also enhanced the subsequent use of such gestures in the participants’ own videos, as shown in the figure below:
At first glance, this result may appear potentially trivial: might it be that participants simply repeated the gestures they had already seen when they explained the engine themselves? A further analysis by Kang and Tversky speaks against that possibility. Participants’ gestures were compared to those in the original stimulus video and classified as imitations (i.e., repetitions of the same gestures) or innovations (something new and different).
Although people often mimic each other’s gestures in communication, Kang and Tversky found that most gestures were innovations. The researchers interpreted this as indicative of particularly “deep” understanding on the part of the participants. After all, one cannot convey an action with an entirely new gesture without understanding what the originally observed gesture implied.
Kang and Tversky analyzed a variety of further measures, but in all cases the outcome converged on the same conclusion: Watching action-based gestures enabled participants to develop a far better understanding of the engine than watching structure-based gestures. Because the verbal explanation that accompanied the stimulus videos was identical for both types of gestures, the differences in understanding had to be a specific consequence of the gesturing.
Gestures, it appears, can explain complex dynamic systems particularly well. A four-stroke engine can be conveyed through manual gestures to the point where participants watching those gestures are able to produce a complete visual model of the engine. This principal insight from the study is of some practical significance (think about it when giving your next lecture), but it also contributes to our quest for fundamental understanding by teaching us more about the profound relationship between gestures and deep understanding of complex systems. The study thus fits squarely within Pasteur’s Quadrant, the new journal’s target arena.
It is left up to the reader to consider what gestures might explain the dynamics of the current election campaign.
Article focused on in this post:
Kang, S., & Tversky, B. (2016). From hands to minds: Gestures promote understanding. Cognitive Research: Principles and Implications. DOI: 10.1186/s41235-016-0004-9.