We have talked a lot on this blog about how multitasking and switching between cognitively demanding tasks affect visual processing, especially because there are large individual differences and real consequences of poor multitasking. We recently covered a study showing that speaking a signed language natively can free up visual processing resources relative to people who know no sign language. Only very recently have language processing and cognitive control been studied together, and very little of that work has involved spoken language comprehension.
A recent study in Psychonomic Bulletin & Review by Nazbanou Nozari, John Trueswell, and Sharon Thompson-Schill asked how domain-general cognitive abilities, specifically cognitive control, affect our susceptibility to looking for the wrong thing while listening to someone talk. If understanding language is easy, or independent of cognitive control, then individual differences in cognitive control should have little to do with performance on a language task.
One hallmark of language processing is that listeners often try to anticipate what speakers are about to say using any constraining information in the sentence. For example, if a speaker says “She will eat the…”, listeners tend to look at edible objects in a scene like a pear or a banana, even before they are named.
One question is when listeners commit to that constraining information. Do they continue to entertain other possibilities that are compatible with the later words but clash with the earlier ones?
Imagine the sentence above continuing as “She will eat the red…” People may look at hearts or ladybugs as possible referents even though they are not edible. Alternatively, they may only ever entertain apples or strawberries, which are both red and edible.
Nozari and colleagues tracked participants’ eye movements in real time as they listened to sentences like “She will eat the red pear.” The critical question was whether, upon hearing “red”, participants looked at an inedible but prototypically red object in the scene (e.g., a heart), even though all objects were displayed in black and white. Participants did indeed look at an inedible object semantically related to “red” instead of the upcoming referent (the pear).
Different participants paid different degrees of attention to these distractors. Which participants were better at ignoring the distracting objects?
To measure cognitive control, Nozari and colleagues used a version of the Flanker task with five fish, as in the figure above, instead of the classic arrows. In the Fish Flanker task, participants press a button to indicate which direction the central fish is pointing. The task is made harder by the flanking fish on either side of the fish of interest: on congruent trials they face the same direction as the central fish, and on incongruent trials they face the opposite way. The idea is that on incongruent trials, participants have to inhibit irrelevant information that nevertheless intrudes on the visual system (the flanking fish). If this is the same ability participants use to inhibit distracting referents in the sentence (e.g., the heart after hearing “red”), then people who do better on the incongruent trials of the Flanker task should be the same ones who can ignore visual distractors during a language task.
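To make the Flanker measure concrete, here is a minimal sketch of how the congruency effect could be scored; the reaction times and the `flanker_interference` helper are invented for illustration, not the study’s actual analysis code:

```python
# Hypothetical scoring of a Flanker task: the interference score is
# the mean reaction time on incongruent trials minus the mean on
# congruent trials. Larger values mean more slowing from the flankers.

def flanker_interference(trials):
    """trials: list of (condition, rt_ms) pairs."""
    cong = [rt for cond, rt in trials if cond == "congruent"]
    incong = [rt for cond, rt in trials if cond == "incongruent"]
    return sum(incong) / len(incong) - sum(cong) / len(cong)

# Made-up reaction times (in ms) for one participant:
trials = [
    ("congruent", 420), ("incongruent", 510),
    ("congruent", 440), ("incongruent", 530),
]
print(flanker_interference(trials))  # → 90.0
```

A participant with a larger interference score is, on this logic, worse at suppressing the irrelevant flanking fish.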
If the executive control system is used by both visual and linguistic tasks, then participants who process irrelevant information more will do so in all tasks.
The authors analyzed the time series of where participants looked during each trial, as a function of whether the verb was constraining (like “eat”) and whether a potentially misleading adjective was mentioned (“red” drawing looks to prototypically red things). They examined the proportion of fixations to the target (the object actually named), to two types of competitors (a prototypically red object or another edible object), and to unrelated objects (referents that are neither edible nor prototypically red).
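The basic quantity in this kind of analysis is the proportion of eye-tracking samples falling on each object within a time bin. A hedged sketch, with made-up sample data and object labels (the study’s own preprocessing pipeline was not published):

```python
# Turn raw eye-tracking samples within one time bin into fixation
# proportions per object. Labels are hypothetical illustrations.

from collections import Counter

def fixation_proportions(samples, labels):
    """samples: one object label per eye-tracker sample in the bin.
    Returns the proportion of samples falling on each label."""
    counts = Counter(samples)
    total = len(samples)
    return {lab: counts.get(lab, 0) / total for lab in labels}

labels = ["target", "red_competitor", "edible_competitor", "unrelated"]
bin_samples = ["target", "red_competitor", "target", "unrelated"]
print(fixation_proportions(bin_samples, labels))
# → {'target': 0.5, 'red_competitor': 0.25,
#    'edible_competitor': 0.0, 'unrelated': 0.25}
```

Computing these proportions bin by bin yields the fixation curves over time that the authors compared across conditions.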
Both the verb and the adjective can pull participants’ attention away from the actual target. When the verb predicted a specific kind of referent (e.g., “She will eat the…” predicts edible things), participants looked at the other edible referent as often as a totally unrelated one. When the adjective was associated with one of the referents (e.g., “She will touch the red…”), participants looked at the semantically related referent (the heart) more than at an unrelated one (the igloo); the figure below shows more looks to the prototypically red heart than to the igloo after “She will touch the red…”.
To measure individual differences, Nozari and colleagues computed an effect size for each participant’s tendency to fixate the distractors and correlated it with that participant’s Fish Flanker score. People who looked more at the distractors in the scenes were also slower to respond on the incongruent trials of the Fish Flanker task.
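The individual-differences step amounts to a correlation across participants between two per-person scores. A sketch with invented numbers (the paper’s actual effect sizes and statistical model differ; this only illustrates the logic):

```python
# Correlate each participant's distractor-fixation effect with their
# Flanker interference score. All numbers below are made up.

from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

distractor_effect = [0.05, 0.12, 0.08, 0.20, 0.15]  # fixation effect per person
flanker_slowing = [60, 110, 80, 150, 120]           # incongruent - congruent RT (ms)
print(round(pearson_r(distractor_effect, flanker_slowing), 3))  # → 0.995
```

A positive correlation like this toy one mirrors the study’s pattern: worse inhibition in the Flanker task goes with more looks to linguistic distractors.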
These results matter for theories of both executive processing and language processing. Most researchers agree that complex sentences are hard to understand and may require cognitive control, but this study is one of the first to tie cognitive control to the processing of simple sentences. How easy language comprehension is depends, at least in part, on your ability to focus on a task and ignore background information. This is surprising because understanding language seems so effortless most of the time. Contrary to some long-standing beliefs, these results show that language processing relies on domain-general cognitive skills.
Psychonomic Society article featured in this post:
Nozari, N., Trueswell, J. C., & Thompson-Schill, S. L. (2016). The interplay of local attraction, context and domain-general cognitive control in activation and suppression of semantic distractors during sentence comprehension. Psychonomic Bulletin & Review, 1-12. DOI: 10.3758/s13423-016-1068-8.