Trouble finding the red pen? Just say “tomato.” High-level conceptual information can direct our attention during visual search.
Wouldn’t it be handy if saying “metallic” made your keys pop out when you were looking for them? Or if saying “green” helped you find your beer on St. Patrick’s Day?
Language is used to orient our attention all the time. At the dinner table, we are often asked to locate the saltshaker, or maybe a passenger will yell, “Stop!” when we are about to run a red light. More commonly, when someone mentions your name at a party, even when you are deeply engaged in conversation with someone else, you are likely to hear it. You may turn around to try to find out who called your name. In a previous Featured Content post, we wrote about the “cocktail party effect” in images. This effect is just one instance of language orienting our attention toward people, objects, or events.
When we are looking for something, can simple words help us find it? At the dinner table, when someone asks, “Would you please pass the…”, we are drawn to the salt, but may also consider that the person asking wants the pepper. Or, if someone reminds us to look out for snakes while we are walking in the grass, we may steer clear of the garden hose as well. Perceptual and conceptual features seem to be activated automatically when we process words, and they help orient our attention because language gives us features to pay attention to.
Saying a word like “frog” can lead us to look for anything green. In a study recently published in Psychonomic Bulletin & Review, Laure Léger and Elodie Chauvet asked whether reading a word presented in a color that fits its meaning (e.g., the word “canary” in yellow font) makes that word easier to find. Does “canary” written in yellow pop out more than “canary” written in purple?
The word “canary” is not itself yellow, even though the concept it names prototypically is. Using written words to prime perceptual information in a visual search task is therefore a strong test of how abstract and semantic an attentional set can be: the word form carries no color of its own, so any cueing of attention toward yellow must come from the concept rather than from the printed form.
Léger and Chauvet asked participants to perform a visual search task. The stimuli were visually presented words like “tomato”, “canary”, or “frog”, which participants then had to locate while their eye movements were recorded.
All of the target words named concepts with a prototypical color of red, yellow, or green. When searching for “canary”, six words would be presented in one color (e.g., yellow) and six other words in the “opposite” color (in this case, purple). In the target condition, the target “canary” and five other words would be presented in yellow. In the non-target condition, “canary” and those same five words would be presented in an irrelevant color (purple), while the distractors would be presented in the cued color (yellow). In the control condition, when searching for “canary”, all of the words would be presented in two colors unrelated to the target, such as red and blue.
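To make the design concrete, here is a minimal sketch in Python of how such 12-word displays could be constructed. The word lists, color pairings, and helper names are illustrative assumptions, not the authors’ actual stimuli or code.

```python
import random

# Hypothetical mappings -- the study's actual color pairings may differ.
PROTOTYPICAL_COLOR = {"canary": "yellow", "tomato": "red", "frog": "green"}
OPPOSITE_COLOR = {"yellow": "purple", "red": "cyan", "green": "magenta"}


def build_display(target, condition, distractor_pool, rng=random):
    """Return a shuffled list of (word, font_color) pairs for one 12-word display."""
    proto = PROTOTYPICAL_COLOR[target]
    opposite = OPPOSITE_COLOR[proto]
    distractors = rng.sample(distractor_pool, 11)

    if condition == "target":        # target and 5 distractors in the prototypical color
        first_color, second_color = proto, opposite
    elif condition == "non-target":  # target and 5 distractors in the irrelevant color
        first_color, second_color = opposite, proto
    elif condition == "control":     # two colors unrelated to the target concept
        unrelated = [c for c in ("blue", "red", "green", "orange")
                     if c not in (proto, opposite)]
        first_color, second_color = unrelated[0], unrelated[1]
    else:
        raise ValueError(f"unknown condition: {condition!r}")

    items = ([(target, first_color)]
             + [(w, first_color) for w in distractors[:5]]
             + [(w, second_color) for w in distractors[5:]])
    rng.shuffle(items)
    return items


if __name__ == "__main__":
    pool = ["pencil", "window", "carpet", "bottle", "mirror", "jacket",
            "ladder", "basket", "pillow", "hammer", "candle", "wallet"]
    for cond in ("target", "non-target", "control"):
        print(cond, build_display("canary", cond, pool))
```

In the actual experiment the displays would, of course, be rendered on screen while eye movements were recorded; the sketch only captures the condition logic described above.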
Visual search speed, which color participants fixated, and how long they looked at the primed color were all examined to see whether reading a word automatically activated color-relevant knowledge that would later make visual search more efficient.
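As a rough illustration of those three measures, a per-trial summary could be computed from an eye-movement record along the following lines. The Fixation structure and field names are assumptions made for this sketch, not the authors’ analysis pipeline.

```python
from dataclasses import dataclass


@dataclass
class Fixation:
    word: str           # word the fixation landed on
    color: str          # font color of that word
    duration_ms: float  # how long the fixation lasted


def trial_measures(fixations, cued_color, search_time_ms):
    """Summarize one trial: overall search time, the color fixated first,
    and total dwell time on words shown in the cued (primed) color."""
    dwell_on_cued = sum(f.duration_ms for f in fixations if f.color == cued_color)
    return {
        "search_time_ms": search_time_ms,
        "first_fixated_color": fixations[0].color if fixations else None,
        "dwell_on_cued_color_ms": dwell_on_cued,
    }


# Example: two fixations on yellow words, one on a purple word.
example = [Fixation("window", "yellow", 180.0),
           Fixation("pencil", "purple", 140.0),
           Fixation("canary", "yellow", 220.0)]
print(trial_measures(example, cued_color="yellow", search_time_ms=950.0))
```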
Léger and Chauvet made several predictions. First, participants were expected to find “canary” faster when it was presented in yellow. Second, the researchers expected a cost when the target was not presented in the color that prototypically describes its concept. Finally, participants were expected to be drawn to distractor words that were presented in the cued color but were not the target. The reasoning is that words activate a set of features, a subset of which is perceptual in nature (e.g., color, size, or shape), and that subset provides an attentional feature set.
Léger and Chauvet largely found evidence for their predictions. When the distractor words were presented in the color that was cued by the target, visual search was slowed down. Additionally, participants tended to look at all of the yellow words when the target was “canary”, and this was especially true when “canary” was displayed in yellow. Thus, simply reading words provides some top-down semantic and perceptual features that can be exploited in visual search and visual selection.
Perhaps a sometimes-helpful strategy when you’ve misplaced your favorite red pen is to say, “Tomato, strawberry, ladybug.”
The authors point out that the finding that color-associated words are easier to find when presented in the “right” color could change with expertise. Even though canaries are prototypically yellow, a bird watcher may know much more about canaries and may not be as strongly drawn to yellow after hearing or reading “canary”. If experience plays a role in this kind of visual search, then semantic memory may play an even larger role in attention than previously thought.