When babies explore the world and stumble upon an interesting object, they check it out carefully, usually by putting it into their mouth. Adults tend to be more restrained in their oral explorations, but we retain a natural tendency for physical touch: when we become interested in an object, we usually pick it up and examine it with our hands.
This tendency to explore objects with our hands has various cognitive flow-on consequences. For example, stimuli that are presented close to the hands benefit from enhanced visual analysis: we see things better when they happen to be near our hands, even when other variables (such as distance from the eyes) are controlled.
Previous research has assumed that this enhanced processing of stimuli occurs because objects that are close to our hands are candidates for action. The pen next to my hand is a better candidate for action than the stapler behind my keyboard.
What are the processes that underlie this hand-proximity advantage? A recent article in the Psychonomic Bulletin & Review addressed this question. Researchers Liepelt and Fischer contrasted two possible types of processes in a series of cleverly designed experiments. They were interested in the contribution of “bottom-up” versus “top-down” processes. Bottom-up processes involve low-level perceptual analysis and drive higher-level cognition and decision making. Top-down processes, by contrast, engage high-level conceptual processes and guide perception by providing expectations about what we “should” see.
Liepelt and Fischer explored that distinction using the Simon task. In a Simon task, people make a choice about a stimulus (e.g., “is the square red or green?”) and indicate their response by pressing a key with one hand or the other (e.g., left hand for red, right hand for green). The crucial manipulation is the compatibility between the location of the stimulus and the location of the associated response. For example, the red square may appear on the left of the screen or on the right (and likewise for the green square). Although the location of the square is entirely irrelevant to the decision, the Simon effect refers to the reliable observation that when the stimulus and response locations are compatible (red square on the left, or green on the right), people respond more quickly than when the locations are incompatible (red square on the right, or green on the left).
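The compatibility logic of a Simon trial can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the color-to-hand mapping is the hypothetical one from the example above.

```python
# Sketch of the Simon task's trial logic (hypothetical mapping:
# left hand for red, right hand for green).

def response_side(color):
    """The hand the task rule requires for a given stimulus color."""
    return "left" if color == "red" else "right"

def is_compatible(color, stimulus_side):
    """A trial is compatible when the (task-irrelevant) stimulus
    location matches the side of the required response."""
    return response_side(color) == stimulus_side

# A red square on the left is compatible; on the right, incompatible.
print(is_compatible("red", "left"))
print(is_compatible("red", "right"))
```

The Simon effect is then the slowing observed on the trials for which `is_compatible` returns `False`.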
The Simon effect is known to be reduced (but not eliminated) when people engage top-down attentional control mechanisms during a task, rather than relying on bottom-up processing alone. Conversely, when bottom-up processing is enhanced, the magnitude of the effect is known to increase. Liepelt and Fischer exploited this known signature of the Simon effect in their experiments.
Liepelt and Fischer used a task that is known to involve a top-down component, namely the classification of digits into “large” (greater than 5) or “small” (less than 5). If proximity of the stimuli to our hands engages additional top-down processing when the task demands it, perhaps in preparation for a physical action involving the stimuli, then the Simon effect should be reduced compared with a condition in which the stimuli are farther from the participants’ hands.
In their first experiment, two variables were manipulated. The first was the location of the digits on the screen, which is what makes a Simon effect possible. Participants classified a stimulus digit as greater than 5 by pressing a response key with the right hand, and used the left hand for the opposite response (less than 5). When a large digit (6-9) was presented on the left (or a small digit on the right), the location of the digit was incompatible with the response hand. Conversely, a large digit presented on the right (or a small digit on the left) was compatible with the response hand. The difference in response speed between those two conditions measures the magnitude of the Simon effect.
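The effect magnitude just described is simply the mean response-time difference between incompatible and compatible trials. A minimal sketch, using the digit task's rule and invented reaction times (the trial data below are illustrative only, not the study's data):

```python
# Sketch: computing a Simon effect from per-trial data.
# Digits, sides, and reaction times are invented for illustration.

def digit_response_side(digit):
    """Task rule: right key for digits > 5, left key for digits < 5."""
    return "right" if digit > 5 else "left"

def simon_effect(trials):
    """Mean RT on incompatible trials minus mean RT on compatible
    trials (in ms); a positive value is the standard Simon effect."""
    compatible, incompatible = [], []
    for digit, stimulus_side, rt in trials:
        if digit_response_side(digit) == stimulus_side:
            compatible.append(rt)
        else:
            incompatible.append(rt)
    return sum(incompatible) / len(incompatible) - sum(compatible) / len(compatible)

trials = [
    (7, "right", 430),  # large digit on the right: compatible
    (2, "left", 445),   # small digit on the left: compatible
    (8, "left", 470),   # large digit on the left: incompatible
    (3, "right", 465),  # small digit on the right: incompatible
]
print(simon_effect(trials))  # 30.0 ms for these made-up numbers
```

The experiments then ask how this difference score changes with the second variable, hand position.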
The second variable concerned the position of the hands, which were either located on a pair of response keys attached to the monitor (close to the stimuli) or attached to a cardboard box that participants placed on their laps (far from the stimuli). The figure below shows the results:
Two effects are apparent in the figure: First, responses on incompatible trials took longer than on compatible trials—this is the standard Simon effect. Second, the magnitude of that effect was reduced when the hands were in closer proximity to the stimuli. The second result conformed to expectation and suggests that when a task requires top-down processing to begin with—such as the magnitude classification of digits—the proximity of the stimuli to the hands magnifies the involvement of those top-down components. And magnification of top-down processing translates into a reduction of the Simon effect.
In their second study, the researchers used the same stimuli, except that the digits were now presented in two different colors, red or green. In one condition, participants were instructed to ignore the color and perform the magnitude classification task as in the first study. In another condition, participants were instructed to ignore magnitude and classify the stimuli by color alone. The rationale for this manipulation was that the focus on color would require bottom-up processing more so than top-down processes—in consequence, the Simon effect should now be enhanced by proximity of the hands to the stimuli.
The results conformed to expectation, as shown in the figure below:
Bringing the hands closer to the stimuli decreased the Simon effect for a task that preferentially drew on top-down processes, whereas it increased the effect for a task that preferentially drew on bottom-up processes.
The authors conclude that their results are “in line with the assumption that hand bias effects are the product of the attentional competition of bottom-up inputs and top-down control for attentional resources.” That is, depending on task demands, our attentional system recruits either top-down or bottom-up processes, and whichever mode of processing is recruited is amplified when the stimuli are close to our hands.
Liepelt and Fischer suggest that their results have practical implications, too: “For tasks in which increased levels of cognitive control are advantageous, new displays allowing direct visual–manual interaction could help to optimize task performance.” Specifically, under those circumstances performance may be “less susceptible to distracting and potentially error-prone influences.”
In a cognitively optimized future, we should therefore consider the specific task demands when deciding whether to work with a computer mouse or with a touch-screen.