The sum of attention is more than its past: When memory and vision subtract

We have talked about pop-out before. The phenomenon is nearly self-explanatory: consider the two sets of dots in the figure below. There are 18 dots on the left and 150 on the right. In each array, there is a single red dot: what is your intuition about how long it would take to detect the presence of the red dot in the two arrays?

Your intuition that the detection time would be equal across the two arrays has been confirmed innumerable times in the laboratory (that was your intuition, surely?). Although the pop-out phenomenon itself may be virtually self-explanatory, the underlying processes are actually non-trivial. After all, to make an overt response in this task, one ultimately has to focus attention on the target; but how can that attention be allocated with equal efficiency when there are 149 distractors rather than just 17? How can a red dot—or any other stimulus—attract attention unless attention is already being paid to it?

A common explanation for the phenomenon invokes a segmentation mechanism that partitions a visual array into regions of high and low “saliency”. Saliency depends only on local contrasts between features and can be computed in parallel, irrespective of how many features there are in the visual array. Focal attention is then attracted to the area of greatest saliency.
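To make this concrete, here is a minimal sketch of that kind of computation in Python. Everything in it is an illustrative assumption rather than an established model: the display is a toy grid, and saliency is simply the average feature contrast with the four immediate neighbours, computable independently (in parallel) at every location.

```python
# Minimal saliency sketch: a toy grid of "dots" whose color is coded 0 (green) or 1 (red).
import numpy as np

def saliency_map(color: np.ndarray) -> np.ndarray:
    """Saliency = local feature contrast: how much each item differs,
    on average, from its four immediate neighbours."""
    padded = np.pad(color.astype(float), 1, mode="edge")
    contrast = np.zeros(color.shape)
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        neighbour = padded[1 + dy : 1 + dy + color.shape[0],
                           1 + dx : 1 + dx + color.shape[1]]
        contrast += np.abs(color - neighbour)
    return contrast / 4.0  # the same local rule everywhere, regardless of set size

display = np.zeros((5, 5), dtype=int)  # 25 green dots ...
display[2, 3] = 1                      # ... one of which is red
peak = np.unravel_index(np.argmax(saliency_map(display)), display.shape)
print(peak)  # the red dot's location: attention is drawn there whatever the array size
```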

Intriguingly, recent research has identified a form of pop-out involving information held in visual working memory (VWM). Specifically, when people have encoded an array of bars at various orientations into memory, and are then presented with another display in which one of the bars has changed orientation, this change is detected in a manner reminiscent of pop-out—that is, the time to allocate attention to this change is independent of the number of distractors.

Might this mean that people can compute a “saliency map” across a spatial array that represents the change in a feature compared to a memory representation?

Could it be that the processes driving the well-established visual pop-out phenomenon are related to—or perhaps even identical to—the processes driving this new memory-based pop-out phenomenon?
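On the first idea, one way to picture a memory-based saliency map is as a subtraction: the map is computed over the difference between the remembered display and the test display, rather than over the test display alone. The sketch below, with made-up orientations and names of my own choosing, illustrates only that logic, not the authors’ model:

```python
# Hedged sketch of "change saliency": the map reflects the mismatch between
# memory and test, not any feature of the test display by itself.
import numpy as np

def change_saliency(memory: np.ndarray, test: np.ndarray) -> np.ndarray:
    """One map per feature dimension; here a single orientation map (degrees)."""
    return np.abs(test - memory)  # large only where the test departs from memory

memory = np.full((4, 4), 45.0)  # remembered bars, all tilted 45 degrees
test = memory.copy()
test[1, 2] = 90.0               # one bar has changed orientation at test
peak = np.unravel_index(np.argmax(change_saliency(memory, test)), test.shape)
print(peak)  # the changed bar's location "pops out" of the difference map
```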

A recent article in the Psychonomic Society’s journal Attention, Perception, & Psychophysics addressed this question. Researchers Heinrich Liesefeld, Anna Liesefeld, Hermann Müller, and Dragan Rangelov presented participants first with a memory array, and then with a test array after a brief retention interval. The test array either did or did not contain a change from the memory array. The figure below shows the sequence for a trial on which there was a change:

Across trials, 1/3 of the time there was no change between encoding and test (requiring a “same” response). The remaining trials all involved a change (requiring a “different” response), but the changes came in several types. On another 1/3 of the trials, the change involved a single feature only, either color or orientation. On the final 1/3 of trials, both color and orientation changed, as in the figure above. As a final methodological twist, those “redundant-change” trials were further subdivided into trials on which a single stimulus element changed on both dimensions, as in the figure above, and trials on which one dimension each of two different stimuli changed (i.e., one changed its orientation, the other its color). It turns out that the comparison between those two types of redundant-change trials is particularly diagnostic of the underlying processes.
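For concreteness, the trial mix just described can be written down as a short generator. The condition labels are my own shorthand; only the proportions come from the study:

```python
# Illustrative trial-list generator for the design described above.
import random

def make_trials(n_blocks: int) -> list[str]:
    conditions = (
        ["same"] * 2                                         # 1/3: no change
        + ["color-only", "orientation-only"]                 # 1/3: single-feature change
        + ["redundant-one-object", "redundant-two-objects"]  # 1/3: redundant change
    )
    trials = conditions * n_blocks
    random.shuffle(trials)
    return trials

print(make_trials(2))  # 12 trials in the stated 1/3 : 1/3 : 1/3 mix
```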

The results are best explained using the figure below, which displays the response-time distributions for all trial types.

We can set aside for now the very steep line labeled “RMI bound” in the legend (we return to it below) and focus on the remaining four lines. Each of these lines plots the cumulative probability of a (correct) response having been made as a function of response time. Thus, virtually no responses were faster than around 450 ms, and trials on which only the orientation of one stimulus changed (dotted blue line) did not all come to completion until a little beyond 900 ms. When only the color of one stimulus element changed (dashed orange line), responding was slightly faster, and all trials were over by around 900 ms.

Of greatest interest are the two green lines (one solid, one dashed), which represent the two types of redundant-change trials. First, it is clear that those two lines overlap nearly completely, suggesting that it did not matter whether the two feature changes made on those trials were focused on one stimulus element or spread across two. Second, the redundant-change trials were considerably faster than the trials on which only color or orientation changed.

This redundancy advantage mirrors one of the hallmarks of the visual pop-out effect. It is typically attributed to a race between two saliency signals, one for each feature, with the response being triggered as soon as either signal finishes processing, without waiting for the other. Perhaps, therefore, people are creating two saliency maps, each representing the local change in one feature between what was encoded in memory and what is detected in the test array.
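The steep “RMI bound” line we set aside earlier is the standard tool for probing such a race: Miller’s race model inequality says that, under a pure race, the cumulative response-time distribution on redundant trials can never exceed the sum of the two single-change distributions. Here is a sketch of how that check works; the response times below are fabricated for illustration and are not the authors’ data:

```python
# Checking the race model inequality on (fabricated) response-time samples.
import numpy as np

def ecdf(rts: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Empirical cumulative distribution: P(RT <= t) for each t."""
    return np.searchsorted(np.sort(rts), t, side="right") / rts.size

rng = np.random.default_rng(0)
rt_color = rng.normal(780, 80, 500)      # color-only changes (made-up values)
rt_orient = rng.normal(800, 80, 500)     # orientation-only changes
rt_redundant = rng.normal(700, 80, 500)  # redundant changes are fastest

t = np.linspace(400, 1100, 71)
bound = ecdf(rt_color, t) + ecdf(rt_orient, t)  # the "RMI bound"
violated = ecdf(rt_redundant, t) > bound
print("race model violated at some t:", bool(violated.any()))
```

If the redundant-trial distribution climbs above that bound at any time point, a simple race cannot produce the redundancy gain, and some pooling of the two signals is implied.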

Does this mean change-detection and visual pop-out operate in the same manner?

Not entirely: other aspects of the data of Liesefeld and colleagues point to striking differences between visual pop-out and memory-based change detection. Specifically, several inter-trial effects that routinely occur in visual search were absent in the change-detection paradigm. For example, in visual search, performance improves if the target-defining dimension is repeated from one trial to the next: two consecutive trials involving targets defined by orientation lead to faster responding on the second trial than two consecutive trials with a dimension change (e.g., from a target defined by color to one defined by orientation). In the present study, this inter-trial effect was absent. Several key mechanisms that underlie visual search thus appear to play no role in change detection.

The fact that there is any overlap between visual pop-out and memory-based change-detection is, however, quite remarkable. As Liesefeld and colleagues put it:

“That a change pops out, … is actually more fascinating than meets the eye: How can attention be guided by something that is not present but merely defined as the difference between two subsequently presented displays (a change)? Attention could not be guided by any feature of the changed stimulus: its features were not known in advance and the changed item was special only in that it was not present in the memory display. In fact, if participants had used their VWM content to guide search, they would have attended anywhere else but the change location, because the remaining (non-changed) elements matched their VWM content” (p. 2199).

In other words, attention can be guided by something that is not present, but without being guided by memory alone. Instead, attention appears capable of analyzing the differences between what is present and what is not.

Psychonomics article highlighted in this post:

Liesefeld, H. R., Liesefeld, A. M., Müller, H. J., & Rangelov, D. (2017). Saliency maps for finding changes in visual scenes? Attention, Perception, & Psychophysics, 79, 2190–2201. DOI: 10.3758/s13414-017-1383-9.

