Telling apart Santas, stockings, and sneaky Waldos: Ho-Ho-How similar is similar?

It is Christmas time and everything these days seems to be covered with singing Santas and stuffed stockings, shining brightly in red, white, and green. Now imagine that sneaky red-and-white-striped Waldo is hiding among the Christmas decorations. Telling him apart from the rest will be tedious. Needless to say, the similarity between targets and distractors has a huge impact on how you search, but determining the degree of similarity between search items is not trivial. How much more similar is Waldo to Santa than to a stuffed stocking?

To date, such similarity values have usually not been fed into our analyses of visual search performance. In most classic search paradigms this was not necessary, since there are only a few parameters available to tweak when searching for a T among Ls. In contrast, real-world objects usually comprise a complex mix of multi-dimensional features that make it difficult to judge their similarity.

In a paper recently published in the Psychonomic Society’s journal Attention, Perception, & Psychophysics, researchers Michael Hout and colleagues provide a helpful introduction to the multidimensional scaling (MDS) methodology for use in visual search paradigms, including step-by-step descriptions of how to implement MDS and a discussion of its pros and cons relative to computational methods that could be put to the same use.

So what is MDS and why should we try to understand it? Michael Hout, the first author of the paper, argues that “similarity is a really important concept in cognition (e.g., in categorization, language, and many other areas), and particularly important in studying visual search and attention.” He points out that “researchers invoke the concept all the time (e.g., search is harder when distractors are more similar to the target), but rarely actually quantify it in any meaningful sense.” The article therefore proposes to “use an already existing method of quantifying similarity (MDS) to more carefully control stimulus selection, and/or more precisely quantify the relationships among stimuli in experiments.”

“MDS is a statistical technique that—when applied to overt or indirect similarity judgments—can be used to uncover the dimensions by which people perceive similarity”. It can therefore be used to measure the psychological similarity of items. This is different from computational approaches that measure visual similarity on the basis of pixels.

The authors of course did not invent MDS. It has been around as a technique for many decades, with the groundwork being laid between the 1950s and 1980s. Hout and colleagues suggest that the broader application of MDS to vision science could add greater precision to inferences drawn from experimental data.

So how does MDS work? In essence, you acquire observers’ pairwise similarity judgments either directly (by simply asking them) or indirectly (for instance, by measuring how long it takes them to decide whether two items are identical or not). These data are then fed into a dimensionality-reduction analysis (akin to principal components or factor analysis), and what you get is a “similarity map” that quantifies the perceived similarity between numerous items of interest.

On the basis of the raw similarity values, MDS does its magic by moving all items around in this hypothetical space such that similar items end up being grouped together. Similarity between items is measured as distance in a “k-dimensional Euclidean space”. Note that absolute values are not very informative here; it is the relative distance between pairs of objects that counts.
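To make the pipeline concrete, here is a minimal sketch of that step using scikit-learn’s off-the-shelf MDS implementation. The four items and their dissimilarity ratings are entirely hypothetical (invented for this post, not taken from the paper); the point is simply that a matrix of pairwise judgments goes in and low-dimensional coordinates come out, where only the relative distances are meaningful.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise dissimilarity ratings for four holiday items
# (0 = identical, 1 = maximally different). The matrix is symmetric
# with zeros on the diagonal.
items = ["santa", "waldo", "stocking", "tree"]
D = np.array([
    [0.0, 0.3, 0.5, 0.9],
    [0.3, 0.0, 0.6, 0.8],
    [0.5, 0.6, 0.0, 0.7],
    [0.9, 0.8, 0.7, 0.0],
])

# Embed the items in a k=2 dimensional Euclidean space. The absolute
# coordinates are arbitrary (the solution can be rotated or reflected);
# only relative distances between points are interpretable.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)

for name, (x, y) in zip(items, coords):
    print(f"{name:10s} ({x:+.2f}, {y:+.2f})")
```

In the resulting map, Santa and Waldo (rated most similar above) land closer to each other than Santa and the tree (rated most dissimilar), mirroring how similar colors cluster on the color wheel.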

The potential rewards of MDS are illustrated in the figure below, which shows the famous color wheel of perception that is obtained from pairwise similarity judgments between colors.

The distances between points in the figure capture the similarity between those colors. Thus, green is maximally dissimilar from red, and blue is very different from yellow, and so on.

You might wonder how feasible the application of MDS is. Would we really want to first collect pairwise similarity ratings on all the stimuli we want to use in a visual search experiment? After all, with just 20 stimuli there are 190 possible pairwise comparisons. This could easily take twice as long as running the actual experiment.
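The quadratic growth behind that number is easy to verify: n stimuli yield n(n-1)/2 unique pairs, so the rating burden explodes as stimulus sets grow. A one-liner makes the point:

```python
def n_pairs(n: int) -> int:
    """Number of unique pairwise comparisons among n stimuli."""
    return n * (n - 1) // 2

print(n_pairs(20))   # → 190
print(n_pairs(100))  # → 4950
```

Doubling the stimulus set from 20 to 40 items more than quadruples the number of ratings needed (190 vs. 780), which is why the shortcut described next matters.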

Fortunately, before you become too disenchanted with the idea of actually using MDS for your next visual search study, Hout and colleagues provide an alternative route to acquiring similarity ratings without the need to rate every possible pairing of items.

For instance, you could instead use the spatial arrangement method — or SpAM — where you present multiple items simultaneously and ask observers to move the items around (using the computer mouse) until the distances between them reflect their perceived similarity. The authors point out that “whereas a 30-item set may take 25-30 minutes to rate using standard pairwise methods, a single trial of SpAM is sufficient to handle the task, often being completed in as little as 3-5 minutes.”

Importantly, MDS does not rely on any a priori hypotheses about the nature of the similarity estimates. That is, MDS does not require the experimenter to know beforehand what dimensions of an object will be used by the observers to construct their similarity judgments. Accordingly, the article states that “this technique is particularly useful when the stimuli of interest are complex, or high-dimensional, and therefore may be comprised of features that are unspecified or unknown a priori.”

Such features can also be non-visual, opening up a whole new area of investigation. For instance, MDS can be used to further investigate the role of semantic relationships between items, thereby allowing a closer look at the influence of high-level knowledge.

In their paper, Hout and colleagues describe how this can be put to use. For example, they had used MDS in a previous study to differentiate the influence of visual versus semantic similarity when searching for numbers. Here, semantic similarity was indexed by the numerical distance between the numbers. The results showed that both visual and semantic similarity had an effect, but the influence of visual similarity was nine times stronger. The researchers therefore concluded that “by applying the approach outlined in this article, we were … able to directly quantify the role that both visual and semantic similarity play in visual search for numbers.”

So the take-home message for the holidays is that while good old Santa might be visually more similar to Waldo, most of us would still go search for Santa near the stockings. No MDS is needed to calculate Santa similarities; all you need to do is track children’s eyes.

Article mentioned in this post:

Hout, M. C., Godwin, H. J., Fitzsimmons, G., Robbins, A., Menneer, T., & Goldinger, S. D. (2015). Using multidimensional scaling to quantify similarity in visual search and beyond. Attention, Perception, & Psychophysics. doi:10.3758/s13414-015-1010-6
