Similarity-based processing is in the eye of the inferrer

In reality shows like American Idol or RuPaul’s Drag Race, contestants compete for the chance to become the next big star. The judges on these shows want to pick a winner who they believe will have long-term success. One way the judges might choose a winner is through similarity-based processing, where they compare contestants to successful past winners. For example, it is high praise on these shows to be told that you remind the judges of Kelly Clarkson or Jinkx Monsoon!

On the other hand, the judges might use rule-based processing, where they score contestants on specific skills like singing, dancing, or stage presence. Most likely, though, they will use a mix of both methods: they want someone who feels like past stars while also having the skills to succeed. In any case, the judges have to make the best decision they can with limited information, making inferences about each contestant’s future success.

These kinds of inferences aren’t just important for reality competitions! As the authors of today’s featured article also point out,

“In real-world settings like job interviews or clinical assessments, people often make new decisions by combining past experience and abstracted rules. For example, job applicants can be evaluated based on similar past hires or by assessing their skills.”

We can assume that most people are using a combination of both strategies, but how do we know for sure?

Looking over two people's shoulders at a laptop.
To guess how well a candidate would do in a job, you could compare them to other people who have had that job before (similarity-based processing), you could add up their qualifications (rule-based processing), or you could use some combination of both strategies. Photo by Kampus Production (pexels.com).

In their research published in Psychonomic Bulletin & Review, Florian Seitz, Rebecca Albrecht, Bettina von Helversen, Jörg Rieskamp, and Agnes Rosner (pictured below) combine two methods to figure out how much similarity-based processing a person is using: cognitive modeling and eye tracking.

Five people smiling at the camera in five different photos.
Authors of the featured article, from left to right: Florian I. Seitz, Rebecca Albrecht, Bettina von Helversen, Jörg Rieskamp, and Agnes Rosner.

In this study, participants studied several images. Each image showed a die with 1-4 dots and a clock with 1-4 hands, creating 16 unique images. Each image also had a hidden “criterion” value, calculated using the following formula: criterion = 5/3 × (dots on the die) × (hands on the clock) + 2, rounded to the nearest whole number. For example, the image with a 3-dot die and a 3-handed clock had a criterion of 5/3 × 3 × 3 + 2 = 17.
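
To make the formula concrete, here is a minimal sketch of the criterion calculation in Python. This is illustrative code based on the formula above, not the authors' experiment materials.

```python
# Minimal sketch of the hidden criterion formula described above.
def criterion(die_dots: int, clock_hands: int) -> int:
    """Criterion for an image with `die_dots` dots (1-4) on the die and
    `clock_hands` hands (1-4) on the clock, rounded to a whole number."""
    return round(5 / 3 * die_dots * clock_hands + 2)

# All 16 unique die/clock combinations and their criterion values:
for die in range(1, 5):
    for clock in range(1, 5):
        print(f"die={die}, clock={clock} -> criterion={criterion(die, clock)}")

# The example from the text: a 3-dot die with a 3-handed clock gives 17.
assert criterion(3, 3) == 17
```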

Participants in this study were first trained to place four of the images in different corners of the screen (as shown in panel a in the figure below). Each of these four images showed a different combination of die dots and clock hands. Next, they learned to match these images with their criterion numbers (as shown in panel b in the figure below). Finally, they took a test where they guessed the criterion number for each of the sixteen images (as shown in panel c in the figure below).

When someone uses similarity-based processing, they rely on their memory for previously seen images. Since memories are tied to the locations where they were formed, participants using similarity-based processing during the test might look back at the place where an image previously appeared. This behavior, called “looking-at-nothing,” tells the researchers whether someone is recalling information from memory. In other words, the researchers used eye tracking to see how much similarity-based processing each participant was using. A rough sketch of how such a gaze score might be computed follows the figure below.

Showing procedure in three panels for location training, criterion training, and criterion test.
Figure showing the study procedure. Panel (a) shows the location training task, where participants learned to associate images with a specific corner of the screen. Panel (b) shows the criterion training task, where participants learned to associate the images with a hidden “criterion” number. Panel (c) shows the test, where participants had to guess the criterion number for each of the sixteen images.
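
As a rough illustration of how a looking-at-nothing measure might be quantified, the sketch below scores a trial by the share of gaze time spent in the now-empty screen quadrant where a relevant trained image used to appear. The `Fixation` structure, quadrant layout, and screen size are hypothetical; the authors' actual scoring procedure may differ.

```python
from dataclasses import dataclass

# Hypothetical looking-at-nothing score: the proportion of dwell time spent
# in the (now empty) screen quadrant where a trained image used to appear.
# This is an illustrative assumption, not the authors' exact analysis.

@dataclass
class Fixation:
    x: float         # horizontal gaze position (pixels)
    y: float         # vertical gaze position (pixels)
    duration: float  # fixation duration (ms)

def quadrant(x: float, y: float, width: float, height: float) -> str:
    """Map a gaze position onto one of the four screen quadrants."""
    vertical = "top" if y < height / 2 else "bottom"
    horizontal = "left" if x < width / 2 else "right"
    return f"{vertical}-{horizontal}"

def looking_at_nothing(fixations: list[Fixation], empty_quadrant: str,
                       width: float = 1920, height: float = 1080) -> float:
    """Proportion of total dwell time spent in the empty target quadrant."""
    total = sum(f.duration for f in fixations)
    target = sum(f.duration for f in fixations
                 if quadrant(f.x, f.y, width, height) == empty_quadrant)
    return target / total if total else 0.0

# Toy trial: two of three fixations land where a studied image used to be.
trial = [Fixation(300, 200, 250), Fixation(1500, 800, 100),
         Fixation(400, 250, 300)]
print(looking_at_nothing(trial, "top-left"))  # ~0.85
```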

Participants showed great memory for the on-screen locations and criterion values of the first four images. These memories were strongly associated, with participants who remembered on-screen location better also remembering criterion values better. Additionally, participants who did more looking-at-nothing were better at remembering the criterion values for images they had studied. On-screen location seemed to help people remember criterion values.

The researchers used a cognitive model with a parameter called “alpha” to estimate how much of each type of processing each person was using during the test. High alpha values mean more similarity-based processing, and in this study, they were associated with more looking-at-nothing. In other words, eye movements could predict how much similarity-based processing someone was using! These results are visualized in the figure below, and a rough sketch of what such a mixture model can look like follows it.

Figure highlighting the positive relationship between computational modeling results and looking-at-nothing, a measure of eye gaze that indicates whether information is being retrieved from memory.
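
To give a flavor of what an alpha-weighted model can look like, here is a hedged sketch of a mixture judgment model. The exemplar-style similarity component, the linear rule component, and the four trained exemplars below are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

# Hedged sketch of an alpha-weighted mixture of similarity-based and
# rule-based judgment. The components below are illustrative assumptions.

def similarity_estimate(probe, exemplars, values, sensitivity=1.0):
    """Similarity-based estimate: a similarity-weighted average of the
    criterion values of the trained images (exemplar-model style)."""
    distances = np.abs(exemplars - probe).sum(axis=1)  # city-block distance
    similarities = np.exp(-sensitivity * distances)    # exponential decay
    return similarities @ values / similarities.sum()

def rule_estimate(probe, weights, intercept):
    """Rule-based estimate: a weighted linear combination of the cues."""
    return probe @ weights + intercept

def judgment(probe, alpha, exemplars, values, weights, intercept):
    """alpha = 1 is pure similarity-based; alpha = 0 is pure rule-based."""
    return (alpha * similarity_estimate(probe, exemplars, values)
            + (1 - alpha) * rule_estimate(probe, weights, intercept))

# Hypothetical trained images as (die dots, clock hands) with their criteria.
exemplars = np.array([[1.0, 1.0], [2.0, 4.0], [4.0, 2.0], [3.0, 3.0]])
values = np.round(5 / 3 * exemplars[:, 0] * exemplars[:, 1] + 2)

probe = np.array([4.0, 4.0])  # a test image outside the trained set
for alpha in (0.0, 0.5, 1.0):
    print(alpha, judgment(probe, alpha, exemplars, values,
                          weights=np.array([3.0, 3.0]), intercept=2.0))
```

In model fitting, alpha would be estimated per participant from their judgments; here it is simply set by hand to show how the prediction shifts between the two strategies.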

The researchers also grouped participants into “similarity users” or “rule users” based on the cognitive model. The figure below shows that similarity users were more likely to look-at-nothing (as shown in panel a), tended to look-at-nothing more often when an image was highly similar to one of the studied images (as shown in panel b), and tended to look in the direction of more previously studied images (as shown in panel c).

Figure highlighting eye movement differences between similarity users and rule users. Panel (a) shows that similarity users spent more time looking-at-nothing. Panel (b) shows that when similarity users were making inferences about new images, they spent more time gazing in the direction of similar, previously seen images. Panel (c) shows that similarity users tended to look in the direction of more previously seen images during the test.

These results are important because they show two different methods converging on the same measurement of which kind of process a person is using. In the words of the researchers, this multi-method approach “may be particularly beneficial when different cognitive processes lead to similar response predictions and thus cannot be distinguished at the behavioral level.”

This study shows that the kind of processing people use to make inferences is connected to their eye movements. Looking at the locations where previously studied images appeared can be a sign of similarity-based processing. So, eye tracking gives us insight into how much someone relies on similarity-based processing while making an inference. You could say that similarity-based processing is in the eye of the inferrer!

Psychonomic Society article featured in this post:

Seitz, F. I., Albrecht, R., von Helversen, B., Rieskamp, J., & Rosner, A. (2025). Identifying similarity- and rule-based processes in quantitative judgments: A multi-method approach combining cognitive modeling and eye tracking. Psychonomic Bulletin & Review. https://doi.org/10.3758/s13423-024-02624-y

Author

  • Anthony Cruz is a PhD Candidate in the Department of Psychology at Western University. Under the supervision of Dr. John Paul Minda, he studies category learning, the process by which people learn to sort objects into groups. His research looks for ways to help people learn categories more effectively, examining how spaced learning (taking breaks while studying) and metacognition (reflecting on your own learning) can enhance memory and make categorization easier.


