In reality shows like American Idol or RuPaul’s Drag Race, contestants compete for the chance to become the next big star. The judges on these shows want to pick a winner who they believe will have long-term success. One way the judges might choose a winner is through similarity-based processing, where they compare contestants to successful past winners. For example, it is high praise on these shows to be told that you remind the judges of Kelly Clarkson or Jinkx Monsoon!
On the other hand, the judges might use rule-based processing, where they score contestants based on specific skills like singing, dancing, or stage presence. Most likely, though, they will use a mix of both methods—they want someone who feels like past stars while also having the necessary skills to succeed. In any case, the judges have to make the best decision possible with limited information, making inferences about how successful each contestant will be.
These kinds of inferences aren’t just important for reality competitions! As the authors of today’s featured article also point out,
“In real-world settings like job interviews or clinical assessments, people often make new decisions by combining past experience and abstracted rules. For example, job applicants can be evaluated based on similar past hires or by assessing their skills.”
We can assume that most people are using a combination of both strategies, but how do we know for sure?

In their research published in Psychonomic Bulletin & Review, Florian Seitz, Rebecca Albrecht, Bettina von Helversen, Jörg Rieskamp, and Agnes Rosner (pictured below) combine two methods to figure out how much similarity-based processing a person is using: cognitive modeling and eye tracking.

In this study, participants studied several images. Each image showed a die with 1 to 4 dots and a clock with 1 to 4 hands, creating 16 unique images. Each image also had a hidden “criterion” value, calculated using the following formula: criterion = (5/3 × (number on die) × (number on clock)) + 2, rounded to the nearest whole number. For example, the image with a die with 3 dots and a clock with 3 hands had a criterion of 17.
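To make the formula concrete, here is a small sketch that computes the criterion for all 16 die/clock combinations (the function and variable names are our own; only the formula comes from the study):

```python
def criterion(die, clock):
    """Criterion value for an image: round(5/3 * die * clock + 2)."""
    # Multiply before dividing to keep the arithmetic exact when dc is a multiple of 3.
    return round(5 * die * clock / 3 + 2)

# All 16 combinations of die dots (1-4) and clock hands (1-4).
table = {(d, c): criterion(d, c) for d in range(1, 5) for c in range(1, 5)}

print(table[(3, 3)])  # the example from the text: 17
```

Running this reproduces the example in the text: a die showing 3 dots and a clock with 3 hands yields a criterion of 17.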
Participants in this study were first trained to place four of the images in different corners of the screen (as shown in panel a in the figure below). Each of these four images showed different numbers on the dice and clocks. Next, they learned to match these images with their criterion number (as shown in panel b in the figure below). Finally, they took a test where they guessed the criterion number for each of the sixteen images (as shown in panel c in the figure below).
When someone uses similarity-based processing, they rely on their memory for previously seen images. Since memory is tied to location, participants might look at the same place where they saw an image previously if they were using similarity-based processing during the test. This behavior, called “looking-at-nothing,” helps the researchers know if someone is recalling information from memory. In other words, the researchers used eye tracking to see how much similarity-based processing each participant was using.

Participants showed great memory for the on-screen locations and criterion values of the first four images. These memories were strongly associated, with participants who remembered on-screen location better also remembering criterion values better. Additionally, participants who did more looking-at-nothing were better at remembering the criterion values for images they had studied. On-screen location seemed to help people remember criterion values.
The researchers used a cognitive model with a value called “alpha” to figure out how much of each type of processing each person was using during the test. High alpha values mean more similarity-based processing, and in this study, they were associated with more looking-at-nothing. In other words, eye movements could predict how much similarity-based processing someone was using! These results are visualized in the figure below.
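One way to picture the role of alpha (this is a simplified sketch, not the paper’s actual model, and the function names are hypothetical) is as a mixing weight that blends the predictions of the two strategies:

```python
def blended_judgment(alpha, similarity_prediction, rule_prediction):
    """Blend two strategy predictions with mixing weight alpha.

    alpha = 1.0 -> purely similarity-based processing
    alpha = 0.0 -> purely rule-based processing
    """
    return alpha * similarity_prediction + (1 - alpha) * rule_prediction

# A participant with a high alpha leans toward the similarity-based prediction.
print(blended_judgment(0.75, 17.0, 12.0))  # 15.75, closer to 17 than to 12
```

In this simplified picture, fitting alpha to a participant’s responses tells you how much each strategy contributed, which is the quantity the researchers then related to looking-at-nothing behavior.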

The researchers also grouped participants into “similarity users” or “rule users” based on the cognitive model. The figure below shows how similarity users were more likely to look-at-nothing (as shown in panel A), tended to look-at-nothing more often when an image was highly similar to one of the studied images (as shown in panel B), and tended to look in the direction of more previously studied images (as shown in panel C).

These results are important because they demonstrate two complementary methods for measuring which kind of process a person is using. In the words of the researchers, this multi-method approach “may be particularly beneficial when different cognitive processes lead to similar response predictions and thus cannot be distinguished at the behavioral level.”
This study shows that the kind of processing people use to make inferences is connected to their eye movements. Looking at the same locations as the images they were studying can be a sign of similarity-based processing. So, eye tracking gives us insights into how much someone relies on similarity-based processing while making an inference. You could say that similarity-based processing is in the eye of the inferrer!
Psychonomic Society article featured in this post:
Seitz, F. I., Albrecht, R., von Helversen, B., Rieskamp, J., & Rosner, A. (2025). Identifying similarity- and rule-based processes in quantitative judgments: A multi-method approach combining cognitive modeling and eye tracking. Psychonomic Bulletin & Review. https://doi.org/10.3758/s13423-024-02624-y