The ghost of targets past: How hidden patterns linger in your gaze

Humans are quite skilled at detecting patterns subconsciously. If you listen to a new song for the first time, you can probably follow the beat or predict how the melody will change next. If you go into a new grocery store, you can probably navigate to the potatoes based on your experience in other stores. If you drive home during rush hour, you might detect traffic patterns that prompt you to take a certain side street over another. Whether or not you’re aware of these patterns, the brain has a remarkable ability to track them.

This phenomenon is known as “statistical learning,” and it has been studied extensively in visual search tasks that ask people to find a target. When a target frequently appears at a specific location, people learn to prioritize that location when searching for the next target. For instance, if a flashing light often appears in the upper right corner, your attention will drift toward that corner when you look for the next flashing light.

This kind of statistical learning seems to operate within a single task, but the literature hadn’t explored whether it can transfer to unrelated tasks… until now. Sebastiano Cinetto and colleagues (pictured below) investigated the power of statistical learning in a novel closed-loop audio-visual search (AVS) task. The study was published in the Psychonomic Society journal Psychonomic Bulletin & Review.

Authors of the featured article: Sebastiano Cinetto, Elvio Blini, Andrea Zangrossi, Maurizio Corbetta, Marco Zorzi.

In this experiment, the researchers asked participants to search for an invisible target on the screen. They used eye-tracking technology to measure how close each participant’s gaze was to the target. Participants heard a 440 Hz tone whose loudness changed with their gaze: the farther their gaze moved from the target, the louder the tone; when they fixated on the target, the tone fell silent. Unbeknownst to the participants, the targets were biased to appear more often on one side of the screen (left or right). Over 180 trials, participants learned to focus on the biased side more often, which helped them locate the invisible target faster. In fact, they located 85% of the invisible targets within 5 seconds. That may be faster than the time it took to load this web page (depending on your internet connection).
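The closed-loop feedback described above can be captured in a minimal sketch: tone volume scales with the gaze-to-target distance and is silenced once the gaze lands within a threshold of the target. This is only an illustration of the idea; the function name, the linear scaling, and the pixel values below are assumptions, not the authors’ actual implementation.

```python
def feedback_volume(gaze_distance, max_distance=960.0, threshold=50.0):
    """Map gaze-to-target distance (in pixels) to a tone volume in [0, 1].

    Hypothetical sketch: a linear distance-to-loudness mapping with a
    silence zone around the target. All parameter values are illustrative.
    """
    if gaze_distance <= threshold:
        return 0.0  # fixation on the target: tone silenced
    # Louder as the gaze moves away, capped at full volume
    return min(gaze_distance / max_distance, 1.0)
```

In a real experiment this function would be called on every eye-tracker sample, so the participant hears the loudness change in real time as their gaze wanders; for example, `feedback_volume(480.0)` returns `0.5`, half volume at half the screen width from the target.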

Figure 1. Gaze distance from target during audio-visual search task. The horizontal dotted line represents the threshold for reaching the target. In most trials, the target was reached within 5 seconds.

This shows an impressive interaction of domains—participants navigated a visual field based on real-time auditory feedback. It also shows that statistical learning can happen across domains, even without conscious awareness.

As if that weren’t enough, the researchers then asked: Can this statistical learning transfer to other unrelated viewing tasks? Spoiler: it can!

In a blank-screen viewing task, participants were allowed to move their eyes freely across a blank computer screen. Participants tended to shift their eyes toward the side of the screen that was biased in the AVS task.

In an image-viewing task, the researchers showed photos of indoor or outdoor scenes with balanced exploration opportunities on both sides of the screen. Participants still tended to move their gaze toward the side that had been biased in the AVS task.

In a landmark task, they presented a horizontal line bisected by a vertical line. Participants had to indicate whether the vertical line was placed to the right or the left of the horizontal line’s midpoint. For this task, the authors reported no effect of the bias from the AVS task.

Figure 2. Gaze shift during the blank-screen viewing task (A) and the image-viewing task (B). Each participant’s gaze shift is shown as the change in mean horizontal fixation position, computed as post-task minus pre-task.

As shown in Figure 2, the blank-screen and image-viewing tasks were both influenced by learning during the AVS task. This is the first study to show that location probability learning can transfer to an unrelated free viewing task. The authors suggest this protocol could be used clinically with patients who have a pathological spatial bias.

The authors said,

“Our study shows that people can unconsciously learn hidden spatial patterns during a visual search—and that this learning influences how they look around in more natural settings.”

These findings open the door to further studies exploring how attentional biases can affect real-world situations.

Featured Psychonomic Society article:

Cinetto, S., Blini, E., Zangrossi, A., Corbetta, M., & Zorzi, M. (2025). Spatial regularities in a closed-loop audiovisual search task bias subsequent free-viewing behavior. Psychonomic Bulletin & Review. https://doi.org/10.3758/s13423-025-02703-8

Author

  • Brett Myers, PhD, CCC-SLP is an Associate Professor and the Director of Clinical Education in the Department of Communication Sciences and Disorders at the University of Utah. He received his doctorate from Vanderbilt University, where he studied with Duane Watson and Reyna Gordon. His research investigates planning processes during speech production, including parameters related to prosody, and their role in neural models of motor speech control.