What’s hiding in your reaction time data? New features of visual search behavior from hazard analysis

Sometimes, it seems like the simplest tasks are also the most frustrating. Take the junk drawer in your kitchen, for example. There's one in every home, we're not proud of it, and whenever we spend several minutes looking for the vegetable peeler while cooking dinner, we promise ourselves that one day, we will get this mess organized. This process of visual search—looking for items in clutter—is a common element of our everyday lives, and it involves a complex set of visual and cognitive processes, from looking at the scene, to recognizing objects (is that the handle of the peeler or a citrus juicer?), to making decisions (maybe it's time to check the other junk drawer?).

To study visual search in the lab, researchers often use standard measures: reaction time (RT), or how quickly participants make their response, as well as their accuracy in reporting whether the target (the vegetable peeler) was present. Over the last several decades, this kind of data has provided cognitive scientists with a framework for understanding the mechanisms underlying visual search. In a paper in Attention, Perception, & Psychophysics, Panis, Moran, Wolkersdorfer, and Schmidt have taken these standard analyses a step further to reveal some of the complex processes that make up what we think of as search.

To do this, they analyzed the reaction times from a dataset of visual search experiments previously collected by Wolfe, Palmer, and Horowitz (2010), in which participants looked at simple displays like the ones below and indicated whether or not a given target was present. For example, participants looked for a red vertical bar in the left two images, or the number ‘2’ among the ‘5’s in the image on the right.

Panis et al. (2020) analyzed reaction times from Wolfe, Palmer, and Horowitz (2010), who used visual displays like the images above.

The key change that Panis and colleagues made was that, instead of just looking at the time it took to find items – the reaction time itself – they applied a discrete-time hazard analysis. This analysis divides the search task into a series of small time bins and examines, for each bin, the probability that a response occurs in that bin, given that it has not occurred yet. In other words, a hazard function, like the ones below, tells us the likelihood that a response that we are still waiting for will occur at time t.
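To make this concrete, the discrete-time hazard is h(t) = P(T = t | T ≥ t), where T is the reaction time. Here is a minimal sketch of how such a function could be estimated from a set of RTs; the bin width, function name, and simulated data below are illustrative assumptions of mine, not the authors' actual analysis pipeline (the paper uses more sophisticated event-history methods):

```python
import numpy as np

def discrete_time_hazard(rts, bin_width=0.1, t_max=2.0):
    """Estimate h(t) = P(response in bin t | no response before bin t).

    rts: array of reaction times in seconds (one response per trial).
    Returns the left edge of each bin and the hazard estimate for that bin.
    """
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    hazard = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        at_risk = np.sum(rts >= lo)                # trials still unresolved at bin start
        events = np.sum((rts >= lo) & (rts < hi))  # responses that occur in this bin
        hazard.append(events / at_risk if at_risk > 0 else np.nan)
    return edges[:-1], np.array(hazard)

# Illustrative data: RTs drawn from a shifted lognormal, a common RT-like shape
rng = np.random.default_rng(0)
rts = 0.3 + rng.lognormal(mean=-1.0, sigma=0.5, size=1000)

bins, h = discrete_time_hazard(rts)
for t, p in zip(bins, h):
    print(f"h({t:.1f}) = {p:.3f}")
```

Because each bin conditions only on the trials that are still unresolved, the hazard can expose dynamics (like a burst of fast responses early in a trial) that the overall RT distribution, and especially the mean RT, tends to wash out.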

Examples of hazard functions from the paper for two participants (shown separately as h(t) in panels A and B), at different set sizes.

This approach reveals individual differences in visual search that were hard to see in the data previously. In other words, we don't all search the same way, and different people might use different strategies to find the vegetable peeler in the junk drawer. The hazard functions also speak to how we choose between competing responses. For example, some participants have a tendency toward early false-positive responses. It's as though you were looking for the peeler and, even though it isn't in your kitchen drawer, you immediately grabbed the can opener by mistake. Panis and colleagues can see this in micro-level speed–accuracy tradeoff functions, which show that participants' initial, quick responses are less accurate.
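In the same spirit, a simple way to check for this pattern in RT data is to compute accuracy conditioned on when the response was made: bin trials by RT and ask how accurate the responses landing in each bin were. The sketch below is a simplified stand-in for the paper's micro-level speed–accuracy tradeoff functions; the function name and the toy data are assumptions of mine:

```python
import numpy as np

def conditional_accuracy(rts, correct, bin_width=0.1, t_max=2.0):
    """Accuracy of the responses that land in each RT bin.

    rts: reaction times in seconds; correct: 1 for correct, 0 for error.
    """
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    acc = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (rts >= lo) & (rts < hi)
        acc.append(correct[in_bin].mean() if in_bin.any() else np.nan)
    return edges[:-1], np.array(acc)

# Toy data: responses before 0.5 s are guesses (50% correct); later ones are careful
rng = np.random.default_rng(1)
rts = 0.3 + rng.lognormal(mean=-1.0, sigma=0.5, size=1000)
correct = np.where(rts < 0.5,
                   rng.integers(0, 2, size=1000),          # fast guesses
                   (rng.random(1000) < 0.95).astype(int))  # slow, careful responses

bins, ca = conditional_accuracy(rts, correct)
for t, a in zip(bins, ca):
    print(f"accuracy({t:.1f}) = {a:.3f}")
```

An early dip in this curve, with fast responses near chance, would be consistent with the premature false-positive responding described above.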

What should we take away from this? Aside from revealing previously hidden facets of visual search, this work offers an important lesson for cognitive scientists. Averages are important for describing general trends in a dataset, but they can mask a lot of valuable information, and hazard functions offer a different way to look at patterns that may be hiding in a dataset.

Like this picture, there may be a lot more going on underneath a reaction time dataset than you might think! Source: imgur

Finally, the paper offers some new ways to think about reaction times in visual search. Cognitive scientists typically divide visual search into discrete stages (a certain amount of time to select an item to check, another chunk of time to make a decision, and so on), and reaction times are often thought to simply reflect the sum of these individual stages. However, there is much more going on as we look for gadgets in our kitchen—we each adopt our own search strategies, select between competing responses, and so on. This work shows that variation in reaction times might reflect genuinely different cognitive processes from trial to trial, rather than the exact same operations running at different speeds. There are also other mental processes happening at the same time as we look for something (such as learning and recognizing objects) that contribute to reaction time. So, before you get your kitchen drawer organized, think about some of the complex mental operations you go through to find your gadgets (OK, and maybe then get around to cleaning it out).
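To see why that distinction matters, here is a toy simulation, entirely my own illustration rather than an analysis from the paper. The two schemes below produce similar mean RTs, so an averages-only analysis would treat them alike, but only the second produces the early hazard bump that signals fast guessing:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Scheme 1: a classic stage model, where each RT is the sum of stage durations
stage_rts = (rng.gamma(2.0, 0.1, n)      # item selection stage
             + rng.gamma(2.0, 0.1, n)    # decision stage
             + 0.2)                      # fixed motor time

# Scheme 2: a mixture, where some trials are fast guesses and others careful search
guess = rng.random(n) < 0.2
mixture_rts = np.where(guess,
                       0.25 + rng.exponential(0.05, n),  # fast guesses
                       0.45 + rng.gamma(2.0, 0.1, n))    # careful search

# The means are nearly the same...
print(f"stage model mean RT:   {stage_rts.mean():.3f}")
print(f"mixture model mean RT: {mixture_rts.mean():.3f}")
# ...but the mixture shows an early bump in its hazard function (e.g., as
# estimated by the discrete_time_hazard sketch above) that the stage model lacks.
```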

Psychonomic Society article featured in this post:

Panis, S., Moran, R., Wolkersdorfer, M. P., & Schmidt, T. (2020). Studying the dynamics of visual search behavior using RT hazard and micro-level speed–accuracy tradeoff functions: A role for recurrent object recognition and cognitive control processes. Attention, Perception, & Psychophysics, 82, 689–714. https://doi.org/10.3758/s13414-019-01897-z

Author

Anna Kosovicheva is an Assistant Professor in the Department of Psychology at the University of Toronto, Mississauga. Her research focuses on visual localization and spatial and binocular vision, with an emphasis on the application of vision research to real-world problems.

