Do you look for your glasses 200 times in a row? Can we learn from blocked designs even though we don’t block search in the world?

Search in the lab doesn’t look like search in the world (mostly)

Visual search is something that we do all the time – in the morning, pre-coffee, you might stumble into the washroom and look for your toothbrush (hoping that your cat didn’t get rambunctious in the night and knock it into the sink), followed by the faucet handle and then the toothpaste. In the lab, however, search doesn’t look much like this – you’re probably not looking for a completely different, possibly mobile target on each trial. Odds are, you’re looking for something pretty mundane – maybe Ts among Ls, or if you’re lucky, slightly strange-looking chickens – and you’re doing a pile of trials that are all broadly the same, which is just not what most of us do outside the testing booth.

Even if your desk looks like this, you probably don’t look for your glasses 200 times in a row. Credit: pexels.com

There’s a lurking question here that Jeremy Wolfe, Injae Hong, Ava Mitra, Eduard Objio, and Yousra Ali have dug into in their new paper in Attention, Perception, & Psychophysics. In the lab, we usually want our observers to do hundreds of trials so we can really understand what’s going on in our paradigms, but that’s not particularly realistic. As Wolfe and his coauthors point out, some tasks come a little closer to that kind of repetition – medical image search within a single category of image (e.g., reading a stack of mammograms), or luggage x-ray screening (bag after bag after bag, wondering why a passenger decided to bring the Costco-size tub of peanut butter in their carryon) – but even those cases have far more variability than the average lab experiment.

How much can laboratory studies really tell us, and how do they differ from our lives?

At the core of their paper, however, are a couple of questions that many of us who study search and attention have probably thought about, and maybe have shied away from really wanting answered. One has come up already – how much do our in-lab tasks, which are usually much simpler than the tasks we do every day, speak to how we search for items outside the lab? That’s really where we started – the paper even notes, “No one does a block of 200 pickle searches in their kitchen, one after the other.” That would be a big enough question on its own, but the paper tackles a second one along the way – does it matter whether we block or randomize our search tasks in the lab, or is the inherent randomness of real-world search something we’ve been ignoring?

But we should randomize our conditions! No, we need to block them! Credit: pexels.com

Across five experiments, both online and in the lab, Wolfe and coauthors show that blocking or randomizing doesn’t have much of an effect on search performance – it doesn’t even matter whether observers get to choose when to move between tasks. Observers’ performance doesn’t decline when they do an unrealistically long block of 100 trials of the same task, nor does it take a hit from switching. Even when they were looking for colour-defined targets on one trial and much more challenging shape-defined targets on the next, they don’t miss more targets when tasks are randomized, and they don’t get appreciably slower.

Yay, what we do in the lab speaks to the world! Credit: pexels.com

Successful search: we can be relieved

So, what does this work tell us? Aside from resolving what might be, in some labs, an impassioned debate about experiment design, it points to the alignment between laboratory search paradigms and the search tasks we do every day. Since blocking carries no penalty, even though blocked tasks aren’t very common in our lives, we can still learn from the blocked designs the field has historically used to understand how we search outside the lab.

Kitten, how’d you wind up there? Credit: pexels.com

So, what should we take away from this paper? Well, if your lab has had passionate debates about how to design your search studies, blocked vs. randomized doesn’t matter enough to worry about (you can even let your participants choose!). It also prompts some potentially fun questions going forward, because a considerable piece of search, in the world and in the lab, is deciding when to stop searching – accepting that the cat isn’t where we’re looking (or that the moose is unlikely to jump out of the woods into the road in front of your car). In fact, as the paper points out, quite a bit of how we decide to terminate search likely comes from what we expect of the environment we’re searching, based on our percept in the moment, not from a gradual accumulation of information over many sequential trials. After all, you probably don’t go looking for the pickles 200 times in a row… but, depending on your cat, you might do a bunch of sequential cat searches in different environments to figure out where the cat has gotten to.

Featured Psychonomic Society paper

Wolfe, J. M., Hong, I., Mitra, A., Objio, E., Khalifa, H., & Ali, Y. (2025). Mixing it up: Intermixed and blocked visual search tasks produce similar results. Attention, Perception, & Psychophysics. https://doi.org/10.3758/s13414-025-03077-8

Author


    Benjamin Wolfe is an Assistant Professor in the Department of Psychology at the University of Toronto, Mississauga. His research sits at the intersection of applied and basic vision science, including questions of visual perception in driving, improving readability and extending our understanding of visual perception in real-world settings.


The Psychonomic Society (Society) is providing information in the Featured Content section of its website as a benefit and service in furtherance of the Society’s nonprofit and tax-exempt status. The Society does not exert editorial control over such materials, and any opinions expressed in the Featured Content articles are solely those of the individual authors and do not necessarily reflect the opinions or policies of the Society. The Society does not guarantee the accuracy of the content contained in the Featured Content portion of the website and specifically disclaims any and all liability for any claims or damages that result from reliance on such content by third parties.
