How attentional control got too much attention — and how we can rethink latent constructs

If attention were a muscle, most of us would swear ours had been skipping leg day. One minute you’re reading an email, the next you’re three tabs deep into a recipe for a croquembouche that looks like a “Kraken bush”—and you don’t remember how you got there.

A festive holiday croquembouche and a Kraken bush village scene generated by AI. Don’t ask how we got here.

Psychologists call the ability to stay on task attentional control, and for decades, it’s been treated like a core mental superpower that is measurable with a stopwatch and a keyboard. But what if that tidy idea is wrong? What if “attentional control” isn’t a single inner force at all, but a patchwork of strategies that change depending on what you’re trying to do—and how distracting the world is that day?

Attentional control has been studied in many experimental designs, such as the following:

  • Stroop Task: you see a color word printed in colored ink (RED, BLUE), and you need to name the ink color (which may not be the same as the word)
  • Flanker Task: you see a sequence of symbols (<<><<) and you respond to the middle item, ignoring the flanking items
  • Go/No-Go Task: you respond to a rapid stream of stimuli, pressing a button for each stimulus EXCEPT when you see a letter “X” (for example)
  • N-Back Task: you indicate whether the current stimulus matches the one presented N trials earlier

Each of these tasks—and many more—has been thought to investigate the underlying cognitive ability of attentional control. That is, in an individual-differences framework, they can tell us how good someone is at resisting distraction and focusing on the task at hand. Now imagine putting decades of research under the microscope and finding that this supposed mental quality might not exist the way we thought.

Authors of the featured article from left to right: Alodie Rey-Mermet (photo credit: Vinzenz Pallotti University), Henrik Singmann, and Klaus Oberauer.

This is exactly the conclusion from a groundbreaking study published in Psychonomic Bulletin & Review. Researchers Alodie Rey-Mermet, Henrik Singmann, and Klaus Oberauer took an unflinching look at the measurements behind attentional control and found something surprising: even when you account for measurement noise and individual differences in thinking speed and accuracy, there still isn’t a clear, consistent cognitive factor that unites different tests of attentional control. In other words… it might not be a distinct psychometric construct after all.

One reason attentional control has been so hard to pin down is methodological rather than theoretical. Critics argue that the way we measure attentional control is inherently noisy. Most tasks try to isolate control by subtracting performance in an “easy” condition from performance in a “hard” one—like comparing reaction times for congruent versus incongruent trials in the Stroop task. In theory, this difference score should reveal pure attentional control. In practice, though, reaction times fluctuate wildly from trial to trial, and the scores can be contaminated by construct-irrelevant variance (e.g., general intelligence or mental speed). When researchers subtract two highly correlated averages, they often cancel out meaningful individual differences while amplifying random noise.
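The difference-score problem is easy to demonstrate with a quick simulation. The sketch below uses illustrative numbers only (not data or parameters from the featured study): each simulated participant gets a stable overall speed and a small true congruency effect, and we check how well scores from two testing sessions correlate. The raw condition means are highly reliable, while the difference scores are not.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_trials = 200, 50

# Hypothetical true scores: overall speed varies a lot across people,
# while the true congruency ("control") effect varies only a little.
baseline = rng.normal(600, 80, n_people)   # mean congruent RT in ms
effect = rng.normal(60, 15, n_people)      # true incongruent-minus-congruent effect

def session_means():
    """Return noisy per-person condition means for one testing session."""
    cong = baseline[:, None] + rng.normal(0, 100, (n_people, n_trials))
    incong = (baseline + effect)[:, None] + rng.normal(0, 100, (n_people, n_trials))
    return cong.mean(axis=1), incong.mean(axis=1)

c1, i1 = session_means()   # session 1
c2, i2 = session_means()   # session 2

r_condition = np.corrcoef(i1, i2)[0, 1]             # reliability of a raw condition mean
r_difference = np.corrcoef(i1 - c1, i2 - c2)[0, 1]  # reliability of the difference score

print(f"raw condition means:  r = {r_condition:.2f}")
print(f"difference scores:    r = {r_difference:.2f}")
```

Because most between-person variance sits in the shared baseline, subtracting the two conditions removes it and leaves mostly trial noise, so the difference score's test–retest correlation collapses even though each component is nearly perfectly reliable.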

Another difficulty with establishing a latent attentional-control construct is that many measures rely exclusively on response times. Such measures disregard individual differences in speed–accuracy trade-offs: some participants favor speed at the cost of accuracy, while others favor accuracy at the cost of speed. A measure that ignores accuracy could therefore miss a substantial part of the variance in attentional-control ability.

To address this, the featured authors used a sophisticated combination of hierarchical Bayesian Wiener diffusion models and structural equation modeling to strip away the noise and biases and look for a genuine latent construct. They re-analyzed several existing datasets from attentional-control tasks, searching for variance that is truly shared across performance on these tasks.
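A Wiener diffusion model treats each decision as noisy evidence accumulating toward one of two boundaries, which lets accuracy and response time be modeled jointly rather than analyzed in isolation. The toy simulation below is a bare-bones sketch of that idea (with made-up parameter values, not the authors' hierarchical Bayesian implementation). It shows why joint modeling matters: two simulated participants with the same drift rate (“ability”) but different boundary separations (“caution”) produce very different speed and accuracy profiles.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_diffusion(drift, boundary, n_trials=500, dt=0.002, noise=1.0):
    """Simulate a Wiener diffusion process: evidence starts midway between
    two boundaries and drifts noisily until it hits the upper (correct)
    or lower (error) boundary. Returns accuracy and mean decision time."""
    n_correct, rts = 0, []
    for _ in range(n_trials):
        x, t = boundary / 2, 0.0   # unbiased starting point
        while 0 < x < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        n_correct += x >= boundary
        rts.append(t)
    return n_correct / n_trials, float(np.mean(rts))

# Same drift rate ("ability"), different boundary separation ("caution"):
acc_fast, rt_fast = simulate_diffusion(drift=1.5, boundary=1.0)
acc_slow, rt_slow = simulate_diffusion(drift=1.5, boundary=2.0)

print(f"narrow boundaries: accuracy {acc_fast:.2f}, mean decision time {rt_fast:.2f}s")
print(f"wide boundaries:   accuracy {acc_slow:.2f}, mean decision time {rt_slow:.2f}s")
```

The cautious simulated participant is slower but more accurate, so a response-time-only score would rank these two equally able decision-makers differently; estimating the drift rate recovers the ability they share.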

The result? Even after controlling for measurement error and individual differences in speed–accuracy trade-offs, there wasn’t a coherent attentional-control factor that emerged from the data. The different attentional tasks simply didn’t correlate the way you’d expect if they were measuring the same underlying construct. This challenges a central assumption in cognitive psychology—that attentional control is a stable, measurable variable that can explain individual differences in focus, multitasking, and related abilities.

If “attentional control” isn’t a construct to measure directly, maybe it’s a collection of task-specific strategies and processes that don’t always line up neatly across different contexts. Think of it like this: just as athletic ability looks very different in a swimmer versus a wrestler, “control” in one cognitive task might not transfer cleanly to another. People may rely on different combinations of skills depending on the demands of the task (e.g., working memory, perceptual speed, inhibition, or strategic adjustments), and lumping them all into a single score might oversimplify the messy reality of human cognition.

By questioning long-held assumptions and applying rigorous modeling, Rey-Mermet and colleagues invite us to reconsider how we conceptualize and quantify mental abilities. Rather than taking constructs for granted, cognitive science must continually test whether our measures reflect the processes we claim they do.

This work suggests that attention might not be a single, omnipresent mental power but a mosaic of interwoven capabilities that shift from task to task and moment to moment. And that’s not a setback — it’s an invitation. An invitation to deepen our tools, refine our theories, and embrace the beautiful complexity of the thinking mind.

Psychonomic Society article featured in this post

Rey-Mermet, A., Singmann, H., & Oberauer, K. (2025). Neither measurement error nor speed–accuracy trade-offs explain the difficulty of establishing attentional control as a psychometric construct: Evidence from a latent-variable analysis using diffusion modeling. Psychonomic Bulletin & Review, 32, 2585–2632. https://doi.org/10.3758/s13423-025-02696-4

Author

  • Brett Myers, PhD, CCC-SLP is an Associate Professor and the Director of Clinical Education in the Department of Communication Sciences and Disorders at the University of Utah. He received his doctorate from Vanderbilt University, where he studied with Duane Watson and Reyna Gordon. His research investigates planning processes during speech production, including parameters related to prosody, and their role in neural models of motor speech control.


