Predicting changes in cognitive workload in real time

In cases where humans are tasked with jobs that vary a lot in workload, the aid of an automated system at the right times would undoubtedly come in handy. In this interview, Andrew Heathcote (pictured below) describes a recent paper that he and his co-authors published in the Psychonomic Society journal Cognitive Research: Principles and Implications. In it, the authors show that changes in workload can be predicted from what has just occurred. The applied utility of this line of research includes improved task management and performance and increased productivity.

 

Andrew Heathcote

Transcription

Hill: You’re listening to All Things Cognition, a Psychonomic Society podcast. Now, here is your host, Laura Mickes.

Mickes: I’m talking with Andrew Heathcote about his paper published in Cognitive Research: Principles and Implications called “Real-time prediction of short timescale fluctuations in cognitive workload.”

Hi Andrew. Thanks for talking to me about your research.

Heathcote: Hi Laura. Yeah. Thanks. Thanks for talking to me about it.

Mickes: So your work is usually pretty heavily mathematical and theoretical, and the topic of this paper is more applied than usual and there’s only one equation. So what’s happening? [laughs]

Heathcote: Are you implying that I have lots of … I try my hardest not to have lots of equations. Although the first author is a postdoc of mine who’s very technical, so there was a definite need to push down some of the technical stuff. But the genesis of this project was a very applied question, and one that fits pretty well with Jeremy Wolfe’s vision for this journal, I think. We had got some funding from the Australian Defence Science and Technology Group, and they were very interested, as lots of people are, in automation. Now, automation is typically not something that’s going to completely take over a task from a human; it’s not really good enough for that, so it’s going to help instead, and there are upsides and downsides to this. If the automation is working very reliably, then humans can become disengaged, and there are some pretty high-profile cases of people driving their Teslas, becoming disengaged, and paying the price of being killed because they didn’t come back into the loop when they needed to. So there’s an underload issue. And of course there’s also an overload issue: at times you have too much on and you’re going to start failing. Now that seems the ideal time for some sort of automated aid to come in and help you. But the question is, how does the automated aid know when to do these things? How does it know you’re underloaded and it could give you some jobs, or that you’re overloaded and really shouldn’t be interrupted at this point in time? Now, we’re used to working with human team members, and they’re very good at this because they can pick up verbal, nonverbal, and other signals; they have situational awareness about the workload of their team members, hopefully at least in well-performing teams.

So what they wanted to know in this project was: is there a way we can give automation the same, or an analogous, sort of situational awareness about the workload of an operator in a complex task? That was the key question. Now, in the nature of, I’m going to call it CR:PI and I shouldn’t, but, um, C-R-P-I, the journal’s mission is to say, well, sometimes these applied things bring up fundamental research questions. The applied question is: can you predict workload, at least on a short timescale? That is, knowing where someone’s at right now, can you know what they’re going to be like in three to five seconds, something like that? And that brings on a theoretical question, because we mainly think of cognitive capacity in two ways: one is a sort of bottleneck way, that there’s a limited-capacity channel.

And if something’s in it, then something else can’t come in; it’s got to wait, which is a very brief-timescale thing. Once that thing’s through, the next thing can come in. The other way is in terms of some capacity that gets shared around among cognitive processes: if one thing is more demanding, then you have less capacity to do another thing, at least when you’ve hit the overall limit of capacity. There’s a famous paper calling the idea of capacity a "soup stone": a stone that you put in your soup to make it taste better, when no one ever checked that you needed the thing at all. So it’s a slippery concept, but regardless, you’ve got these fundamental theoretical ideas.

Now the question is, what’s the timescale of change? In other words, if I’m in a period of high workload, as soon as that workload is done, does all my capacity immediately return to me? Or is there some time period afterwards, which you might intuitively expect: I’ve had a high-workload period, and I need a little time to recover. So that question of the timescale of fluctuations in workload and cognitive capacity, what is it? And then how does that chime in with the ebbs and flows of demanding tasks? Because a difficult task, an operator say monitoring some sort of technical system, will have periods of high and low workload, and that can vary quite quickly in itself. So those were the fundamental questions for us. Are we able to predict workload on a short timescale? It was very much about prediction, but with an underlying theoretical question about the way workload changes over time and how quickly it changes.

Mickes: Right, so then you gave them a task where you could do all of these things, where the task was difficult and easy, and it changed across time. Is that right?

Heathcote: So we made up a sort of video game task using Unity, one of the big game engines, and it was an asset management task. These are common tasks where operators have to manage a bunch of assets: monitor them and maybe intervene at times. Check, are they okay? And if they’re not okay, do something about it. Our specific task was that you were managing a fleet of UAVs, unmanned aerial vehicles. You viewed them from above, like a bird’s-eye view, flying over the ocean. [The screenshot of the task display is below.] We had more difficult versions where they fly over pictures of cities and things like that, but that just became insanely hard, so we made it easier. So they’re flying around, they have fuel, which drains over time, and your task is to keep them fueled.

Task display screenshot (seven UAVs)

And so to do that, they’re moving around the screen, right? Bouncing around, not really fast, about 30 seconds to transit the entire screen, something like that. You can move your mouse over a UAV and that brings up a fuel gauge, and you see how much fuel it’s got left. If it’s down in the red zone, with less than 25% left, you can click on it to refuel it. Then the game had some rules. If a UAV ran out of fuel, it would blow up; you lost a lot of points and a new UAV would come in. If you clicked on it at the right time, when it had less than 25% left, you could refuel it. But if you clicked at the wrong time, that was a false-alarm refuel, and it cost you some points. If you checked a UAV by hovering over it and it was in the refuel zone, you got a bonus; that was a good check. But if you checked when it wasn’t, that was a bad check. So what you had to do was maintain a kind of situational awareness of the fuel state of all of these UAVs running around and make sure you serviced them, ideally in the right order: deal with the ones that are low on fuel so they don’t blow up.

The explicit, benchmark workload manipulation was the number of UAVs you had to manage. With three it was manageable; you had to pay attention, but it was all right. With five, it’s getting tough. Seven was hell: things are blowing up, stuff is running around. And you did this over two-minute periods; we did something like 24 cycles of these two-minute periods. So you had a blocked manipulation of the overall difficulty, and then fluctuations of workload within any one of those blocks. We had that block manipulation as a kind of benchmark, and then we had the fluctuations over time, depending on the events that were occurring, that we were trying to predict workload with.

Mickes: It sounds kind of fun to participants like it, or was it stressful?

Heathcote: Actually, we recorded EMG as well, although it’s not in this paper, and what it looks like is that there was a bit of stress in the first session. But they did two sessions, and in the second session they’d got kind of used to it and knew how to manage it, so the stress seemed to go down. And yeah, I think with seven, when you’re trying to do well and things are blowing up, yes, it’s going to be a bit stressful. It had to be demanding. What we wanted was a task that took you from manageable to really, really difficult, so that we could see a big spread of workload.

Mickes: Workload-wise, you’re overloaded with seven. Are you underloaded with three, or is three just …

Heathcote: Three was probably not particularly underloaded. You had to pay some attention. Maybe a little underloaded, but really the load was just fairly low.

Mickes: Why did they come back for a second session?

Heathcote: I guess this is just something that I, I think is often the case that expertise and practice can interact with the effects you’re interested in. And because we were interested in, to some degree, you know, what would happen with more expert operators. So, you know, particularly in the military context, you’re often those people are very good at their tasks, you know, I’m not saying that people here with two, one hour sessions are going to be experts, but I wanted to at least look at that factor.

Mickes: Right. So what’d you find?

Heathcote: Well, there’s one other missing ingredient here, which is how we measure workload itself. What’s a gold-standard measure? It turns out there’s one out there that gets used to decide whether, I don’t know, Mercedes-Benz or BMW gets to put a new heads-up display in your car, or another widget or gadget or extra buttons, right? Because these are all distracting things. I’m working here with a guy called Dave Strayer in Utah, and he gets a lot of funding from AAA to check the workload imposed by vehicles, because that demand can of course distract you, and then you have an accident. So this is a really important real-world issue. He’s done a bunch of studies looking at how you can best measure workload, and the one he likes best is something called the DRT: the detection response task.

Now the DRT is a really simple idea. It’s a secondary-task workload measure, and it goes back to what I said about capacity before. While you’re doing your primary task, that is, monitoring the UAVs, every so often, every three to five seconds randomly, a little buzzer on your clavicle buzzes, and you’ve just got to press a button. We put the button on a foot pedal. They’ve got this stuff set up for cars where you’ve got a thumb switch while you’re driving, and I think their setup has a little [ ] with an LED: when the light goes on, you just press the button. This is something you can automate very easily, and it doesn’t really distract too much from the primary task. So your task is to do your main task, but just press that button, a simple response time.

And it turns out that’s a very sensitive measure of your workload state. You’re going to be slowed if you’re under high workload, and you’re going to omit responses more often, that is, simply fail to respond. Dave did a nice study comparing this with a subjective assessment, the classic one being the NASA TLX, which asks you about this stuff, and with an ERP measure. Essentially the DRT is up there with the NASA TLX, maybe a little bit less in terms of sensitivity, and the ERP measures are a way down from that: they work, but they’re really noisy. The trouble with a subjective workload assessment is that it’s retrospective and its temporal sampling is very coarse. You get to know about what happened 10 minutes ago, but we don’t need that; we need real-time measurement. Our idea was to look at primary-task events, whether things had blown up or whether you’d checked something.

And we ask, can that predict your DRT response a few seconds later? We looked at a window three to five seconds later and built statistical predictive models asking to what degree we could predict your DRT response time, or your omission rate, based on events that had just happened. Our aim in the long run would be to remove the DRT, because it can be a little intrusive. I’m actually doing some other work with the Navy in Australia, where we’re looking at the design of submarine rooms, and I’ve been trying to get them to use the DRT, but they’re a bit like, yeah, it’s kind of intrusive and we don’t want to use it. I’ll get them to do it eventually. The idea is that you use the DRT just for calibration, and then eventually you can develop a model that predicts your workload in the next few seconds based only on the successful predictors of the DRT. In other words, if that statistical model can predict DRT, then we’re saying it can predict workload. And we would then use that as something for the automation: statistically, it would say, hey, they’ve just had this and that happen to them, the situation is like this, so they’re going to be underloaded, or they’re going to be overloaded. Does that make sense?
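[Editor’s note: to make this windowed-prediction idea concrete, here is a minimal, hypothetical sketch. The function name, the event/probe representation, and the exact windowing are illustrative only; the paper’s actual feature construction is more elaborate.]

```python
# Hypothetical sketch: summarize primary-task events in the 3-5 s window
# before each DRT probe, yielding one predictor value per probe.
def window_event_counts(event_times, probe_times, window=(3.0, 5.0)):
    near, far = window
    counts = []
    for t in probe_times:
        # count events that occurred between (t - far) and (t - near) seconds
        counts.append(sum(t - far <= e <= t - near for e in event_times))
    return counts
```

Counts like these could then feed a statistical model of the DRT response time or omission rate at each probe.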

Mickes: Yeah, that makes perfect sense. They’re doing the task of refueling the UAVs and then at the same time, they’re having to respond to every time they get a buzz.

Heathcote: Click with a foot pedal, every time they get a little buzz on their, on their clavicle.

Mickes: And you looked at the, the results of both tasks: you modeled the DRT timing, and then you looked at the accuracy?

Heathcote: Yeah, that’s the sort of thing. What we first did was ask, can we validate the DRT? We kind of know it works, but let’s make sure it works in our task. We looked at the change in DRT RT from three to five to seven UAVs, and they slowed down hugely, and they made more omissions. So that validates it as a measure. And then what we wanted to do was use this valid measure as the gold standard that we would try to predict from the events occurring within the game, the primary-task events.

Mickes: Right. and…

Heathcote: What happened?

Mickes: What happened, yeah?

Heathcote: A very nice piece of psychology here. It turned out that what mattered were measures of situational awareness. We started out with just events. Did something blow up? Did you successfully refuel? Did you hover over successfully or unsuccessfully? All that sort of stuff. And the predictive performance wasn’t great: any of those things by themselves didn’t really predict all that well. And this prediction is a kind of high bar. We weren’t asking, is it significant? We were asking, did we get out-of-sample cross-validation? In other words, we develop the model on one set of participants, and then we ask, given that model, how well can it predict the behavior of another set of participants? This is the toughest form of cross-validation, right, because it’s not just predicting trials from the same person, it’s predicting the performance of other people. You build a model on one set of people and look at your predictive ability on the other set. This is something that’s been happening a bit more in the literature on workload measures, but no one had really tried it on short timescales. What they typically do is that block thing: all right, you’re in a high-workload block here; look, my measure X is bigger or smaller here than it is in a low-workload block. No one had tried this fast-timescale, predictive way of doing things. So it didn’t look that great on the first go-around, just with events, but they’re kind of key to [ ]. The things that did have some predictive ability looked like things that were about losing situational awareness: people started checking more often, so they weren’t sure what was going on.

So what we did then was develop a set of more refined predictors. They were things like: what’s the average fuel load of the fleet that’s out there at the moment? If there are a lot of low-fuel vehicles out there, your workload is higher. We had measures looking at the order in which you check: ideally, if you understand what’s going on, you should check the ones that are closest to running out of fuel before the ones that are high, right? And some of the measures related to the number of recent checks, things like that. When we did that, prediction performance was a lot better. Indeed, we could show that for some of these measures, like that fuel measure, a one-standard-deviation difference predicted about as big a bump in workload as going from, say, five to seven UAVs.
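[Editor’s note: as a concrete illustration, two situational-awareness predictors of the kind described above might be computed along these lines. The function names and exact definitions are hypothetical simplifications, not the paper’s code.]

```python
# Hypothetical situational-awareness predictors.
def mean_fleet_fuel(fuel_levels):
    # Average fuel across the fleet; lower values suggest higher workload.
    return sum(fuel_levels) / len(fuel_levels)

def check_order_score(checked_fuels):
    # Fraction of consecutive checks made in lowest-fuel-first order;
    # low scores suggest the operator is losing situational awareness.
    if len(checked_fuels) < 2:
        return 1.0
    in_order = sum(a <= b for a, b in zip(checked_fuels, checked_fuels[1:]))
    return in_order / (len(checked_fuels) - 1)
```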

So we got predictions. But so what? Did we get good prediction? Was the effect big?

Mickes: Yeah.

Heathcote: And so our measure was just, well, we know going from five to seven, really hurt. [Reaction times as a function of the number of UAVs for day 1 and day 2 are in the plot below.]

Reaction times by number of UAVs for day 1 and day 2

Mickes: Yeah.

Heathcote: Can that fuel measure tell us about a fluctuation in workload, measured by the DRT, of the same magnitude? And yes, it could. In the end, I think the answer is that yes, indeed, there are ways of predicting workload in the short term, and that short-term fluctuations of workload can be quite large; they can move around quite quickly. So there is the possibility of developing, and this is not something we did, but it’s a proof of concept that it could be done, ways of predicting workload so that automation could indeed get the sort of situational awareness that at least good team members have.

Mickes: Right. So that’s really promising for moving forward with automation. What would you do then if you can predict it? Okay, this, this operator is overloaded now, switch operators. Would you do something like that?

Heathcote: Yeah.

Mickes: Is that the idea?

Heathcote: Yeah. Try to have the automation come in and provide some assistance, some suggestions, make the team commander aware that it looks like there might be problems developing. So you might bring another human into the loop or pass the task off. You might also find that some people are somewhat underloaded and shift tasks to them within the team, right? You might even need a commander to do that. So you can imagine the automation might have some sort of scheduler of things that need to be done, and it needs constraints to make decisions that take account of the human in the loop. And I think this is something we’re going to have for a long time. I don’t think, at present at least, that automation looks like it can totally take over. Even as it gets better and better, I think you had a podcast about this a while ago, you’re still going to need a human in there for those critical situations. But then if they’re underloaded, it probably doesn’t make sense to take them right out of the loop. So the automation might give them jobs, even though maybe it could handle them itself, just to keep them engaged. That’s what I’m envisaging could be done.

Mickes: Right. You said the model could predict other people’s performance?

Heathcote: In a cross-validation sense. The way this would work is that you take the data from one set of people and learn a predictive model. It wasn’t anything super sophisticated; we just used linear mixed models that then you can draw [ ]. Then we fed the data from another set of people into that model and got it to predict what they would do. We compared the predicted RT and the observed RT, took a normalized mean squared error, and looked at how good that was. Technically, we developed a whole load of different models with different factors in them, I think thousands of models in the end, and compared them on measures like AIC or BIC for model selection. But really, the thing for us was always the ability to predict the performance of other people, the ability to predict things you hadn’t seen. This is something people have called for more of in psychology: not just finding things that are significant, but being able to predict. The machine-learning people in particular have said psychology should care more about prediction than explanation. I think both are important here, but certainly, in a practical sense, prediction is a powerful thing.
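[Editor’s note: here is a sketch of the between-participant cross-validation logic: fit a model on one group of participants, predict a held-out group, and score with normalized mean squared error. For simplicity it uses ordinary least squares on synthetic data rather than the linear mixed models the authors actually fit; all names and numbers are made up.]

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_participants, n_trials):
    """Synthetic stand-in data: per-trial predictors and DRT RTs."""
    groups = []
    for _ in range(n_participants):
        bias = rng.normal(0, 0.05)                  # participant-level offset
        fuel = rng.uniform(0.2, 1.0, n_trials)      # e.g., mean fleet fuel
        checks = rng.integers(0, 5, n_trials)       # e.g., recent check count
        rt = 0.4 - 0.2 * fuel + 0.03 * checks + bias + rng.normal(0, 0.02, n_trials)
        groups.append((fuel, checks, rt))
    return groups

def design(groups):
    fuel = np.concatenate([g[0] for g in groups])
    checks = np.concatenate([g[1] for g in groups])
    rt = np.concatenate([g[2] for g in groups])
    return np.column_stack([np.ones_like(fuel), fuel, checks]), rt

train, test = simulate(10, 200), simulate(5, 200)   # disjoint "participants"
X_tr, y_tr = design(train)
beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)  # fit on one group

X_te, y_te = design(test)
pred = X_te @ beta                                  # predict the held-out group
nmse = np.mean((y_te - pred) ** 2) / np.var(y_te)   # normalized MSE
```

A normalized MSE below 1 means the model predicts held-out participants better than simply guessing their mean RT.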

Mickes: Right. Yeah. So you responded to that request. So what does it mean theoretically? What’s the soup stone?

Heathcote: Yeah, right. It’s certainly not deciding between continuous measures of capacity and bottlenecks, but what I do think it means is that your ability to do cognitive work can fluctuate quite quickly. You can have periods of low and high workload, and if you’re in a high-workload situation, it does look like, as soon as the workload out there goes away, your capacity comes back up again. That’s not to say you might not be tonically stressed; that could certainly be something overlaid on top of it. Interestingly, the EMG measures and the DRT measures differed. EMG tends to pick up a lot more of just stress; there’s a paper being prepared on this. We did find that some measures of the fluctuation in EMG seemed to be more about cognitive workload. This is a big issue in general for the measurement of workload.

There are a lot of people out there taking physiological measures, because they seem very attractive in this domain, right? Non-intrusive, although sticking the electrodes on can be a problem, and physically moving around can interfere; and if not that, they get interfered with by ambient things, like pupillometry being affected by ambient light, that sort of stuff. So these workload measures are out there and could, I guess, be combined. I’m not wanting to rule them out, but they have a problem with interpretation: what are they really measuring? Yes, there’s a change in physiology, heart-rate variability changes, but what does that mean? Is that specifically the cognitive-workload construct? So I’d like to see more of that research using the DRT, because to me the DRT is fairly undeniable: the secondary-task, dual-task framework of workload it’s based on is very old. I think 1898 was one of the references I found for when that was first used as a way of understanding workload. So it seems to me the trouble with the physiology is going to be what it means, and it’s very noisy. But I do think in the future, if we can validate the right bits of the physiology, we might bring all of those things together and build predictive models that leverage a number of these measures. At the moment, though, it’s pretty clear there are a lot of engineers very interested in physiology who haven’t really thought too much about the psychology of it. I think we’re needed in the development of these systems to say, hey, wait a minute: what’s the psychological concept you’re measuring, what does it really mean, and how is it related to the physiology?

Mickes: Is that one of the things that you’re doing next? Are you following up on to this work besides the stuff that you’re doing with the Navy in Australia?

Heathcote: Yes, the Navy work will follow up on that a bit. I don’t have any researchers directly trying to build the predictive models yet, though I’ve got an interest in doing that. This was a fairly full-on project, running for a year on very fast timelines, so we’re all recovering from that now. And it was a very big team as well; I have to thank everyone on it. The experiments were conceived over email across three continents: they were run in the USA, much of the analysis was done in Europe, and I was in Australia for part of it, but then also went to Europe and to the US. So, fun stuff. But no, I haven’t been seeking funding to build that predictive model, though I do think it’s viable, and hopefully people will be interested in taking it up.

Mickes: That’d be great. I hadn’t heard of the DRT. Now I just want to use it.

Heathcote: It’s an ISO standard. It’s actually a certified standard way of measuring workload from the International Organization for Standardization, believe it or not. They have a protocol that says how you should do this, particularly for in-vehicle workload measurement. So it gets used by regulatory bodies, as I said, to decide how our cars are configured.

I’ve also got a theoretical interest in the DRT. I’ve published a few papers now modeling the DRT, because I do this evidence accumulation modeling, with Spencer Castro, a PhD student of Dave Strayer’s. We’ve got a number of papers on this, and that’s an ambition, perhaps betraying my more theoretical orientation, for taking this work forward. The DRT has an RT component and an omission component, and what I’ve been developing is a unified model of both: an evidence accumulation model that describes when you respond and also when you fail to respond.

And so we get a unified way of understanding those two. Everything in this paper was done separately on omissions and on DRT RT, and while the results were converging, there were still some differences, and it would be nice to think theoretically about how to do this. The DRT is a simple RT task: evidence accumulates and you press. But of course omissions are a real thing, because when you’re under heavy load, people do omit quite a bit, and reasonably so, because they’re told the primary task is what you have to do and the DRT is "do it if you can." So that’s certainly been a focus of development: models of when people fail to respond, and how you build that into models of the timing of when they do respond. It works pretty well. My hope, if I were to be part of applied modeling going forward, would be to start building in that proper cognitive model of the DRT itself and then linking it up to some sort of model of the workload process. To me, that’s the way to go: build these action models and then link them to cognitive models through shared parameters.

Mickes: Right. It’s really neat that you’ve done some of this applied work.

Heathcote: A focus for me in the modeling has always been practically usable models: models with good statistical properties that can be estimated well. So I developed, with Scott Brown, something called the LBA, the linear ballistic accumulator, which was in many ways a simplified version of the diffusion model that my postdoc supervisor, Roger Ratcliff, has been a champion of. I’ve continued to do that, and to treat these models not just as being about decisions but more and more about cognitive processes, so that they can be applied more broadly. Having done that, I’m interested in a stream of human factors work looking at high-workload situations and how evidence accumulation model parameters change in them. What happens when you get overloaded? Do you change the amount of information you require, the threshold? Does the quality of information change? You can answer those sorts of questions, and I guess in general I have moved more towards asking how we can use these models in applications.

Mickes: Right.

Heathcote: How can we use these sorts of models to really inject more psychology? Not just to look at RT, not just to look at accuracy, but to pull out the underlying things that drive them. There’s now been a range of theoretical papers that have, I think, given me the wherewithal to apply these things. The most exciting one for me recently, to divert a little off topic, was an eLife paper where we put together reinforcement learning models and evidence accumulation models, models that could learn a task. What we found was that this actually estimated really, really well, because a lot of the fluctuations we see in these evidence accumulation models are probably just people adapting; they’re not really noise, they’re something about your learning process. There’s all of this variance in RTs, right? Why do they vary so much when the stimulus is exactly the same? My hypothesis is that we’re constantly ready to adapt. Experiments are weird, right? They’re independent, densely distributed time periods, and the world isn’t like that. You’re always ready for the world to change, so you’ve got these adaptive mechanisms that are constantly moving around the amount of information you require. And I think the real power, and that’s what I’m working on at the moment, is going to be in bringing those together with evidence accumulation models, two of the most successful classes of cognitive model out there, and then building them in ways that can apply to more real-world tasks.

For example, confidence ratings: I’ve got a paper just about to send off where I’ve taken fairly simple evidence accumulation models and been able to model confidence ratings.

Mickes: Oh really?

Heathcote: Everything else that’s out there is kind of unusably complex in many ways. The long way around, Laura, is that what I’m interested in doing is taking these mathematical, computational models and making them practically useful: things that can actually be used by psychologists and in applied settings. That’s been a long, long road, but I feel like it’s really beginning to come together now. So I will be doing more of this sort of stuff, I hope.

Mickes: It surprised me when I saw this paper from you, but it does make so much sense that you would go in this direction, and it makes sense for you to do it because you have a foundation there to build on.

Heathcote: I think the theoreticians and the mathematical psychologists also have a responsibility to bring their work out of the ivory tower a bit and at least make it deal with realistically complex problems. This is an issue I think we have: we go, here’s an interesting phenomenon, let’s control away almost everything that’s interesting, and then look at it. People get frustrated with that, and the applied people get frustrated with that. I think we need to move together, with a willingness to realize that you can’t be simple-minded about this stuff; it’s going to be complicated. We’ve got a good hundred years of understanding mental processes, and we need to bring that understanding to bear beyond the obvious thing, oh, someone’s more accurate, they’re faster, or whatever. It’s not that simple; you can’t just take those measures and treat them in a simple-minded way. I’m trying quite hard to push in from that end. But you probably think my papers don’t look that way at all; they’re probably too complicated.

Mickes: [laughs]

Heathcote: I collaborate a lot, and I’m very interested in collaborations where I can take this sort of stuff and bring it into useful real-world settings.

Mickes: This is great. Thank you so much for talking about your research.

Heathcote: Thanks for doing this.

Concluding statement

Hill: Thank you for listening to All Things Cognition, a Psychonomic Society podcast.

If you liked this episode, please consider subscribing to the podcast and leaving us a review. Reviews help us grow our audience and reach more people with the latest scientific research.

See you next time on All Things Cognition.

Featured Psychonomic Society article

Boehm, U., Matzke, D., Gretton, M., Castro, S., Cooper, J., Skinner, M., Strayer, D., & Heathcote, A. (2021). Real-time prediction of short-timescale fluctuations in cognitive workload. Cognitive Research: Principles and Implications, 6, 30. https://doi.org/10.1186/s41235-021-00289-y

