How aware are remote operators of autonomous vehicles?

The title of this post asks, “How aware are remote operators of autonomous vehicles?” According to a new paper by Mutzenich, Durant, Helman, and Dalton (pictured below) published in the Psychonomic Society journal Cognitive Research: Principles and Implications, the answer is: we don’t know. One of the points of the paper is to urge researchers to consider how our theories of situation awareness can inform this applied problem, which has outpaced our understanding of the situation awareness of remote operators.

The authors of the featured paper

One of the authors, Clare Mutzenich, tells me all about this fascinating topic in the interview below. Learn all about it, and listen to Clare expose my ignorance on the topic, give fashion advice, and talk about how gender bias can be deadly.

Interview

Transcription

Chung: You’re listening to All Things Cognition, a Psychonomic Society podcast. Now, here is your host, Laura Mickes.

Mickes: This is our first podcast of 2021. And it’s been a while since our last one. So I’m thrilled to get back to it with this interview. Oh, Clare, how do you say your last name?

Mutzenich: Mut-zen-ich

Mickes: So, Clare will tell us about her review paper published in the journal, Cognitive Research: Principles and Implications also known as CR:PI (creepy). Who were your coauthors, Clare?

Mutzenich: My coauthors are Polly Dalton at Royal Holloway, Shaun Helman, who’s at TRL (the Transport Research Laboratory), and Szonya Durant, who’s also at Royal Holloway.

Mickes: I know Polly and Szonya pretty well. What a great team.

Mutzenich: Yeah.

Mickes: So, having moved here to England from California, where it’s easiest to get around if you drive, I haven’t driven in eight years, and I used to love driving. I don’t miss it at all.

I wouldn’t trust myself to drive because here I’d be driving on the wrong side of the road on the wrong side of the car. Sorry, Brits; from my perspective, it’s the wrong side.

An autonomous vehicle would be ideal in my case … maybe, but I’ll let you tell me, Clare, whether I’d be safer in an autonomous vehicle or actually learning how to drive in the UK.

Your paper with your colleagues is called “Updating our understanding of situation awareness in relation to remote operators of autonomous vehicles.”

Will you please help me unpack that title? I mean, maybe it’s obvious to everybody, but first, what do you mean by situation awareness?

Mutzenich: Well, situation awareness is knowing what’s going on in your environment, around you and understanding what that information means, and being able to use that to possibly predict what might happen a couple of seconds into the future.

So in my research, I use situation awareness to understand what remote operators of autonomous vehicles might see in their environment and how long it would take them to process that.

Mickes: How does that relate to autonomous vehicles?

Mutzenich: So my thesis is an applied thesis where I’m looking at autonomous vehicles, which are sometimes called self-driving cars. Let me take a few seconds to explain autonomous vehicles and the levels of automation, because some of your listeners might not be au fait with that terminology: autonomous vehicles, automated vehicles. There’s a standard industry taxonomy, from SAE, which defines the levels of automation, from level zero up to level five.

So at the first levels, it’s basically the human in control of driving the car. You might get some assistance, like braking or steering support or antilock braking, but essentially you are in charge as the human behind the wheel, and you have to do all of the dynamic driving tasks.

At level three, that’s when we start getting into this idea of self-driving cars: the car can drive alone for short periods, but the human has to be behind the wheel the whole time, ready to take over if the system requests it.

At level four, the car can drive by itself and doesn’t require the human to take over, but it can only do so in certain areas, what are called operational design domains. That could be a motorway; it could be a time of day, or a particular location, like a retirement village. So at level four, the car can actually sort itself out and come to a safe stop.

But at level five, that’s when we really see fully self-driving operation, where the human only needs to set the destination, and the car supposedly would never need a human to take over.

Mickes: They never take over? Or they can, is there…

Mutzenich: That’s what the taxonomy says.

Mickes: Okay.

Mutzenich: I mean, some people have referred to these stages informally as feet off, hands off, eyes off, brain off. Our article is actually dealing with “far off”: what happens when even entirely driverless vehicles are occasionally going to need some kind of remote human intervention. And there are a number of reasons why this might happen. Many problems arise because they’re not actually understood by the autonomous vehicle’s (the AV’s) programming, which would oblige some kind of human involvement. These are called edge cases.

Suggested revisions to the SAE (2016) taxonomy
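The levels Mutzenich walks through can be summarized as a small lookup table. This is an informal paraphrase of her descriptions in the interview, not the official SAE J3016 wording:

```python
# Informal summary of the SAE driving-automation levels as described
# in the interview (paraphrased; not the official SAE J3016 text).
SAE_LEVELS = {
    0: "No automation: the human does all of the driving.",
    1: "Driver assistance: single aids such as braking or steering support.",
    2: "Partial automation: combined assistance, but the human is still in charge.",
    3: "Conditional automation: the car drives alone for short periods; "
       "the human must stay behind the wheel, ready to take over.",
    4: "High automation: the car drives itself within an operational design "
       "domain and can come to a safe stop on its own.",
    5: "Full automation: the human only sets the destination; "
       "supposedly no takeover is ever needed.",
}

def describe(level: int) -> str:
    """Return the informal description for an SAE level (0-5)."""
    return SAE_LEVELS[level]

print(describe(4))
```

The “far off” remote-operation role the paper argues for sits outside this table, which is precisely the revision to the taxonomy the authors suggest.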

So, edge cases: an AV is trained on millions of images, but it might come across something it hasn’t encountered before, something it has nothing to reference. In that case a remote operator, a human, might have to rejoin the system, rejoin the loop, in order to provide some kind of answer, or possibly, in the event of total system failure, actually move the car to a safe location on the side of the road or drive the car.

Example of an edge case (photo by unknown author, licensed under CC BY-SA). The dog may be a problematic case.

Mickes: Right. So the situation awareness is that of the operator, the remote operator. You might just be kicking back in a level five AV, and then all of a sudden, “beep, beep, beep,” there’s danger. And then you have to kick into gear, pay attention, put your phone down.

Mutzenich: Technology is, you know, inexorably marching on. And although we haven’t quite got to level five yet, there are a number of car manufacturers that are bringing out level five vehicles. Cruise is bringing out a prototype called the Origin, and that has no steering wheel or pedals. So in this case, even if you were a passenger in this vehicle, you’d have no ability to take over in the event of some kind of edge case.

Mickes: Oh my gosh.

Mutzenich: So this might need remote operation; it means that a human is going to have to join the loop remotely. And they’re going to need to build up enough situation awareness to be able to, first of all, understand what’s happened, because maybe the AV can’t communicate that, and then they’re going to need to safely drive the vehicle.

The whole point of the article is to say that these standard taxonomies need to include remote operation, so that we can put in place regulatory frameworks for remote operators. They might need training; we certainly need to discuss safety protocols.

And we also find in the literature that the situation awareness theories we currently have are not really refined to deal with the issues of remote operation, with the challenges that are going to be unique to a remote operator in a situation that they’re not physically in, controlling a car that they’re not actually in.

Mickes: The paper was basically saying, we have to figure this out because technology is moving so fast. Yeah. And we have to do this type of research, and we have to figure out the taxonomy.

Mutzenich: That’s right. And we’re also saying it’s an urgent research priority to investigate how to maximize situation awareness for remote operators: how we can design interfaces that would enable them to get the right amount of information, while striking a balance with how much information would represent a cognitive overload that would slow down their situation awareness as they try to process all the helpful information we could give them. Our research particularly is saying that an urgent priority is to decide what the Goldilocks amount is for a remote operator [inaudible].

Mickes: It’s got to be just right!

Is that work that you’re doing, that you’re doing right now?

Mutzenich: Yeah. So that’s currently underway in our laboratory. We’re looking at a number of ways that we can deal with what are called situation awareness demons, like being out of the loop: the remote operator hasn’t been monitoring the situation, so they need to actually get back into the loop, they need to work out what’s happening. And we’re trying to discover how we can minimize the amount of time it takes them to do that.

And also the issues of embodiment are really important because if a remote operator isn’t there that has an impact on their risk perception because they’re not in any danger and it might also affect their speed perception because it’s very difficult to pick up cues from the vehicle you’re not in like the force feedback of being pushed back in the car or being buffeted by the wind. None of these things would exist for a remote operator.

Workload issues are really important as well. If you have too much to process, or you get another call from another AV having a problem, how do you prioritize which one you continue dealing with? Or, similarly, too little load is going to result in decreased vigilance. So how do we keep our remote operators poised and ready to take over whenever they might get a call? These are all really important research priorities.

Mickes: Do we know how often they might need to take over in a level five AV?

Mutzenich: Yeah. So we can look at data from disengagements. A disengagement is when an AV has had to either stop or allow the safety driver to take over. California, your home state, is the only place that currently records disengagements, so we can use that data. So Waymo, they drove the most miles, 1.45 million, and in that year they had 110 disengagements: one every 13,000 miles.
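As a quick sanity check on the arithmetic behind that figure (numbers as quoted in the interview, so approximate):

```python
# Waymo's California reporting year, as quoted in the interview:
# 1.45 million miles driven, 110 disengagements.
miles_driven = 1_450_000
disengagements = 110

miles_per_disengagement = miles_driven / disengagements
print(round(miles_per_disengagement))  # roughly one disengagement every ~13,000 miles
```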

Mickes: That’s it? Oh my gosh.

Mutzenich: Yeah. But we can look at all of the disengagements and actually see what kinds of things are causing them, which is fascinating. What are the top ones? Perceptual issues. So it’s things to do with weather, like sun glare, and things like the perception of signage. You talked about not driving in the UK, so maybe you don’t really understand the pain: so many of our signs are occluded by other things, and the road environment is very cluttered, particularly in London and cities like that.

Mickes: Yes!

Mutzenich: You can actually see parallel roads and all the signs for those roads. It’s fairly easy for humans to filter out the irrelevant information, but an AV might interpret all of it as relevant, and it gets very muddled. Similarly, I talked about the way the system is trained on millions of images. If it has been trained, let’s say, on pedestrians as people walking around on legs, then it might not recognize wheelchair users as pedestrians.

We also see a little bit of gender bias, in that a lot of the software designers are men, and men wear trousers; they don’t think about skirts, and AVs aren’t very good when they can’t see legs.

And, really disturbingly, red lights are a big issue: traffic light perception is quite poor in an AV.

So all of these different edge cases could be quite easily interpreted by a human. While AVs possess perceptual abilities that in some instances are obviously far greater than humans’, we still see this need for a human to take over: potentially just to provide guidance, some kind of remote assistance, or, at the other end of the scale, to remotely drive the vehicle, which brings us back to the situation awareness needs.

Mickes: I have so many questions. It’s fascinating. I could talk to you for hours about this stuff.

Mutzenich: It’s such a great subject.

Mickes: It’s so neat. I mean, maybe I’m just ancient and haven’t driven in so long, but the automated bit I remember is my dad, when he would take us on vacations, using cruise control. Do you know what that is?

Mutzenich: Yeah, yeah, yeah. It’s evolution, not revolution. I used the word inexorable before for these sorts of technological developments: people have quite strong views about autonomous cars, but they’re accepting increasing levels of automated systems in the car. And they get in a car with a taxi driver who they trust to drive, and humans are fairly bad drivers.

Mickes: [Laughs.]

Mutzenich: You know, we’re talking about 110 disengagements in a year. That’s really low, you know, compared to the number of accidents that humans have.

Mickes: Oh, it’s impressive. Yeah.

Mutzenich: The benefits of autonomous vehicles are quite clear.

Mickes: Right. And also as this becomes more and more popular, I will not wear skirts.

Mutzenich: I think perhaps they should just employ some female software engineers.

These edge cases are unknowable unknowns; you can’t predict them. But there are more and more instances in the media where we’re getting these pictures: it could be a paper bag that’s misinterpreted as a solid object, something as trivial as that. In one of the California disengagements, a company, which I won’t name, had actually registered “too big rain.” Literally the size of the rain was the problem [laughs].

Mickes: [laughs] Really?

Mutzenich: In the UK, that’s obviously a problem. A lot of these AVs are being tested in places like California, where the weather is fairly constant and pleasant. When you get to the UK, where every road finally leads, in the last mile, to some little dirt track with no signage and nothing to help, the different roles that a remote operator might have to play actually broaden. It could be informative; it could be purely logistical, in terms of providing map-based information, because we all know what it’s like when you’re following a sat nav and suddenly … the computer says no [laughs].

Mickes: If they’re testing all of these AVs in California, they must be measuring how the remote operators interact. Is there a lot of good data that you can …

Mutzenich: So currently in the industry there’s a lot of debate about the role of a remote operator. It ranges from people saying a remote operator is inherently unsafe, so there is absolutely no place for one, and there are companies operating which do not have a remote operator in their business model at all.

There are some companies that see more of a remote assistance role, where the remote operator is almost like a conductor who can be communicated with by the passengers, or might give information to the passengers if there’s a problem or a hold-up. There are other companies who actually have remote control, direct tele-operated driving of vehicles, as their entire business model. And they are currently doing that; they’ve extended it, I think because of the pandemic, to delivery robots.

So Postmates is one example, where there is a remote operator who can step in to help if the bot is having a problem. And again, there are some people within the industry who accept that a remote operator’s role is really important only at certain speeds or only in certain environments.

Mickes: Oh.

Mutzenich: Well, that in itself creates a problem for situation awareness, because the situation awareness needs for someone remotely operating a vehicle on a motorway, at high speed, are going to be very, very different from those in a loading bay, at very slow speed but in a really cluttered, busy environment. And one of the use models that I’ve seen for remote operation is valet parking. We all have problems dealing with a car park. So the idea that an AV is going to be able to negotiate that kind of environment with no need to call upon human intervention is just unlikely and extreme.

Mickes: So you think it’s hooey that there would be no remote operator? That seems …

Mutzenich: I think it would be irresponsible. I don’t think any engineer builds a system that they think can never fail. So we need to ask what happens when it fails, and have, as I said before, some type of use case for the roles of a remote operator, and be able to establish safety protocols and regulatory frameworks for who that remote operator might be and how many cars they can remotely operate or monitor at any one time.

Even things like the handover between remote operators: you’ve finished your day and the next person comes; how do you fill that gap in the process if there’s a call coming through? I think there’s a lot of work to do to bring this remote operator into the industry in some kind of formalized way.

Mickes: And that was basically your take-home message from the review paper.

Mutzenich: Yeah, yeah. Basically, we need to change the industry taxonomies to include remote operation; the theories of situation awareness need to be adapted to include the challenges that a remote operator would face; and a consideration of how to maximize situation awareness for remote operators is an urgent research priority.

Mickes: Wow. You’re going to set off a whole new line of research, aren’t you? Yeah,

Mutzenich: Yeah, yeah.

Yes, it needs it.

It’s quite a disruptive message, and yet an obvious one, a very obvious one, I think, really. When you speak to industry companies, it was always: get the car, get the technology right; we’ll deal with these little problems at the end. And that’s where the hold-up is, as I see it …

Mickes: Humans are always the problem!

Mutzenich: Yeah! That’s it! Originally, with some of the first trials, the problem with the AV stopping was that humans kept jumping in front of it to see …

Humans are interesting. We are unpredictable. It is hard work for an AV to understand what we’re going to do, and to understand that a big face on the side of a lorry isn’t a pedestrian.

The key issue is that the technology moved so fast that the human factors problems are now the thing hampering that final stage, that last mile, that last push. When I started my PhD, everybody was saying it would be 2021 when we’d see the first self-driving cars in the UK. And that’s already been pushed back to 2030.

Mickes: Really?!

Mutzenich: And yeah, conservative estimates are more like 2050.

Mickes: WOW.

Mutzenich: But it is coming; as I say, it will be evolution rather than revolution. I see so many benefits: ride-sharing mobility solutions for people who have restricted mobility, like the elderly or the young; and, in a COVID world, being able to have your own vehicle rather than go on public transport when you might not have the funds or the parking to own one. You know, these cars could be self-cleaning, so you get picked up by a perfectly COVID-tastic vehicle. [laughs]

Mickes: [laughs] COVID-tastic!

Now, I’m going to cut this out [I didn’t cut it out]. I just got so excited about a clean car! Do you know how many dirty taxis I’ve been in?!

Mutzenich: I know!

Mickes: [laughs] So gross.

Mickes: This is so exciting. Fascinating, great work, Clare.

I can’t thank you enough for talking about it. This has been so interesting. I think what I’m going to do is continue to walk everywhere, with pants on … or trousers, excuse me.

Mutzenich: You can keep pants on as well. [laughs]

You can wear both!

Mickes: [laughs] Thanks so much for talking to me about your really fascinating work and I can’t wait to see what happens as a result of this.

Mutzenich: Thank you for having me on the show!

Clare Mutzenich (left) and Laura Mickes (right)

Concluding statement

Chung: Thank you for listening to All Things Cognition, a Psychonomic Society podcast.

If you liked this episode, please consider subscribing to the podcast and leaving us a review. Reviews help us grow our audience and reach more people with the latest scientific research.

See you next time on All Things Cognition.

Featured Psychonomic Society article

Mutzenich, C., Durant, S., Helman, S., & Dalton, P. (2021). Updating our understanding of situation awareness in relation to remote operators of autonomous vehicles. Cognitive Research: Principles and Implications, 6, 9 (2021). https://doi.org/10.1186/s41235-021-00271-8

The Psychonomic Society (Society) is providing information in the Featured Content section of its website as a benefit and service in furtherance of the Society’s nonprofit and tax-exempt status. The Society does not exert editorial control over such materials, and any opinions expressed in the Featured Content articles are solely those of the individual authors and do not necessarily reflect the opinions or policies of the Society. The Society does not guarantee the accuracy of the content contained in the Featured Content portion of the website and specifically disclaims any and all liability for any claims or damages that result from reliance on such content by third parties.
