The case of the missing information: Reconstructed faces and why the Other Race Effect happens

Ever confuse two people you’ve met, particularly if they’re both of a different race than you are? It’s likely that part of why this is hard for you is the Other Race Effect, where we find people whose appearance differs from our own to be harder to tell apart. This happens even when we’re motivated to be able to tell people apart – say, if you’re a teacher and you’ve just started a new academic year – but why does it happen? What information are we not using, and what’s responsible for the Other Race Effect?

Fundamentally, this is a question of what information we are or aren’t representing from the faces we see. The same information is available to everyone – it’s not like your face changes when someone else sees it (that would be a little strange!) – but the information each of us uses will vary. But how can we get at this information? We can’t exactly ask study participants (or anyone else) to tell us what they are or aren’t using – they could tell us what they think they’re using, but that’s not the same thing!

To get around this problem, Moaz Shoura*, Dirk B. Walther and Adrian Nestor* (*pictured below) started with a standard set of similarity experiments – asking participants whether a pair of faces were the same or different, which is how we look for the Other Race Effect in the lab – but used that behavioural data to crack open the window into representation-space with image reconstruction.

Authors of the featured article, Moaz Shoura (left) and Adrian Nestor (right).

Going from behaviour to reconstructions

So, if the big problem here is that we can’t get direct access to each person’s individual representation of a face that they see, how did Shoura and colleagues find a way around it? They started, as we’ve mentioned, with a standard similarity task using faces of real people, which showed, as expected, that two different groups of online participants exhibited the Other Race Effect for a race they didn’t belong to. Where things get interesting is what Shoura and colleagues did with those responses – which, in brief, let them build a computational representation of the information their participants used to say that a face was or was not similar to other faces they saw. With that representation, they were then able to use a generative adversarial network – specifically, StyleGAN2 – to reconstruct faces that only used the information their participants had used when they did the task with real faces.
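To give a flavour of how behavioural judgments can become a geometric “face space,” here is a minimal sketch (not the authors’ code, and much simpler than their pipeline) of the kind of first step such methods rely on: classical multidimensional scaling (MDS), which places faces as points so that distances between points approximate the dissimilarities participants reported. The toy dissimilarity matrix below is invented purely for illustration.

```python
# Illustrative sketch only: embedding pairwise dissimilarity judgments
# into a low-dimensional "face space" with classical MDS. The actual
# study used a richer pipeline ending in StyleGAN2 reconstruction.
import numpy as np

def mds_embed(dissimilarity, n_dims=2):
    """Classical MDS: coordinates whose distances approximate the input."""
    d = np.asarray(dissimilarity, dtype=float)
    n = d.shape[0]
    # Double-centre the squared dissimilarities to get the Gram matrix
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d ** 2) @ j
    # Keep the top eigen-dimensions (clipping tiny negative eigenvalues)
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_dims]
    vals = np.clip(vals[order], 0, None)
    return vecs[:, order] * np.sqrt(vals)

# Toy data: faces 0 and 1 judged similar, face 2 judged very different
d = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 4.0],
              [4.0, 4.0, 0.0]])
coords = mds_embed(d, n_dims=2)  # one 2-D point per face
```

In a pipeline like the one described here, coordinates of this kind would then be mapped into a generative model’s latent space to synthesize a face containing only the information the judgments captured – any dimension participants never used simply never makes it into the embedding.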

How’d we get here? Figure 1 from Shoura, Walther, and Nestor, showing their entire process.

Now, merely reconstructing photorealistic faces that happened to use the same information that human observers did would be interesting as a proof of concept, but our story doesn’t end there. Shoura and colleagues followed up by showing two things: the reconstructed faces were indistinguishable from the real faces they had started with, and, because they were missing some of the information the real faces had – information the behavioural data showed participants didn’t use – the reconstructed faces led to an Other Race Effect of their own. Think of them as “knockout” faces – they’re missing key information.

What can we learn from “knockout” faces?

For human observers – you know, us – we think the Other Race Effect probably comes about from a lack of visual experience. We just don’t see enough faces from other races, and so we tend to be worse at telling people apart. This is a well-known effect in machine learning as well, where facial recognition systems trained on unbalanced data (for example, with more White than South Asian faces in the training set) will show an Other Race Effect for the race they’re undertrained on. These two facts pose a bit of a problem for this work – as we’ve discussed from the beginning, humans are subject to the Other Race Effect, and many machine learning algorithms have the same issue. So, how can you test whether the “knockout” faces are missing information in a way that leads to the Other Race Effect? You compare them to the faces you started with and figure out whether they’re similar – and they were.

Can you tell what’s missing? You might think the images on the left and right columns in (a) and (b) look a little different, but I’d bet you can’t say what’s missing!

So, what does this give us? The approach described here by Shoura and colleagues gives us a novel way to study what information we use when trying to recognize faces, what information we discard and what the consequences are, all while not asking participants to tell us things they can’t. This new method, in fact, suggests that the information we discard has other effects – an unexpected finding from these reconstructed “knockout” faces is that they appear younger than they should!

Facing the Other Race Effect

The Other Race Effect isn’t a case of any of us trying to be worse at distinguishing people – it’s a consequence of the information we’re extracting from the faces we see, and the experience we have doing so. This work not only gives us a new way to start understanding the mechanisms of the Other Race Effect, but also points us in a direction that might help us help ourselves to overcome it in the future – knowing what we’re missing is a great start!

Featured Psychonomic Society article

Shoura, M., Walther, D. B., & Nestor, A. (2025). Unraveling other-race face perception with GAN-based image reconstruction. Behavior Research Methods, 57(4), 1-14. https://doi.org/10.3758/s13428-025-02636-z

Author

Benjamin Wolfe is an Assistant Professor in the Department of Psychology at the University of Toronto, Mississauga. His research sits at the intersection of applied and basic vision science, including questions of visual perception in driving, improving readability and extending our understanding of visual perception in real-world settings.

The Psychonomic Society (Society) is providing information in the Featured Content section of its website as a benefit and service in furtherance of the Society’s nonprofit and tax-exempt status. The Society does not exert editorial control over such materials, and any opinions expressed in the Featured Content articles are solely those of the individual authors and do not necessarily reflect the opinions or policies of the Society. The Society does not guarantee the accuracy of the content contained in the Featured Content portion of the website and specifically disclaims any and all liability for any claims or damages that result from reliance on such content by third parties.
