Many of us have things that we look forward to when the weather gets cooler – the usual apple picking, pumpkin carving, admiring the fall leaves, and (how could I forget) the Psychonomic Society Annual Meeting. One thing that I don’t particularly look forward to is the dense fog that rolls in around this time of year. While it’s great for setting a spooky mood for Halloween, it’s terrifying if you’re a driver! Where I live in Ontario, the fog can give off some pretty serious Silent Hill vibes. The picture that I took last fall speaks for itself.
Driving under these conditions is definitely a challenge, especially if you’re taking an unfamiliar route. Looks can be deceiving. For example, the next right turn that I need to take could be a street, or it could be an entrance to a parking lot. The sign in the distance could be a detour sign, or something else entirely. To navigate these conditions, I might rely not just on what I’m seeing, but also on what I remember about the neighborhood, so that I don’t accidentally pull into the wrong driveway. For example, I might remember that the turn I want to take is just after the railroad tracks. Do I base my decisions off of what I see, or what I remember?
This fundamental question – how we balance sensory input with our memory – is exactly what Aaron Bornstein*, Mariam Aly*, Samuel Feng, Nicholas Turk-Browne, Kenneth Norman, and Jonathan Cohen* (*pictured below) investigated in a recent paper in the Psychonomic Society journal Cognitive, Affective, & Behavioral Neuroscience.
In their study, the authors propose that our decisions reflect a combination of memory and sensory evidence. This makes a lot of intuitive sense. When we’re on a familiar route, we have some pretty strong expectations about where landmarks are going to be, so we can rely more on our memory to know where we’re going, even when the fog is dense. But let’s say we’re driving someplace we don’t visit often. Our memory may be unreliable, so we might base our judgments more on what we see. In other words, we should weight each source of evidence by how reliable it is, continuously updating our expectations as new information comes in.
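To make that intuition a bit more concrete, here is a minimal sketch of how an ideal observer might weight a memory-based expectation against noisy sensory evidence. This is only an illustration of the general idea, not the authors’ actual model, and all of the numbers (the cue probabilities and the noise level) are made up for the example.

```python
import numpy as np

def posterior_scene(prior_scene, sensory_sample, sensory_noise_sd):
    """Combine a memory-based prior with one noisy sensory observation.

    This is a toy illustration, not the model from the paper.
    prior_scene: probability (from memory) that the upcoming image is a scene
    sensory_sample: noisy evidence; positive values favor "scene", negative favor "face"
    sensory_noise_sd: how noisy the sensory channel is (bigger = less reliable, like fog)
    """
    # Likelihood of the sample under each hypothesis (signal at +1 for scene, -1 for face)
    like_scene = np.exp(-(sensory_sample - 1) ** 2 / (2 * sensory_noise_sd ** 2))
    like_face = np.exp(-(sensory_sample + 1) ** 2 / (2 * sensory_noise_sd ** 2))
    # Bayes' rule: weight each likelihood by the memory-based expectation
    return prior_scene * like_scene / (prior_scene * like_scene + (1 - prior_scene) * like_face)

# A strong (80%) cue-scene association dominates when the senses are foggy...
print(posterior_scene(prior_scene=0.8, sensory_sample=-0.2, sensory_noise_sd=2.0))  # ~0.78
# ...but clear sensory evidence dominates when the association is weaker (60%).
print(posterior_scene(prior_scene=0.6, sensory_sample=-0.8, sensory_noise_sd=0.5))  # ~0.002
```

The point of the sketch is simply that the same rule produces both behaviors: when the sensory input is noisy, the answer leans on the memory-based expectation, and when the input is clear or the memory is weak, the senses win out.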
To test this idea in the lab, the authors first had participants learn associations between cues (in this case, a set of fractal patterns) and images of faces and scenes that followed. For example, when participants saw the light-yellow and green pattern (in the lower-left corner of the image below), there was a 60% chance that the image that followed would be a picture of a path in the middle of a forest. Some associations were stronger than others. For example, a given pattern might mean that there was an 80% chance of seeing a particular face, much in the same way that you’re pretty confident that the turn you want to take comes right after the railroad tracks.
In the next phase of the study, the authors looked at whether participants relied on these memories when making decisions about what they saw. Participants were first shown a cue from the learning phase and, after a delay, a target image that they were asked to identify. To make the task difficult, the target was embedded in a series of flickering pictures, making it hard to tell what was actually shown (kind of like driving in dense fog).
The results showed that, when the association in memory between the cue and the target was very reliable, participants relied more on their memory and based their decisions on the cue. On the other hand, when the association in memory was not very reliable, they did not use this information and based their decisions more on what was shown in the flickering images.
Not only did participants’ decisions reflect a balance of memory and sensory input, but so did their brain activity. In a second experiment, the authors used functional magnetic resonance imaging (fMRI) to measure the extent to which participants were relying on their memory for each response. The results corresponded well to the behavioral findings: after the cue appeared, brain activity changed depending on the strength of the memory evidence.
Altogether, this research applies to many situations beyond driving in dense fog, wherever we have to make decisions that we’re not certain about. According to Bornstein, “These results could eventually help scientists understand when and how people see what they expect to, rather than what is really there.”
We’ve all had the experience of thinking we spotted an old friend in a crowded airport or shopping mall, only to realize it was actually a stranger. Our decision to wave might depend on what we remember about where that friend lives or the places they often visit. While there are many cases like this where we’re not quite sure what we’re seeing, our memory can help steer us in the right direction.
Featured Psychonomic Society article
Bornstein, A. M., Aly, M., Feng, S. F., Turk-Browne, N. B., Norman, K. A., & Cohen, J. D. (2023). Associative memory retrieval modulates upcoming perceptual decisions. Cognitive, Affective, & Behavioral Neuroscience, 1-21. https://doi.org/10.3758/s13415-023-01092-6