#AS50: The journey towards finding a precisely-right explanation for memory

Atkinson and Shiffrin’s “modal” model of memory is more than 50 years old and continues to inspire memory research. The continued reliance on the model is a testament to its strength and the strength of the work that informed it. There are plenty of robust and replicable findings in the published memory literature, and many of these benchmark findings point to a need to propose dissociations within a memory system. Atkinson and Shiffrin gave us two ways to do this: memories may belong to structurally distinct systems – sensory, short-term, and long-term systems – or memories may be acted upon by different control processes.

We might observe robust and consistent dissociations in what is remembered by using the restrictions unique to these proposed structures and processes to predict what, or how much, may be remembered under particular circumstances. Or the proposed dissociations, while perhaps “roughly right” (paraphrasing Carveth Read, Logic: Deductive and Inductive, Chapter 22), may not be “right enough” to encourage progress towards the precisely-right global model of memory we really want. Let’s look at some of the steps along this journey from Memory & Cognition’s special issue celebrating the modal model.

Perhaps the most vivid and well-known dissociation Atkinson and Shiffrin offer contrasts the “short-term” with the “long-term” memory system. The short-term system does not simply hold information passively. It is acknowledged to handle many functions, including implementing flexible control processes and outputting responses. Terming these systems “short-term” and “long-term” highlights at once the strengths and weaknesses of creating such a vivid and broad framework: it is clear enough to make anyone feel they understand what is meant, while evading the precise definitions needed for theory to progress further. Just to illustrate this, what do you consider “short-term” memory?

If you study it closely and do primary research on it, I bet your answer will be phrased in seconds; if you are a more casual observer of memory phenomena, I bet you think of it on the order of minutes at least, or maybe even days. Even experts in short-term memory would struggle to agree on a precise range of how many seconds it lasts, which is a problem for constructing unambiguous tests of short-term memory. Atkinson and Shiffrin were not naïve about this: they supposed that information currently in “short-term” memory was first activated in “long-term” memory. One might instead describe these different memories as “active” versus “dormant”. Each of these formulations describes a dissociation, but each leads to different predictions that could be compared.

In their contribution to the special issue, Baddeley, Hitch, and Allen consider the breadth and scope of the modal model a strength, and liken it to their multiple-component working memory model, which is similarly broad and vivid. The multiple-component working memory model can be seen as specifying detail within Atkinson and Shiffrin’s short-term system, which is given many diverse functions. Baddeley and colleagues describe a short-term system that includes distinct stores for verbal and visuospatial information, an attentional system capable of implementing control processes, and a structure for interfacing the contents of this system with long-term memory. Like Atkinson and Shiffrin’s model, this proposed structure for the short-term system arose from assumptions grounded in consistent, robust data patterns. Baddeley et al. argue that this broad approach to theorizing is most useful: in their words, “it is better to be roughly right than precisely wrong”. But surely our ultimate goal is to be precisely right, so we should not grow too complacent with our roughly-right frameworks. To make further progress, we need to try for greater specificity, which will sometimes mean questioning our “roughly right” assumptions and getting comfortable with being “precisely wrong” as we travel the road toward precisely right. Discovering which predictions are precisely wrong is incredibly useful for building knowledge: it tells us which directions to avoid more clearly than a vague instantiation does.

Some papers in the special issue of Memory & Cognition celebrating Atkinson and Shiffrin’s model revisit assumptions from these broad frameworks, and their work suggests that these “roughly right” descriptions would benefit from reconsideration. Poirier, Yearsley, Saint-Aubin, Fortin, Gallant, and Guitard replicate some of the benchmark work that led theorists to presume that verbal and spatial information are maintained in distinct short-term memory buffers. Participants remember less verbal information when carrying out a verbal secondary task than when carrying out a non-verbal secondary task. Similarly, participants tend to remember less spatial information with a spatial secondary task than with a verbal one. Though this robust finding fits with the idea that there are distinct short-term stores for verbal and spatial information, Poirier and colleagues found that they could account for the pattern quite well without assuming that the information is stored separately in distinct systems. Perhaps these clear empirical findings may be explained by assumptions other than those leading to a “roughly right” framework.

Ward and Tan describe results that may be better explained by distinguishing different retrieval control processes rather than structural storage systems. They compared which items (i.e., the first, the last, the second-to-last, etc.) participants recalled from a verbal list as a function of how long the list was and how many items participants were told to report from it. Participants were told how many items to recall only after the list was presented, so this manipulation could not have influenced how they encoded the items. They consistently found that participants were likely to prioritize the last item(s) when recalling only part of the list, but to start with the first item if instructed to recall everything.

The order in which participants recall information has traditionally been one source of evidence for the dissociation between “short-term” and “long-term” storage. Because the short-term store was believed to perform many functions, it makes sense to predict that participants would strategically unload the information held in it (which should be the items from the end of the list) whenever possible before retrieving information from the long-term store (i.e., items from earlier in the list), perhaps to avoid provoking conflict between the control and storage processes both attributed to the short-term store. However, Ward and Tan found that more items were recalled in total when participants initiated recall with the first list item than when they started from the end (i.e., first “emptying” the short-term store). Ward and Tan also showed very consistent patterns of recall across different memory tasks, which again suggests that the same information is available in each task: what differs is which processes are performed on it.

Broad frameworks like those of Atkinson and Shiffrin and those described by Baddeley and colleagues give us the lay of the land. These frameworks introduce and integrate benchmark findings, and initiate explanations of memory by proposing reasons behind the clearest, most consistent patterns observed in our data. Portraying this scope provides a communicable snapshot of what is plausible at a given point in time. That Atkinson and Shiffrin’s depiction of the memory system has persisted so long is a testament to its quality – they identified findings that were convincing enough to support such a broad framework for how information may be remembered, and explained them in a manner considered “roughly right” by many. However, improving how precisely we can explain memory functions is also important, and it is troubling that, decades on, we still lack clarity on whether key components of the “roughly right” instantiation are really precisely right. There was always more than one way these benchmarks could be explained, and clinging to the “roughly right” should not inhibit searching for the “precisely right”.

We should not be afraid of taking some “precisely wrong” turns on this journey – “roughly right” was not where we intended to end up.
