When HOOK lets you remember the voice of BOOK: generation effects for context

Get ready to think of some antonyms. Ready? Now fill in the blanks: HOT-C___, SHORT-T___, and LEFT-R____.

Decades of memory research have converged on the strong conclusion that your memory for cold, tall, and right will be better after you generate them in response to the antonym cues than if you had merely read those words. This phenomenon has been aptly named the “generation effect” and it is the reason why I only make a partial set of PowerPoint slides available to students ahead of my lectures—filling in the missing bits during the lecture might improve their memory for the material.

Although the basic generation effect is not in any doubt, there is some controversy about its boundary conditions. One of those controversies concerns the effect of generation on contextual information. Several studies have found that generating words reduces memory for surface details, such as the typeface or the color of ink in which the cue was presented.

One explanation for this reduced memory for contextual details invokes the kind of processing that is performed at encoding: antonym generation focuses on the meaning of the words, thereby directing processing away from the stimuli’s surface details. In support of this explanation, rhyme generation (e.g., BALL-T___ or LIGHT-R____) also has negative effects on memory for visual features, whereas visually-oriented generation (e.g., rearrange LALT or TRIGH) does not impair context memory.

In a nutshell, on this account, if the processing during generation overlaps with the type of information being tested later, memory benefits. When the processing does not overlap, memory is impaired.

One limitation of existing research, however, is that it has only focused on the potentially negative effects of generation on memory for contextual details. A recent article in the Psychonomic Bulletin & Review asked whether this can be turned around: if overlap of processing is crucial, then surely there must be a positive effect of generation on contextual memory if the generation at study involves more extensive processing of the relevant context features than does reading?

Researchers Amy Overman, Alison Richard, and Joseph Stephens proposed that a rhyme-generation task, for example, should facilitate contextual memory if the tested context is auditory rather than visual.

Overman and colleagues presented participants with two study-test sequences. Each sequence involved auditory and visual presentation of the material followed by a visual recognition test, and the type of generation task alternated between sequences: in one, participants generated targets from rhyme cues; in the other, from antonym cues. Each sequence also included a number of pairs that were presented intact, with no generation required.

To illustrate, consider the antonym-generation sequence. Participants would view each study pair, randomly presented either for generation (e.g., BEFORE-A____) or reading (e.g., BEFORE-AFTER). Participants’ task was to type the target word (“AFTER”) irrespective of whether or not it was presented in full. Once the target had been typed, with the stimuli still visible, the word pair would be read aloud in a male or female voice. Participants were instructed to remember the gender of the voice, as well as the identity of the target.

In the rhyme-generation sequence, the procedure was virtually identical except that the pairs were of the type HOOK-B____ (with HOOK-BOOK being shown when no generation was required).

The recognition test was the same for both sequences and involved visual presentation of the probe items, which comprised the studied words and an equal number of new words in random order. For each probe, participants were instructed to judge whether the word had been studied as a target word, and if so, to identify the gender of voice that had spoken it at study.

Turning to the results, let’s first consider recognition performance. To measure memory performance, the hits (responding “old” to studied items) and false alarms (responding “old” to foils not actually seen at study) were combined into d’ scores. (A video-based tutorial on d’ can be found here.) The figure below shows the results:

Figure 1a in the featured article.
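(As an aside for readers who want to compute d′ themselves: it is the z-transformed hit rate minus the z-transformed false-alarm rate. Here is a minimal sketch in Python; the log-linear correction for rates of exactly 0 or 1 is my assumption for illustration, since the article does not describe how extreme rates were handled.)

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction (add 0.5 to counts, 1 to totals) keeps the
    # z-transform finite when a rate would otherwise be exactly 0 or 1.
    # This particular correction is an assumption, not taken from the article.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Example: 45/50 hits and 10/50 false alarms
print(round(d_prime(45, 5, 10, 40), 2))  # prints 2.06
```

A d′ of 0 means hits and false alarms are equally likely (no sensitivity); larger values mean better discrimination between studied and new items.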

It is clear from the figure that recognition was better after generating than after merely reading the word pairs. Although reassuring, this replication is hardly surprising given the strength of the evidence for the generation effect.

The figure also shows that generating antonyms is slightly more effective than generating rhymes: this is also unsurprising given that memory is known to benefit particularly from meaning-based (semantic) processing at study.

Now let’s turn to the most interesting aspect of the results, namely the memory for contextual features. The primary measure of context memory was the accuracy with which the gender of voice was identified for old items at test. Those responses were also converted into d’ scores, which are shown in the figure below:

Figure 1b in the featured article.

Unlike for recognition, there was no main effect of generation. Likewise, the type of generation task had no effect on context memory. Crucially, however, there was a significant interaction between the two variables, which was driven by a positive effect of generation on context memory in the rhyming task. That is, exactly as expected by Overman and colleagues, rhyme generation yielded a positive effect on auditory context memory, compared to a condition in which the material was read only. Moreover, in replication of earlier results, antonym generation was not beneficial for context memory.

Overman and colleagues conclude that “the pattern of findings reinforces the view that generation effects are a product of the processing differences between generate and non-generate conditions.” The data thus support the processing account of the generation effect. Moreover, the data provide evidence against an alternative view, namely that generation induces a trade-off between item and context memory. On the trade-off view, enhanced recognition should be accompanied by decreased context memory. That is not what was observed: in the rhyme-generation task, memory was facilitated for both recognition and context.

A final intriguing aspect of this study is that the auditory cues for gender were provided after the participants had already generated (or read) the target word. It follows that rhyme generation must have enhanced the encoding of subsequent auditory context. Overman and colleagues suggest that this may have represented a shift in attention towards phonological processing during generation, which then transferred to the encoding of gender. The details of this process remain to be worked out.

Article focused on in this post:

Overman, A. A., Richard, A. G., & Stephens, J. D. (2016). A positive generation effect on memory for auditory context. Psychonomic Bulletin & Review. DOI: 10.3758/s13423-016-1169-4.

Author

  • Stephan Lewandowsky

    Stephan Lewandowsky's research examines memory, decision making, and knowledge structures, with a particular emphasis on how people update information in memory. He has also contributed nearly 50 opinion pieces to the global media on issues related to climate change "skepticism" and the coverage of science in the media.


The Psychonomic Society (Society) is providing information in the Featured Content section of its website as a benefit and service in furtherance of the Society’s nonprofit and tax-exempt status. The Society does not exert editorial control over such materials, and any opinions expressed in the Featured Content articles are solely those of the individual authors and do not necessarily reflect the opinions or policies of the Society. The Society does not guarantee the accuracy of the content contained in the Featured Content portion of the website and specifically disclaims any and all liability for any claims or damages that result from reliance on such content by third parties.
