Imagine learning how to read and play music for the first time. It starts with a series of dots and lines thrown around on a piece of paper. Soon you learn how to interpret these so-called notes. You assign a letter name to each note, and then you figure out how to produce that note on your instrument. After a while, you may learn to recognize the sound of that note. Throughout this process, you are making associations across modalities—you relate information that is visual (the image of the note), textual (the letter notation), tactile (playing the note), and auditory (the sound of the note). And before long, you’re playing Beethoven’s Moonlight Sonata for your friends and family.
The same kinds of associations occur as we learn language. We see words on a page and link them to their sounds when someone reads them to us or when we "sound them out." Sometimes the words are accompanied by pictures, which help us comprehend the message.
We make these associations across modalities all the time. However, associations between visual and auditory cues do not come easily for everyone. Individuals with hearing loss may have particular difficulty connecting visual and auditory information. One device that helps improve auditory access is a cochlear implant (CI). The CI converts sound into electrical signals delivered to electrodes within the cochlea, which stimulate auditory nerve fibers. This stimulation can give people with severe sensorineural hearing loss access to sound.
Even with a CI in place, children who experienced hearing loss before learning to speak may have trouble understanding spoken communication. These children do not have the same opportunities to interact with speech sounds that children with typical hearing have, so the relationship between spoken words and their meanings is not established in the same way it is for other children. To address this, children with CIs often enroll in speech therapy or audiology services to practice auditory recognition and comprehension of spoken language, which helps them draw connections between sound and meaning.
Anderson Neves and colleagues (pictured below) at the National Institute of Science and Technology tested procedures for training children with CIs to strengthen auditory sentence comprehension. They published their study in the Psychonomic Society journal Learning and Behavior earlier this year.
In their study, they used the stimulus-equivalence paradigm to teach the meaning of spoken sentences. Think back to basic math principles, and you may recall the concept behind stimulus equivalence.
If A = B,
and B = C,
then A must also equal C.
If you are able to deduce that A = C (without being told), then you have learned that A, B, and C are all representative of the same thing.
An example of this task could use the following items:
- Spoken word “pen” (A)
- Written word “PEN” (B)
- Picture of a pen (C)
In an experiment, you may teach a participant that the spoken word and written word are the same (AB relation) and the spoken word and picture are the same (AC relation). If the participant can also indicate that the written word and picture are the same (BC relation), then they have established an equivalence class that involves all three items (ABC class).
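The logic behind an equivalence class can be sketched in code. Below is a minimal illustration (not the authors' procedure) of how directly trained relations, closed under symmetry and transitivity, yield the untrained BC relation; the stimulus labels reuse the "pen" example above and are purely illustrative.

```python
# Directly trained relations: A->B (spoken word -> written word)
# and A->C (spoken word -> picture).
trained = {
    ("spoken:pen", "written:PEN"),   # AB relation
    ("spoken:pen", "picture:pen"),   # AC relation
}

def derive_equivalences(trained):
    """Close the trained pairs under symmetry and transitivity,
    yielding every relation the equivalence class implies."""
    relations = set(trained)
    # Symmetry: if A = B, then B = A.
    relations |= {(b, a) for a, b in relations}
    # Transitivity: if A = B and B = C, then A = C (repeat until stable).
    changed = True
    while changed:
        new = {(a, d) for a, b in relations for c, d in relations
               if b == c and a != d}
        changed = not new <= relations
        relations |= new
    return relations

derived = derive_equivalences(trained)

# The untrained BC relation (written word -> picture) emerges:
print(("written:PEN", "picture:pen") in derived)  # True
```

A participant who passes the BC test has, in effect, performed this closure themselves: the experimenter never trained that relation directly.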
We generally assume that equivalence means the participant understands all versions of the stimulus. Children with CIs sometimes have trouble learning these equivalence classes when auditory stimuli are involved. In an initial study, this research team found that children with CIs failed to learn relationships between spoken words and abstract pictures. In another study using sentences, participants needed more than five exposures to the stimuli before learning the auditory-visual relationships. CI users have been studied extensively in these tasks, and the results repeatedly point to the need for more effective teaching methods for this population.
In their study recently published in Learning & Behavior, Neves and colleagues tested learning in three female CI users (ages 9 to 11) using a simple discrimination task. Participants learned the meanings of pseudo-sentences (in Portuguese) and abstract pictures. The researchers presented one of these stimuli alongside two versions that were flipped or inverted. If the participant selected the correctly oriented stimulus, they then saw a picture representing that stimulus and heard the pseudo-sentence dictated aloud.
Next, they tested the participants to see if they learned the equivalence of the pseudo-sentences and abstract pictures. Participants would see one stimulus and select the corresponding stimulus from the other set. See the image below for a schematic depiction of the experiment.
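The trial structure described above can be sketched as follows. This is a hypothetical illustration of a simple-discrimination trial with specific consequences, not the authors' implementation; the orientation labels, stimulus names, and consequence handling are all assumptions for the sake of the sketch.

```python
import random

ORIENTATIONS = ["upright", "flipped", "inverted"]

def make_trial(stimulus, rng):
    """Build a simple-discrimination trial: the correctly oriented
    stimulus plus two transformed versions, in random positions."""
    options = [(stimulus, orientation) for orientation in ORIENTATIONS]
    rng.shuffle(options)
    return options

def respond(options, choice_index, picture, sentence_audio):
    """If the upright stimulus is chosen, deliver the specific
    consequences: show the picture and play the dictated sentence."""
    stimulus, orientation = options[choice_index]
    if orientation == "upright":
        return {"correct": True, "show": picture, "play": sentence_audio}
    return {"correct": False}

rng = random.Random(0)
options = make_trial("pseudo-sentence-1", rng)
correct_index = [o for _, o in options].index("upright")
outcome = respond(options, correct_index, "picture-1", "audio-1")
print(outcome["correct"])  # True
```

The key design feature is that the consequences are stimulus-specific: a correct selection is followed by the matching picture and dictated sentence, which is what pairs the visual and auditory stimuli during training even though those relations are never trained directly.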
Over the course of four weeks of training, two out of the three participants developed equivalence classes between dictated pseudo-sentences, corresponding images, written pseudo-sentences, and abstract pictures. This suggests that they successfully comprehended the auditory presentation of sentences. The third participant had more difficulty making these connections, particularly relating to the abstract pictures, but the authors suspect she could have improved with more training exposure.
This work shows that CI users can improve auditory comprehension of sentences when auditory and visual conditional relations assist in learning. It demonstrates that simple discrimination training with a specific-consequences protocol may be viable for teaching sentence comprehension to CI users. These findings chart a promising path for future work to establish the evidence-based efficacy of this teaching method for this population.
Featured Psychonomic Society article
das Neves, A.J., Almeida-Verdu, A.C.M., Nascimento Silva, L.T., Mortari Moret, A.L., & de Souza, D.G. (2021). Auditory sentence comprehension in children with cochlear implants after simple visual discrimination training with specific auditory-visual consequences. Learning & Behavior, 49, 240–258. https://doi.org/10.3758/s13420-020-00435-4