Opening the black box within the black box: What the brain tells the brain

When we teach our first-year Psychology students about the “history of Psychology”, they usually get to see at least one slide that shows a “black box”. “This black box” – we tend to say – “is what Behaviorists like Watson and Skinner wanted to keep unopened, since all they were interested in was the relationship between a stimulus and the response it evokes.”

We go on by saying something like “[…] the main aim of Cognitive Psychologists, on the other hand, was — and still is — to go beyond mere stimulus-response mapping by opening up this black box to see what’s going on in our minds.” However we phrase it, the way we process information seems to be a core topic for most of us interested in the vast field of cognition. Cognitive neuroscience has taken up this topic by trying to measure “information” in the brain. But what exactly is the “information” that we are trying to measure? Like “attention”, the term “information” is rather ill-defined. With ever fancier methods being developed in the field of Cognitive Neuroscience, it might be time to pause for a second and reconsider what “information” really means and what we can do with it.

This was the aim of a recent paper by De-Wit, Alexander, Ekroll, and Wagemans published in the Psychonomic Bulletin & Review. Here they argue that the term “information” is often used as if it is clear what it actually means and that “if the formulation proposed by Shannon is applied to modern neuroimaging, then numerous results would be interpreted differently.”

So here is a brief reminder of what the mathematician Claude E. Shannon proposed in his influential 1948 paper “A Mathematical Theory of Communication”. Shannon’s article described the basic elements of communication (see the figure below): 1) an information source that produces a message, 2) a transmitter that operates on the message to create a signal which can be sent through a channel, 3) a channel, the medium over which the signal, carrying the information that composes the message, is sent (e.g. a pair of wires, a band of radio frequencies, a beam of light, etc.), 4) a receiver, which ordinarily performs the inverse operation of the transmitter, reconstructing the message from the signal, and finally 5) a destination, a person or “thing” for whom or which the message is intended.
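
To make these roles concrete, here is a minimal Python sketch of Shannon’s pipeline. The function names and the toy bit-flipping channel are our own illustration, not anything taken from Shannon or from De-Wit and colleagues: a message is encoded into bits by a transmitter, passed through a channel that flips each bit with a small probability, and reconstructed by a receiver for the destination.

```python
import random

def transmitter(message):
    """Encode a text message into a list of bits (the 'signal')."""
    return [int(b) for ch in message for b in format(ord(ch), "08b")]

def channel(signal, noise_level=0.0):
    """Pass the signal through a noisy channel that flips each bit with some probability."""
    return [bit ^ 1 if random.random() < noise_level else bit for bit in signal]

def receiver(signal):
    """Invert the transmitter's operation: reconstruct the message from the bits."""
    chars = []
    for i in range(0, len(signal), 8):
        byte = signal[i:i + 8]
        chars.append(chr(int("".join(map(str, byte)), 2)))
    return "".join(chars)

# information source -> transmitter -> channel -> receiver -> destination
message = "stimulus"                                     # produced by the information source
received = receiver(channel(transmitter(message), noise_level=0.01))
print(received)                                          # what arrives at the destination
# with noise_level=0.0 the message is recovered exactly; with noise it may be corrupted
```

The point of the sketch is simply that the bits travelling over the channel only become a message once the receiver performs the decoding step; without that decoding convention, they are just physical states.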

As the authors write, “the focus of Shannon’s formulation was on the signal and noise of the channel, […] he made it clear that whatever was sent over the channel would need to be decoded by a receiver. Thus, in Shannon’s formulation, the quantification of information over a channel was contingent on the existence of a ‘receiver’.” A key notion of the paper by De-Wit and colleagues is therefore that most modern neuroscience studies seem to focus on how we, the experimenters, can interpret the activation we find in BOLD or EEG signals (experimenter-as-receiver), whereas the focus should be on whether the rest of the brain can actually interpret this activation (cortex-as-receiver). “It is only when physical responses can be shown to be used by the brain that we have positive evidence that a physical signal acts as information.”

Information is in that sense not an objective measure, but heavily depends on the subjective interpretation of a receiver, whoever or whatever that may be. The paper offers many examples to make this point. One of them refers to the issue of signal and noise. Take an encryption method where an algorithm encrypts the target signal in such a way that it looks like noise to any receiver who doesn’t have the encryption key. “Without the key, there is no immediate way to tell whether a message is signal or noise.”
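
As a toy illustration of that point (our own sketch, using a one-time-pad-style XOR cipher rather than any particular method discussed in the paper): to a receiver holding the key, the ciphertext below carries the full message; to any other receiver it is statistically indistinguishable from random noise.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of the data with the corresponding key byte (encrypts and decrypts)."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"the target signal"
key = secrets.token_bytes(len(message))      # random key, as long as the message

ciphertext = xor_cipher(message, key)
print(ciphertext)                            # without the key: looks like random bytes
print(xor_cipher(ciphertext, key))           # with the key: the original signal reappears
```

In Shannon’s terms, whether the same physical sequence counts as signal or noise depends entirely on what the receiver can do with it.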

This notion has consequences for how we analyze our data, for instance when turning EEG signals into ERP components. Averaging over trials to calculate ERPs is often justified as a means of getting rid of noise and thereby increasing the signal-to-noise ratio. But averaging can also cancel out what might in fact be important signals. To distinguish between signal and noise, we need a model that correctly describes the complex interactions between sender, transmitter, and receiver. The authors further argue that neuroscience should “find the ‘correct model of interaction’ for the case of the brain.”
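
A small simulation makes the cancellation concrete. The snippet below is our own toy example, not an analysis from the paper: a response that is phase-locked to stimulus onset survives trial averaging, whereas a 10 Hz oscillation whose phase varies from trial to trial is largely lost in the average, even though it is present on every single trial.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200
t = np.arange(0, 0.5, 0.001)                    # 500 ms epoch, 1 kHz sampling

def ten_hz_amplitude(x):
    """Amplitude of the 10 Hz component of a 500 ms signal (FFT bin 5)."""
    return 2 * np.abs(np.fft.rfft(x))[5] / x.size

trials = []
for _ in range(n_trials):
    evoked  = np.exp(-((t - 0.15) ** 2) / 0.001)                       # phase-locked response
    induced = np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))   # 10 Hz, random phase per trial
    trials.append(evoked + induced + 0.5 * rng.standard_normal(t.size))

erp = np.mean(trials, axis=0)

print("10 Hz amplitude, single trial:", round(ten_hz_amplitude(trials[0]), 2))  # about 1: clearly present
print("10 Hz amplitude, trial average:", round(ten_hz_amplitude(erp), 2))       # much smaller: largely averaged out
print("evoked peak in trial average:  ", round(erp.max(), 2))                   # about 1: survives averaging
```

Which of the two components one treats as “signal” and which as “noise” thus depends on the model of interaction one assumes, which is exactly the authors’ point.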

But Shannon’s theory has its own issues as well. It mainly targets the communication of information; the rather philosophical question of whether we as human beings constantly create information is not part of Shannon’s equation. Take perception, a highly creative process in which we, for example, create discrete objects out of a myriad of edges processed in visual cortex. According to De-Wit et al., “it is philosophically questionable whether that object can be said to exist in any objectively definable way in the actual physics of the world, as a pre-existing signal that was ‘sent’ by the transmitter.” Perception might therefore be a process of creating differences that make a difference, differences that exist only because of what the brain does.

So is trying to open the black box within the black box a lost cause? And if not, what can be done? The issue the authors raise is at least not easily solvable, nor is it clear that it always needs to be solved. There are surely instances where the experimenter being the receiver, rather than the brain itself, is all that we want and need. Correctly interpreting a spot on an MRI image as a brain tumor in order to find the right therapy does not depend on whether the brain has interpreted the tumor correctly. And for some research questions it might be perfectly sufficient to know that the brain was able to distinguish between the two types of inputs that we manipulated. But when the aim is to understand the mechanisms behind how the brain actually performs such tasks, it seems good to remind ourselves what it is that we are actually measuring. Instead of simply recording which parts of the brain become active under which circumstances, we should find a better way to understand the neural code that the brain itself uses as information.

How can this be done? The firing rate alone might not be sufficient. When looking at oscillatory rhythms, for instance, more information might be carried in the phase of those rhythms than in their frequency or amplitude. The seminal work of Singer and colleagues, which emphasizes neuronal synchrony as key to communication in the brain, seems to lead in the right direction. “Big-data” approaches in fMRI, or recent advances in combining different techniques — like MEG, DTI, and fMRI — in a sophisticated manner that makes use of whole-brain recordings, might provide a better way of figuring out how the brain actually communicates as a whole.
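
As a toy illustration of phase coding (again our own sketch, not an analysis from Singer and colleagues), the snippet below simulates two trials of the same 10 Hz rhythm that differ only in their phase: an amplitude readout cannot tell them apart, while a phase readout can.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 0.5, 0.001)                     # 500 ms of a 10 Hz rhythm, 1 kHz sampling

def trial(phase):
    """One noisy trial of a 10 Hz oscillation with a given phase."""
    return np.sin(2 * np.pi * 10 * t + phase) + 0.3 * rng.standard_normal(t.size)

def amplitude_and_phase(x):
    """Amplitude and phase of the 10 Hz component (FFT bin 5 for a 500 ms epoch)."""
    c = np.fft.rfft(x)[5]
    return 2 * np.abs(c) / x.size, np.angle(c)

# two hypothetical stimulus classes coded purely by the phase of the same rhythm
a_amp, a_phase = amplitude_and_phase(trial(phase=0.0))
b_amp, b_phase = amplitude_and_phase(trial(phase=np.pi))

print("amplitudes (nearly identical):", round(a_amp, 2), round(b_amp, 2))
print("phases (clearly different):   ", round(a_phase, 2), round(b_phase, 2))
```

A readout that only looks at amplitude, whether it is an experimenter’s decoder or downstream cortex, would conclude that the two conditions are identical.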

The authors simply hope that “this article will cause a shift in emphasis away from thinking about what we can decode from different neuroimaging techniques to thinking about whether those recordings of neural activity are differences that could be decoded by the rest of the brain.”

Article focused on in this post:

De-Wit, L., Alexander, D., Ekroll, V., & Wagemans, J. (2016). Is neuroimaging measuring information in the brain? Psychonomic Bulletin & Review. http://doi.org/10.3758/s13423-016-1002-0

