The goal of cognitive science is to understand how the mind works. It is a peculiar aspect of this quest that cognitive science often seems to be as much about computers and software as it is about the human mind: There is an intriguing parallel between developments in computer science and affiliated fields on the one hand, and the progress of research in cognitive science on the other.
To illustrate, when artificial intelligence (AI) was in its infancy and computer scientists were designing “blocks worlds” in which various cubes were stacked onto each other (for no immediately apparent reason other than to teach a computer how to handle simple constraints), cognitive scientists were designing “box worlds” in which cognition was represented as a flowchart of boxes connected by various arrows. Our recent digital event in honor of the 50th anniversary of the publication of Atkinson and Shiffrin’s landmark paper provides a glimpse of that era. If you want to know more, Google the string “cognition models 1970s” and look at the images. I cannot help but detect some striking similarities between the formalisms in those models and the symbolic processing underlying the blocks worlds.
When AI moved on from blocks worlds and their symbolic representations to the subsymbolic learning embodied in neural networks, cognitive science followed suit and discovered how much of human cognition could be elegantly explained by simulating ensembles of neurons. Although each individual neuron is quite stupid, ensembles of networked neurons could learn many things, from family trees to speech recognition.
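To give a flavour of how simple those individual units are, here is a minimal, purely illustrative Python sketch (not any specific model from the connectionist literature): a single artificial "neuron" that learns the logical OR function by nothing more than nudging its connection weights to reduce error.

```python
import math

def sigmoid(x):
    # The "squashing" function that turns a weighted sum into an output between 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

# Training data for logical OR: two binary inputs -> one binary target.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # connection weights
b = 0.0          # bias
lr = 0.5         # learning rate

for epoch in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        error = target - out
        # Gradient-descent update (cross-entropy loss): move each weight
        # in the direction that reduces the error on this example.
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

for (x1, x2), target in data:
    print((x1, x2), "->", round(sigmoid(w[0] * x1 + w[1] * x2 + b), 2), "target:", target)
```

Each unit by itself is trivial; the historically interesting discovery was that large networks of such units, trained in essentially this way, can pick up structure as rich as family trees or the sound patterns of speech.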
Later on, once statisticians had tamed Bayesian techniques by developing efficient sampling algorithms, cognitive scientists discovered that people often conform to the norms of Bayesian rational reasoning. This approach, often known as “Bayes in the Head”, has transformed our appreciation of how well people can be attuned to their environment.
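As a toy illustration of the kind of computation that the “Bayes in the Head” view attributes to people, consider a deliberately simplified Python sketch (the hypotheses and numbers below are made up for illustration, not drawn from any study): belief in two hypotheses is updated by Bayes’ rule after observing a single piece of evidence.

```python
# Toy Bayesian belief updating; priors and likelihoods are hypothetical numbers.
priors = {"cold": 0.3, "allergy": 0.7}        # P(hypothesis) before hearing a cough
likelihoods = {"cold": 0.9, "allergy": 0.2}   # P(cough | hypothesis)

# Bayes' rule: P(h | data) is proportional to P(data | h) * P(h).
unnormalised = {h: likelihoods[h] * priors[h] for h in priors}
evidence = sum(unnormalised.values())
posteriors = {h: p / evidence for h, p in unnormalised.items()}

print(posteriors)   # {'cold': ~0.66, 'allergy': ~0.34}
```

The claim of the Bayesian program in cognitive science is that, in many domains, people’s judgments track the posterior probabilities delivered by this kind of computation, even though people carry it out implicitly rather than with explicit numbers.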
So, when Silicon Graphics guru John Mashey coined the term “Big Data” in 1998, it was only a matter of time before cognitive science came to appreciate the power of this new approach to data.
What is “Big Data”?
Well, it’s big.
And it keeps getting bigger: According to Wikipedia, as of 2012 “big data” ranged from a few dozen terabytes to many zettabytes of data. A zettabyte is 1,000 × 1,000 × 1,000 × 1,000 × 1,000 × 1,000 × 1,000 bytes of data, that is, 10^21 bytes, or a billion terabytes.
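For concreteness, here is a trivial Python sketch that simply restates that arithmetic, stepping up the ladder of decimal (SI) units from kilobytes to zettabytes:

```python
# Each step up the ladder multiplies by 1,000 (decimal SI prefixes).
units = ["kilobyte", "megabyte", "gigabyte", "terabyte",
         "petabyte", "exabyte", "zettabyte"]
for power, name in enumerate(units, start=1):
    print(f"1 {name} = 1,000^{power} bytes = 10^{3 * power} bytes")

# A zettabyte is therefore a billion terabytes.
print(1000**7 // 1000**4)   # 1000000000
```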
But it is not just size that matters: “Big Data” is also characterized by new techniques and technologies for integrating diverse forms of data, which can reveal insights unobtainable by conventional statistical means.
Here are some success stories of the power emerging from the Big Data approach:
If you watched the video, you may wonder whether such “successes” should always count as successes, or might be better understood as warning signals that much of our privacy can be compromised by big data analyses. Unsurprisingly, big data is not without its critics.
For cognitive psychology, however, big data provides another analytic window into the human mind that can yield surprising and fruitful insights. This recognition gave rise to a recent special issue of the Psychonomic Society’s journal Behavior Research Methods, which forms the basis of this digital event.
From tomorrow (9 July 2019) onward, we will devote nearly two weeks to a discussion of the role of big data in psychology. The following posts, listed in the likely order of their publication, are expected to contribute to the event:
Tuesday. Guest editors Gary Lupyan and Rob Goldstone will provide an introduction and highlight their motivations for the special issue.
Wednesday. Todd Gureckis and Tom Griffiths will argue that the next wave of innovation within the psychological sciences should not only be “big data” but “big experiments.”
Thursday. Molly Lewis will explore the fact that negative messages (e.g., “lol you’re lying!”) lead to about three times more online chat responses than positive messages (e.g., “Ok, then great work!”).
Friday. Tim Mullett will draw attention to the fact that while big data can provide validation for laboratory work, it can also call that work into question when laboratory findings do not hold up on a larger scale.
And there will be more during the following week, from Monday 15 July onward:
Monday. Wayne Gray will suggest that big data can be better than gold. Whereas gold can only be mined once, the utility of big data is limited only by the methods and research questions we bring to the problem.
Tuesday. Alexandra Paxton will provide her own perspective (to be confirmed).
Wednesday. Padraic Monaghan will perform an archaeological dig into the mind by telling us what we were thinking hundreds of years ago.
Thursday. Rick Dale, one of the authors who contributed to the special issue, will add his own perspective on the posts in this digital event, thereby further interlinking the event with the published articles.
Much to look forward to—please tune in for the next two weeks of continuous discussion.