Without an effective treatment or vaccine, social measures remain at the heart of the world’s response to the COVID-19 pandemic. Accordingly, behaviour change remains one of the top three scientific priorities for the coming months, according to The Lancet, and the behavioural sciences are implicated throughout the complex task of bringing societies out of lockdown. Providing a suitable evidence base for these high-stakes policy decisions means drawing together research across sub-fields and disciplines that are presently, at best, loosely interconnected; formulating and conducting new research; distilling findings into formats digestible by policymakers, journalists, and the public; and providing expert guidance to decision-making bodies.
What feels like years ago, we wrote a paper to prompt debate on how the behavioural sciences could reconfigure to rise to the challenge by finding new ways of knowledge creation, integration, and dissemination.
A new model of “proper science without the drag” is needed: one that accelerates knowledge production without sacrificing quality. Constructive, critical input at all stages of the research process, from study idea, through design, to data analysis, is the obvious way to improve research quality while cutting down on time. The pressing need for such input has already become apparent: early voices warned about the adverse impact of fast research under pressure, and this can now be seen in poor-quality studies, needless duplication of effort, and irresponsible amplification of problematic results through media pick-up.
Welcome or not, pre-prints have become the crisis norm. At the time of writing, there are over 200 COVID-19-themed research articles on PsyArXiv alone. Collaborative, online, alternative review models, under discussion for several years, are becoming a necessity because these pre-prints are already “out there” as part of the emerging evidence base.
We must also adapt knowledge integration. Critical evaluation never stops with publication, but the ‘normal’ process of integrating new research with existing knowledge, e.g., through review articles, is slow. Twitter, in particular, is playing an ever-greater role in dissemination and in post-publication critique, but does not lend itself to knowledge integration. Aggregation should be a key feature of our response. An optimal response would involve a degree of synthesis well beyond the somewhat haphazard publication of review papers. And we need summaries suitable for policy-makers and journalists in accessible, wiki-style formats. Natural language processing and machine learning will feature in that aggregation and integration on a scale not seen before.
In all of this we need to manage expertise. Researchers, policy-makers, and journalists need to know who the experts are, but expertise is also changing as a result of the crisis as people take up new topics of research. And we need to bring together scientists from different disciplines, and even subfields within a discipline, that address similar questions with different approaches. Current ‘expert databases’ (e.g., from learned societies) are too exclusive, static, and focussed on past achievements rather than present research.
We believe these challenges can only be met by a broad, community-based infrastructure that draws together researchers across career stages, institutions, and the world’s regions. This would make available all of our resources, instead of drawing on just an overstretched, expert few, providing resilience through redundancy as well as epistemic diversity. There are sound theoretical reasons why diversity is beneficial to outcomes, and the crisis is already replete with high-profile opinions and recommendations that were misguided or unrealistic in ways readily apparent to others with different, complementary perspectives or expertise.
A transparent, digital, community-based forum for scientific exchange can solve the key challenges just set out: stimulating collaboration and knowledge exchange; boosting the quality of new studies through early feedback; providing rapid post-publication evaluation, critique and integration with extant knowledge; supporting the semi-automated construction of a large-scale knowledge-base; providing an interface for journalists, policy-makers, and citizens seeking to pose questions; and providing a dynamically evolving database of experts, accessible both to other scientists, and to policy-makers and journalists.
Two months ago, we started to put these ideas into practice with an integrated environment comprising a Twitter account, three related reddit discussion boards, and a project based on web.hypothes.is for annotating, linking, and making electronically searchable the material emerging across these channels (see www.scibeh.org). Two of the subreddits, r/BehSciResearch and r/BehSciMeta, allow only scientists to post. r/BehSciResearch is for COVID-19-relevant research from, or of relevance to, the behavioural sciences; it features research ideas, study designs, preprint discussions, evaluation of published work, information on past work and its relevance now, the move from research to policy, and methods and tools. The second, r/BehSciMeta, is for discussion of how to adapt the scientific process itself. Finally, a third subreddit, r/BehSciAsk, is open to all and allows anyone to pose a question to behavioural scientists.
There is interesting discussion material there. As detailed in two of the other pieces in this digital event, the environment has successfully fostered new research and research collaboration, including a global collaboration involving data collection in 7 countries with more than 15,000 participants to date. It not only prompted the dynamic database project but remains an integral part of that database, because reddit posts and comments are automatically collected and passed on for annotation. r/BehSciAsk, finally, has been used by academic researchers, a think tank, and journalists.
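The automated collection step just described can be sketched roughly as follows. This is an illustration only, assuming reddit’s public listing JSON format; the function names and the annotation hand-off are hypothetical, not the project’s actual pipeline:

```python
import json
from urllib.request import Request, urlopen  # used only in the commented fetch below

def extract_posts(listing):
    """Flatten a reddit listing (as returned by, e.g.,
    https://www.reddit.com/r/BehSciResearch/new.json) into simple
    records that could be handed on to an annotation service."""
    records = []
    for child in listing["data"]["children"]:
        post = child["data"]
        records.append({
            "id": post["id"],
            "title": post["title"],
            "author": post.get("author", "[deleted]"),
            "url": "https://www.reddit.com" + post["permalink"],
        })
    return records

# A live fetch would look roughly like this (reddit requires a User-Agent):
# req = Request("https://www.reddit.com/r/BehSciResearch/new.json?limit=25",
#               headers={"User-Agent": "scibeh-collector/0.1"})
# listing = json.load(urlopen(req))
# for record in extract_posts(listing):
#     send_to_annotation_service(record)   # hypothetical hand-off
```

Because the listing format is stable and public, a collector like this needs no special credentials for read-only access, which keeps the pipeline simple to run and to replicate.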
Because subreddits can be viewed by anyone, page views far exceed the number of community members. This makes the thought and effort going into posts available to anyone in the world with an internet connection, including scientists, policy-makers, or journalists in countries that do not have a wealth of behavioural science expertise.
Finally, unlike on other subreddits, contributors must edit their profiles to include their full name, affiliation, and main expertise, and, under “working on now”, an indication of emerging expertise from current projects. This not only allows one to examine the credentials of a source but also creates a simple, dynamic database of expertise.
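The profile fields just listed already define a minimal data model for such a database. A sketch, with hypothetical names and a naive substring search (the actual project does not necessarily implement it this way), might look like:

```python
from dataclasses import dataclass

@dataclass
class ExpertProfile:
    name: str
    affiliation: str
    expertise: list        # established areas of expertise
    working_on_now: list   # emerging expertise from current projects

def find_experts(profiles, topic):
    """Return profiles matching a topic in either established or
    emerging expertise, so that current projects count alongside
    past achievements."""
    topic = topic.lower()
    return [p for p in profiles
            if any(topic in t.lower() for t in p.expertise + p.working_on_now)]
```

The key design point is that searching over “working on now” alongside established expertise is what makes the database dynamic rather than a static record of past achievements.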
This provides proof of concept, but is still a long way from realising its full potential given the size of the target scientific community. What is still lacking is scale. Closer consideration of what might be limiting factors in uptake is thus essential.
First, grassroots initiatives require time to gain momentum, particularly those built bottom-up, without external funding or resources, by academics entirely inexperienced in such ventures. This is all the more true in a high-volume information environment: years of research show that “success” in terms of numbers (whether citations, sales, or ‘likes’) is massively determined by random factors in preferential attachment processes. However, having closely monitored the flood of COVID-19 activities in recent months, ranging from research trackers put together by individuals, to professional networks, publisher environments, and the activities of professional societies, we still see no other project that combines all of the key features (and potential benefits) of our digital community initiative.
One barrier to entry is negative perceptions of “social media”. This emerged early on in exchanges with other researchers whose participation we were trying to solicit: “I don’t do social media” was a frequently heard reply. There are age effects here, but the sentiment also came from younger researchers, including in platform-specific form (e.g., “I’m not on reddit” from a Twitter power user). Yet expediency forces the adoption of extant platforms: it is simply impossible to create and maintain a scalable community without significant technical and financial resources, resources that we presently do not have. This means platform reputations will be inherited. Overcoming this requires communicating the difference between what social media have traditionally been used for and our current activity: a digital community environment for scientific exchange. This will hopefully encourage social media ‘sceptics’ to overcome their resistance. At the same time, developing new tools and interfaces that feel more mainstream to users would be desirable, and seems possible given the possibility of machine interaction with platforms such as reddit or Twitter via their APIs. At root here is a deeper challenge for the pandemic crisis: scientific communities must build information environments that are suitable for science, with incentive structures and algorithmic underpinnings matched to the needs of knowledge generation, not mere popularity or advertising revenue. Given the spread, technical expertise, and reach of extant social media, it seems expedient to use social media as tools, but to build science-compatible structures from those tools.
Such ‘superstructures’ may also remedy seemingly simple design features that negatively impact usability. These matter because, at a time of crisis, even small hurdles are magnified. For example, reddit provides no facility for alerting community members to new posts beyond alerts to direct replies. Tools exist to do this (we presently make use of these internally), but setting this up in a user-friendly manner for all users requires technical expertise, time, and effort.
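The core of such an alerting tool is simple: remember which posts members have already been told about and flag only the rest. A minimal sketch of that logic, with hypothetical names (the polling and notification plumbing around it is omitted):

```python
def new_post_ids(seen_ids, fetched_ids):
    """Given the set of post ids already notified about and the ids
    from the latest fetch of a subreddit listing, return the fresh
    ids (in fetch order) and the updated seen set."""
    fresh = [pid for pid in fetched_ids if pid not in seen_ids]
    return fresh, seen_ids | set(fetched_ids)

# A notifier would poll the subreddit listing every few minutes and
# e-mail or message community members whenever `fresh` is non-empty.
```

The logic itself is trivial; as noted above, the real cost lies in packaging it so that non-technical community members can subscribe without any setup of their own.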
Lack of time itself is an obvious factor limiting engagement given the multiple pressures of working at home during a crisis, especially with caring responsibilities. Those limitations are real, but posting comments is an inherently scalable activity that can make a genuinely valuable contribution even with just minutes of time. And the total number of comments that can be found across, e.g., society pages (unfortunately often private), ResearchGate, and, of course, Twitter, is already substantial. These comments would be much more impactful if they were drawn together, possibly with the aid of machine tools.
Undoubtedly, there will also be limits to purely bottom-up engagement and our initiative (like others) is presently exploring the creation of a more structured process of preprint review.
Finally, barriers between scientific communities, and within the behavioural sciences in particular, are not easy to shift. As scientists, we could do better at moving beyond the theoretical, disciplinary, and methodological divides found in research trying to understand human behaviour. And we need to learn that reaching out to and including other researchers, sharing work, sharing resources, sharing knowledge and, with that, potentially our own ‘edge’, need not diminish our own academic success. More collaborative models of science can be found. If not now, then when? In fighting the pandemic, evidence-based policy is humanity’s best bet, and any contribution we can make can ultimately help ourselves, the people we most care about, and indeed the world.
So, please join! Please volunteer! Please help make it work!