A tale of two island nations: Lessons for crisis knowledge management

When confronted with the Black Death in the Middle Ages, leading authorities resorted to analysis of the position of the planets (Jupiter’s hostility against Mars features prominently) to explain the plague. Today, authorities rely mainly on science to explain and manage the COVID-19 pandemic. The success of this approach is borne out in countries such as Germany and New Zealand, both of which have managed to control the pandemic. In New Zealand in particular, the virus has been eradicated after causing 21 deaths altogether (4 fatalities per million people).

But there is more to the story than science being a better reality-tracking device than astrology.

In the United Kingdom, the number of deaths now exceeds 37,000 (544 fatalities per million), the second-highest toll in the world after the U.S., and considerably more than in Italy, Spain, and France, the hardest-hit countries in continental Europe.

The fact that the U.K.’s trajectory unfolded so differently from New Zealand’s presents a scientific conundrum. New Zealand and the U.K. are both island nations, a status that even in today’s connected world facilitates border control and quarantine measures. And just as in New Zealand, the U.K. government has been committed to “follow the science” in its policies. For example, on 16 March, when the toll stood at 34 deaths, British Transport Secretary Grant Shapps explained his government’s decision to reject a lockdown by declaring “we are just being entirely science-led, we’re not doing the things that are happening elsewhere just because it seems like a popularist [sic] thing to do”.

On the basis of currently available information, there is therefore no reason to believe that the U.K. government deliberately ignored scientific advice. It did not. It was advised by leading scientists in a group called SAGE (Scientific Advisory Group for Emergencies), some of whom appeared at press conferences alongside politicians.

There are many aspects to what has happened at the interface between science and policy in the U.K., and only the Mother of All Inquiries that some commentators have anticipated will reveal the full answer. Here, I focus on one issue that has become increasingly visible and that may set the U.K. apart from other countries: scientific knowledge was available early on that called into question “the science” being followed by the government.

When the government resisted calls for a lockdown in early March, several groups of scientists independently wrote to the government seeking clarification of the scientific advice underlying its policy to avoid a lockdown. One of the letters, by behavioral scientists, ultimately attracted nearly 700 signatures. (Full disclosure: the letter was organized by Ulrike Hahn and I was one of the early signatories.) This by itself does not prove anyone right or wrong, but it testifies to the fact that scientific concerns existed at the time and not just in hindsight. The letters were accompanied by vigorous discussion on social media and quite strong language in mainstream media—for example, a Harvard epidemiologist wrote in the Guardian that “When I heard about Britain’s ‘herd immunity’ coronavirus plan, I thought it was satire.”

Since then, things have evolved further in the direction of scientific dissent and plurality, and the U.K. now has an independent, self-appointed expert advisory group that mirrors the official SAGE. This independent body has attracted both praise and criticism. The editor of the Lancet lauded the body’s first meeting, noting that it

“set a new standard for science policy making. The openness of the process, vigour of discussion, and identification of issues so far barely discussed by politicians injected much-needed candour into public and political discussions.”

Other experts have been more critical, noting the danger that:

“in pursuing some ideal of scientific independence, political issues get disguised as technical matters. This risks handing decisions to scientific experts rather than elected politicians, hiding both decisions and politicians from public scrutiny.”

The complexities of developments and diversity of views cannot be fully explored in a brief post. However, the following statements seem to be a legitimate summary:

  • The U.K. government did not ignore scientific advice; it followed the advice of its official advisory group, SAGE.
  • Scientific knowledge that questioned this advice was available early on, and concerns were voiced at the time rather than only in hindsight.
  • Scientific advice has since become more plural, with an independent advisory body whose merits are themselves the subject of debate.

In the context of this digital event, there is one implication of those three statements that deserves exploration; namely, whether better knowledge management might, not just now but in future crises, put scientific advice on a broader and more encompassing foundation.

What is “crisis knowledge management” and how can it be improved?

Scientists have not been idle. The sheer number of papers being published on COVID-19 (Google Scholar returns 48,300 hits in 2020) has propelled many new initiatives and proposals for how to improve science in times of crisis.

This digital event of the Psychonomic Society has provided a range of perspectives on those efforts. I highlight a few of them and the lessons they may offer.

Draw in expertise from afar

Jonathon Crystal’s post highlights that major psychology societies have convened working groups, assembling known experts, to generate materials that can help support the policy process. As Jonathon’s example shows, a noteworthy aspect of this process is that people whose background lies elsewhere get drawn into policy-relevant research. In his words, “I wonder how someone who develops animal models of memory got pulled so far from my comfort zone and area of expertise”. It turns out that conditioning people to avoid touching their faces is not completely different from those animal models. It’s also surprisingly difficult (but achievable), as I have learned during the last few months.

Lesson: Basic research can suddenly become relevant in a crisis. Just because you are working on eyelid conditioning in Chlamydosaurus doesn’t mean you can enjoy obscurity forever.

Draw in expertise early on

Yasmina Okan related how she got involved, via a Reddit discussion, in a project that ultimately spanned the globe and collected responses from 15,000 participants to measure the public’s attitudes to privacy-encroaching tracking technologies. (Full disclosure: I am part of that team).

Lesson: Even at the planning stages of research there is opportunity for peer involvement—be it through critical commentary or collaboration.

Create knowledge quickly

Ulrike Hahn’s post underscored the need to conduct science “without the drag”. If the number of cases doubles every 3 days or less—as it did for COVID-19 early on—then a month of peer review translates into more than 1,000 times as many cases at the end (a quick calculation after the quote below makes this concrete). At the same time, we must ensure that science is not only quick but also robust and reliable. As Ulrike noted:

“Welcome or not, pre-prints have become the crisis norm … Collaborative, online, alternative review models, under discussion for several years, are becoming a necessity because these pre-prints are already “out there” as part of the emerging evidence-base.”
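To make that doubling arithmetic concrete, here is a minimal sketch in Python, assuming a constant three-day doubling time sustained over a 30-day review delay (a deliberate simplification for illustration, not an epidemiological model):

```python
# Back-of-the-envelope illustration: how much an epidemic grows
# while a paper sits in conventional peer review.
doubling_time_days = 3    # assumed constant doubling time, as in the early outbreak
review_delay_days = 30    # roughly a month of conventional peer review

doublings = review_delay_days / doubling_time_days   # 10 doublings
growth_factor = 2 ** doublings                       # 2**10 = 1024

print(f"Cases multiply roughly {growth_factor:.0f}-fold during the review delay.")
```

In other words, ten doublings in a month yield a factor of 1,024—which is where the “more than 1,000 times” comes from.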

Muhsin Yesilada’s post provided a snapshot of the more than 200 preprints that have been published in psychology relating to COVID-19. One very encouraging aspect of that work is that preprints are treated as stepping-stones en route to peer review and journal publication. His findings also support Steven Lindsay’s more qualified endorsement of preprints: most authors report having received reviewers’ comments, thus practicing what Lindsay was calling for.

Lesson: Preprints are here to stay. And that’s probably OK.

Create knowledge reliably even if it’s quick

Alex Holcombe’s post highlights how science can be “unruly” and fast, but nonetheless subject to peer commentary. As he put it:

“More publicly-available peer review is happening today than ever before – just not at our journals. While academic journals typically call on half a handful of hand-picked, often reluctant referees, social media is not as limiting, and lively expert discussions are flourishing at forums like Twitter, Pubpeer, and the commenting facility of preprint servers.”

This sounds great—and it is—but as Ulrike Hahn noted in her post, some colleagues resist engagement on social media, either because they “don’t do” social media altogether or because they only engage on their favoured platform but not others. A further problem was picked up by Steven Lindsay, who feared that:

“Even if open reviews are posted anonymously, they are written for a wide audience. I worry that this may influence reviewers toward playing to the crowd, grandstanding, self-enhancement, etc.”

Lesson: Social media may not be your thing, and there are worrying edges to public review, but there is little else available right now for rapid peer commentary on rapid science. Except that we can build a better infrastructure.

Knowledge creation and management requires proper infrastructure

Stefan Herzog’s post described a new infrastructure that’s specifically dedicated to crisis knowledge management and is summarized at the SciBeh.org website. (Full disclosure: I am part of that team). The crucial aspect of this emerging infrastructure is that it integrates across channels—from Twitter to Reddit to PsyArXiv—and offers the opportunity to link together discussions of the same paper. Here is an example of what you can do with this:

“After the post “Open policy processes for COVID-19” appeared on our subreddit r/BehSciMeta, the relevant links in that post got annotated. One of those links is https://stateup.co/how-governments-can-engage-digital-resources-to-manage-their-covid-19-response, which got annotated like this (click on the item to see all annotations).”

This sounds complex—and it is—but it’s worth clicking through those links to understand what is happening.

But there is even more power here than this example suggests: Ultimately, the idea is that if you encounter a preprint online, you may find it in our knowledge base together with the discussion of that paper on Reddit or Twitter.
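Purely as an illustration (this is my own toy sketch, not SciBeh’s actual data model), one can think of such a knowledge base as a set of records, each keying a preprint to the discussion threads about it across platforms; all URLs below are hypothetical placeholders:

```python
# Toy sketch of a cross-platform discussion record; the URLs are hypothetical
# and this is not how SciBeh actually implements its knowledge base.
from dataclasses import dataclass, field

@dataclass
class PaperRecord:
    preprint_url: str                                                 # e.g., a PsyArXiv link
    discussions: dict[str, list[str]] = field(default_factory=dict)   # platform -> thread URLs

    def add_discussion(self, platform: str, thread_url: str) -> None:
        """Attach another discussion thread to this paper."""
        self.discussions.setdefault(platform, []).append(thread_url)

# Hypothetical usage: one preprint, discussions gathered from several channels.
record = PaperRecord(preprint_url="https://psyarxiv.com/example")
record.add_discussion("reddit", "https://www.reddit.com/r/BehSciMeta/comments/example")
record.add_discussion("twitter", "https://twitter.com/example/status/1")
print(record.discussions)
```

Encountering the preprint anywhere would then be enough to surface every linked discussion, which is the promise described above.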

Watch this space for updates.

Lesson: We need tools to manage “science without the drag”. Some interesting proposals are beginning to emerge.

Knowledge creation is not always policy-ready

Patrick Forscher and colleagues show in their post that caution and humility must be exercised in our claims vis-à-vis policymakers. The ideal of producing knowledge that can simply be handed over, policy ready, will in many, if not most, cases be unattainable. But there are multiple responses to this limitation. For one, we must think carefully about the kinds of studies that we could and should be conducting. For example, all provisos and cautions notwithstanding, there are things we can offer with considerable confidence, provided they are based on large samples and are replicable. The example in Yasmina Okan’s post is a case in point.

In my experience (I spend a significant share of my time outside academia, in policy circles and other public fora), policymakers actually know perfectly well that any science comes with qualifications and uncertainties, and that most of it isn’t “policy ready.”

Forscher and colleagues add several suggestions similar to those made above: they see “promise in ‘team science’ approaches that pool the resources of multiple labs into one large study”, they invoke fast peer review, and they provide a decision tool known as Evidence Readiness Levels (ERLs). Users of the ERL follow a series of steps from theory to application and highlight areas of uncertainty at each level.

Lesson: Policymakers want the solution tomorrow. We may not have the solution by then and we must not pretend that we can answer their pleas reliably. However, we may have answers in a week if we do rapid science without the drag.

Conclusion

This post just scratches the surface of crisis knowledge management. Not everybody will agree with the lessons I have proposed. But then, the whole point of “science without the drag” is to have an ongoing conversation about the science as it emerges—and that includes the process of developing crisis knowledge management itself.

I would like to draw two conclusions.

First, what all the posts and lessons point to is that reliance on a small number of prominent experts may not lead to the best outcome, especially if their deliberations are (initially) kept from public scrutiny. Science is a communal effort and its self-correction requires the involvement of large communities. This in turn implies the need for transparency: policy-relevant advice must be subject to broad scrutiny, even from people outside a given domain of expertise.

Second, at the time of this writing, no vaccine or convincing treatment exists for COVID-19. The most powerful tools we have at our disposal to control the pandemic are behavioral interventions, from handwashing to social distancing. For the time being, therefore, behavioral science is our best defense against the pandemic. A paper just out in Science came to the same conclusion and offers some guidance for future research.

By pursuing “science without the drag”, with due caution and humility but also determination, we can help manage the pandemic better than by analyzing Jupiter’s hostility towards Mars.

Author

  • Stephan Lewandowsky

    Stephan Lewandowsky's research examines memory, decision making, and knowledge structures, with a particular emphasis on how people update information in memory. He has also contributed nearly 50 opinion pieces to the global media on issues related to climate change "skepticism" and the coverage of science in the media.

