One of the great transformations in psychological science over the past 10 years has been the embrace of scientific, methodological, and analytical transparency. Open science has changed the ways in which we design experiments, train students, and think about statistics. Although this movement has, at times, generated controversy, it has improved the way we do science.
The open science movement and SPARK developed over the same time frame, and in many ways they have complementary goals. The SPARK Society was founded in 2017 with the goal of increasing the representation of, and supporting, Black and Brown scholars in cognitive psychology. Psychology is a disproportionately White field, and cognitive psychology is probably the least diverse subfield in the discipline. Both SPARK and the open science movement have worked toward greater inclusivity and self-improvement through transparency and openness in how we do science.
However, in this post, we discuss some of the ways in which the goals of open science and inclusivity come into conflict. In some ways, open science has created unequal burdens for both scientists and research participants from marginalized groups. Below, we talk about some of these burdens and some potential solutions.
One of the central tenets of open science is transparency: we share data, methods, and preregistrations with the goal of making our experimental and theoretical choices clearer. The motivation for these ideas is hard to argue with. Transparency is generally a good thing. However, without an understanding of the social and cultural contexts in which science takes place, scientists from marginalized backgrounds may be unintentionally harmed by the practices that follow from these goals.
Why might a Black or Brown scholar be reluctant to share data? There is a very real risk that historically excluded scholars will be scooped before the data is published or will not be recognized for their work after the data is published. This isn’t paranoia. Historically, Black, Indigenous, and Latinx scholars have not been recognized within scientific communities, and there are structural barriers that have led to negative outcomes for historically excluded scholars. For example, Ginther et al. (2011) found that, despite equal qualifications, Black and Brown scholars are less likely to be funded by NIH than White scholars. Similar funding disparities also exist at the US National Science Foundation. More generally across the health sciences, people of color (and women) receive fewer citations than White men. There is a consistent pattern of overlooking, ignoring, and dismissing work by scientists of color, which disincentivizes data sharing.
Another issue relates to the types of research questions that Black and Brown scholars probe. Many historically excluded researchers are drawn toward research questions that engage marginalized populations and communities, even in the cognitive sciences. Members of these communities have historically had a fraught relationship with the behavioral sciences, marred by abuse and distrust. Thus, working within these communities is important but time-consuming; it involves building connections and working in and helping the community in ways that are unrelated to research. This is what it takes to do good community research, but it is not the type of work that typically leads to tenure or citations. Data collected in these projects pose an important set of ethical questions: Will these PIs receive credit if they share this type of data broadly, and what would that credit look like?
Historically excluded researchers often engage in practices that build trust with underrepresented populations. Principles of transparency, in this case, may inadvertently create structural barriers for historically excluded researchers, placing them in situations where they do substantially more work for the same, or less, “reward” than White researchers who re-analyze this data for their own purposes. Although it is true that there are costs associated with sharing data for all scientists, it is important to recognize that these costs (and associated risks) can disproportionately harm marginalized scholars given the structural barriers embedded in systems of academic publishing, promotion, and recognition.
Open science also raises ethical questions for researchers investigating behavior in underrepresented communities. De-identified data from some populations are easily re-identified. Given the fraught relationship between the behavioral sciences and underrepresented communities, doesn’t the scientific community have a responsibility to protect these participants? And given this context, how should we think about participants’ autonomy over how their own data are used? How many of us would be willing to include a question in consent forms asking participants: “Would you be willing to have your anonymous data shared with other researchers and publicly posted on online repositories?” We suspect that a large number of participants would decline, and that people from communities of color would be even less likely to say yes. According to Pew Research, Black Americans report less trust in science than White Americans (41% vs. 27%). What are the ethics of data sharing in this context? There are some solutions to this problem, such as multiple-stage consent or consenting to various levels of data sharing, but none of these is yet common practice in cognitive psychology.
As cognitive psychologists, we also have to consider whether participants would consent to data sharing if they can’t control how the data will be used. Latinx participants may be willing to participate in a study on visual working memory or attention, but they might be less willing if they knew their data would become part of a re-analysis project investigating links between race, ethnicity, and cognitive ability. On its face, open data sharing is helpful to science, but its dark side is that once the data leave the control of the participant or the primary researcher, they can be used in ways that harm communities. This might be a small price for the researcher to pay, but for the participant, it may be the difference between participating in a study or not.
Given these issues, it might be tempting for White readers of this blog post to think: Who cares? The benefits of open science outweigh the costs. We think the benefits are real but are worth making explicit. For one thing, if researchers ever hope to tackle the WEIRD problem, i.e., that much of the data in cognitive psychology comes from Western, educated, industrialized, rich, and democratic populations, we as a field need to find ways of reaching broader populations more inclusively. Thinking through the issues we discussed above is also critical for diversifying the community of cognitive psychologists. Making open science easier for marginalized groups will increase the chances that these students pursue careers in our area. Traditionally, questions in cognitive psychology have been examined through only a few lenses, and broadening participation can allow us to answer old questions in new ways. Diversifying the field is an equity and justice issue, but it is also key to creating better science. Finally, fairly or unfairly, open science has an image problem, which stems in part from not taking into account the social and professional impacts of open science practices.
So what do we do? Open science has been of enormous benefit to psychology, and there are ways to move forward that are equitable for everyone. One of the most important things we can do when reviewing and evaluating peers and research practices is to consider the social context in which methodological and experimental choices are made, and how those choices might impact the researcher. We should also get into the habit of treating the problems outlined above as structural, i.e., as products of the systems in which cognitive psychology operates, which means the solutions to these problems also need to be structural. Can we build incentives, review practices, and standards for promotion that encourage open science while doing less harm to underrepresented scientists? We think it is possible, but it will require reimagining how we approach open scientific practices.
1 Comment
Thank you for this thoughtful and informative post. Agreed that the open science movement would do well to take these concerns into account, and work with underrepresented scientists on ways of sharing data that respect concerns in communities of color about how their data are used, and that equitably reward scientists of color for their efforts to address difficult research questions.
My two cents: transparency in the choices made during data analysis is probably more important than data sharing itself, since the actual meaning of the p-value, and the validity of later replication attempts, hinge on these choices. This seems like an area in which the goals of the open science movement and behavioral scientists concerned with equity issues happily converge. Without efforts to rescue the integrity of the p-value and demonstrate to users of psychological research that the literature isn’t a house of cards built on shoddy statistics and unintentional p-hacking, researchers of any demographic background will find the value of their work debased (either justifiably, or by association).