One of the greatest (and, in my opinion, most overlooked) aspects of science is that it is a process inherently founded on and driven by community. Almost all ongoing research questions are derived from prior research, and the outputs of ongoing research will inevitably inspire new lines of work and discovery. Scientific discoveries are ultimately meant to be shared, and the entire scientific cycle of question, hypothesis, investigation, and discovery is driven by people’s ability to engage openly with research that has come before.
It turns out, however, that sharing the outputs of a research project in a fair and accessible way is sometimes easier said than done. For one thing, there isn’t yet a standard set of practices for deciding what kinds of materials to share, where to make them available, or how to make them easily findable. These kinds of decisions essentially determine the ways that a broader community of scientists (or curious members of the public!) can engage with past research, and they consequently have a huge impact on the kinds of future discoveries that can be made.

One reason why the broader scientific community hasn’t yet come to an agreed-upon set of practices for sharing research outputs is that the decisions surrounding what, where, and how to share materials can be highly specific to different fields. With this in mind, the authors of a recent Behavior Research Methods article, “A sharing practices review of the visual search and eye movements literature reveals recommendations for our field and others,” decided to use their own area of expertise, visual search experiments using eye-tracking methods[1], as a case study for examining what kinds of data sharing practices are currently being used and how these practices might be improved and standardized.
The team of researchers, consisting of Hayward J. Godwin, Haden Dewis, Peter T. Darch, Michael C. Hout, Daniel Ernst, Philippa Broadbent, Megan Papesh, and Jeremy M. Wolfe, began by conducting two types of searches. In what they referred to as a Literature Search, the team searched through the Web of Science database of academic literature to collect a list of published journal articles that used eye-tracking methods in visual search studies. In their other type of search, which they called a Repository Search, the authors collected a list of similar projects from the Open Science Framework repository, a website that many scientists use to upload research materials for later sharing.

For each result of the two searches, the team applied three tests to the materials shared from a given research project. First, they looked to see whether the materials would allow an outside researcher to reproduce the project’s statistical analyses: essentially, whether the project’s claims could be independently verified using the provided materials. Second, they examined whether there were files, media, and/or software that would allow for a direct recreation of the project’s experiment, which is useful when other researchers want to conduct the same experiment with a new group of participants. Finally, they checked whether a project’s shared materials included any raw, unprocessed data, which would allow for new kinds of analyses that were not performed during the original project. Beyond these tests, the authors also noted other details related to the overall accessibility of the files, such as whether materials were provided in open formats (accessible without proprietary software) and whether there were items meant to help outsiders navigate the project directories, such as guidebooks or metadata summarizing individual file contents.
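To make those three tests concrete, here is a minimal sketch of how a repository’s file listing might be screened for each kind of reuse. This is purely illustrative: the file-extension heuristics and the screen_repository function are my own assumptions, not the coding scheme the authors actually used.

```python
from pathlib import Path

# Hypothetical heuristics for which file types hint at each kind of reuse.
# These mappings are illustrative assumptions, not the authors' scheme.
ANALYSIS_HINTS = {".r", ".rmd", ".do", ".sps", ".ipynb"}  # analysis scripts
EXPERIMENT_HINTS = {".py", ".m", ".psyexp"}               # experiment code
RAW_DATA_HINTS = {".csv", ".tsv", ".asc", ".edf"}         # trial-level data
PROPRIETARY = {".xlsx", ".sav", ".edf"}  # EyeLink .edf is raw data, but it
                                         # needs proprietary tools to open

def screen_repository(root: str) -> dict:
    """Report which of the three reuse criteria a project folder appears to meet."""
    suffixes = {p.suffix.lower() for p in Path(root).rglob("*") if p.is_file()}
    return {
        "reproduce_analyses": bool(suffixes & ANALYSIS_HINTS),
        "recreate_experiment": bool(suffixes & EXPERIMENT_HINTS),
        "raw_data_available": bool(suffixes & RAW_DATA_HINTS),
        "open_formats_only": not (suffixes & PROPRIETARY),
    }

if __name__ == "__main__":
    print(screen_repository("."))  # run from inside a project folder
```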
The results of their examinations broadly suggest that when researchers in this field opt to share their materials and data with a wider community, they tend to do so mainly with the intent to help others reproduce (and therefore validate) the statistical analyses of the project. In contrast, based on how rarely researchers shared experimental code, raw data, or metadata about data files, there seemed to be much less of an investment in facilitating direct experimental replications or new analyses of existing data.

This is… a bit of a bummer! After all, replicating experiments and analyzing data from new angles are key to making science a communal and creative process. This is not to say that current researchers and their sharing practices are working against these ideals. Instead, it’s more of a clear sign that scientific communities need a better sense of shared expectations when it comes to deciding what, where, and how to share their research. In fact, it’s probably exactly this lack of a shared standard that keeps the open sharing of research materials from becoming more common.

Luckily, the authors of this featured article have offered a solution in the form of the Find It, Access It, Reuse It scorecard, a checklist/questionnaire designed to help researchers make their projects easier to find and their materials easier to reuse in flexible ways. The scorecard prompts researchers to provide basic project identification information, like the title, abstract, and DOI of an associated publication, while also encouraging them to verify other often-overlooked details, such as whether metadata file guides are included and whether web links to the repository require logins. The hope is that the scorecard will ultimately become standard enough to serve not just as a guide for making individual researchers’ work more discoverable but also as a way for future scientists to quickly identify how shared datasets can be used to make new discoveries.
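As a rough illustration of the kind of information the scorecard asks for, here is a hypothetical sketch of a machine-readable metadata guide that could sit at the top of a project repository. The field names and file entries are my own assumptions for illustration, not the scorecard’s actual items.

```python
import json

# A hypothetical metadata stub covering the kinds of identification and
# navigation details the scorecard encourages researchers to provide.
metadata = {
    "title": "Example visual search eye-tracking study",
    "abstract": "A one-paragraph summary of the project goes here.",
    "publication_doi": "10.xxxx/placeholder",  # hypothetical placeholder DOI
    "repository_requires_login": False,
    "files": [
        {
            "name": "raw_fixations.csv",
            "format": "CSV (open)",
            "description": "Trial-level fixation data, one row per fixation.",
        },
        {
            "name": "analysis.R",
            "format": "R script (open)",
            "description": "Reproduces the statistical analyses reported in the paper.",
        },
    ],
}

# Write the guide alongside the shared files so visitors find it first.
with open("METADATA.json", "w") as f:
    json.dump(metadata, f, indent=2)
```

Even a small stub like this would address two of the gaps the review identified: it summarizes individual file contents, and it flags whether those files are in open formats.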
It’s always an exciting time to be a scientist because science is inherently exciting. (I might be exaggerating a bit; science is at least sometimes exciting, depending on who you ask and at what point in their career.) One of the most exciting things about it is its built-in capacity to generate new lines of work that uncover even more discoveries. Sharing research materials and outputs is one of the best ways to encourage the communal aspects of the scientific process, and now, with the development of standards for sharing practices, like the Find It, Access It, Reuse It scorecard, I imagine that we’ll have a lot of new scientific research to get excited about in the coming years.
Featured Psychonomic Society article
Godwin, H. J., Dewis, H., Darch, P. T., et al. (2025). A sharing practices review of the visual search and eye movements literature reveals recommendations for our field and others. Behavior Research Methods, 57, 235. https://doi.org/10.3758/s13428-025-02759-3
[1] For those unfamiliar, the term “visual search” refers to a type of psychological experiment in which participants try to find a specific item on a display filled with other distractor items; think I-Spy or Where’s Waldo (or Wally, for my UK friends), but with less pizzazz. Visual search studies often rely on eye-tracking technology, in which a high-speed camera tracks a person’s fixations and eye movements so that researchers can identify exactly where participants are looking, and for how long, at any point in the experiment.