Quests
A quest is a journey or mission to some goal, usually fraught with obstacles, twists and turns, and hopefully, some epiphanies along the way.
The Quest for Statistical “Truth”
Although most psychologists and statisticians “know” statistical “truth” to be an oxymoron, as scientists, we strive to determine the “best” or “most probable” explanation based on the statistical probabilities produced by a research study, its measurements, its sample, and the magic of statistical analyses. The goal of each study is to minimize error and maximize the probability of detecting a meaningful effect or relationship, if one is present (yet another quest fraught with temptations and pitfalls). Yet over the course of psychological science, attempts to replicate previously published research findings have produced questionable results, a.k.a. the replication crisis in psychological science.
Multiple solutions have been offered to address the replication crisis, including framing it as a base rate fallacy, incorporating Bayesian statistics to address both statistical and scientific inferences, and conducting meta-analyses. A meta-analysis is a statistical tool in which the effect sizes of previously published and, ideally, unpublished research studies are aggregated into a single statistic that provides a more accurate estimate of the effect size across samples and methodologies. Although not all scientists agree that meta-analyses are a panacea, there does appear to be some “trust” in the conclusions when meta-analyses are done thoroughly and well.
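To make the aggregation step concrete, here is a minimal sketch of fixed-effect, inverse-variance pooling of effect sizes. The effect estimates and sampling variances below are hypothetical, and real meta-analyses often use more elaborate models (e.g., random-effects models):

```python
import math

def pool_fixed_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of study effect sizes.

    Each study is weighted by the inverse of its sampling variance,
    so more precise studies contribute more to the summary estimate.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)          # variance of the pooled estimate
    se = math.sqrt(pooled_var)
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # approximate 95% CI
    return pooled, se, ci

# Hypothetical standardized mean differences from three primary studies
effects = [0.42, 0.31, 0.55]
variances = [0.020, 0.015, 0.040]
pooled, se, ci = pool_fixed_effect(effects, variances)
```

Note that the calculation is only possible when each primary study reports both its effect size and its sampling variance (or enough information to derive it), which is exactly the gap the SEMI checklist targets.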
Avoiding Lotus Flowers, Cyclops, and Sirens
In an article published in the Psychonomic Society’s journal Behavior Research Methods, Belén Fernández-Castilla, Sameh Said-Metwaly, Rodrigo Kreitchmann, and Wim Van Den Noortgate provided a set of guidelines and a checklist, the SEMI, for authors of primary research studies to follow when publishing their findings. Guidelines on reporting quantitative research already exist:
- American Psychological Association (APA) guidelines for reporting quantitative research (Appelbaum et al., 2018)
- Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Page et al., 2020)
However, a comprehensive list of information needed for a quality meta-analysis has not been compiled to date. Fernández-Castilla and coauthors met this challenge and created a thorough set of steps to be followed by researchers interested in conducting a meta-analysis.
The Hitchhiker’s Guide to the Galaxy and Statistical “Truth”
Step one. Conduct a systematic literature review in which the topic of interest is searched and the resulting papers are screened against pre-determined inclusion criteria. Recommended search strategies include
- identifying critical keywords that can be streamlined by restricting the keyword search to the title or abstract
- conducting a backward search by utilizing primary study references
- examining the full text of a relevant paper
APA guidelines recommend that authors incorporate critical keywords within their titles or abstracts and clearly present the research objectives and outcomes within the relevant sections of their papers (i.e., introduction or discussion).
Step two. Code the literature selected for inclusion for specific information that will aid in conducting the meta-analysis. Fernández-Castilla and coauthors recommended several frameworks that can help identify critical, relevant information from a primary source: PICO, SPICE, and SPIDER. The main takeaway from these frameworks is that they help maintain power for the meta-analysis (i.e., retain as many primary studies from the search as possible) by capturing the sample (all relevant characteristics), the study conditions, the variables of interest (explanatory variables/factors and outcomes), and the evaluation tools, which depend on study design.
Step three. Calculate the index or effect size measurement. This step is where the details really matter. It is not enough to just report the effect size. Authors should report the specific formula used to obtain the effect size, along with information that can help with the calculation, including sampling variance or sample size. Fernández-Castilla and coauthors provided a useful table summarizing commonly used formulas for effect size and their relationships to specific study designs.
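As an illustration of why those details matter, consider the standardized mean difference (Cohen’s d) for two independent groups: both the effect size and its large-sample sampling variance can be computed only if the means, standard deviations, and per-group sample sizes are all reported. The numbers below are hypothetical, and the authors’ table should be consulted for the exact formula matching a given design:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference for two independent groups,
    using the pooled standard deviation."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / sp

def d_sampling_variance(d, n1, n2):
    """Large-sample approximation of the sampling variance of d.

    Note that it depends on the per-group sample sizes, which is
    why reporting only the total N is not enough for a meta-analyst.
    """
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

# Hypothetical summary statistics for two groups
d = cohens_d(mean1=10.2, mean2=9.1, sd1=2.0, sd2=2.2, n1=40, n2=45)
var_d = d_sampling_variance(d, n1=40, n2=45)
```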
Seeing the Future
Fernández-Castilla and coauthors provided their top recommendations of how primary study researchers can help meta-analysts. Beginning with the importance of open science, pre-registered studies, and publicly available datasets and code used to conduct analyses, the authors illustrated how these actions can remove barriers to achieving more accurate findings. (Too bad Odysseus didn’t have these opportunities!) Two additional but related recommendations were also provided:
- primary authors should report statistics on the whole sample both with and without excluded data, and
- primary authors should provide descriptive statistics on the relationships between all variables within the study, regardless of variable type (categorical or quantitative).
The final recommendation is to be aware of the possibility of unpublished negative findings. Whether by requesting such studies on relevant listservs or by encouraging journals to publish quality studies with “null” results more frequently, the authors offered some excellent suggestions (e.g., registered reports) while also reminding us that it is our ethical obligation to utilize the information provided by our participants.
Finding the Grail
The SEMI checklist below lists the 28 pieces of information from primary sources that would aid future researchers interested in conducting meta-analyses on a related topic. The checklist is divided into five sections (Title and abstract, Background, Methods, Results, Open science practices) and can be used alongside other checklists, such as PRISMA.
Item | Y | N | NA |
---|---|---|---|
Title and abstract | |||
1. The key concepts, constructs, and variables under investigation are clearly mentioned in the title and/or abstract. | |||
2. The abstract gives relevant details about the study objectives, methods, and results. | |||
Background | |||
3. Relevant literature (including reviews and meta-analyses) is summarized and clearly cited. | |||
Methods | |||
4. The sample size, including that of the entire sample and each subsample, is reported. The number of missing values is given for each variable, and the sample size used for each analysis is reported. In the case of longitudinal studies, the sample size at each time point is reported. | |||
5. Statistics describing participant characteristics (e.g., proportion identifying as men, mean age, proportion of sample by race/ethnicity), study context and procedures that may (substantially) influence the studied effects are reported. | |||
6. Other publications based on the same data, or a portion thereof, are clearly cited. | |||
7. There is a description of how each variable is operationally defined and measured. | |||
8. Details of how the measurement tools are administered and scored are provided, together with a measure of reliability on the current sample. | |||
9. Details of the type of study design (e.g., correlational, comparative, or experimental) are provided, possibly together with a bibliographic source for further details. | |||
10. Details of the study procedures are provided, including where, how, and when data are collected. | |||
11. There is a description of how data categories are defined or how continuous variables are categorized. When reporting data from a subsample, details on the subsample description and selection criteria are provided. | |||
12. Details of the data-analytic methods used are provided (e.g., statistical tests, model fitted, estimation procedure, software, options chosen, significance level, whether the test is two-sided or one-sided, degrees of freedom, how cluster data are handled if needed, and whether missing data imputation methods were used and which ones). | |||
13. A risk of bias assessment tool is consulted to ensure the inclusion of all methodological details required for evaluating the study’s risk of bias. | |||
Results | |||
14. For categorical variables, frequencies of all categories are reported for the final sample and relevant subgroups, after removing dropouts. When studying the association between categorical variables, a cross-tabulation is provided with disaggregated frequencies. | |||
15. For quantitative variables, means and standard deviations are provided for the whole sample and relevant subgroups. | |||
16. For nested data structures, information on the intraclass correlation coefficient, the between-clusters variance, and the pooled within-cluster variance is reported. | |||
17. The correlation matrix between all quantitative variables under investigation is reported. When missing data are imputed, correlations based on the original incomplete data are provided. | |||
18. For longitudinal studies, the timing of measurements and the correlations between subsequent measures are reported, including for any relevant subgroup. | |||
19. Test statistics and associated p-values (and degrees of freedom where relevant) are reported, including for negative findings. | |||
20. Effect sizes related to the research questions are presented along with (references to) the corresponding formulas used for their calculation. | |||
21. The results are reported in sufficient detail and clarity, following the description of the analyses in the methods section (e.g., following the same order). | |||
22. The results presented in the text align with those depicted in the tables and figures. | |||
23. Tables and figures are appropriately labelled, understandable and referred to in the text. | |||
Open science practices | |||
24. A statement indicating the availability and location of the raw study data (and, if applicable, of the protocol or registered report) is provided. | |||
25. If a protocol or registered report was developed before the investigation, it is clarified how the investigation deviates from the initial planning. | |||
26. A code book explaining the variables in the dataset is provided. | |||
27. Relevant codes/syntax that reproduce the analyses are provided. | |||
28. Additional information or materials that could enhance understanding of methods or results are included in appendices or supplementary materials. |
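Several of the Results items (e.g., items 15 and 17) ask primary authors to report descriptive statistics and a full correlation matrix. A minimal sketch of how that information might be computed, using hypothetical data and only Python’s standard library:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def correlation_matrix(variables):
    """Correlations between every pair of named variables (checklist item 17)."""
    names = list(variables)
    return {(a, b): pearson_r(variables[a], variables[b])
            for a in names for b in names}

# Hypothetical scores from one study's final sample
data = {
    "anxiety": [3, 5, 4, 6, 7, 5],
    "sleep":   [8, 6, 7, 5, 4, 5],
}
# Means and standard deviations for each variable (checklist item 15)
descriptives = {name: (mean(vals), stdev(vals)) for name, vals in data.items()}
corr = correlation_matrix(data)
```

Reporting these numbers in the paper itself (or in supplementary materials, per item 28) lets a future meta-analyst reconstruct effect sizes without contacting the authors.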