#psAmsterdam18: A retrospective on the meeting and expert opinions

The International Meeting of the Psychonomic Society in Amsterdam wrapped up on Saturday (12 May). The meeting was attended by around 700 delegates and featured keynote addresses by John Wixted and Dedre Gentner. Some photos of the meeting are available on the Society’s website.

The meeting also featured 7 symposia:

  • Tackling the Confidence Crisis with Statistical Scrutiny, Verifiable Credibility, and Radical Transparency
  • The Limits of Prediction in Language Processing
  • Shaping Attention Selection
  • Proactive Control: Mechanisms and Deficits
  • What Limits the Capacity of Working Memory? An ‘Adversarial’ Working Memory Symposium
  • Evidence and Scientific Knowledge in a “Post-Truth” World
  • Learning Words from Experience: The Emergence of Lexical Quality

The first symposium, on statistical and conceptual issues relating to replicability, was particularly noteworthy (at least for me) because Klaus Oberauer and I took the opportunity to conduct an expert survey among attendees to elicit their opinions about various factors that might affect the replicability of a study.

More than 100 respondents volunteered to take the 7-item survey, and because several participants expressed an interest in finding out the results, I present the data in the remainder of this post.

Each item involved a quasi-continuous scale with marked end points. The scale was a 14-cm-long horizontal line, and respondents indicated their opinions by placing a tick mark or cross along the scale. Responses were scored to a resolution of 0.5 cm (minimum 0, maximum 14). The histograms below show the distribution of responses for each item along that scale.
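The scoring step described above is easy to make precise in code. The sketch below is purely illustrative (the function name and the sample measurements are invented, not the survey's actual scoring script): it rounds a measured tick position to the nearest 0.5 cm, clamps it to the 0–14 cm scale, and tallies a histogram of the scored values.

```python
from collections import Counter

def score_response(tick_cm: float) -> float:
    """Round a measured tick position (in cm) to the survey's
    0.5 cm scoring resolution, clamped to the 0-14 cm scale."""
    scored = round(tick_cm * 2) / 2        # nearest 0.5 cm
    return min(max(scored, 0.0), 14.0)     # clamp to the scale end points

# Hypothetical measurements for illustration only (not the survey data)
raw = [6.93, 7.2, 13.8, 0.1, 7.0, 10.26]
scored = [score_response(x) for x in raw]  # [7.0, 7.0, 14.0, 0.0, 7.0, 10.5]
histogram = Counter(scored)                # counts per 0.5 cm bin
```

Counting responses per 0.5 cm bin in this way is what produces histograms like those shown below.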

How informative is a replication?

The first two items asked “which type of replication is most informative”, querying the nature of the replication and who conducted it. The responses for the first item are shown in the next figure, which also shows the labelled points on the scale (“A direct (exact) replication” as the left anchor; “both equal” at the midpoint; and “A conceptual replication” as the right anchor).

The modal opinion considered both types of replication to be equally informative, although a sizable segment of participants preferred the direct replication. This preference may reflect the fact that, unlike a direct replication, which seeks to recreate all critical elements of the original study (sample, stimuli, procedure, measures), a conceptual replication seeks to extend an original study and often involves a theoretically meaningful change to the method. In consequence, a conceptual replication can arguably be used as an “escape hatch” if the original effect does not replicate exactly but is, for example, detectable in a different measure.

The second item queried whether the source of a replication mattered—that is, would a replication be more informative (and instil greater confidence that an effect is “real”) if it was conducted by the original authors or by a different lab?

The data show that the vast majority of respondents, to varying extents, preferred a replication by a different lab over a replication by the same authors. The modal response was very much in favour of a replication by a different lab, although the second most common response was to be at or near the midpoint of the scale (reflecting indifference).

The overwhelming preference for a replication by different authors meshes well with research showing that replications in the same lab—especially if they are conceptual in nature—do not increase the replication success by independent authors.

How likely is a study to replicate?

The remaining 5 items addressed experts’ views on the likely success of a replication based on various attributes of a hypothetical initial study. The first two items queried the presumed effect of citations and media attention, respectively:

It appears that participants were mainly indifferent to those two variables: The modal response to both questions was at the “equal” point on the scale, suggesting that neither citations nor media attention is considered strongly predictive of replicability. Nonetheless, it must be noted that the mean for the media item, M = 6.0, was significantly below the point of indifference, t(97) = -3.51, p < .001, suggesting that overall there was a tendency to associate media coverage with a reduction in presumed replicability.
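A one-sample t test of this kind simply measures how far the sample mean falls from the scale midpoint (7 cm, the point of indifference) relative to the standard error. A minimal sketch using Python's standard library, with made-up illustrative responses rather than the actual survey data:

```python
import math
from statistics import mean, stdev

def one_sample_t(data, mu0=7.0):
    """One-sample t statistic against mu0 (here the scale
    midpoint, 7 cm on the 14 cm scale); returns (t, df)."""
    n = len(data)
    t = (mean(data) - mu0) / (stdev(data) / math.sqrt(n))
    return t, n - 1

# Hypothetical responses for illustration only (not the survey data)
responses = [5.5, 6.0, 7.0, 5.0, 6.5, 7.5, 6.0, 5.5]
t, df = one_sample_t(responses)  # negative t: mean below the midpoint
```

A negative t, as in the survey result reported above, indicates that responses fell on average below the point of indifference.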

The next item queried the effects of sample size, with the anchor points labelled “N=20” and “N>100” for the small and large samples, respectively.

Overall, participants were inclined to place more confidence in a larger sample, although around one fifth of respondents were indifferent about sample size.

The final two items queried the presumed effects of preregistration. We have discussed preregistration previously on this blog, and it forms an integral component of the movement towards increased methodological rigor in our science.

We asked our experts two questions: First, whether a study whose method was preregistered would be more likely to replicate than one without preregistration, and second, whether a preregistration of method and analysis plan would herald greater replicability than a preregistration of the method alone.

The results are quite clear: a strong majority of participants favoured preregistration (over doing nothing) and an even stronger majority placed more faith in a preregistered analysis plan than in preregistration of the method alone. Those results reflect current thinking about the pitfalls of unconstrained analyses that resemble the strategies of the proverbial Texas sharpshooter.

Overall, the survey results provide a glimpse into the current views of experts on this important issue. This brief preview of the data is, of course, just one part of a larger research project, which hopefully will find its way into the scientific literature in due course.

The Psychonomic Society (Society) is providing information in the Featured Content section of its website as a benefit and service in furtherance of the Society’s nonprofit and tax-exempt status. The Society does not exert editorial control over such materials, and any opinions expressed in the Featured Content articles are solely those of the individual authors and do not necessarily reflect the opinions or policies of the Society. The Society does not guarantee the accuracy of the content contained in the Featured Content portion of the website and specifically disclaims any and all liability for any claims or damages that result from reliance on such content by third parties.
