Psychonomic Impact: Canopus or Pleiades?

The latest impact factors for the Society’s journals have just been released and are shown in the table below:

The data suggest that there may be an overall increase in the impact factors of the Society’s portfolio, although the number of years shown (3 years) is too short to permit us to be confident. However, it’s clearly good news if one considers impact factors (IFs) to be a good indicator of a journal’s standing in the field.

What is an impact factor?

An entire industry has been built around bibliometrics, defined by Wikipedia as the “statistical analysis of written publications, such as books or articles.” It is now common practice to evaluate job applicants, or applications for promotion or tenure, at least partially on the basis of bibliometrics, and tools exist to help scholars create citation reports and the like.

When it comes to journals, the IF is the number of citations in the literature overall to articles published in that journal in the preceding 2 years (or 5 years, in the case of the 5-year IF), divided by the number of citable items the journal published in that window. The computations for the Psychonomic Bulletin & Review are shown in the screenshot below, taken from the ISI Web of Knowledge.
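Schematically, the standard 2-year computation amounts to the following sketch. The counts here are hypothetical, chosen only to illustrate the arithmetic; they are not Psychonomic Bulletin & Review's actual figures.

```python
def impact_factor(citations_by_pub_year, items_by_pub_year, year, window=2):
    """Impact factor for `year`: citations received in `year` to articles
    published in the preceding `window` years, divided by the number of
    citable items published in those same years."""
    prior = range(year - window, year)
    cites = sum(citations_by_pub_year[y] for y in prior)
    items = sum(items_by_pub_year[y] for y in prior)
    return cites / items

# Hypothetical counts (not real data): citations received in 2013
# to articles published in 2011 and 2012, and the item counts.
cites = {2011: 450, 2012: 380}
items = {2011: 150, 2012: 140}
print(round(impact_factor(cites, items, 2013), 3))  # (450 + 380) / (150 + 140)
```

Note that the numerator is a sum over all cited articles, which is what makes the IF a mean, and hence vulnerable to the outlier problem discussed next.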

Crucially, the IF is based on the average (i.e., mean) number of citations to articles published by a journal. The mean, of course, is known to be susceptible to outliers. For example, the average distance (in light-years) of the 8 brightest objects in the night sky from Earth is 39.82. Take out Canopus and the mean drops to 1.29. Or leave Canopus in and take the median instead, and it is 0.000038—roughly a million times smaller than the mean.
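That arithmetic is easy to verify. The sketch below uses approximate distances (in light-years) for the eight brightest objects in the night sky other than the Sun; because the values are rough, the results will match the figures above only approximately.

```python
from statistics import mean, median

# Approximate distances from Earth in light-years (rough values):
distances = {
    "Moon":    4.0e-8,
    "Venus":   4.5e-6,
    "Mercury": 1.0e-5,
    "Mars":    1.3e-5,
    "Jupiter": 6.6e-5,
    "Saturn":  1.3e-4,
    "Sirius":  8.6,
    "Canopus": 310.0,
}

vals = list(distances.values())
print(f"mean:            {mean(vals):.2f}")    # dominated by Canopus
print(f"mean w/o Canopus: {mean(v for k, v in distances.items() if k != 'Canopus'):.2f}")
print(f"median:          {median(vals):.6f}")  # a few light-minutes
```

A single extreme value shifts the mean by orders of magnitude while barely touching the median, which is exactly the vulnerability of the IF.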

Put another way, a single Canopian-sized article can skew a journal’s impact factor considerably. Because this phenomenon is quite interesting, I reproduce below a key figure from a recent comment on IFs in the literature.

The left-hand panel of the figure shows the relationship between the median number of citations per article for a journal and the corresponding mean, with each plotted observation representing a different year. For Psychological Review and Psychological Science, there is a strong relationship between the two measures, suggesting that it isn't just a single article (or a handful of articles) that underlies a journal's impact factor. Perhaps surprisingly, this relationship does not hold for Nature, where the median is independent of the mean and also rather lower, suggesting that Nature publishes a relatively small number of very bright stars accompanied by many dim meteorites.

Now consider the right-hand panel of the above figure, which shows the number of citations per article per year as a function of time since publication. The graph reveals that articles in Psychological Review continue to accrue recognition for 15 years or more after publication, whereas articles in Nature go the way of most supernovae: A bright beginning and a rather rapid decay.

How would the Psychonomic journals compare to this? I picked the Society's flagship journal, the Psychonomic Bulletin & Review, and used Harzing's "Publish or Perish" application to access the last 5 years of articles published in the journal. (Google Scholar does not permit a program to download more than 1,000 records, which prevented me from going further into the past.)

The results of the analysis are shown in the figure below, using the same layout and statistics as above.

Comparison with the earlier figure reveals that, gratifyingly, the impact of the journal is not due to the occasional supernova but represents a distribution across articles that is at the very least not grotesquely skewed. Moreover, our “growth” curve of citations is at least comparable to that of Psychological Science, although given the limited availability of data, it is impossible to tell when (and whether) citations enter a phase of decline.

Overall, these data present a healthy picture of the impact of our flagship journal.
