The curious case of impact factors

How can we assess the quality of someone’s research? It is very tempting to approximate the ‘impact’ or value of a piece of research with a single number, so you don’t have to engage with the content of the research at all – something to serve as a proxy. This is exactly what happens with the impact factor. Originally designed to help librarians decide how to spend their limited funds on journal subscriptions, it has become a hallmark of ‘quality’: if you publish your research in a journal with a high impact factor, it must be good.

Well, we all know that is utter nonsense. The impact factor is based on the number of citations that articles the journal published over the previous two years received – so a single piece of new research is ‘assessed’ via completely unrelated articles that happen to have appeared in the same journal. It is a measure that has nothing to do with the new piece of research at all. The impact factor also has statistical problems: citation distributions are highly skewed, so calculating the mean number of citations is silly. Using an aggregate measure to assess a single data point is also silly. We have no evidence that these metrics predict academic success (whatever that is – no one has checked). And it turns out, as Steve Royle showed (highly recommended – go read that!), that if you take any random paper from a journal, the journal’s impact factor does a very bad job of predicting how many citations that paper will attract. Rather than calculating and boasting about an impact factor to the third decimal place (statistically illiterate!), rounding it to the nearest 5 or 10 (!) would be more honest.
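To see why averaging a skewed distribution misleads, here is a small sketch. The citation counts are entirely made up for illustration (they are not from any real journal), but the shape – most papers with a handful of citations, a couple of runaway hits – is typical:

```python
# Hypothetical citation counts for the articles a journal published in
# the previous two years. Made-up numbers; the skew is the point.
citations = [0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 8, 40, 150]

# The impact factor is essentially a mean: total citations divided by
# the number of citable items.
impact_factor = sum(citations) / len(citations)

# The median describes the 'typical' article in this journal far better.
median = sorted(citations)[len(citations) // 2]

print(f"impact factor (mean): {impact_factor:.1f}")  # pulled up by two outlier papers
print(f"median citations:     {median}")             # what most papers actually get
```

Here the mean comes out around 14.7 while the median article got 2 citations – the ‘impact factor’ tells you about the two blockbuster papers, not about the paper you are actually trying to assess.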

So why is the impact factor still around? Why aren’t we getting rid of it completely? I think there are two aspects at play: first, the impact factor provides the lazy assessor with an easy metric. Second, there is the belief that young scientists need a high-impact paper to get a (permanent) job in academia. While there are some ideas that help – e.g. aspiring researchers can highlight 3–5 of their own articles as the basis for assessment (some funding agencies have implemented such rules) – a revolutionary way of assessing quality has yet to be invented (and implemented!). Maybe somebody will come up with a different, more meaningful number – because I agree with this comment: the numbers are not going to go away. That is lamentable, and as David Colquhoun succinctly put it: ‘It seems to me that there will never be a way of measuring the quality of work without reading it.’

This blog post was inspired by posts that clever people wrote about impact factors: Stephen Curry, Michael Eisen, David Colquhoun, and others. Thanks! Finally, if you want a scholarly treatise, Vanclay deals with the impact factor very thoroughly in this paper.
