Earlier this year, I told you about an interesting trend: increasingly, scientists reject the methods used to determine the impact factors of scientific journals. Now, a new article at Phys.org, “The complex role of citations as measure of scientific quality,” notes that scientists have come to doubt another key metric of scientific impact: citation counts.
New research “shows that the number of citations is a poor measurement of the quality of research.” Why is that?
‘Citations occur when a researcher provides a reference to previous research results to, for example, back up a claim. However, references can be made for many different reasons,’ says the author of the thesis, Gustaf Nelhans, PhD in theory of science at the University of Gothenburg and lecturer in library and information science at the University of Borås.
Researchers sometimes refer to previous research to indicate the source of certain influences or to identify past work that they want to develop further. But they may also cite previous work in order to argue against it, or perhaps even refute it entirely. And sometimes sources are cited simply out of tradition or routine, because everybody else in a field seems to do it.
Thus, the study finds, there are many reasons why a paper might be cited that have little to do with scientific merit:
As a result of the so-called citation culture that has emerged in the scientific community, an increasing number of researchers have started to present their studies not only with the obvious goal of promoting the content, but also with an aim to attract as many citations as possible. The purpose of this is to gain acknowledgement in the scientific community and secure research funding.
Obviously this doesn’t mean that a highly cited paper has no scientific merit. What it does mean is that citation statistics aren’t necessarily a measure of such value:
‘The problem is that citation statistics offer a complex measurement that hides at least as much information as it reveals. It is therefore important to see the whole extent of this phenomenon and not treat citations as an automatic measure,’ says Nelhans, who urges decision-makers to be more careful when basing allocation of research funding on citation statistics.
With this in mind, it also stands to reason that an otherwise scientifically sound paper might go uncited for reasons that are political, not scientific. In the intelligent-design movement, we’ve seen this many times in the past, where an ID paper is relevant and appropriate to cite but is not cited, because anti-ID authors don’t want to grant the paper legitimacy. Indeed, we’ve observed much the same in the critical response to Stephen Meyer’s book Darwin’s Doubt, where journal articles clearly seek to rebut Meyer’s arguments about the Cambrian explosion yet refuse to cite him by name or mention his book by title.
In short, while citation counts definitely mean something, they don’t always mean what some people think they mean.