There are many reasons why a scientific article remains uncited. One of them is that the article is poor.
In 2006, more than 18 000 original articles were published in 144 cardiovascular journals. Over the subsequent five years, 15 % of them were uncited and 33 % had only 1 – 5 citations, according to a recent study (1). In other words, nearly half of the cardiovascular literature remains uncited or infrequently cited after five years. Does this mean that many researchers have wasted a lot of time, effort and money on useless research, as the authors claimed at a congress last autumn? (1)
The finding was presented as something extraordinary. It was claimed that in an «efficient system» virtually all studies worthy of publication would be cited in subsequent papers. When articles are rarely cited, the so-called impact factor is low, and this is serious, because the impact factor is perhaps the strongest driver of current medical publishing. Even though the impact factor is only a measure of citation frequency, it is widely used as a quality yardstick for individual articles, journals, researchers and research institutions alike.
It could have been me. I have published a number of articles that have rarely or never been cited. Should I have left well alone and rather devoted my time to more useful pursuits? It is uncomfortable to think that one has wasted one’s own time as well as that of others and squandered the taxpayers’ money on meaningless indulgences. But is this really the case?
It has previously been claimed that as many as 98 % of all scientific articles in the humanities and 75 % in the social sciences are never cited (2). Medicine fared a little better (46 %). The assertion that most scientific articles are never cited is often repeated in the literature (3). But is it correct (4)? Many of the analyses include only citations from the Web of Science database, which forms the basis for the impact factor. Newer sources such as Scopus and Google Scholar capture a greater number of citations. Books are key sources of citations in the humanities and social sciences, and these have traditionally not been covered by the publication databases. Moreover, for how long after the publication of an article should citations be registered? The impact factor is calculated on the basis of the first two years after publication. And how should we relate to new and different ways of measuring impact? In recent years, so-called altmetrics (article-level metrics) have developed into an interesting alternative (5). Offered by, for example, Nature, Elsevier, PLoS and BioMed Central, altmetrics collect statistics on the impact of an individual research article, such as the number of online views and downloads. Another important measure could be the clinical impact of an article.
The absence of citations is referred to as uncitedness. Studies of this phenomenon have produced divergent results, partly because of different and occasionally vague definitions of the concept (6, 7). An article may go uncited because it is irrelevant, uninteresting or out of date, because the results are not generalisable, because the study is weak, or because the article has been forgotten or simply never found. Moreover, the selection of articles that end up being cited is affected by a number of factors: the restriction journals impose on the number of references, the fact that authors prefer to cite themselves rather than others (self-citation), that some journals require authors to cite articles from their own journal (reference juggling), that authors devote little effort to identifying the most relevant literature, that American authors prefer to cite each other, and so on. Matters become further complicated when we take on board that the scientific literature is peppered with erroneous citations, misunderstandings and academic myths (8). Thus, circumstances completely unrelated to the value or quality of an article may decide whether it is cited or not.
Most researchers can mention examples of important articles that had little impact in their own time and were not understood or valued until long after their publication. The most famous example is perhaps Gregor Mendel’s (1822 – 84) discovery of the laws of heredity, published in 1866 and not rescued from oblivion until 1900. Another example is Francis Peyton Rous’ (1879 – 1970) article from 1911, in which he identified a tumorigenic virus, one of the major breakthroughs in cancer research, for which he received the Nobel Prize as late as 1966 (9). However, such hidden gems are most likely exceptions to the rule.
Researchers strive to publish in the most prestigious scientific journals possible. However, the impact factor of a journal says nothing about the quality of each individual article (10). For those who like to quantify research, the impact factor and the number of citations are measures that are well suited to the purpose. The problem, however, is that the quality of an article cannot be quantified.