The Chronicle of Higher Education reports in its Wired Campus blog on a recent study that finds no significant citation advantage for open access articles compared to traditional subscription-based articles. This runs counter to other studies suggesting that open access articles do see higher citation rates. But a mixed record of citation impacts should be no surprise to those who have followed this research area.
The headline-level synopsis of this and other studies attempting to discern the impacts of open access research obscures a multitude of methodological concerns. It also downplays significant differences that emerge when the data are broken down by field, country, or other factors.
In the particular study at issue, roughly one-fifth of over 3,200 articles in 36 subscription-based journals were randomly made open access during 2007 and early 2008. (It is perhaps worth noting that the journal that published this study is produced by FASEB, an association of scientific societies, many of which have fee-based journals that underwrite their operations.) While the study found a notable increase in downloads, there was no notable increase in citations.
In contrast, a much more comprehensive study found an eight percent increase in citations overall, with much greater increases for researchers in developing countries. The lead researcher on the first study has speculated that most people who would be making the citations do not have trouble accessing the literature, which is broadly consistent with the conclusions of the second study.
From a policy perspective, these results do not mean that open access is a failure. First, there is the issue of citation impacts for specific fields or locations of researchers. It certainly seems that more research is being made available to researchers who had not been able to access it before, regardless of the overall impact. But even if this were not the case, the policy would still not be a failure, because increasing the citations of research is not the sole purpose of encouraging open access. The increase in downloads and other indications that more people are looking at available research matter in their own right, and arguably have motivated most of the push by the U.S. government to make more of the research it funds freely available within months of publication, if not sooner.
What’s also worth noting in this line of discussion is the reminder that citation counts are an incomplete measure of article impact. For instance, now that we have a much better ability to measure readership of journal articles, shouldn’t that count in judging the spread of knowledge as well? This connects to a related issue I’m trying to articulate about research citations in non-academic literature, which I hope to be able to post sometime this month. Whether I’m successful with that or not, I think revisiting what is measured (and therefore valued) in research assessments is overdue.