As I promised/threatened at the beginning of April, I’ve been thinking about how citations of scientific and technological research in non-academic publications aren’t captured. Or rather, how they aren’t captured in the same way as citations in journals are. I’m far from the first to try to work through this, though I think there’s a lot of conventional thinking to overcome in this area. What follows is a bit more ‘thinking out loud’ than usual, so please bear with me. Comments are always welcome here (email – pasco dot phronesis at yahoo dot com), and if there were ever a post I’d really like discussion of, this would be it.
To help with this, I think it’s useful to go back to the fundamental criterion citation metrics are trying to measure – the ‘impact’ of a piece of research. There is an assumption that more valuable research correlates with more frequently cited research. There are problems with this assumption, of course, but it seems reasonable that the more a paper is cited, the more influence it has on subsequent research. Given the bias in research publications toward positive results, the presumed influence is likely positive as well.
However, this kind of measure is not focused on impact in a broad sense, but on the impact on subsequent research. What other impacts could scientific research have, and which of those impacts would be valuable in evaluating the return on research funding? Which of those other impacts would be valuable in evaluating researchers? From where I sit in the big research universe, it seems these questions are in the realm of what Donald Rumsfeld once called ‘unknown unknowns.’ We don’t know what we don’t know, and widening our set of measuring tools can help.