Narrow Measures In, Narrow Impact Out – What Gets Left Out?

As I promised/threatened at the beginning of April, I’ve been thinking about how citations of scientific and technological research in non-academic publications aren’t captured.  Or rather, how they aren’t captured in the same way as citations in journals are.  I’m far from the first to try to work through this, though I think there’s a lot of conventional thinking to overcome in this area.  What follows is a bit more ‘thinking out loud’ than usual, so please bear with me.  Comments are always welcome here (email – pasco dot phronesis at yahoo dot com), and if there were ever a post I’d really like discussion on, this would be it.

To help with this, I think it’s useful to go back to the fundamental criterion citation metrics are trying to measure – the ‘impact’ of a piece of research.  The underlying assumption is that more valuable research is also more frequently cited research.  There are problems with this assumption, of course, but it seems reasonable that the more a paper is cited, the more influence it has on subsequent research.  Given the bias in research publications toward positive results, the presumed influence is likely positive as well.
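To make that assumption concrete, here’s a minimal sketch (in Python, using a made-up citation graph – the papers and links are entirely hypothetical) of what a raw citation count actually computes:

```python
from collections import Counter

# Hypothetical citation graph: each paper maps to the papers it cites.
citations_made = {
    "paper_A": ["paper_C"],
    "paper_B": ["paper_A", "paper_C"],
    "paper_D": ["paper_A", "paper_B", "paper_C"],
}

# A raw citation count is just each paper's in-degree in this graph.
times_cited = Counter(
    cited for refs in citations_made.values() for cited in refs
)

for paper, count in times_cited.most_common():
    print(f"{paper}: cited {count} time(s)")
```

Note that anything outside the graph (a newspaper mention, a policy document, classroom use) adds exactly zero to this tally, which is precisely the narrowness at issue.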

However, this kind of measure is not focused on impact in a broad sense, but on the impact on subsequent research.  What other impacts could scientific research have, and which of those impacts would be valuable in evaluating the return on research funding?  Which of those other impacts would be valuable in evaluating researchers?  From where I sit in the big research universe, these questions seem to be in the realm of what Donald Rumsfeld once called ‘unknown unknowns.’  We don’t know what we don’t know, and widening our measuring tools can help.

The National Science Foundation has a broader impacts criterion in its merit review process.  However, current citation measures are typically focused on what falls under the other merit review criterion – intellectual merit.  Within the definition of broader impacts there are a number of possibilities, which could include:

  • Diffusion of the research to students, policymakers, and/or the general public;
  • Critical knowledge for patents, trademarks, other intellectual property elements, or innovations;
  • Establishment of tools to help teach/train students or researchers;
  • Work that helps improve research infrastructure; and
  • Supporting other public goods and/or policy goals.

There are at least two things that can help capture some of the lost impact: measuring citations in non-journal publications and capturing readership information.  They are related in that they both focus on usage outside of the current academic focus.  Neither of them strikes me as easy, as it would be difficult to be sure that total activity was being captured.  To be fair, not all journals make it into the citation indices, but that’s more a means of selecting for quality than an admission that not all research is captured.  After all, if one publishes a paper and nobody outside the journal reads it, the researchers don’t get credit.
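As a rough illustration of the first approach, the sketch below scans a block of non-journal text for DOI strings, one of the few reliably machine-matchable ways a paper gets referenced outside a journal.  The regex, the sample news story, and the helper function are my own assumptions for illustration, not a production-ready matcher:

```python
import re
from collections import Counter

# Crude DOI pattern; real DOIs are messier, so this is an approximation.
DOI_PATTERN = re.compile(r'\b10\.\d{4,9}/[^\s"<>]+')

# Hypothetical non-journal text: a news story citing two (invented) papers.
news_story = """
A new study (doi:10.1000/example.2010.001) suggests broader impacts
are rarely measured. An earlier report, 10.1000/example.2009.042,
reached a similar conclusion.
"""

def count_doi_mentions(documents):
    """Tally how often each DOI appears across a set of documents."""
    mentions = Counter()
    for text in documents:
        mentions.update(m.rstrip('.,;)') for m in DOI_PATTERN.findall(text))
    return mentions

print(count_doi_mentions([news_story]))
```

Running something like this across news sites, policy reports, and blogs would give a first cut at non-journal citation counts, though the hard part is exactly what the rest of this post worries about: getting the source material into a form that can be scanned at all.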

The kind of usage measures needed likely resemble something like MESUR – MEtrics from Scholarly Usage of Resources.  But even MESUR (and the SEED article where I learned about the project) is focused on scholarly usage, when the impacts of scholarly work are by no means limited to scholars.  There’s really no consideration of other impacts, just a way to better capture the kind of impact that’s the current emphasis of citation metrics.  This tacit assumption that research only has meaning and use for researchers may be as big a hurdle to broadening the conception and value of research impact as any technical hurdle.

Readership statistics would be much easier to gather, and with the prevalence of sharing tools on most online sites (even this blog!), you can also capture an added element of influence by treating this sharing as a populist form of citation.
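As a sketch of what such a ‘populist citation’ tally might look like, the snippet below folds page views and share events into a single score per paper.  The event records and the weighting are invented for illustration; any real scheme would need to argue for its weights:

```python
from collections import defaultdict

# Hypothetical usage events: (paper id, kind of event).
events = [
    ("paper_A", "view"), ("paper_A", "view"), ("paper_A", "share"),
    ("paper_B", "view"), ("paper_B", "share"), ("paper_B", "share"),
]

# Assumed weighting: a deliberate share signals more than a passing view.
WEIGHTS = {"view": 1, "share": 5}

def readership_score(event_log):
    """Aggregate weighted view/share events into a per-paper score."""
    scores = defaultdict(int)
    for paper, kind in event_log:
        scores[paper] += WEIGHTS.get(kind, 0)
    return dict(scores)

print(readership_score(events))  # {'paper_A': 7, 'paper_B': 11}
```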

I don’t think any of this can be effectively automated, either now or in the foreseeable future.  Aside from the remaining technical challenges to widespread monitoring and aggregation of reading and references to research, there are two important non-technical matters.  First, there has to be a much better habit of putting works online in machine-readable formats.  Without that, any people or programs that try to scour online information for citations to academic research will not be able to capture them.  Those who are working to increase the amount of government information available online recognize this as a huge challenge in their work, and the scope and scale of material that would have to be scanned makes it at least as much of a problem here.

The other challenge is one of resources.  People and systems will need to be dedicated to doing the work of measuring impact.  This kind of evaluation work is sufficiently far from traditional academic paths that it’s not something that can be assigned to existing structures or institutions and easily integrated into what they already do.  MESUR seems to have struggled to maintain support, so a broader kind of MESUR will face similar struggles absent patronage and/or a persistent communication effort to change how we evaluate our research.  Without measuring all the different impacts our research has, we really aren’t valuing it as much as we could.  In an era of tightening budgets, showing the full value of research is a possible way of weathering the financial storm.

I will come back to this topic again.  There are a host of issues associated with how scientific and technical knowledge is transferred and used, and a more comprehensive valuing of the impact of this research could be useful in that transfer.  Any comments, pointers, and other feedback are most welcome.


2 thoughts on “Narrow Measures In, Narrow Impact Out – What Gets Left Out?”

  1. Interesting questions. I wonder how much of these types of metrics can be integrated early in a researcher’s career (when they’re entirely focused on getting tenure) vs. later on (after they get tenure). From what I’ve seen, early-career faculty are mostly focused on achieving tenure, and may sometimes view broader impact type stuff as a distraction.

    As I’m sure you know, the STAR METRICS group is very focused on how science knowledge is researched and used. I believe the answers you’re looking for don’t currently exist!

    As for non-journal publications…the only thing I can think of is tracking how often a specific article is emailed.

    And this might be tough as well…but is there any way to track activity in the science blogosphere? I think that type of data could also be interesting.

    Don’t think my rambling has helped…but hopefully it wasn’t a complete waste!

  2. Well, web analytics, depending on which software is being used, should be able to track usage for individual posts and for blogs pretty well. A wrinkle there would be those nominally science blogs that don’t have a lot of science content.

    Similarly, activity in Twitter and Facebook (along with other relevant social networking platforms) can be tracked, and with a little developer incentive, could probably be specialized for the content we’re interested in.

    I think focusing on individual researchers first without integrating the metrics into institutional thinking is putting the cart before the horse. The institutions making the judgments about value (of articles and researchers) need to accept the new measures and new emphases in order for the individual researchers to come on board. Science is really small-c conservative in that respect.
