This Week’s New S&T Nominee: National Institute of Justice Director

From this week’s list of nominees we find one for the Department of Justice’s research and development arm (yes, it has one), the National Institute of Justice (NIJ).  The Institute funds research, development and evaluation in areas of interest to law enforcement and criminal justice.  John Laub, a distinguished criminologist from the University of Maryland, has been nominated to lead the Institute.  The position requires Senate confirmation, which might happen before the end of the year.  The Director reports to the head of the Office of Justice Programs, an Assistant Attorney General who is responsible for the Bureau of Justice Statistics, along with the NIJ and other offices.


This Year’s Output Metrics Show What, Exactly?

The U.K. Department for Business, Innovation and Skills is crowing over the results of the 2009 International Benchmarking Study of UK Research Performance (H/T Nature News).  The study, now in its sixth edition, was published by Evidence, Ltd.  The bulk of it consists of indicators of research output – both in absolute numbers and in percentage share.  It’s pretty comprehensive in its reach, comparing U.K. trends to those of seven other countries (the U.S., Canada, France, Germany, Italy, Japan and China – whose output is growing rapidly) in its global comparisons.  Unfortunately, the authors opted to address the U.S. lead in most measures by leaving that trendline off the chart rather than adjusting the axes appropriately.

There is an interesting aggregate measure they use called the Research Footprint, a web graph that combines measures of papers produced, research and development dollars, and researcher workforce.  It does manage to show the relative strengths of the research enterprises of various countries, but it still prompts a nagging problem I have with most measurements of research activity.

Between painting U.S. research indicators as an elephant in the room and a discussion of bibliometrics that dwarfs the consideration given to most of the other indicators, I’m reminded that these counts are at best a proxy measure.  You can gain some sense of whether countries are conducting more or less research, graduating and/or employing more or fewer researchers, and spending more or less money on research.  But the connection between these outputs and related outcomes remains tenuous.

I know it’s a tough measurement problem, and I’m not saying anything terribly new.  But could we at least spend more time pointing out the limitations of the measurements we do use?  I know there are people at the Consortium for Science, Policy and Outcomes who are familiar with this blog (or I wouldn’t be part of this feed), so I would encourage them to weigh in on this disconnect between outputs and outcomes, and the apparent inability to acknowledge it.  Am I missing something here?