PLOS One published an article earlier this week with the clunky title of “A Collaboratively-Derived Science-Policy Research Agenda.” In the grand tradition of other scientific efforts, the authors submitted over 200 questions and whittled them down to 40 through voting and discussion. While the participants were predominantly from the U.K., the questions they generated strike me as pretty generalizable to other countries (the full list of questions is available online in Word format). While the U.K. civil service was represented, the bulk of participants were from universities.
The questions fall into the following broad categories:
- Understanding the role of scientific evidence in policymaking
- Framing questions, sourcing evidence and advice, shaping research
- Advisory systems and networks
- Democratic governance of scientific advice
- How scientists and policy makers understand expert advisory processes
- Policy making under conditions of uncertainty and disagreement
From my observations of the U.S. National Science Foundation program on the Science of Science and Innovation Policy, I’d be surprised if more than 5 of the 40 questions have been funded in the United States. That program appears focused more on questions about the production of science and technology knowledge than on the more functional or operational aspects of science and technology policy. Perhaps this results from what the authors characterize as a “bottom-up approach, bringing together researchers, policy makers and practitioners with interests in relations between science and policy to identify priority, researchable questions in this field.”
In its notice on the paper, Nature solicited opinion from some of those who might be considered part of a top-down approach. Senior U.S. researchers in science and technology studies were, naturally, skeptical of the project, thinking attention was better focused elsewhere, or that an ‘uncritical’ consideration of science and policy led to what Professor Sheila Jasanoff considers a “surface consensus here which is probably misleading.”
I tend to bristle at the kind of dismissiveness I infer from Jasanoff’s remarks: the implication that because the assembled scholars and practitioners don’t ‘properly’ outline the interrelationships between science and policy, the effort is somehow fruitless. Coming from a field – science and technology studies – that claims to appreciate and encourage reflexive thought in scholarship and action, she comes across to me as more than a little hypocritical. She is cited in the paper, and there is at least an acknowledgement in it of the complexity of the exercise that Jasanoff notes in the Nature article. Did she read it that far? Would there be something so wrong with a bit of muddling through, provided we were conscious of being muddlers? Periodic reflexivity can help us see where we stand and adjust as best we can. If Jasanoff can’t see that potential in this exercise, that’s disappointing.
So there’s a list of questions. Great. I have a list of my own.
- What happens next?
- What more is needed in order for such work to be supported?
- Will the U.K. government (or others) commit to supporting such work?
- How about the authors’ institutions?
- Does this work need to be contracted, or are current peer-review funding mechanisms willing to accept such work?
(That is, would a review panel seriously consider a proposal to answer any of these questions?)
- How can the output of this research inform further research and practice in the field?
I could think of more. My thanks to the researchers for their efforts, and I hope that other countries see fit to run their own versions of this question-generating exercise. The similarities (and the differences) would be worth exploring.