Ian Boyd, Chief Scientific Adviser for the U.K. Department for Environment, Food and Rural Affairs (Defra), is concerned about an increase in research bias. As he writes in Nature (H/T Steven Hill):
“To counsel politicians, I must recognize systematic bias in research. Bias is cryptic enough in individual studies, let alone in whole bodies of literature that contain important inaccuracies.”
Boyd goes on to list the different kinds of bias he sees in research, giving examples specific to his sphere of policy. He’s more concerned about research that tests complex phenomena, like toxicity, than about experiments that are comparatively easier to replicate and confirm (for example, does this experiment produce these compounds?).
I’m no fan of the kinds of bias Boyd is writing about. He’s right to be concerned about them, and his criticisms of current methodologies around statistically independent studies deserve additional attention. But Boyd’s focus – systematic bias in research – strikes me as too narrow for his intended purpose – counseling politicians.
Bias from researchers and their methods is just one piece of the bias puzzle. While Boyd hints at this when he names how research is commissioned and used as possible contributors to systematic bias, he doesn’t turn a critical eye on those who commission and use the research. At least in the U.S., a significant share of those who commission and use research are the very politicians and policymakers whom Boyd would be advising.
And that’s where it gets complicated – much more complicated than I think a ‘kite-mark’ (comparable to a trust mark) could effectively address. As Boyd envisions this system,
“It would also consider conflicts of interest, actual or implied, and more challenging issues about the extent to which the conclusions follow from the data. Any research paper or journal that does not present all the information needed for audit would automatically attract a low grade.”
Conflicts of interest in policy-relevant research will be a factor both for researchers and for the policymakers seeking the research (whether they commission it or simply look for existing materials). Boyd’s concern about funded research that tests only one possible hypothesis seems more likely to materialize in these circumstances, and I’m not sure how that can effectively be captured in the current system of research papers.
Let me raise a few questions, and see if Professor Boyd (or anyone else) has some answers. Which research papers currently in the literature would not attract a low grade under this system? Do any of the papers Boyd cites pass the audit he proposes? Why or why not? How would Boyd suggest we control for the biases of the policymakers who would commission and/or use research for policy decisions?
We can certainly be more careful about how we review research and those who conduct it. But completely separating that research from the larger context in which it was created and could be used will not capture all the potential for bias that rightly concerns Professor Boyd.