When I posted about this graphic on ‘bad science’ that both gives detail and simplifies the problem, I only briefly mentioned the post where I found it. Titled Bad science is normal (pseudoscience is neither), its author, an avowed pseudoscience anti-fan, raises the notion that woo is not necessarily as big a problem as we think it is. That argument doesn’t depend on the bad science graphic, though the graphic helps explain his thesis.
Let me be clear (and I think the author may agree) – pseudoscience (or woo) is a problem. Scientific-seeming explanations that deviate from the norms and conduct of science undercut the strength of scientific knowledge demonstrated through repeated experiment and well-reviewed research and theory. Pseudoscience understandably attracts a lot of anger from scientists and others who support the proper production of scientific and technical knowledge.
Bad science – misrepresentation, fabrication, and questionable research practices – is also problematic, and not just because of the inherent abuse of trust. Because much of science is reputational and highly interlinked, failures in one place ripple out to the institutions, colleagues, and cited work that relied on the researcher or the research that violated trust and community norms.
It’s also harder to find. A lot of pseudoscience sits plainly outside the normal consensus or practices of science – sometimes willfully so. Perpetual motion machines are an excellent example. But lead researchers in a well-respected lab who have been fudging their data collection, falsifying results, and/or plagiarizing others aren’t so obvious. They’re not trying to make waves (or money from those who send in for those perpetual motion machines).
And intent is sometimes hard to determine. Questionable research practices can mask either an intent to deceive or a lack of proper technique, understanding of methods, or discipline. Both willful and negligent bad science need correction, but the former calls for sanction while the latter calls for guidance, or a nudge toward a different career.
Bad science, when found, is not strongly punished, absent a long-term pattern of abuses. In those cases, like the case of Diederik Stapel, careers should be clearly ended. But smaller violations often draw limited bans from federal funding, or can be mitigated by jumping institutions. Issues around conflict-of-interest requirements get even less scrutiny or opprobrium. Long-time readers may remember my general disapproval of one Charles Nemeroff, who managed to get a slap on the wrist when he failed to report $800,000 in speaking fees.
Frankly, I would gain some satisfaction from seeing someone conducting bad science go to jail, but I’ll settle for more serious punishments than those that seem to get handed out. I suspect it would be easier to instill the fear of punishment for bad behavior than to address the pressures that prompt some to conduct bad science.
But it’s not enough to beef up the punishment. Ethics education needs additional attention, and the peer review and replication of results that help build the trust many have in scientific knowledge need to be made easier to conduct. Research data needs to be more available, and reviewers for journals must continue to take their responsibilities seriously. If you are going to pass off that review request to one of your grad students, at least train them in how to review.
And the next time you get mad at the latest ‘scientific’ defense of intelligent design, or the cherry-picking of climate data, try to transfer that level of agitation into scrutiny of the next paper you read in a scientific journal. Because all of these things make it harder for people to trust in the processes of science.