In this week’s issue of Science, the Policy Forum section includes an essay from several senior researchers and research administrators discussing the challenges of improving incentives for ensuring high integrity in research. The group was convened by the National Academies and the Annenberg Retreat at Sunnylands.
The essay covers a number of concerns about vetting research results that have been heard before (publishing negative results, need for additional mentoring, independent validation/replication, etc.) and the initiatives several journals and institutions are taking to improve those processes. But one particular item caught my attention, and that of others: distinguishing between retractions due to fraud or misconduct and those needed for other reasons. From the essay:
“[V]oluntary withdrawal of findings by a researcher eager to correct an unintended mistake is laudatory, in contrast to involuntary withdrawal by a duplicitous researcher who has published fraudulent claims. Alternative nomenclature such as “voluntary withdrawal” and “withdrawal for cause” might remove stigma from the former while upping it for the latter.”
In other words, the authors suggest folks aren’t so inclined to report unintended mistakes because of the stigma attached to the word “retraction.” Whether “withdrawal for cause” would carry a more negative stigma than “retraction” is unclear to me.
Changing the nomenclature may help, but as the essay also notes, the infrastructure for checking research results has not matured at a rate comparable to either the increase in scientific research output or the increasing ease of committing scientific fraud and other misconduct.
What might be more effective, but possibly more challenging, is implementing this goal from the essay: “We believe that incentives should be changed so that scholars are rewarded for publishing well rather than often.” I think this is an excellent goal, but two sets of stakeholders are locked into a notion of scientific research quantity as a proxy for quality. Not only is it embedded within the university reward structure, but it is also integrated into policymakers’ discussions of scientific research support. With Nobel Prize counts frequently cited (often as a scientific equivalent of ‘mine’s bigger than yours’), efforts to encourage fewer publications are going to be looked at a little oddly by those who hold the purse strings.
A National Academies report due later this year should give more details on some of the ideas broached in this essay. Hopefully it will also prompt the dialogue the authors desire.
Earlier this month I noted that the White House is seeking input on its third iteration of the National Action Plan for Open Government. You can submit comments via email or on a Hackpad collaborative platform (you will have to register on Hackpad to submit via that platform).
Guidelines are pretty broad, and the Hackpad provides some categories to guide submissions. The organizers have populated many of the pages with content from an Open Sunshine Week brainstorming event in March. Since I mouthed off about submitting comments on the National Action Plan related to scientific integrity policies, I thought I’d share what I submitted (via the Other Topics section of the Hackpad platform).
It’s not terribly detailed, but it’s at a level of detail consistent with other submissions on the platform. Ideally, there should be a website where interested members of the public can get information on how agencies have been implementing their scientific integrity policies. I’m not proposing a massive data dump, but rather enough summary information that interested parties can pursue additional details with the agency. It would also, I hope, prompt the Office of Science and Technology Policy (OSTP) to continue monitoring the issue across the government. It seems that once the agency policies were posted, OSTP acted as though the job was done. But it has only just started, and having a public reminder of that strikes me as a good thing to do.
The U.S. Fish and Wildlife Service (FWS) has formally classified captive chimpanzees as endangered, the same status as chimpanzees in the wild (H/T Nature News). The action also removes several exemptions to the Endangered Species Act that applied to captive chimpanzees. The removal of these exemptions will further limit what research can be done on captive chimpanzees. The final rule will take effect in mid-September.
The rule comes after the National Institutes of Health had already reduced the number of chimpanzees it uses for research. Back in 2013 the agency retired more than 300 chimpanzees, retaining roughly 50 for research purposes.
This does not completely eliminate legal research on chimpanzees. It will still be legal to import chimpanzees into the United States and conduct research provided that such research is “to benefit wild chimpanzees or to enhance the propagation or survival of chimpanzees, including habitat restoration and research on chimpanzees in the wild that contributes to improved management and recovery.”
Those opposed to the changes argue that the requirements for obtaining research permission will be so onerous as to prevent the activity. They also argue that captive chimpanzees have been bred for research purposes, making them sufficiently distinct from their wild cousins as to warrant the present separate treatment. However, with the NIH already winding down its supported research involving chimpanzees, it seems likely that such research would become more difficult to conduct in the U.S. with or without the new FWS rule.
(Rumors of this being prompted by the recent revival of the Planet of the Apes films are greatly exaggerated, or solely my fault.)
Back in 2012 Universities UK released The concordat to support research integrity. The document was developed by representatives from UK research universities, funding institutions and government agencies. Among other things, the concordat recommended that employers of researchers should submit annual statements to their governing body outlining activities done with research integrity and research misconduct.
The UK Research Integrity Office (UKRIO, and I didn’t know about it either) decided to survey institutions about whether they were submitting these reports. Of 44 institutions that subscribe to UKRIO, 27 responded and 9 of them had submitted the annual reports. Of another 44 institutions that do not subscribe to UKRIO, only 3 submitted those reports. (The survey will be published at a later date, so I do not know the response rate of the non-subscribing institutions.)
As described in this Nature article, there is a difference of opinion on the meaning of “should” in this context. Not all institutions assumed that “should” means the reports are required. (I wouldn’t automatically assume it did, but I’d advocate for submitting such reports regardless.)
The survey should provide additional insight once it’s released later this year.
As occasionally happens, both the Presidential Commission for the Study of Bioethical Issues and the President’s Council of Advisors on Science and Technology (PCAST) will meet in the same month. The PCAST meeting is on May 15th, and while I have posted about it already, there is now a draft agenda available for review. The session on business had already piqued my curiosity, and the agenda has only fueled my speculative interest. The panelist named in the agenda, Rebecca Henderson of Harvard Business School, has been working on disruptions to capitalism and how that economic system could manage major transitions.
The Bioethics Commission will meet in Philadelphia on May 27. No agenda is currently available, but the Federal Register notice indicates the meeting will focus on public engagement on bioethics issues (using deliberation) and bioethics education (also involving deliberation).
Deliberative democracy is a research interest of the Commission’s Chair, and ethics education was part of the Commission’s recommendations in its Gray Matters report, so it makes sense that these would be subjects of a Commission meeting. Additionally, the Commission issued this request for comment in April for information relating to both public deliberation and education on bioethical issues. The call will be open until July 20, so you may wish to watch the meeting (in person or online) before submitting your comments.
From today’s Washington Post comes word that the American Psychological Association (APA) has settled a class-action lawsuit with its membership. At issue was the contention that the association had led several of its members to pay a voluntary fee to the group’s lobbying arm, the APA Practice Organization, as though it were required. The alleged deception dates back to 2001. The total cash payments of the settlement will be nine million dollars.
The settlement process is ongoing, and the court will either approve or reject the settlement later this year. As you might expect, the terms of the settlement agreement include language asserting that there is no admission of any claims levied in the suit or any acknowledgment of liability. The agreement also means the association will be more explicit in communicating that fees paid to the APA Practice Organization are optional.
The APA’s separate lobbying arm raises the question of how scientific societies do or do not engage in advocacy. The particular issue here is only tangentially related, in that members can opt to support association lobbying independent of their membership dues. I don’t have a particular recommendation for the best way to handle this matter, but I think it highlights how a scientific association’s agreement on the state of the field doesn’t necessarily translate into agreement on policy choices. Nor do I think it should, but that’s a separate discussion.
In light of Chinese researchers reporting their efforts to edit the genes of ‘non-viable’ human embryos, the National Institutes of Health (NIH) Director Francis Collins issued a statement (H/T Carl Zimmer).
(For what it’s worth, the research indicated a very low success rate in editing the gene.)
The statement mentions the various legal and regulatory prohibitions on funding the kind of research the Chinese conducted. In this case, the editing was of a gene responsible for a particular blood disorder. But the changes to the gene would be heritable by the descendants (if the embryos in question were viable), and that is the source of concern.
From the Director’s statement (CRISPR/Cas9 is the editing technique in question):
“NIH will not fund any use of gene-editing technologies in human embryos. The concept of altering the human germline in embryos for clinical purposes has been debated over many years from many different perspectives, and has been viewed almost universally as a line that should not be crossed. Advances in technology have given us an elegant new way of carrying out genome editing, but the strong arguments against engaging in this activity remain. These include the serious and unquantifiable safety issues, ethical issues presented by altering the germline in a way that affects the next generation without their consent, and a current lack of compelling medical applications justifying the use of CRISPR/Cas9 in embryos.”
While Collins also notes the federal laws and regulations that restrict funding, I do not expect the statement to be the end of the discussion around the gene-editing research reported in China (which is probably continuing). I suspect many would find the use of non-viable embryos in this research acceptable, even if it punts on the questions of consent to changes for future generations and the safety of the techniques on viable embryos. After all, stem cell lines have been derived from non-viable embryos. I think that the need to (eventually) work with these technologies on viable human embryos makes the stem cell comparison problematic, but that won’t likely matter in the policy debates to come.