A paper in F1000 Research (H/T Nature News) suggests that the Research Resource Identifier (RRID) pilot has been well received in the biomedical research community. While the mark adds one more means of tracking research to a growing list, the RRID may be useful in the drive to make more research reproducible, or at least more amenable to thorough peer review.
The purpose of the RRID is to standardize citation of research materials such as reagents, data analysis software, and model organisms. While there are existing identifiers for these kinds of resources, it’s not always clear which databases must be consulted to review the linked resource. Put another way, the citation to the resource isn’t enough for an interested reader to check it out. The RRID connects to its own database, the Resource Identification Portal, providing a single place to locate an associated item. The RRID is also machine readable, making it possible for computers to pull the identifiers out of articles that use them.
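Because every RRID starts with the same literal prefix, that machine readability can be as simple as a pattern match. A minimal sketch in Python (the sample citations and the exact identifier grammar below are illustrative assumptions on my part, not the portal's official specification):

```python
import re

# Hypothetical article text. The identifier shapes (e.g. "AB_" for
# antibodies, "SCR_" for software) mirror ones seen in the Resource
# Identification Portal, but the precise grammar is an assumption here.
text = (
    "Cells were stained with anti-GFAP (RRID:AB_2298772) and "
    "images were analyzed in ImageJ (RRID:SCR_003070)."
)

# A permissive pattern: the literal "RRID:" prefix, then a registry
# code and an accession string.
rrid_pattern = re.compile(r"RRID:\s?([A-Za-z]+[_:][A-Za-z0-9_:.-]+)")

rrids = rrid_pattern.findall(text)
print(rrids)  # ['AB_2298772', 'SCR_003070']
```

A real extraction pipeline would want the portal's actual identifier syntax, but the fixed prefix is what makes the citation findable by software in the first place.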
While there has been broad adoption of the RRID, it has been almost entirely within neuroscience research. Perhaps that success, emphasized in this article, will encourage the adoption of the RRID for other research tools, and *maybe* in other disciplines?
I think this kind of meta- (and mega-) linkage will be necessary if the U.S. government’s efforts to expand open access to research data are going to be effective. Many of the open access plans on this point are taking the linkage route – linking to existing repositories of research data. If members of the public are going to make effective use of this data, it’s critical that they can find the information linked to, not just in the federal agency open access system, but in the databases linked to by those systems. For if the data, or the associated resources, can’t be found for lack of effective citation, how open is that access?