News
Thu, Jul 25
Duke Students Develop Rubric to Measure Retraction Notice Quality
When a published scientific paper is officially removed from the scientific literature, the publisher releases a retraction notice. Retractions may be prompted by substantive error, falsified data, poor methodology, or other problems that undermine the quality or credibility of the work. Students in Duke University’s Science and Public Undergraduate Certificate developed a rubric for measuring the quality of these notices and published research revealing inconsistencies that compromise this important quality-control process.
The students, in partnership with Retraction Watch, published their open-access paper, “Taking it back: A pilot study of a rubric measuring retraction notice quality,” in Accountability in Research. Their research sought to determine how well retraction notices from two science publishers performed with respect to transparency, informativeness, and accessibility, and whether those notices have changed over the past decade as the overall number of retractions has increased.
Retractions are a layer of quality control meant to ensure that new research is as truthful and trustworthy as possible. However, retraction notices lack uniform standards across publishers, which confuses the very readers the notices are meant to inform. In worse cases, unclear notices may lead to further error, wasted effort in subsequent research, or even contribute to human-rights violations.
The new paper highlights several factors that complicate efforts to standardize retraction notices:
- There is no consensus on what key pieces of information should be shared.
- There are no legally binding guidelines governing how publishers issue notices.
- There are few incentives for publishers to disclose the internal processes that lead to retraction.
To evaluate quality, the research team developed a series of questions based on criteria published by the Committee on Publication Ethics (COPE) and Retraction Watch. Two researchers independently scored each question on a scale from 0 to 2, and the two scores were averaged. Their results showed some improvement in one major publisher’s retraction notices, while a second publisher showed no improvement, or a decline in ratings for some categories.
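The scoring scheme is simple to express in code. The sketch below is illustrative only: the criterion names are hypothetical placeholders rather than the paper’s actual rubric questions, and it simply shows two independent 0–2 ratings per question being averaged into a per-notice score.

```python
# Illustrative sketch of the rubric scoring described above.
# Criterion names are hypothetical placeholders, not the paper's actual questions.
RUBRIC_QUESTIONS = [
    "states_reason_for_retraction",  # transparency
    "identifies_who_initiated_it",   # informativeness
    "is_freely_accessible",          # accessibility
]

def score_notice(rater_a: dict[str, int], rater_b: dict[str, int]) -> dict[str, float]:
    """Average two raters' scores (each 0, 1, or 2) for every rubric question."""
    for scores in (rater_a, rater_b):
        assert all(scores[q] in (0, 1, 2) for q in RUBRIC_QUESTIONS), "scores must be 0-2"
    return {q: (rater_a[q] + rater_b[q]) / 2 for q in RUBRIC_QUESTIONS}

# Example: one retraction notice scored by two raters.
notice_score = score_notice(
    rater_a={"states_reason_for_retraction": 2, "identifies_who_initiated_it": 1, "is_freely_accessible": 2},
    rater_b={"states_reason_for_retraction": 2, "identifies_who_initiated_it": 0, "is_freely_accessible": 2},
)
print(notice_score)  # {'states_reason_for_retraction': 2.0, 'identifies_who_initiated_it': 0.5, 'is_freely_accessible': 2.0}
```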
The authors acknowledge that their relatively small sample, drawn from only two major publishers, may not yet allow the findings to be generalized across all research publishers. However, the data suggest that retraction notices are failing to provide readers with clear and consistent information and likely impede subsequent research. Their analysis supports the efforts of COPE and Retraction Watch in calling for a binding, normative approach to the retraction process.