For journal editors, retraction is a scary word. No editor wants to have to publicly pull a paper. Every retraction represents a lot of wasted time and resources as well as a reputation hit. You might think that retractions are infrequent events, but the retraction universe is actually quite vast. Despite the volume of retractions, editors had no way to check submitting authors’ names to determine if any of their papers had ever been retracted—until now.
Retraction Watch, a blog that has reported on retractions since 2010, took the service to a whole new level last week with the launch of the Retraction Watch Database. This database of more than 18,000 retractions is searchable by author, article title, DOI, publisher, journal name, URL, and more.
Although most retractions occur when deliberate dishonesty and fraud come to light (plagiarism, figure manipulation, fake peer reviews, lack of approval from institutional review boards), some papers are retracted because of an honest error or reproducibility issues. When the reason for the retraction is available, the database record provides it.
According to an analysis of the database by Science, the annual number of retractions has increased since 2012, but so has the number of published papers, so the retraction rate (retractions as a share of papers published) has remained roughly constant since 2012. Further, the average number of retractions per journal has remained nearly the same since 1997. Analysts credit the flat rate in part to journals' use of plagiarism detection software.
The Ochsner Journal began scanning every submission for plagiarism in 2013, and we have detected several instances of duplication. Those papers are summarily rejected. Now that Retraction Watch has made this incredible database available, we have also begun checking every submitting author's name against it to ensure that no author published in the Journal has had a paper retracted. For us, it's about quality, reputation, and the integrity of the literature.