A Recently Published Dataset Identifies Hundreds of Self-Citing Scientists

In a previous post, we discussed citation manipulation. A paper published in PLoS Biology in September 2019 addresses self-citation, one form of citation manipulation. The authors published a dataset that lists approximately 100,000 of the most-cited researchers and shows that at least 250 of them have amassed more than 50% of their citations from their own papers or from papers published by their coauthors. In one extreme example, a researcher had received 94% of his citations from himself or his coauthors as of 2017.
All data came from Elsevier’s proprietary Scopus database. John Ioannidis, a physician at Stanford University who specializes in meta-science, led the work. Richard Klavans and Kevin Boyack of the analytics firm SciTech Strategies and Jeroen Baas, director of analytics at Elsevier, assisted with the data collection. The goal of the study was to identify the factors that drive citations; the researchers did not set out to focus on self-citation, but it is arguably the most striking part of the dataset.
The researchers defined self-citation as citations from the principal author or any of that author’s coauthors. Another potential way to identify citation manipulation is to examine the ratio of citations to the number of papers in which those citations appear. For example, one scientist in the dataset received 10,458 citations from 1,029 papers, an average of more than 10 citations per citing paper. Ioannidis says that this metric, combined with the self-citation metric, can serve as a red flag.
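To make these two red flags concrete, here is a minimal Python sketch of how they might be computed. The records, field names, and numbers are illustrative assumptions for this post, not the authors’ actual Scopus pipeline:

```python
# Hypothetical records: each entry represents one paper that cites the
# author being evaluated. Authors and counts are made-up illustrations,
# not values from the Scopus dataset.
citing_papers = [
    {"authors": {"A. Author", "B. Coauthor"}, "citations_of_target": 12},
    {"authors": {"C. Stranger"}, "citations_of_target": 1},
    {"authors": {"A. Author"}, "citations_of_target": 15},
]

target = "A. Author"
coauthors = {"B. Coauthor"}  # people who have coauthored papers with the target

total = sum(p["citations_of_target"] for p in citing_papers)

# Red flag 1: share of citations that come from the author or coauthors.
self_group = {target} | coauthors
self_cites = sum(
    p["citations_of_target"] for p in citing_papers if p["authors"] & self_group
)

# Red flag 2: average citations packed into each citing paper. The post's
# example works out to 10,458 / 1,029, or roughly 10.2 citations per paper.
per_paper = total / len(citing_papers)

print(f"Self-citation share: {self_cites / total:.0%}")   # 96%
print(f"Citations per citing paper: {per_paper:.1f}")     # 9.3
```

Neither number proves misconduct on its own; as Ioannidis notes, they are flags that mark profiles worth a closer look.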
Not everyone is convinced that the self-citation dataset will be a helpful tool.
Some scientists argue that adding more individual metrics to the evaluation of researchers will only create new problems, because metrics are already a driving factor in citation manipulation. Cassidy Sugimoto, an information scientist at Indiana University Bloomington, believes that the best way to address excessive self-citing is to ask editors and reviewers to look out for unwarranted self-citations. She added, “And maybe some of these rough metrics have utility as a flag of where to look more closely. But, ultimately, the solution needs to be to realign professional evaluation with expert peer judgement [sic], not to double down on metrics.”
The Ochsner Journal team takes reference lists very seriously. We check every citation. We look for patterns of self-citation. We search every list for duplicates, and we check data in every paper against the sources cited.
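For readers curious what an automated duplicate check might look like, here is a minimal sketch in Python. It is our own illustration, not the Journal’s actual workflow, and it assumes references are plain strings that may differ only in punctuation, capitalization, or spacing:

```python
import re
from collections import defaultdict

def normalize(reference: str) -> str:
    """Lowercase, replace punctuation with spaces, and collapse whitespace
    so trivially different formattings of the same reference match."""
    ref = re.sub(r"[^\w\s]", " ", reference.lower())
    return re.sub(r"\s+", " ", ref).strip()

def find_duplicates(references):
    """Return groups of 1-based positions whose entries normalize identically."""
    groups = defaultdict(list)
    for i, ref in enumerate(references, start=1):
        groups[normalize(ref)].append(i)
    return [idxs for idxs in groups.values() if len(idxs) > 1]

refs = [
    "Smith J. Example study. J Example. 2020;1:1-10.",
    "Smith, J. Example Study. J Example, 2020; 1: 1-10.",
    "Jones K. Another study. J Sample. 2019;2:11-20.",
]
print(find_duplicates(refs))  # [[1, 2]]
```

Normalization this crude catches only formatting-level duplicates; entries that cite the same source with genuinely different wording still require a human reader.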