We’ve written before (here, here, and here) about the problems with using the Clarivate Analytics journal impact factor to judge the quality of a paper or an author and about international efforts to decrease the emphasis on impact factor as a criterion for rewards, including cash, recognition, promotion, and tenure.
A representative of Clarivate Analytics is on record as stating that the journal impact factor should not be used in these ways. Jonathan Adams, director of the Institute for Scientific Information at the Web of Science (the citation indexing service owned by Clarivate Analytics), is quoted in the article “Impact Factors: Payment by Results” as saying, “…we worry about misuse of the journal impact factor as the JIF is not, and has never been an indicator of research performance.”
We know. And so do others. But the impact factor still dominates the evaluation of academic performance.
The discussion just got amped up via an article in Nature, The International Journal of Science. An international group of academics has announced an initiative “to create a constructive role for journal metrics in scholarly publishing and to displace the dominance of impact factors in the assessment of research.”
The authors describe the initiative and its challenges in this way: “…a group of bibliometric and evaluation specialists, scientists, publishers, scientific societies and research-analytics providers are working to hammer out a broader suite of journal indicators, and other ways to judge a journal’s qualities. It is a challenging task: our interests vary and often conflict, and change requires a concerted effort across publishing, academia, funding agencies, policymakers and providers of bibliometric data.”
Their suggested indicators fall into categories such as curating (the expertise and diversity of the editorial board, the acceptance rate, and the transparency of acceptance criteria), data (citation or reporting standards), and evaluating research (the transparency of the process, and the number, diversity, and timeliness of peer reviewers).
The authors expose the inadequacy of the journal impact factor as a measure of academic excellence. The journal impact factor, they write, “…was specifically intended to support librarians who wanted to evaluate their collections and researchers who wished to choose appropriate publication venues, as well as to provide insights for scholars, policymakers and research evaluators. Its inventors never expected the broad use and rampant misuse that developed.”
The authors propose four criteria to prevent abuse of the system they are seeking to develop, and they suggest creating a governing organization focused on journal indicators.
They’re recruiting: “We invite all interested stakeholders to contact us to join this initiative. On the basis of these responses, we aim to launch the governing body at a second workshop in 2020.”