Peer review is a critical aspect of scholarly publishing. Peer reviewers play a vital role in determining the quality of research and results, whether the research makes a relevant contribution to a specific field, and whether protocols and standards were met. As important as this process is, peer review can be problematic. Assessing the quality of science can be a difficult task, even for scientists. Reviewers can disagree about aspects of an article and consequently may push an editor to make a conservative decision and reject a manuscript altogether. An article published in The New Republic suggests that the peer-review process can prevent sound science from being published, and an opinion piece on the Enago Academy website proposes that removing the human element from the process would help ease tensions among authors, reviewers, and publishers.
Artificial intelligence (AI), through which machines can identify patterns to “think” like humans, has been introduced to scholarly publishing to potentially improve the peer-review process. Various AI tools can flag issues with the quality of a study by assessing the consistency of authors’ statistical reporting and their compliance with the statistical methods they’ve selected (tests, sample sizes, information about blinding, and baseline data). AI can also apply semantic analysis to text to identify and extract main statements and key phrases that are likely study claims or findings. It can then highlight whether these claims are similar to those in previously published papers to detect plagiarism or place the manuscript in context with other relevant works in the literature.
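The claim-matching step described above can be illustrated with a minimal sketch. The function names, the similarity threshold, and the use of bag-of-words cosine similarity are illustrative assumptions; production tools rely on far richer semantic models.

```python
import math
import re
from collections import Counter

def term_vector(text):
    """Build a term-frequency vector from lowercased word tokens."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def flag_similar_claims(claim, published_statements, threshold=0.5):
    """Return previously published statements whose similarity to an
    extracted claim exceeds the threshold -- candidates for plagiarism
    checks or for placing the manuscript in context."""
    cv = term_vector(claim)
    scored = [(s, cosine_similarity(cv, term_vector(s)))
              for s in published_statements]
    return [(s, round(score, 2)) for s, score in scored if score >= threshold]
```

A tool in this vein would first extract candidate claim sentences from the manuscript (the semantic-analysis step), then run each through a matcher like `flag_similar_claims` against an index of the published literature.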
Another problem with the peer-review process is that editors often have difficulty finding competent reviewers. In 2012, 28,100 peer-reviewed journals existed, and that number has steadily increased. Reviewers are in demand, and finding the right reviewer for a paper can be time-consuming. AI tools attempt to address some of these issues by suggesting reviewers based on the paper’s content and checking reviewers’ profiles, scientific performance, and conflicts of interest. AI tools can also prepare correspondence, provide reminders to reviewers, remove them from the system if they do not respond, and invite alternate reviewers.
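The matching-and-screening idea can be sketched as follows. The keyword-overlap (Jaccard) score and the shared-affiliation rule are simplified assumptions standing in for the profile, performance, and conflict-of-interest checks real systems perform.

```python
def suggest_reviewers(paper_keywords, reviewers, author_affiliations, top_n=3):
    """Rank candidate reviewers by keyword overlap with the paper (Jaccard
    similarity), skipping anyone who shares an affiliation with an author --
    a crude stand-in for conflict-of-interest screening."""
    ranked = []
    for r in reviewers:
        if r["affiliation"] in author_affiliations:
            continue  # conflict of interest: same institution as an author
        overlap = set(r["keywords"]) & set(paper_keywords)
        union = set(r["keywords"]) | set(paper_keywords)
        score = len(overlap) / len(union) if union else 0.0
        ranked.append((r["name"], round(score, 2)))
    ranked.sort(key=lambda pair: -pair[1])
    return ranked[:top_n]
```

Follow-up automation (reminder emails, timeouts, inviting alternates) is then a matter of walking down this ranked list as invitations expire.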
These promising applications come with limitations, however. Writer and editor Douglas Heaven points out that “machine-learning tools trained on previously published papers could reinforce existing biases in peer review.” Computer scientist Christian Berger further cautions, “Blindly using any research engine doesn’t answer every question automatically,” suggesting that even as AI advances, human insight will still be required to assess AI-generated results or analysis. Despite these potential challenges, AI will increasingly be applied to science and research, and to the evaluation of papers submitted for publication.