A new study revealed divergent policies on artificial intelligence (AI) use in peer review among leading medical journals, with confidentiality concerns driving prohibitions.
The rise of artificial intelligence (AI) has introduced both opportunities and challenges to the world of medical publishing, especially in the peer review process.1
According to new research just published in JAMA Network Open, 78% of the top 100 medical journals now provide guidance on AI-assisted peer review. However, these policies vary widely, with most focused on safeguarding confidentiality. The study found that 46 of the top 100 journals explicitly prohibit the use of AI in peer review, while 32 permit limited use under strict conditions, such as requiring reviewers to disclose AI involvement and respecting confidentiality and authorship rights.
“Despite GenAI’s potential benefits to enhance review efficiency, concerns remain about its inherent problems, which could lead to biases and confidentiality breaches,” the authors wrote.
Among the journals that offer AI guidance, 91% prohibit uploading manuscript-related content to AI tools, reflecting fears of data leaks or privacy breaches. Chatbots and large language models, such as ChatGPT, were the AI tools most often named, appearing in 47% and 27% of the guidance documents, respectively.
Why Do Journals’ AI Policies Vary So Much?
AI policies also differ across publishers and regions. Publishers like Wiley and Springer Nature allow limited AI use, while Elsevier and Cell Press maintain stricter prohibitions.
“Internationally based medical journals are more likely to permit limited use than journals [with] editorial [offices] located in the US or Europe, and mixed publishers had the highest proportion of prohibition on AI use,” the authors said.
Interestingly, 22% of journals linked to external statements from organizations like the International Committee of Medical Journal Editors or the World Association of Medical Editors that permit limited AI use. However, 5 journals’ policies contradicted these statements, highlighting the lack of consensus in the field.
“This divergence in policy may be the ultimate reason for the observed variations in guidance,” the authors noted.
While 32% of journals permit limited AI use, standards for disclosing AI involvement vary, and important areas such as innovation, reproducibility, and reference management remain underexplored. Scattered AI-related guidance also creates challenges for reviewers, potentially leading to misuse or confidentiality breaches. Among journals that prohibit AI use, confidentiality was the leading rationale, cited by 96%. Experts suggest that clearer editorials and better adherence to AI usage policies could address these issues.2
While AI is unlikely to replace human peer review, its role is expected to expand as the technology advances.1
“Used safely and ethically, AI can increase productivity and innovation,” the authors said. “Thus, continuous monitoring and regular assessment of AI’s impact are essential for updating guidance, thereby maintaining high-quality peer review.”
As AI continues to evolve, medical journals face the challenge of balancing its benefits with potential risks, ensuring that the peer review process remains both rigorous and ethical. This study offers a snapshot of the current landscape, signaling the need for greater collaboration and standardization in crafting AI-related policies.
References