Title: More Than Half of Researchers Use AI Tools in Peer Review, Survey Finds
A global survey of 1,600 academics across 111 countries reveals a turning point in scientific publishing: well over half have used artificial intelligence tools to assist with peer review, signaling gains in efficiency alongside new questions about rigor, confidentiality, and intellectual property. The study, conducted over the past year, shows a shifting landscape in which researchers increasingly lean on AI to draft reviews, summarize manuscripts, and flag potential issues, even as many institutions warn against uploading unpublished work to external AI platforms.
Historical context: a slowly accelerating trend toward AI-assisted scholarship
The adoption of artificial intelligence in academia has evolved in stages. Early forays focused on data analysis and manuscript editing, but the peer-review process, an established keystone of scientific validation, has recently become a venue for AI-assisted workflows. The latest survey underscores a pivotal moment: AI usage in peer review has become mainstream among researchers, reflecting both the rapid maturation of AI capabilities and the mounting pressure to accelerate the publication cycle without compromising quality. Historically, peer review has relied on the expertise and time of scholars; now, AI tools are increasingly viewed as augmentations to human judgment rather than replacements, a distinction that sits at the center of ongoing policy development within journals and research institutions.
Key findings: how AI is used in peer review
- Drafting review reports: Among researchers who incorporate AI, 59% report using these tools to help draft the content of their review reports. This includes outlining critique, structuring arguments, and formatting recommendations for editors.
- Summarizing manuscripts: About 29% of AI users employ automated summarization to distill complex manuscripts, extract core methods and results, and identify salient strengths and weaknesses.
- Identifying gaps and verifying references: A similar share (roughly 29%) leverage AI to spot missing considerations, methodological gaps, or to verify citation accuracy against bibliographic records.
- Detecting potential misconduct: Roughly 28% use AI-enabled checks to flag potential issues such as plagiarism, image manipulation, or inconsistencies in data reporting.
- Growth over the past year: Nearly a quarter of respondents said their AI use in peer review has increased over the previous 12 months, signaling ongoing momentum rather than a plateau in adoption.
Confidentiality, data security, and intellectual property at the forefront
Despite the practical benefits, researchers frequently operate under strict confidentiality expectations. Many institutions and publishers caution against uploading unpublished manuscripts to external AI platforms, citing risks to confidentiality, sensitive data, and intellectual property. The survey highlights a tension between convenience and governance: AI can streamline the review process, but safeguards are essential to prevent leakage of unpublished material or the inadvertent disclosure of proprietary data. In response, publishers and research associations are emphasizing controlled use: disclosing the role of AI in the review, maintaining access controls, and ensuring that AI-assisted outputs are subject to human verification.
Policy responses emerge: how publishers and institutions are addressing AI in peer review
- Disclosure and accountability: Several publishers permit limited AI assistance provided reviewers disclose the AI's role in their process and accept clear human accountability for the final critique. This approach seeks to balance efficiency with transparency.
- Training and guidelines: Institutions and journals are increasingly offering training sessions on responsible AI use in peer review. These programs cover best practices for interpreting AI-generated summaries, recognizing hallucinations or errors, and understanding when AI suggestions should be questioned or rejected.
- Safeguards against bias and misrepresentation: There is a growing emphasis on ensuring that AI tools do not inadvertently introduce bias into reviews or give undue weight to algorithmically generated conclusions. Reviewers are urged to maintain critical judgment and verify AI outputs against primary data and manuscripts.
- Data governance: Policies often require reviewers to avoid uploading full manuscripts to third-party AI services and to leverage in-house or institution-approved tools that provide stronger data protections.
Regional comparisons: adoption patterns vary by income level, research intensity, and publishing environments
- North America and Western Europe: These regions report higher absolute usage of AI tools in peer review, driven by well-funded research ecosystems, higher publication volumes, and broader access to commercial AI platforms. In many cases, researchers cite time savings and enhanced ability to manage complex papers with dense methodological sections.
- Asia-Pacific: Rapid adoption is observed in several countries with burgeoning research output. Local publishers are experimenting with AI-assisted workflows to handle rising submission volumes while sustaining review quality.
- Africa and Latin America: Adoption is growing, albeit more gradually, with researchers often leveraging AI to bridge gaps in time for peer review and to extract actionable insights from diverse datasets. Cross-border collaborations and regional consortia are helping to disseminate responsible-use practices.
- Emerging economies: In settings where access to bandwidth or premium software is limited, researchers frequently rely on open-source AI tools or institutionally provided platforms, highlighting the importance of inclusive digital infrastructure to scale responsible AI use.
Economic impact: efficiency gains, costs, and the broader publishing ecosystem
- Time savings and throughput: AI-assisted peer review can shorten the evaluation cycle by handling routine tasks such as summarization and initial screening. This may translate into faster manuscript decisions and a more rapid dissemination of research findings.
- Labor dynamics: By handling repetitive or mechanical aspects of reviewing, AI tools can free experts to focus on nuanced critique, potentially enhancing the depth of assessment in complex, interdisciplinary manuscripts.
- Costs and access: For publishers, licensing AI software represents an ongoing expense, while researchers benefit from time savings. Open-source AI options can democratize access but may require more technical expertise to deploy effectively.
- Quality considerations: The economic model of AI-assisted peer review hinges on maintaining rigorous standards. If AI-generated outputs are relied upon without adequate human verification, there is a risk of inaccuracies that could affect reputations and the credibility of published work.
Technological precedent: how past shifts reshaped peer review
The peer-review landscape has weathered multiple technological inflection points. Digital submission systems, plagiarism detection software, and editorial management platforms transformed workflows over the past two decades. Each wave introduced new efficiencies, but also highlighted gaps in standards and governance. The current AI-enabled shift resembles prior transitions in that it promises productivity gains while requiring robust oversight to preserve the integrity of the scholarly record. The survey's findings fit into a longer arc of technology-driven change, where tools support but do not supplant the critical function of expert evaluation.
Public reaction and perceived trust in AI-assisted peer review
- Perceived benefits: Many researchers view AI as a valuable ally for managing large reading loads, identifying key methodological components, and producing concise summaries that can guide more informed judgments.
- Concerns: A sizable portion of academics express caution about accuracy, potential overreliance on algorithmic outputs, and the possibility of overlooking subtle flaws that require expert intuition and domain-specific knowledge.
- Trust-building measures: Transparent disclosure of AI assistance, clear human oversight, and rigorous validation of AI outputs are consistently cited as essential to maintaining trust in the peer-review process.
Methodology and limitations
The survey spanned 1,600 academics across 111 countries, capturing a diverse cross-section of disciplines and career stages. Respondents reported on their use of AI tools in peer review, types of tasks performed with AI, and changes in usage over the preceding year. While the findings illuminate broad trends, they may not capture nuances across subfields or account for variations in editorial cultures between publishers. Additionally, self-reported data can be influenced by respondent interpretation of what constitutes "AI-assisted" review.
Implications for the future of scholarly publishing
- Evolving guidelines: The results reinforce the need for standardized guidelines that define permissible AI use, disclosure requirements, and responsibilities for reviewers and editors. Clear policies will help harmonize practices across journals and publishers.
- Training and capacity building: Universities and research organizations may expand training programs to equip scholars with the skills to evaluate AI-generated outputs critically, recognize errors, and apply AI tools responsibly.
- Editorial workflows: Journals may increasingly integrate AI-assisted triage, screening for methodological rigor, and automated cross-checks as part of standard editorial workflows, while preserving human editorial judgment at decision points.
- Global equity: Ensuring equitable access to responsible AI tools will be crucial to prevent widening disparities in research quality and publication opportunities between well-resourced institutions and those with fewer resources.
Conclusion: a watershed moment in careful integration
The survey signals a watershed moment in scientific peer review. With more than half of researchers using AI tools, and nearly a quarter reporting increased usage in the last year, the academic community stands at a crossroads. The benefits in speed, efficiency, and the ability to manage vast literature are compelling, yet they are balanced by legitimate concerns about confidentiality, data integrity, and the depth of critique. As policy makers, publishers, and researchers navigate this evolving landscape, the path forward will hinge on robust governance, transparent disclosure, and a shared commitment to maintaining the rigor that underpins credible scientific progress. The coming years are likely to see continued experimentation, ongoing refinement of best practices, and a broader, more nuanced understanding of how AI can support high-quality peer review without compromising the core foundations of scholarly integrity.
