More than half of researchers now use artificial intelligence in peer review

A global survey from publisher Frontiers reports that more than 50% of researchers have used artificial intelligence while reviewing manuscripts, often in ways that conflict with existing journal guidance. The findings highlight growing pressure on publishers to update peer review policies and clarify responsible use of large language models.

More than 50% of researchers have used artificial intelligence while peer reviewing manuscripts, according to a survey of some 1,600 academics across 111 countries conducted by the publisher Frontiers. Nearly one-quarter of respondents said that they had increased their use of artificial intelligence for peer review over the past year. The publisher, based in Lausanne, Switzerland, says the results confirm suspicions that tools powered by large language models such as ChatGPT have become embedded in review workflows, even as many policies caution against uploading confidential manuscripts to third-party platforms.

Frontiers’ director of research integrity, Elena Vicario, says the poll shows that reviewers are using artificial intelligence in peer-review tasks “in contrast with a lot of external recommendations of not uploading manuscripts to third-party tools”. Some publishers, including Frontiers, allow limited use of artificial intelligence in peer review but require reviewers to disclose it, and they typically forbid uploading unpublished manuscripts to chatbot websites to protect confidentiality and intellectual property. The survey report urges publishers to adapt policies to this emerging “new reality”, and Frontiers has launched an in-house artificial intelligence platform for reviewers to use across its journals, with Vicario emphasizing the need for clear guidance, human accountability and training. A spokesperson for Wiley says the company agrees that publishers should communicate best practices and disclosure requirements, and notes that in a similar survey it found that “researchers have relatively low interest and confidence in artificial intelligence use cases for peer review”.

Among respondents who use artificial intelligence in peer review, Frontiers’ survey found that 59% use it to help write their peer-review reports, 29% use it to summarize the manuscript, identify gaps or check references, and 28% use it to flag potential signs of misconduct, such as plagiarism and image duplication. Research-ethics scholar Mohammad Hosseini describes the survey as a valuable attempt to gauge both the acceptance and the prevalence of artificial intelligence in different review contexts. Separate experiments are probing how well large language models actually perform as reviewers. Engineering scientist Mim Rahimi tested GPT-5 on a Nature Communications paper he co-authored, trying four prompting set-ups that ranged from simple review instructions to supplying relevant literature so the model could assess novelty and rigour. He found that GPT-5 could mimic the structure and polished language of a review but failed to provide constructive feedback and made factual errors, and that the more complex prompts produced the weakest reviews. A separate study reported that reviews generated by artificial intelligence for 20 manuscripts broadly matched human assessments but did not deliver detailed critique, leading Rahimi to conclude that these tools “could provide some information, but if somebody was just relying on that information, it would be very harmful”.

Impact Score: 62

OpenClaw pushes autonomous Artificial Intelligence agents into enterprises

OpenClaw’s rapid growth is accelerating interest in persistent, self-hosted autonomous agents that run continuously instead of waiting for prompts. NVIDIA is positioning NemoClaw as a more secure reference implementation for organizations that want local control, auditability and hardened deployment defaults.

Indiana launches Artificial Intelligence business portal

Indiana is rolling out IN AI, a statewide portal meant to help employers adopt Artificial Intelligence with practical guidance, workshops and peer support. State leaders and business groups are positioning the effort as a way to raise productivity, wages and job growth while keeping workers at the center.

Goodfire launches model debugging tool for large language models

Goodfire has introduced Silico, a mechanistic interpretability platform designed to let developers inspect and adjust model behavior during development. The company is positioning it as a way to give smaller teams deeper control over open-source models and more trustworthy outputs.

Nvidia launches Nemotron 3 Nano Omni for enterprise agents

Nvidia has introduced Nemotron 3 Nano Omni, a multimodal open model designed to support enterprise agents that reason across vision, speech and language. The launch extends Nvidia’s push beyond hardware into models and services while targeting more efficient agentic workflows.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.
