Nature Catalysis editorial policies on Artificial Intelligence

Springer Nature outlines how authors, reviewers and editors may use Artificial Intelligence across authorship, images and editorial workflows. The guidance stresses human accountability, transparency and restrictions on generative imagery.

Springer Nature is monitoring developments in Artificial Intelligence and will update its policies as needed. On authorship, large language models such as ChatGPT do not meet the journal's authorship criteria because authors must be accountable for the work. Any use of a large language model should be documented in the Methods section or a suitable alternative part of the manuscript. The use of a tool for Artificial Intelligence assisted copy editing does not need to be declared; in this context, Artificial Intelligence assisted copy editing means tool-assisted improvements to human-generated text for readability, style and basic correctness, without generative editorial work or autonomous content creation. In all cases, humans remain accountable for the final text, and authors must agree that the edits reflect their original work.

On images, the rapid rise of generative Artificial Intelligence has created unresolved legal and integrity issues. Springer Nature journals therefore do not permit Artificial Intelligence generated images and videos for publication at this time. Exceptions include artwork from contracted agencies that created content in a legally acceptable manner; images and videos directly referenced in pieces specifically about Artificial Intelligence, reviewed case by case; and the use of generative tools built on defined scientific datasets that can be attributed, checked and verified, provided ethics, copyright and terms of use are followed. All exceptions must be labeled clearly as generated by Artificial Intelligence within the image field. The policy will be reviewed regularly. Not all Artificial Intelligence tools are generative; the use of non-generative machine learning tools to manipulate, combine or enhance images or figures should be disclosed in the relevant caption upon submission for case-by-case review.

For peer review, the journals emphasize the irreplaceable expertise and accountability of reviewers and the trust-based nature of the process. Given that generative Artificial Intelligence tools may be outdated and can produce nonsensical, biased or false information, and because manuscripts may contain sensitive or proprietary material, reviewers are asked not to upload manuscripts into such tools. If any part of the evaluation was supported by an Artificial Intelligence tool, reviewers should transparently declare that use in their report.

Regarding editorial workflows, Nature Portfolio journals occasionally use internal Springer Nature developed Artificial Intelligence tools to help generate accessory content, such as summary points. This content is always edited and fact-checked by authors and editors to meet publication standards. Any substantive use of Artificial Intelligence beyond accessory content will be declared on a per-article basis. Accessory content can include key points, editorial summaries, glossary terms, plain language summaries and social media posts.


Busting weather myths and artificial intelligence heart attack prediction

A roundup from MIT Technology Review that explores why weather control conspiracies persist and how startups are using Artificial Intelligence to screen CT scans for hidden heart-attack risk. The newsletter also highlights a selection of top tech stories, a design thinking critique, and lighter features.

The Business Times global edition: top stories

The Business Times global edition rounds up top ASEAN and international headlines, from Malaysian developers expanding into Vietnam to Indonesia naming former president Suharto a national hero. OpenAI's chief urges governments to build Artificial Intelligence infrastructure, while COP30 spotlights climate adaptation finance for ASEAN.

Function calling with NVIDIA NIM for large language models

NVIDIA NIM supports function (tool) calling so large language models can return structured function arguments for external services. Enable and control tool calling with environment variables and the tool_choice and tools request parameters.
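Because NIM exposes an OpenAI-compatible chat completions API, tool calling can be sketched with the standard tools/tool_choice request parameters. The endpoint URL, model name and get_weather tool below are placeholders for illustration, not values from the original text.

```python
import json

# Hypothetical tool schema in the OpenAI-compatible format that NIM accepts.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # placeholder external service
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

def dispatch_tool_call(name, arguments_json, registry):
    """Parse the model's structured JSON arguments and invoke the matching function."""
    args = json.loads(arguments_json)
    return registry[name](**args)

# A request against a NIM endpoint might look like this (URL and model are assumptions):
#
# from openai import OpenAI
# client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
# resp = client.chat.completions.create(
#     model="meta/llama-3.1-8b-instruct",
#     messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
#     tools=tools,
#     tool_choice="auto",  # let the model decide whether to call a tool
# )
# call = resp.choices[0].message.tool_calls[0]
# result = dispatch_tool_call(call.function.name, call.function.arguments, registry)

# Local demonstration with a stub implementation in place of a live model:
registry = {"get_weather": lambda city: f"Sunny in {city}"}
print(dispatch_tool_call("get_weather", '{"city": "Oslo"}', registry))
```

The model returns the function name and a JSON string of arguments; the caller, not the model, executes the function and can feed the result back in a follow-up message.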
