Scientific journals are now asking authors to specify whether, and if so, how, they have used artificial intelligence to write articles.
The scientific journal Nature recently reported on a Japanese research group that had published an article on the arXiv preprint server. Not only was the preprint article written using artificial intelligence (AI), but the literature search, hypothesis formulation and testing, as well as the explanation of conclusions and discussion of results were all carried out by AI (1, 2). This is reportedly the first time that AI has been behind the entire research process (1). The quality of the article is uncertain because it has not been peer-reviewed – except by the AI chatbot itself. According to Nature, AI handled that aspect as well (1).
It seems like a long time ago that we would have a chuckle about AI and ChatGPT (3). Now we are reading daily about new and improved platforms for such technology, new potential applications and new threats. Many have called for regulations and control mechanisms (4), and such measures are gradually being put in place. This spring, the EU issued the Living Guidelines on the Responsible Use of Generative AI in Research (5). Research institutions are expected to facilitate responsible use, while those funding the research should support transparency. However, any regulations pertaining to AI will always be at least one step behind the technology itself.
How should scientific journals approach the use of AI in research? We must ensure transparency: readers should be told which AI tools have been used in all parts of the scientific process (6, 7). Consequently, most medical science journals, including the Journal of the Norwegian Medical Association, now include a section on the use of AI in their author guidelines. In line with guidelines from, for example, the Committee on Publication Ethics (COPE) and the International Committee of Medical Journal Editors (ICMJE), we ask authors always to disclose the use of such tools in the preparation of a manuscript (8).
So far, no authors have reported such use in our journal. This could be because no one is using AI, because few have noticed the changes to the author guidelines, or because authors are unsure of exactly what they should disclose. If so, they are not alone: in a study published this summer, around 300 academics were asked what they thought should be disclosed (9). Only 20 % believed they should report using ChatGPT to correct grammar, while 50 % responded that they should be transparent about asking the language model to rewrite a manuscript. Authors have rightly called for more specific guidelines on exactly what should be disclosed (9).
Such guidelines are on the way: the International Association of Scientific, Technical, and Medical Publishers (STM), an umbrella organisation for publishers, recommends that 'basic author support', such as improving grammar, should not need to be disclosed (6). However, distinguishing between grammatical improvements, rewriting and generating new text (7, 9) is not always easy. It may therefore be advisable to disclose all usage. The European Association of Science Editors (EASE) gives a good example in its guidelines (7): the authors report having used ChatGPT to improve the readability of a manuscript and then reviewing and checking the text themselves to confirm its accuracy and integrity. In our journal, we recommend that authors do the same. If you have asked a language model to generate text, you must clearly describe what you did and how the quality of the text was checked. We advise caution with such an approach: language models draw on published material, and even though the generated text is not, strictly speaking, plagiarism, it can be difficult to know whether the right people have been credited, and the language model cannot be cited as a primary source (7).
And what about peer review? Several publishers and organisations prohibit all use of AI in the peer review process (7). However, language models can be a great help to busy reviewers if used correctly (10). In our journal, we currently ask reviewers to contact us if they wish to use any form of AI (11). Unpublished material and peer reviews must never be uploaded to open systems, as this would constitute a breach of confidentiality.
Finally: our journal has not yet used any form of AI in processing manuscripts, but if we do, we will of course inform our readers.