Generative AI Policies

INTRODUCTION 

This policy is based on, and refers to, the guidelines on generative AI use in journal publishing provided by:

  • STM: Recommendations for classifying AI use in academic manuscript preparation
  • Elsevier: The use of generative AI and AI-assisted technologies in the review process
  • WAME: Chatbots, generative AI, and scholarly manuscripts

The Institute of Advanced Engineering and Science (IAES) recognizes the importance of artificial intelligence (AI) and its potential to support authors in their research and writing. IAES welcomes the possibilities that generative AI tools offer, particularly for generating ideas, accelerating research and dissemination, analyzing results, improving writing, organizing submissions, and assisting authors who write in a second language. IAES therefore offers the following guidance to authors, editors, and reviewers on the use of such tools; this guidance may evolve given the rapid development of the AI field.

Generative AI tools, including large language models (LLMs) and multimodal models, are developing and evolving rapidly, particularly in their applications for businesses and consumers. While generative AI has significant potential to enhance creativity for authors, it is important to acknowledge the risks associated with the current generation of these tools. Generative AI can produce a wide variety of content, including generated text, synthesized images, audio, and synthetic data. Notable examples of such tools include ChatGPT, Copilot, Gemini, Claude, NovelAI, Jasper AI, DALL-E, Midjourney, and Runway.

Some of the risks associated with the operation of generative AI tools today are:

  1. Inaccuracy and bias: Generative AI tools are fundamentally statistical in nature rather than factual. Consequently, they can introduce inaccuracies, falsehoods (often referred to as hallucinations), or biases that may be difficult to detect, verify, and rectify.
  2. Lack of attribution: Generative AI frequently fails to adhere to the established practices within the global scholarly community regarding the correct and precise attribution of ideas, quotes, or citations.
  3. Confidentiality and intellectual property risks: Currently, generative AI tools are often employed on third-party platforms that may not provide adequate standards for confidentiality, data security, or copyright protection.
  4. Unintended uses: Providers of generative AI may repurpose input or output data generated from user interactions (for instance, for AI training). This practice has the potential to infringe upon the rights of authors and publishers, among others.

 

AUTHORS 

Authors may use generative AI tools (e.g., ChatGPT, GPT models) for specific tasks, such as enhancing the grammar, language, and readability of their manuscripts. However, authors remain responsible for the originality, validity, and integrity of their submissions. When opting to use generative AI tools, it is essential that authors do so in a responsible manner and adhere to our journal's editorial policies concerning authorship and publication ethics. This responsibility encompasses reviewing the outputs produced by any AI tools and ensuring the accuracy of the content.

The IAES endorses the responsible use of generative AI tools, ensuring that high standards of data security, confidentiality, and copyright protection are maintained in instances such as the following:

  • Idea generation and idea exploration 
  • Language improvement 
  • Interactive online search with LLM-enhanced search engines 
  • Literature classification 
  • Coding assistance 

Authors are responsible for ensuring that the content of their submissions meets the required standards of rigorous scientific and scholarly assessment, research, and validation, and that it is their own original work.

Generative AI tools should not be credited as authors, as they are unable to assume responsibility for the content submitted or to manage copyright and licensing agreements. Authorship necessitates accountability for the content, consent to publication through a publishing agreement, and the provision of contractual assurances regarding the integrity of the work, among other essential principles. These uniquely human responsibilities cannot be fulfilled by generative AI tools.  

Authors must clearly acknowledge any use of generative AI tools in their articles by including a statement that specifies the full name of the tool (along with its version number), how it was used, and why. For article submissions, this statement should be placed in either the Methods or Acknowledgements section. This transparency allows editors to assess how generative AI tools were used and whether that use was responsible. The IAES will retain discretion over publication of the work to ensure that integrity and guidelines are upheld.

If an author intends to use an AI tool, they must ensure that it is suitable and robust for their intended purpose. Additionally, they should verify that the terms associated with such a tool offer adequate safeguards and protections, particularly concerning intellectual property rights, confidentiality, and security.

Authors should avoid submitting manuscripts that use generative AI tools in ways that compromise fundamental researcher and author responsibilities, for example: 

  • text or code generation without rigorous revision 
  • synthetic data generation to substitute missing data without robust methodology  
  • generation of any types of content that are inaccurate, including abstracts or supplemental materials 

These types of cases may be subject to editorial investigation.  

IAES currently prohibits the use of generative AI in the creation and manipulation of images and figures, as well as original research data, for inclusion in our publications. The term “images and figures” encompasses pictures, charts, data tables, medical imagery, snippets of images, computer code, and formulas. “Manipulation” refers to augmenting, concealing, moving, removing, or introducing specific features within an image or figure.

Human oversight and transparency must consistently inform the use of generative AI and AI-assisted technologies throughout every stage of the research process. Research ethics guidelines are continuously being revised to reflect advancements in generative AI technologies. The IAES will continue to update our editorial guidelines as both the technology and ethical standards in research develop.

 

EDITORS AND PEER REVIEWERS 

IAES is committed to maintaining the highest standards of editorial integrity and transparency. Entering manuscripts into generative AI systems may breach confidentiality and proprietary rights, and may compromise privacy, including the protection of personal data. Editors and peer reviewers are therefore prohibited from uploading files, images, or information from unpublished manuscripts into generative AI tools. Non-compliance with this policy may infringe the intellectual property rights of the rightsholder.

Editors  

Editors play a crucial role in maintaining the quality and integrity of research content. Consequently, it is imperative that editors maintain confidentiality regarding submission and peer review details.

The use of manuscripts within generative AI systems could pose significant risks related to confidentiality, as well as potential infringements on proprietary rights, data security, and other concerns. Therefore, editors are prohibited from uploading unpublished manuscripts, along with any associated files, images, or information, into generative AI tools.

Peer reviewers 

Peer reviewers, who are selected as subject-matter experts, should refrain from using generative AI to evaluate or summarize submitted articles, or any portion of them, when writing their reviews. Accordingly, peer reviewers must not upload unpublished manuscripts or project proposals, nor any associated files, images, or information, into generative AI tools.

Generative AI may be used only to assist in improving the language of the review; peer reviewers remain accountable at all times for the accuracy and integrity of their assessments.