Artificial Intelligence Usage Policy
RATS (eISSN: 2980-3063)
At RATS, we recognize the growing role of Artificial Intelligence (AI) and AI-assisted tools in the academic publishing ecosystem. In accordance with the guidance of the Committee on Publication Ethics (COPE) and the principles of transparency, integrity, and reproducibility in open science, the journal sets out the following policy on the use of AI technologies in manuscript preparation, editorial processes, and peer review.
1. Responsible and Transparent Use
AI technologies—including but not limited to generative AI and large language models (LLMs)—may be used as supportive tools during the stages of research design, manuscript writing, editing, translation, and production. However, their use must be:
- Fully disclosed in the manuscript, including the tool’s name, version, provider, and specific purpose (e.g., grammar correction, data visualization, literature summarization).
- Reviewed and approved by human authors or editors, who must remain fully accountable for the accuracy, originality, and integrity of the content.
- Declared in a dedicated section such as “Acknowledgments,” “Methods,” or “Author Contributions,” and, if relevant, in a separate “AI Usage Disclosure Statement.”
2. Limitations and Risks
Authors, editors, and reviewers must exercise critical oversight when using AI tools, particularly with regard to:
- Fabricated or inaccurate references
- Unverifiable or biased outputs
- Plagiarism risk or improper paraphrasing
- Inconsistent or irrelevant content
- Hallucinations (false information generated by the model)
Content generated or assisted by AI must always be validated and corrected by a qualified human contributor. RATS does not accept AI or non-human entities as credited authors under any circumstances.
3. Privacy, Data Protection, and Ethical Considerations
When using AI tools, authors are encouraged to:
- Select platforms or tools that offer data encryption, content privacy, and local or secure server processing
- Avoid uploading unpublished data or sensitive personal information into open-access AI systems
- Ensure that no part of the content generated by AI violates ethical research principles or participant confidentiality
4. Editorial and Peer Review Use of AI
Editors and reviewers may use AI-based tools (e.g., for language refinement, plagiarism checking, or data validation), but:
- These tools are not a substitute for expert judgment
- Any use must preserve confidentiality, impartiality, and editorial independence
- AI tools should never be used to generate peer review content
5. Citation and Referencing of AI Tools
When AI tools significantly contribute to the development of a manuscript, their use must be properly referenced, including:
- Tool name, developer, version, and access date
- A description of how the tool was used
- If applicable, a formal citation as recommended by the tool’s developer
6. Policy Evolution and Oversight
Given the rapid evolution of AI technologies, this policy will be periodically reviewed and updated. The Editorial Board of RATS remains committed to promoting ethical and responsible use of emerging technologies in scientific communication.