AI Tools Usage Policy
1. Introduction
The increasing integration of artificial intelligence (AI) tools in research and scholarly publishing necessitates clear guidance on their ethical and transparent use. IJASCE supports responsible use of AI technologies in line with COPE discussion documents, WAME statements, and ICMJE guidance. This policy outlines acceptable use of AI tools by authors, reviewers, and editors.
2. Scope
This policy applies to:
- Authors using AI tools in manuscript preparation, data analysis, or research design
- Reviewers using AI to assist in manuscript evaluation
- Editors using AI to support editorial decision-making
3. Policy Statement and Guidelines
AI tools (e.g., ChatGPT, Grammarly, DeepL, coding assistants) can support research and publishing processes but must not replace human accountability, originality, or ethical judgment.
3.1 Acceptable Use by Authors
Authors may use AI tools to:
- Improve language and grammar
- Assist in data analysis or visualization
- Generate code or scripts (with appropriate documentation)
Authors must disclose any use of AI tools that contributed substantially to:
- Writing or content generation
- Data interpretation or synthesis
- Coding, modeling, or figure creation
Disclosure should be included in a dedicated “AI Usage Statement” section of the manuscript.
3.2 Prohibited Use by Authors
AI must not be listed as a co-author under any circumstances.
Authors must not:
- Use AI to fabricate data or references
- Submit AI-generated content without critical oversight or verification
- Use AI tools to generate entire manuscripts
Authors bear full responsibility for all content, including that generated with AI assistance.
3.3 Use of AI by Reviewers
Reviewers are discouraged from using AI tools to generate or summarize their review reports, particularly if this compromises confidentiality.
If AI tools are used at any stage of the review, reviewers must disclose such use in their confidential comments to the editor. No proprietary or unpublished manuscript content may be uploaded to public AI platforms without explicit permission.
3.4 Use of AI by Editors
Editors may use AI tools to:
- Screen manuscripts for language issues
- Identify common reporting deficiencies
- Support decision-making workflows (e.g., reviewer recommendations)
Editorial AI usage must not compromise confidentiality or override human judgment.
4. Responsibilities
Authors must disclose AI usage transparently and ensure the accuracy and integrity of all content.
Reviewers must maintain confidentiality and integrity when using AI tools.
Editors must verify disclosures and ensure that AI use supports, rather than replaces, editorial oversight.
5. Process for Handling Breaches
Failure to disclose substantial AI use, or misuse of AI tools (e.g., AI-generated data, falsified results), may result in:
- Manuscript rejection or withdrawal
- Retraction of published work
- Notification to the author’s institution
- Sanctions for reviewers or editorial staff where applicable
Investigations will be handled according to COPE misconduct protocols.
6. Related Policies and References
- COPE Discussion Document on AI and Publication Ethics
- ICMJE Recommendations on AI in Scholarly Publishing
- WAME Statement on Use of Generative AI Tools
- IJASCE’s Authorship and Misconduct Policies
7. Review and Updates
This policy was last reviewed in August 2025 and will be reviewed annually or as AI technologies and ethical standards evolve.