GENERATIVE AI POLICY

Polyhedron International Journal in Mathematics Education (PIJME)
INTRODUCTION
This policy is established to ensure scientific integrity, transparency, and accountability in the use of Generative Artificial Intelligence (AI) tools throughout all stages of the publication process at the Polyhedron: International Journal in Mathematics Education (PIJME), published by Nashir Al-Kutub Indonesia Publisher. This policy is aligned with standards set by Elsevier, Springer Nature, the Committee on Publication Ethics (COPE), and internationally recognized best practices in scholarly publishing. PIJME is committed to ensuring that the use of AI tools does not compromise the originality, integrity, or scientific rigor of published research.
1. DEFINITIONS AND SCOPE
Generative Artificial Intelligence (Generative AI), within the context of this policy, refers to systems based on Large Language Models (LLMs) and similar technologies capable of generating text, images, code, or other content. Tools covered by this definition include, but are not limited to:

  • Text-based AI tools: ChatGPT, Claude, Gemini, Microsoft Copilot, and equivalents.
  • Image-based AI tools: DALL-E, Midjourney, Stable Diffusion, and equivalents.
  • AI-based code generation tools: GitHub Copilot, Cursor, and equivalents.
  • AI-based paraphrasing and grammar tools: Grammarly AI, QuillBot, and equivalents.

This policy applies to all components of a scholarly manuscript, including but not limited to: abstract, introduction, literature review, methodology, data analysis, results, discussion, conclusion, and references.
2. POLICY FOR AUTHORS
2.1 Core Principles
Authors bear full and non-transferable responsibility for the accuracy, originality, and integrity of all content submitted in their manuscripts, including any sections generated or assisted by Generative AI tools. Scientific accountability cannot be delegated to any AI system under any circumstances.
2.2 Permitted Uses
Authors are permitted to use Generative AI tools for the following purposes, provided that such use is explicitly and transparently disclosed in accordance with Section 2.4:

  • Improving grammar, writing style, and spelling (language editing and proofreading).
  • Enhancing readability and clarity of text originally written by the authors.
  • Assisting with translation from the authors' native language into English as a supplementary tool, subject to thorough human review and revision.
  • Generating a preliminary structural outline that is subsequently developed entirely by the authors.
  • Assisting with visualisation and formatting of data already collected and analysed by the authors.

2.3 Prohibited Uses
Authors are strictly prohibited from:

  • Listing any Generative AI tool as an author or co-author of the manuscript.
  • Generating research data, empirical findings, or statistical results entirely through AI tools.
  • Creating, fabricating, or manipulating research figures, tables, or graphs using AI without full disclosure.
  • Producing a literature review using AI without manual verification against primary sources.
  • Submitting a manuscript whose content is wholly or substantially generated by AI without meaningful intellectual contribution from the human authors.
  • Using AI to circumvent plagiarism or AI detection systems (e.g., automated mass paraphrasing of existing works).
  • Using AI-generated content in the Methods or Results sections without explicit disclosure and human verification.

2.4 Mandatory Disclosure Requirements
All use of Generative AI tools must be explicitly disclosed within the manuscript. The disclosure statement must be placed in a dedicated section titled "AI Use Disclosure", positioned after the Conclusion and before the References section, using the following standardised format:
If AI tools were used:
"During the preparation of this manuscript, the authors used [name of AI tool, version, developer] for [specific purpose, e.g., language editing of the Introduction and Discussion sections]. All AI-generated or AI-assisted content was subsequently reviewed, revised, and verified by the authors. The authors take full and sole responsibility for the integrity of the published content. No AI tool was used as an author or co-author in this work."

If no AI tools were used (mandatory negative declaration):
"The authors declare that no Generative AI tools were used at any stage in the preparation of this manuscript."
Omission of the AI Use Disclosure statement — whether affirmative or negative — constitutes a breach of this policy and will result in the manuscript being returned to the authors prior to peer review.
2.5 Authorship and CRediT Statement
In the Author Contribution Statement, each author must identify their individual intellectual contributions using the CRediT (Contributor Roles Taxonomy) framework and explicitly affirm that scientific accountability has not been delegated to any AI tool. AI tools may not be listed under any CRediT role category.

3. POLICY FOR REVIEWERS
3.1 Principles of Confidentiality and Integrity
Reviewers receive manuscripts in a strictly confidential capacity. The use of Generative AI tools during the peer review process poses serious risks of breaching the confidentiality of unpublished manuscript content, potentially exposing sensitive findings to third-party AI training datasets. Reviewers are therefore subject to heightened restrictions.
3.2 Permitted Uses
Reviewers are permitted to use Generative AI tools solely for:

  • Improving the language and writing quality of review reports that have been independently written by the reviewer.
  • Searching for general contextual information from publicly available and already-published literature.

3.3 Prohibited Uses
Reviewers are strictly prohibited from:

  • Uploading, copying, or entering any part of a manuscript's content into any Generative AI platform.
  • Using AI to automatically generate review comments, evaluations, or editorial recommendations.
  • Delegating any part of the intellectual peer review process to any AI system.
  • Using AI in any manner that could expose confidential manuscript content to third parties or external systems.

3.4 Reporting Obligations
If a reviewer uses AI tools for language editing within their review report, this must be reported to the handling editor through the online submission system at the time of submission. Reviewers who are unable to comply with this policy must promptly return the manuscript to the editor without conducting a review.

4. POLICY FOR EDITORS
4.1 Editorial Responsibilities
Editors are the primary guardians of the journal's integrity standards and are responsible for upholding and enforcing this policy at all stages of the editorial process, from initial desk screening through to the final publication decision.
4.2 Permitted Uses
Editors may use Generative AI tools for:

  • Checking language clarity and readability of editorial communications (decision letters, reviewer invitations).
  • Assisting with identification of potential reviewers based on general keyword analysis from publicly available information.
  • Analysis of submission metadata and editorial workflow trends for planning purposes.

4.3 Prohibited Uses
Editors are prohibited from:

  • Using AI to make editorial decisions (accept, revise, reject) without substantive human intellectual evaluation.
  • Entering manuscript content under review into any AI system without explicit author consent.
  • Using AI to replace or substitute for any part of the genuine peer review process.


5. AI DETECTION PROCEDURE
This section establishes the mandatory, systematic, and verifiable AI detection mechanisms applied to all manuscripts submitted to PIJME. Detection is not voluntary or discretionary: it is a required component of the editorial workflow for every submission.
5.1 Approved Detection Tools
PIJME uses the following approved tools to assess AI-generated content in submitted manuscripts:

Detection Tool        | Primary Function                                  | Applied Stage
Turnitin AI Detection | AI-generated text detection; similarity analysis  | Desk screening (all submissions)
iThenticate           | Advanced similarity and AI content analysis       | Desk screening; post-revision check
Copyleaks AI Detector | AI content percentage scoring; sentence-level flagging | Secondary verification (flagged manuscripts)
GPTZero               | AI authorship probability scoring                 | Secondary verification (flagged manuscripts)

The Editorial Board reserves the right to add, replace, or supplement detection tools as technology evolves, provided that any changes are announced on the journal's website with a minimum of 30 days' notice.
5.2 Detection Thresholds and Actions

AI Content Score | Editorial Action
Below 20%        | Manuscript proceeds to peer review. Minor AI-assisted content is acceptable provided disclosure has been made per Section 2.4.
20%–40%          | Manuscript is returned to authors for revision. Authors must revise AI-flagged sections and resubmit with a revised AI Use Disclosure and a new detection report confirming a score below 20%. Response required within 2 weeks.
Above 40%        | Manuscript is immediately rejected. The manuscript is considered to have been substantially generated by AI without adequate human intellectual contribution. Resubmission of the same manuscript will not be accepted.
Any score with undisclosed AI use | Manuscript is immediately rejected regardless of the AI content score, and the case is escalated to the misconduct procedure under Section 6.

5.3 Detection Workflow
The mandatory detection workflow for each submission is as follows:
Stage 1 — Automated Screening (all submissions)
Upon receipt of a manuscript, the editorial system automatically initiates an AI detection scan using Turnitin AI Detection simultaneously with the plagiarism similarity check. Both reports are generated before any human desk review is conducted.

Stage 2 — Desk Review Assessment
The Editor-in-Chief or handling Section Editor reviews both the AI detection report and the plagiarism similarity report as part of the desk screening. The editor assesses the consistency between the detected AI content level and the disclosure statement provided by the authors.
Stage 3 — Secondary Verification (where applicable)
If the primary detection report returns an AI content score of 20% or above, or if there is a discrepancy between the declared AI use and the detected AI content level, the manuscript is subjected to secondary verification using Copyleaks and/or GPTZero. Secondary verification results are documented in the editorial record.
Stage 4 — Author Notification
The corresponding author is notified of the detection results and required action within 5 business days of manuscript submission. Where resubmission is required following detection, the editorial clock for peer review begins only upon receipt of the revised manuscript and new detection report.
Stage 5 — Documentation and Audit Tracking
All AI detection reports — including tool name, version, date of scan, and score — are retained in the journal's editorial management system as part of the permanent editorial record for each manuscript. This audit trail is available for inspection in the event of a post-publication integrity query.

5.4 Limitations and Human Judgement
PIJME acknowledges that AI detection tools are probabilistic instruments and are not infallible. Detection scores provide an evidence-based indicator, not a definitive judgement. Accordingly:

  • No manuscript will be rejected based solely on an AI detection score without human editorial review of the flagged content.
  • Editors are required to exercise independent professional judgement in assessing the context, nature, and significance of flagged content before taking any action.
  • Authors have the right to contest a detection result in accordance with the appeals procedure in Section 7.

 

6. VIOLATIONS AND CONSEQUENCES
Violations of this policy are handled in accordance with COPE guidelines and the journal's research integrity standards. The severity of consequences is proportionate to the nature and degree of the violation.

 

6.1  Classification of Violations

Violation                                                | Severity
Failure to include AI Use Disclosure statement           | Minor
Inaccurate or incomplete AI Use Disclosure               | Moderate
AI-generated content above threshold without disclosure  | Serious
AI used to fabricate data, results, or references        | Critical
AI used to circumvent detection systems                  | Critical
Repeated violations across multiple submissions          | Aggravated

 

6.2  Consequences for Authors

Violation Severity | Consequence
Minor              | Manuscript returned for correction of disclosure statement. No further action if resolved within 5 business days.
Moderate           | Manuscript returned for full revision. Author issued a formal written warning. Violation recorded in the editorial system.
Serious            | Manuscript immediately rejected. Author subject to a 12-month submission ban. Violation recorded.
Critical           | Manuscript immediately rejected or article retracted if already published. Author subject to a 24-month submission ban. Formal notification issued to the author's affiliated institution. Case referred to COPE for guidance.
Aggravated         | Permanent ban from submitting to PIJME. Notification to affiliated institution and, where applicable, to other journals in the publishing network.

6.3 Consequences for Reviewers
Reviewers who violate this policy are subject to:

  • Immediate withdrawal of the reviewer invitation.
  • Permanent removal from the PIJME reviewer database.
  • Notification to the Editor-in-Chief of other journals in the publishing network.
  • In cases of serious confidentiality breach, formal notification to the reviewer's affiliated institution.

6.4 Consequences for Editors
Editors who violate this policy are subject to:

  • Re-examination of all editorial decisions made during the period of non-compliance.
  • Formal disciplinary review in accordance with the policies of Nashir Al-Kutub Indonesia Publisher.
  • Mandatory retraining on AI policy compliance before resuming editorial duties.

6.5 Post-Publication Violations

If a violation is discovered after an article has been published, the Editorial Board will:

  • Issue an Expression of Concern immediately upon opening an investigation.
  • Conduct a full investigation in accordance with COPE Retraction Guidelines, with a target resolution of 60 days.
  • Publish a formal Retraction Notice if the investigation confirms a critical or aggravated violation.
  • Ensure that the retraction notice and the original article remain permanently linked and accessible via their respective DOIs.

7. APPEALS PROCEDURE
Authors, reviewers, or editors who consider that an action taken under this policy is factually incorrect or procedurally flawed may submit a formal written appeal to the Editor-in-Chief within 14 calendar days of notification of the decision. The appeal must:

  • State the specific grounds for the appeal with supporting evidence.
  • Include any technical or contextual information relevant to the AI detection results.
  • Be submitted via the online submission system or to the editorial email with the subject line: AI POLICY APPEAL — [Manuscript ID].

All appeals will be acknowledged within 5 business days and resolved within 30 business days. Appeals based solely on disagreement with the detection tool's probabilistic output, without supporting evidence, will not be entertained.

 

8. LEGAL BASIS AND REFERENCE STANDARDS

Reference                                        | Relevance
COPE Guidelines on AI and Authorship (2023)      | Ethical guidance on authorship and AI use in scholarly publications.
Elsevier AI Policy (2024)                        | Prohibition of AI as author; mandatory disclosure requirements.
Springer Nature AI Policy (2023)                 | Standards for transparency in the use of generative AI.
ICMJE Recommendations                            | Authorship criteria and author responsibilities.
UNESCO Recommendation on AI Ethics (2021)        | Ethical framework for AI development and use.
COPE Retraction Guidelines (2022)                | Procedures for post-publication integrity investigations.
CRediT (Contributor Roles Taxonomy)              | Framework for specifying individual author contributions.
Creative Commons Attribution 4.0 (CC BY 4.0)     | License governing all content published in PIJME.

9. POLICY REVIEW AND UPDATES
Given the rapid pace of development in Generative AI technology, this policy will be reviewed and updated at a minimum every twelve (12) months, or whenever there are significant changes in:

  • Scholarly publishing industry standards or indexing requirements.
  • COPE guidelines on AI and publication ethics.
  • The capabilities or availability of AI detection tools.
  • Applicable legal or regulatory frameworks governing AI use in research.

 

All journal stakeholders — authors, reviewers, and editors — will be formally notified of any policy changes through the journal's official website and via official correspondence.

 

10. AGREEMENT AND ACCEPTANCE

By submitting a manuscript, accepting a reviewer invitation, or undertaking editorial responsibilities at PIJME, all parties are deemed to have read, understood, and fully agreed to this policy in its current version.

 

This policy is effective for all manuscripts submitted to PIJME and is subject to annual review.

For enquiries: polyhedron.journal@gmail.com  |  pijme@nakiscience.com

Subject line: AI POLICY ENQUIRY — [Manuscript ID or Name]