Artificial Intelligence (AI) Policy
The journal’s editorial board supports the use of emerging technologies, including Artificial Intelligence (AI), to improve the research and writing process. This policy aims to help authors, reviewers, and editors evaluate whether AI technologies have been used ethically, transparently, and in a manner consistent with scholarly integrity.
This policy may evolve as AI technologies and related academic standards develop. For the latest updates, please visit the journal’s AI Policy page:
https://management.fon.bg.ac.rs/index.php/mng/AIPolicy.
1. Definitions and Legal/Ethical Basis
The journal adopts definitions of academic integrity, plagiarism, authorship, and responsible research conduct in line with the COPE guidelines, Code of Professional Ethics of the University of Belgrade, the Code of Conduct in Scientific Research, and the General Guidelines on the Use of Generative Artificial Intelligence at the University of Belgrade Faculty of Organizational Sciences.
Plagiarism includes, among other forms, “the presentation of another person’s ideas, text, or work, in whole or in part, without appropriate attribution or citation”, as defined in the University of Belgrade Code.
All authors must comply with relevant legal regulations related to:
- copyright and intellectual property,
- protection of personal data,
- ethical standards of academic work,
- confidentiality and responsible handling of sensitive information.
2. Assistive AI vs. Generative AI (GAI)
Assistive AI tools (e.g., proofreading or grammar-improvement software) help refine text already written by the author.
Generative AI (GAI) tools create content based on prompts and may introduce risks such as plagiarism, fabricated content, biases, copyright infringement, or disclosure of sensitive information.
Because content produced by GAI models is derived from unknown datasets and may replicate biases or copyrighted material, authors should use such technologies with caution and full responsibility.
3. Guidance for Authors
Authors must be aware of the risks associated with using Generative AI, including plagiarism, misrepresentation, incorrect or incomplete content, biased outputs, and confidentiality breaches.
The following principles apply:
- Authorship
- GAI cannot be listed as an author; authorship requires human accountability.
- Permitted Use
- GAI may be used for improving the readability, structure, clarity, or language of text created by the authors.
- All use of GAI must remain under complete human oversight, with authors verifying accuracy, validity, and appropriateness of all included content.
- Restricted Use
- The use of GAI to generate substantive scientific content (e.g., complete paragraphs, literature reviews, data analysis, results interpretation, or statistical outputs) is strongly discouraged and must be approached with caution to avoid plagiarism, incorrect content, or violations of ethical standards.
- Transparency and Disclosure
- Authors must clearly disclose any use of GAI in:
- the cover letter, and
- a statement at the end of the manuscript,
specifying:
- the tool used,
- the purpose of its use,
- the scope and level of human revision.
Statement to be included in the manuscript:
“During the preparation of this manuscript, the author(s) used [NAME OF TOOL] for [PURPOSE]. The author(s) reviewed and edited the content as needed and take full responsibility for the accuracy and integrity of the final version.”
- Protection of Data and Confidentiality
- Authors must not include personal data, confidential information, or proprietary materials in prompts used with GAI tools.
4. Guidance for Editors and Reviewers
Editors and reviewers must maintain confidentiality in the peer review process.
- GAI must not be used to generate peer-review reports.
- Any use of AI for limited tasks (e.g., language checking) must be disclosed to the editorial office.
- Reviewers remain fully responsible for the accuracy, fairness, and integrity of their evaluations.
5. Summary
Use of GAI tools may introduce risks such as plagiarism, misattribution, fabricated content, or compromised confidentiality. Authors, editors, and reviewers are fully responsible for ensuring that any AI-assisted content adheres to the highest standards of academic integrity, ethical conduct, and legal compliance.