AI Policy
Policy on the use of artificial intelligence and AI-supported technologies
The editorial policy of the journal Algologia regarding the use of artificial intelligence (AI) technologies is based on the recommendations of COPE, WAME, the JAMA Network, and ICMJE, as well as on the principles of transparency and responsibility in the use of AI tools in scientific research and publishing.
If AI or AI-supported tools were used in preparing materials for publication, the Editorial Board requires authors to declare this by including a corresponding statement in the manuscript (placed before the list of references). The statement must name the AI tools used and describe their role. Authors bear full responsibility for the content, accuracy, and originality of their materials. Failure to disclose the use of AI is considered a violation of the principles of transparency and research ethics and may be grounds for rejection of the manuscript. Authors who did not use generative AI and/or AI-supported technologies during manuscript preparation are not required to provide such a statement.
The use of AI tools for language editing, style improvement, or translation is permitted only if authors retain full control over the content, critically review the results, and take responsibility for all statements in the article. The use of generative AI is strictly prohibited in cases where it may undermine the reliability or academic integrity of the work. This applies in particular to the generation of scientific content (formulation of research objectives, description of methods, analysis of results) without proper author verification, as well as to the concealment of plagiarism through paraphrasing or automated rewriting of others’ texts. The use of generative AI to create or modify images and illustrations in submitted manuscripts is prohibited.
To protect authors’ rights and the confidentiality of their research, the Editorial Board currently does not allow the use of generative AI or AI-supported technologies (e.g., ChatGPT or similar services) by reviewers or editors during the peer review or manuscript evaluation process. Reviews must be the result of an independent expert assessment conducted by a human who assumes responsibility for the conclusions and recommendations. This approach to editorial processing may be revised in the future as relevant AI tools evolve.
If inappropriate or undeclared use of AI is detected during manuscript review, the manuscript will be rejected. If such use is identified in a published article, it will be treated as falsification and data manipulation, and the journal’s retraction policy will be applied.
