Considering using AI tools in your AUT postgraduate research? You need to check out the new guidance on appropriate use of AI as published in the latest edition of the AUT PG Handbook (from p.81). Here’s the lowdown.
Appropriate Use of Artificial Intelligence Tools
Artificial Intelligence (AI) tools, including generative AI tools such as ChatGPT, can be used as learning tools when preparing to write your research component.
AI tools can also assist with data modelling, data analysis, and data visualisation.
However, the research component you submit for examination must be substantively your own work. AI tools cannot be used solely to generate content when writing or creating an artwork/artefact, as this constitutes plagiarism.
If you do plan to use AI tools, you must:
- Discuss your use of the tools with your supervisor and demonstrate that you understand each tool’s functionality and any potential risks of its use, including any potential biases;
- Clearly identify and reference any text created or amended by AI tools;
- Clearly identify and reference any images, diagrams, graphics, tables, or other visual media created or amended by AI tools;
- Clearly identify and reference any data generated, amended, or analysed by AI tools, and this use of AI tools must be in alignment with your AUTEC ethics approval;
- Sign the Attestation of Authorship (required on the first page after the table of contents in the thesis) declaring that AI tools have not been used outside of these conditions.
Further details on AI and Academic Integrity can be found on Canvas, courtesy of Te Mātāpuna AUT Library. The Library has also prepared examples of how to cite your use of generative AI tools in APA, Chicago, and Harvard styles.
Guidelines for Use of AI Tools to Support Your Research
Artificial Intelligence (AI) tools have incredible potential across many disciplines, and as a postgraduate researcher you are encouraged to explore this rapidly changing field. AI tools can support the efficiency and quality of research, but you must use these tools ethically, and in an informed and transparent manner.
Examples of Using AI Tools Ethically
- Summarise research articles to help you decide whether they are relevant to your research topic and worth reading in full.
- Organise research article sections into common themes.
- Improve grammar, sentence structure, headings or chapter organisation in your thesis text.
- Test survey questions used in a quantitative study.
- Analyse a large quantitative or qualitative dataset (if such use is approved within your AUTEC approval and any affected participants have given informed consent for their data to be analysed in this way).
- Create visualisations of your data.
- Prompt a Large Language Model (LLM) tool to ask you questions to help you practise for your oral examination.
Being Informed and Transparent About Your Use of AI Tools
The following steps are recommended when using AI tools.
Initial Exploration and Understanding of the AI Tool:
- Research and gain understanding of the specific AI tool’s capabilities, limitations, and intended use.
- Investigate how the tool has been applied in other research projects, particularly those within your field.
Functionality and Suitability Assessment:
- Assess whether the AI tool aligns with your research objectives and methodology.
- Evaluate the tool’s functionality against your specific research needs.
Risk Assessment:
- Identify any potential risks associated with using the AI tool, including data privacy issues, biases in the tool, and ethical considerations.
- Consider any possible negative impacts on research participants or data integrity.
Consultation with Supervisor:
- Present your initial findings and assessments to your supervisor.
- Discuss the potential benefits and risks of using the AI tool in your research.
Ethical Considerations and Compliance:
- Ensure your proposed use of the AI tool complies with AUT Ethics guidelines and data protection laws.
- Any use of an AI tool for data generation or data analysis must be explicitly approved through the ethics process.
- Prepare for the ethics approval process, if required, by drafting a clear explanation of how the AI tool will be used and addressing any ethical concerns.
Training and Skill Development:
- Acquire the necessary skills and knowledge to use the AI tool effectively, for example through LinkedIn Learning (free for AUT students) or other online resources.
- Test the tool on sample data to understand its practical application and limitations.
Disclosure and Referencing:
- Keep detailed records of how the AI tool is used throughout your research process.
- Clearly articulate the role of the AI in your research findings and acknowledge any limitations it may have presented.
- Clearly identify and reference any text created or amended by AI tools.
- Clearly identify and reference any images, diagrams, graphics, tables, or other visual media created or amended by AI tools (for instructions on referencing, see the guide on Canvas).
- Clearly identify and reference any data generated, amended, or analysed by AI tools, and this use of AI tools must be in alignment with your AUTEC ethics approval.
- Sign the Attestation of Authorship (required on the first page after the table of contents in the thesis) declaring that AI tools have not been used outside of these conditions.
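Keeping the detailed records recommended above can be as simple as maintaining an append-only usage log. The sketch below is a minimal illustration only, not an AUT requirement; the filename, field names, and example entry are all assumptions you would adapt to your own project.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log file and fields -- adapt to your own record-keeping needs.
LOG_FILE = Path("ai_usage_log.csv")
FIELDS = ["timestamp", "tool", "purpose", "prompt", "output_summary"]

def log_ai_use(tool, purpose, prompt, output_summary, log_file=LOG_FILE):
    """Append one record of AI-tool use to a CSV log."""
    is_new_file = not log_file.exists()
    with log_file.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new_file:
            writer.writeheader()  # write the header once, on first use
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,
            "purpose": purpose,
            "prompt": prompt,
            "output_summary": output_summary,
        })

# Example entry (illustrative values only)
log_ai_use(
    tool="ChatGPT",
    purpose="Grammar check of Chapter 2 draft",
    prompt="Please correct the grammar in the following paragraph...",
    output_summary="Returned corrected paragraph; two changes accepted.",
)
```

A dated log like this gives you the evidence you need when articulating the AI tool’s role in your thesis and when signing the Attestation of Authorship.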
Preparation for Your Oral Examination:
- Be prepared to discuss and justify the use of the AI tool in your research during your oral examination.
Examples of AI Use That Compromise Academic Integrity
Text Generation and Paraphrasing: Advanced AI-driven text generators such as Generative Pre-trained Transformer (GPT) models can create content that closely resembles existing work. You may not use these tools to generate papers, or sections of papers, that are derivative of others’ work without proper referencing.
Data Fabrication or Manipulation: AI can be used to create or manipulate data sets, making them appear original when they are in fact based on or copied from other researchers’ work without acknowledgement.
Automated Literature Reviews: AI tools can summarise existing literature; if a researcher presents these summaries as their own analysis without citing the original sources, this constitutes plagiarism.
Replicating Code or Algorithms: Using AI to reverse-engineer algorithms or code from published work and then presenting them as one’s own creation is plagiarism.
Misrepresenting AI-Assisted Work as Fully Hand-Crafted: Presenting artwork created with substantial AI assistance as entirely handcrafted or manually created misleads viewers and critics about the nature of the creative process.
Cultural Appropriation: Using AI to amalgamate or replicate cultural symbols, motifs, or aesthetics in ways that constitute cultural appropriation, or that disrespect the cultural significance of the original works, is unethical.
Image and Data Visualisation Plagiarism: AI tools can replicate or slightly modify images, graphs, and data visualisations from other research works. Presenting these as original without crediting the source is unethical.
Using AI to Bypass Plagiarism Detection Software: Some AI tools might be used to alter text minimally to evade detection by plagiarism checking software, while still retaining the essence of someone else’s work.
Ignoring Consent and Autonomy: Using AI to analyse behaviours, trends, or sentiments without considering individual consent undermines personal autonomy and ethical research principles.
Bias and Discrimination: AI systems can perpetuate and amplify biases present in their training data. In criminal justice or human rights research, using biased AI tools can lead to discriminatory conclusions or reinforce stereotypes, particularly against marginalised groups.