Despite the dangers and drawbacks of using AI, such as data breaches, GDPR violations, and ethical objections, we do not want to prohibit the use of ChatGPT and other generative AI tools. Our students will encounter AI in their professional fields, and its use is only increasing. We want our students and teachers to learn how to work with AI. Be cautious, however, in how you use AI and in the knowledge (data) you share.
Apply the following general principles for responsible use of GenAI (KU Leuven, 2024a):
Guidelines for green higher education institutions: the green universities of applied sciences have established, or are in the process of establishing, guidelines specific to their individual institutions. Consult the guidelines of your university before working with AI in education and research.
It is never allowed to submit work that has been fully developed by GenAI as your own work. Doing so is considered fraud, and the definitions of fraud/plagiarism described in the Education and Examination Regulations (EER) of the program apply. You are therefore advised to consult the EER of your own program; the use of GenAI may also be specifically addressed in that document.
In addition to general principles, there are also more specific principles and guidelines per target group to take into account:
It is useful to use a systematic approach when evaluating generative AI output. There are different frameworks for this, such as the CRAAP test, which was specifically developed for evaluating sources, or the EDIT step-by-step plan, which was created for working with output from language models (e.g. ChatGPT).
Use the CRAAP-test completion form. Answer the questions and give each answer a score from 1 to 10. The total score gives you an idea of the quality of the output. The EDIT step-by-step plan is outlined in the Prompt engineering tab of this library guide.
Here is a summary overview of the criteria for critical evaluation from these frameworks (Last & Sprakel, 2024):
You can also use the following decision tree (Aleksandr Tiulkanov, 19 January 2023) to determine whether it is safe to use a language model (e.g. ChatGPT) for your task.
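The gist of that decision tree, as it is commonly summarised, can be sketched in a few lines of Python (the question wording is paraphrased here, not Tiulkanov's exact text):

```python
def safe_to_use_llm(accuracy_matters: bool,
                    can_verify_output: bool,
                    will_take_responsibility: bool) -> bool:
    """Paraphrased sketch of Tiulkanov's (2023) flowchart on whether it is
    safe to use a language model such as ChatGPT for a given task."""
    if not accuracy_matters:
        return True   # e.g. brainstorming: factual errors are harmless
    # When accuracy does matter, you must be able to verify the output
    # AND be willing to take full responsibility for the result.
    return can_verify_output and will_take_responsibility
```

In short: when factual accuracy does not matter for the task, use is safe; when it does, you need both the expertise to verify the output and the willingness to take full responsibility for it.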
The extent to which GenAI should be referenced depends on how it is used.
Follow these guidelines for referencing (and citing) generative AI according to APA guidelines (the most commonly used reference style in green higher education) as established by the APA Working Group:
As with other sources, the name of the author and the year are mentioned in the text. For GenAI, the company that developed the tool is mentioned as the author.
In the reference list, according to the American guideline, in addition to the developer and year, the title (in italics), version (in parentheses), description [in square brackets], and web link are mentioned:
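As an illustration, the pattern above can be sketched as a small formatting helper; the developer name, version, and web link in the example are illustrative placeholders, not prescribed values:

```python
def apa_genai_reference(developer, year, title, url, version=None,
                        description="Generative AI"):
    """Assemble an APA-style reference-list entry for a GenAI tool:
    Developer. (Year). Title (Version) [Description]. URL
    The version is omitted when it is unstated or changes too quickly."""
    version_part = f" ({version})" if version else ""
    return f"{developer}. ({year}). {title}{version_part} [{description}]. {url}"

# Illustrative entry (in a real reference list the title is set in italics):
print(apa_genai_reference("OpenAI", "2024", "ChatGPT",
                          "https://chat.openai.com/", version="GPT-4o"))
```

which prints `OpenAI. (2024). ChatGPT (GPT-4o) [Generative AI]. https://chat.openai.com/`.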
The APA Working Group follows the advice not to mention a consultation date (Accessed on ...). For a report, GenAI may be used over a longer period, and mentioning the consultation date adds little value.
For some GenAI tools, version numbers succeed one another rapidly or are not clearly stated. In such cases, the working group advises omitting the version number. ChatGPT initially used date-based versions, but these are no longer mentioned; instead, the model name can be stated as the version: GPT-4o mini (free) or GPT-4o (paid). The year or date is not always available either; if in doubt, note n.d. (no date).
In the description between square brackets, [Generative AI] is noted for GenAI that processes written prompts. The type of generative AI, such as written or spoken, is not specified. See the manual for examples.
Download 'The APA-guidelines explained Generative AI' - Version 1.0 (pdf)
Referencing is the first step towards transparency (KU Leuven, 2024). A good practice for being transparent is to keep track of when, how, and why you used GenAI. This information may be requested. Keeping track can take various forms, for example:
AI can be misused, for example by students. As digital technology develops, texts generated by GenAI will become increasingly difficult to distinguish from self-written texts. AI detection tools can only indicate whether certain text patterns occur that may suggest the use of GenAI; they are therefore not reliable for proving fraud through GenAI. Nevertheless, certain indicators may point to misuse (Last & Sprakel, 2024):
Misuse of AI can occur not only in education but also outside it, for example in deepfakes or voice cloning. While such manipulations existed before generative AI, they have become much easier to create (Neumann, 2024). How, then, can one recognise a fake generated by AI?
The basics: check your source, verify metadata, analyse the context.
Pay attention to the following characteristics:
Assessment Criteria:
In the era of AI, school students need to be prepared to become active co-creators of AI, as well as future leaders who will shape novel iterations of the technology and define its relationship with society. This is exactly the ambition of UNESCO’s AI competency framework for students – the first ever global framework of its kind. It aims to support the development of core competencies for students to become responsible and creative citizens, equipped to thrive in the AI era. This will help students acquire the values, knowledge and skills necessary to examine and understand AI critically from a holistic perspective, including its ethical, social and technical dimensions.
UNESCO’s AI competency framework for teachers defines the knowledge, skills, and values lecturers must master in the age of AI. Developed around the principles of protecting lecturers’ rights, enhancing human agency, and promoting sustainability, the publication outlines 15 competencies across five dimensions: Human-centred mindset, Ethics of AI, AI foundations and applications, AI pedagogy, and AI for professional learning. These competencies are categorized into three progression levels: Acquire, Deepen, and Create. As a global reference, this tool provides strategies for lecturers to build AI knowledge, apply ethical principles, and support their professional growth.
To effectively implement GenAI in education and research, attention must be given to digital literacy: critical skills necessary for effective learning and working in today's digital society, and thus for working with GenAI. We are entering an era where distinguishing between real and fake is becoming increasingly difficult. As a result, critical thinking is becoming ever more important (Last & Sprakel, 2024).
Make use of the publication "Wise with Technology", a guide with practical methods for ethical reflection on the impact of technology that you can immediately apply in your teaching and research (Last & Sprakel, 2024).
Meanwhile, in addition to digital literacy, there is now discussion of AI literacy: the competencies needed to critically evaluate AI technologies and to communicate and collaborate with them effectively, both at home and in the workplace, so that students are prepared for a world full of AI (Last & Sprakel, 2024). Read more about AI literacy in the publication "Why Generative AI Literacy, Why Now and Why it Matters in the Educational Landscape? Kings, Queens and GenAI Dragons" (Bozkurt, 2024).
In 2009, Professor of Educational Sciences Brand-Gruwel and colleagues created a model describing five core competencies for digital information skills (solving information problems in the digital age, one of the domains within digital literacy). The advent of generative AI necessitated an update to the digital information skills model. Teacher and PhD researcher in the didactics of digital information skills, Josien Boetje, defines the following core competencies in the GenAI information literacy model, her update of Brand-Gruwel's model (Boetje, 2023):
These competencies enable students to use AI as a tool rather than a replacement for their own thinking. Throughout the entire process, monitoring and adjusting are important: reformulating a prompt when an answer is unsuitable, selecting the appropriate AI tools for the problem and desired output, and knowing when to use or not to use AI. Prerequisites for effectively applying the aforementioned core competencies are:
Finally, there is the Digital Competence Framework for Citizens (DigComp) - developed by the Joint Research Centre, a research body of the European Union (Daniels, 2023). The framework describes the digital competencies for citizens for learning, living and working in a digital society. It can serve as a basis for the development of teaching materials and curricula. DigComp has served as the starting point for the European Union's policy plans in the area of high-quality, inclusive and accessible digital education, as described in the Digital Education Action Plan 2021-2027. Additionally, commissioned by HAN University of Applied Sciences, a digital literacy self-assessment for students has been developed based on DigComp (Dutch document). See also the DigComp website.
The conceptual reference model of DigComp:
Applicable regulations concerning copyright, GDPR, and security also apply to AI usage.
When developing and using AI, there is a tendency to collect as much data as possible. The assumption is that the more data used to train the systems, the better these systems become. This means that a perverse incentive may arise to collect, store, and further process too much data, for too long, and unnecessarily. This includes the data entered by the user of AI. Therefore, be extremely cautious when using AI and do not disclose any personal data or commercially sensitive information. Follow the rules when using AI and algorithms.
The AI Act is on its way: the world's first comprehensive law on artificial intelligence. The act will be implemented in phases and will be fully in force by mid-2027. Some AI systems are likely to be banned as early as the end of 2024. The AI Act sets out the rules for the responsible development and use of AI by companies, governments, and other organisations. Find out more about the AI Act through the following sources:
It is somewhat unclear who actually holds the copyright for AI output. According to AI terms of use, users have the right to reproduce the output for any purpose. However, AI output is not always unique. This can lead to potential legal issues if the same output is used by different users. The terms of use for AI also indicate that users are legally responsible for the content of such outputs, meaning users may be liable if they reproduce output containing copyrighted material. However, it is unclear how users can know if this is the case, as AI cannot provide accurate citations or other source references. As a user, you must be aware of these copyright implications. Use AI-generated output as a source of inspiration rather than reproducing it verbatim (Scharwächter, 2023). Search for scientific references yourself and analyse source documents on your own. Interpret, analyse and process the information obtained; do not simply copy it.
When working with GenAI, be aware of the presence of biases in the data used, which find their way into the output. Examples include the tendency to seek, interpret, and remember information that confirms our own beliefs; prejudices based on someone's gender or background; and negative stereotyping. Realise that AI cannot consciously detect these biases: it is unable to determine what is right or wrong, ethical or unethical. Realise too that the responsibility to prevent and remove bias lies partly with AI suppliers and developers, but also partly with the end user. The quality of the input (the prompt) is your responsibility as a user and partly determines the biases that find their way into the output. It is therefore essential that you critically analyse the results produced by GenAI to recognise and correct biases. Adjust your prompt and make it more specific. For example, if you ask an image generation tool for a CEO of a company in the Netherlands, there is a significant chance you will be presented with a man; if that is not what you intend, ask specifically for a female CEO.

Alongside discriminatory content, GenAI may also generate inaccurate content. Users who adopt this inaccurate information may suffer reputational damage or, in extreme cases, even be sued for defamation. It is therefore important to verify the accuracy of AI output against a reliable source and to think critically about the risk of bias for each topic. "These systems are designed to give plausible answers based on statistical analysis - they are not designed to answer truthfully," explains AI expert Carissa Véliz of Oxford University to New Scientist.