
Artificial intelligence: Responsible use

A guide on the use of AI (artificial intelligence) in education and research at the green universities of applied sciences

Guidelines

Despite the dangers and drawbacks of using AI, such as data breaches, GDPR violations, and ethical objections, we do not want to prohibit the use of ChatGPT and other generative AI. Our students will also encounter AI in their professional fields, and its use is only increasing. We want our students and teachers to learn how to work with AI. However, be cautious when using AI and with the knowledge (data) you share.

Apply the following general principles for responsible use of GenAI (KU Leuven, 2024a):

  • Be transparent about the use of GenAI.
  • Verify the correctness of the generated output with attention to proper source citation.
  • Respect copyrights, privacy, and confidentiality by not inputting copyrighted material, personal data, or confidential information on platforms managed by external parties. You can only do this if you have explicit permission from those who own or have rights to that data, information, or material.
  • Take responsibility for the correct use of GenAI (primarily as help and support) and for the output you publish (in the context of research) or submit as a student (in the context of education).
  • Use GenAI thoughtfully and considerately, keeping in mind the sustainability ethos of green higher education. GenAI consumes a great deal of energy and water; also read the article about the invisible costs of AI.

Guidelines for green higher education institutions: The green universities of applied sciences are in the process of establishing guidelines or have established guidelines specific to the individual institution. Consult the guidelines of your university before working with AI in education and research.

Submitting work that has been fully developed by GenAI as your own is never allowed. Doing so is considered fraud, and the definitions of fraud/plagiarism described in the Education and Examination Regulations (EER) of the program apply. You are therefore advised to consult the EER of your own program, which may also specifically describe the use of GenAI.

In addition to general principles, there are also more specific principles and guidelines per target group to take into account: 

  • For a student (KU Leuven, 2024b), the following additionally applies:
    • You are fully responsible for what you submit;
    • You ensure that your student product unambiguously allows for the evaluation of which competencies you as a student have acquired;
    • You certainly do not use GenAI during on-campus exams or other evaluations where it has been indicated that the use of GenAI is not allowed;
    • The use of GenAI as a language assistant for processing or improving self-written texts is allowed when the model does not add new content;
    • GenAI may be used as a search robot to get initial information about a topic or for a first step in searching for literature;
      • When you subsequently search for scientific references yourself, analyze the source documents yourself, interpret and process the obtained information without simply copying it, and compose a text by yourself, you do not need to mention the use of GenAI;
      • However, when you do literally adopt certain parts of GenAI output (e.g., due to the nature of the assignment), you mention your sources and cite;
    • When the teacher EXPLICITLY allows it, you may have code generated by GenAI as a partial aspect within a larger assignment. 
  • For a teacher (KU Leuven, 2023c), it additionally applies that:
    • You clearly inform students about whether or not they are allowed to use GenAI for student products such as visual, writing, and programming assignments.
  • For a researcher (KU Leuven, 2023a; KU Leuven 2023c), it additionally applies that: 
    • When using GenAI, you respect the Netherlands Code of Conduct for Research Integrity:
      • You report results and methods, including the use of external services or AI and automated tools, in a way that is compatible with accepted standards in the field and facilitates verification or replication, if applicable;
      • It is a violation of scientific integrity when you conceal the use of AI or automated tools in creating content or writing publications;
    • You should take into account the views of journals/publishers/grant providers regarding the use of GenAI; although there is some consensus that technology can be used to support the writing of scientific texts, there are also journals where the use of GenAI is (currently) not allowed.
    • STM - the International Association of Scientific, Technical & Medical Publishers - has established Ethical and Practical Guidelines for the Use of Generative AI in the Publication Process.  


Critical evaluation

It is useful to use a systematic approach when evaluating generative AI output. There are different frameworks for this, such as the CRAAP test, which was specifically developed for evaluating sources, or the EDIT step-by-step plan, which was created for working with output from language models (e.g. ChatGPT).

Use the CRAAP-test completion form: answer the questions and give each a score from 1 to 10. The total score gives you an idea of the quality of the output. The EDIT step-by-step plan is outlined in the Prompt engineering tab of this library guide.

Here is a summary overview of the criteria for critical evaluation from these frameworks (Last & Sprakel, 2024): 

  • RELIABILITY: Is the information from a reliable source? Are there signs of manipulation/hallucination? 
  • VALIDITY: Are the data and methods used to obtain the information valid and reliable?
  • RELEVANCE: Does the information align with the context and purpose of the question or problem? Is the information current?
  • CREDIBILITY: Is the information factually correct and free of errors? Are there sources that support the information? 
  • CONSISTENCY: Is the information consistent with other known facts or knowledge? Are there contradictions? 
  • PERSPECTIVE: Are different viewpoints and angles addressed in the information? Is there bias?
  • STRUCTURE AND LOGIC: Is the information well-structured and logically presented? Are there clear connections between ideas?
  • ARGUMENTATION: Are the arguments well-supported and convincing? Have counter-arguments been presented and refuted?
  • QUALITY: Is the output of high quality? For example, are there errors in the images, or does the voice sound slightly robotic?
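The scoring approach above can be sketched in a few lines of code. This is a minimal illustration, not part of the official CRAAP form: the criterion names mirror the list above, and the uniform example score is hypothetical.

```python
# Sketch: tallying a CRAAP-style evaluation of GenAI output.
# Each criterion is scored from 1 to 10, as on the completion form;
# the total gives an impression of the overall quality of the output.

CRITERIA = [
    "reliability", "validity", "relevance", "credibility", "consistency",
    "perspective", "structure_and_logic", "argumentation", "quality",
]

def total_score(scores: dict) -> int:
    """Sum per-criterion scores (each 1-10) into an overall quality score."""
    for name in CRITERIA:
        value = scores[name]
        if not 1 <= value <= 10:
            raise ValueError(f"{name}: score must be 1-10, got {value}")
    return sum(scores[name] for name in CRITERIA)

scores = {name: 7 for name in CRITERIA}  # example: a uniform score of 7
print(total_score(scores))  # 9 criteria x 7 = 63 out of a maximum of 90
```

The total is only an indication; a low score on a single criterion (for example, a fabricated reference under RELIABILITY) can be reason enough to discard the output.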

You can also use the following decision tree (Aleksandr Tiulkanov, 19 January 2023) to determine whether it's safe to use a language model (e.g. ChatGPT) for your task.

Citation and being transparent

The extent to which GenAI should be referenced depends on how it is used.

  • When GenAI is used as a writing aid, such as for improving texts, a reference is not applicable.
  • When GenAI is used to obtain initial information about a topic or as a first step in searching for literature, and you then search for the source documents yourself (which is preferred), you reference these documents and do not need to mention the use of GenAI.
  • When GenAI is used to create texts (you literally adopt certain parts of GenAI output) or when the creation of output using GenAI is explicitly allowed, then it should be referenced.

Follow these guidelines for referencing (and citing) generative AI according to APA guidelines (the most commonly used reference style in green higher education) as established by the APA Working Group:

As with other sources, the name of the author and the year are mentioned in the text. For GenAI, the company that developed the tool is mentioned as the author.

In the reference list, according to the American guideline, in addition to the developer and year, the title (in italics), version (in parentheses), description [in square brackets], and web link are mentioned:

Developer. (Year). AI tool name (Version) [Generative AI]. https://xxxx

 

The APA Working Group follows the advice not to mention a consultation date (Accessed on ...). For a report, GenAI may be used over a longer period, and mentioning the consultation date adds little value.

For some GenAI tools, version numbers follow one another in rapid succession or are not clearly stated. In such cases, the working group advises omitting the version number. ChatGPT used a date-based version in its first year, but this is no longer mentioned; instead, the model name can be stated as the version: GPT-4o mini for the free tier, GPT-4o for the paid tier. The year or date is not always available either; if in doubt, note n.d. (no date).

In the description between square brackets, [Generative AI] is noted for GenAI that processes written prompts. The type of generative AI, such as written or spoken, is not specified. See the manual for examples.
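The template and the rules above (omit a missing version; write n.d. for a missing year) can be combined into a small helper. This is an illustrative sketch; the function name and example values are assumptions, and a formatted document would italicise the tool name.

```python
# Sketch: composing an APA-style reference for a GenAI tool following the
# template: Developer. (Year). AI tool name (Version) [Generative AI]. URL
# A missing version is omitted; a missing year becomes "n.d.", per the
# APA Working Group's advice described above.

def apa_genai_reference(developer, name, url, year=None, version=None):
    year_part = f"({year})." if year else "(n.d.)."
    version_part = f" ({version})" if version else ""
    return f"{developer}. {year_part} {name}{version_part} [Generative AI]. {url}"

print(apa_genai_reference("OpenAI", "ChatGPT", "https://chat.openai.com",
                          year=2024, version="GPT-4o"))
# OpenAI. (2024). ChatGPT (GPT-4o) [Generative AI]. https://chat.openai.com
```

With no year or version given, the same call produces, for example, `OpenAI. (n.d.). ChatGPT [Generative AI]. https://chat.openai.com`.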

Download 'The APA-guidelines explained Generative AI' - Version 1.0 (pdf)

Referencing is the first step towards transparency (KU Leuven, 2024). A good practice for being transparent is to keep track of when, how, and why you used GenAI. This information may be requested. Keeping track can take various forms, for example:

  • Save the complete exchange with GenAI by sharing the chat link, downloading the chat, or taking screenshots. Highlight relevant parts if necessary.
  • Provide an explanation of how GenAI was used (for example, for generating ideas, text fragments, longer pieces of text, arguments, evidence, illustrations of concepts, ...).
  • Note why GenAI was used: to save time, to tackle writer's block, to stimulate thinking, to manage increasing stress, to better understand a concept, to translate, to experiment with GenAI, etc.
  • How you report on this may vary. Check if the reporting method is clarified somewhere, for example, for students in the assignment description of the (professional) product (if no clarification is provided, consult with the instructor).
  • In addition to source citation and inclusion in a reference list, it may be desirable to explain the above elements in the (materials and) methods section or as an appendix.
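The when/how/why record described above can take many forms; one option is a small structured log. This is a minimal sketch: the field names and the summary format are illustrative assumptions, so check whether your program prescribes a reporting format.

```python
# Sketch: a minimal structured log entry for GenAI use, recording the
# when, how, and why discussed above, plus an optional link to the saved
# chat. The summary() line could go in an appendix or methods section.

from dataclasses import dataclass
from datetime import date

@dataclass
class GenAIUsageEntry:
    used_on: date
    tool: str            # e.g. "ChatGPT (GPT-4o)"
    how: str             # e.g. "generating ideas", "improving a draft"
    why: str             # e.g. "to tackle writer's block"
    chat_link: str = ""  # optional link to the saved exchange

    def summary(self) -> str:
        line = (f"- {self.used_on.isoformat()}: {self.tool}; "
                f"how: {self.how}; why: {self.why}")
        return line + (f" ({self.chat_link})" if self.chat_link else "")

entry = GenAIUsageEntry(date(2024, 5, 1), "ChatGPT (GPT-4o)",
                        "generating ideas", "to stimulate thinking")
print(entry.summary())
```

A plain notebook or a saved chat history serves the same purpose; the point is that the record exists and can be produced on request.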

Recognising abuse

AI can be misused, for example by students. As the technology develops, texts generated by GenAI will become increasingly difficult to distinguish from self-written texts. AI detection tools can only indicate whether certain text patterns occur that may suggest the use of GenAI, and are therefore not reliable for proving fraud through GenAI. Nevertheless, certain indicators may point to misuse (Last & Sprakel, 2024):

  • The absence of correct citations and/or references where these are required or expected; also check if the used references actually exist.
  • Incorrect or inconsistent use of first- and third-person perspectives.
  • American spelling, currency, and terms that don't belong, or noticeable anglicisms such as 'thinking outside the box'.
  • Language use or vocabulary that doesn't match the qualification level, such as noticeably few errors.
  • No reference to events after a certain date that corresponds with the data collection of a specific AI tool.
  • Difference in language style (for example, excessive use of "fancy words" or exaggerated language) compared to the style of previous work by the same person or compared to other parts of the same work.
  • Lack of specific local or current knowledge.
  • Absence of graphs, tables, or other visual aids where these would normally be expected.
  • Strange use of concluding statements or repetition of text structures within the work; language models tend to use much of the same structure.
  • Warnings or disclaimers generated by AI, which, for example, emphasize its limitations or hypothetical nature.

Misuse of AI certainly occurs not only in education but also outside it, for example with deepfakes or voice cloning. While these techniques existed before AI, such fakes have become even easier to create (Neumann, 2024). But how can one recognise a fake (generated by AI)?

The basics: check your source, verify metadata, analyse the context.

Pay attention to the following characteristics:

  • For text: lack of a coherent 'thread' or narrative
  • For images and video: a strange perspective, incorrect lighting, physical impossibilities
Assessment Criteria: 

  • Context: platform, medium, subject matter
  • Source: verifiable? Well-known? AI disclosed?
  • Consistency: unexplainable changes, 'morphing' (transitions)
  • 'Pragmatism': contextual understanding, purposefulness, context-awareness
  • Details: errors in periphery/background, absence of 'noise'

Impact on skills

In the era of AI, school students need to be prepared to become active co-creators of AI, as well as future leaders who will shape novel iterations of the technology and define its relationship with society. This is exactly the ambition of UNESCO’s AI competency framework for students – the first ever global framework of its kind. It aims to support the development of core competencies for students to become responsible and creative citizens, equipped to thrive in the AI era. This will help students acquire the values, knowledge and skills necessary to examine and understand AI critically from a holistic perspective, including its ethical, social and technical dimensions. 

UNESCO’s AI competency framework for teachers defines the knowledge, skills, and values lecturers must master in the age of AI. Developed around the principles of protecting lecturers’ rights, enhancing human agency, and promoting sustainability, the publication outlines 15 competencies across five dimensions: Human-centred mindset, Ethics of AI, AI foundations and applications, AI pedagogy, and AI for professional learning. These competencies are categorised into three progression levels: Acquire, Deepen, and Create. As a global reference, this tool provides strategies for lecturers to build AI knowledge, apply ethical principles, and support their professional growth.

To effectively implement GenAI in education and research, attention must be given to digital literacy: critical skills necessary for effective learning and working in today's digital society, and thus for working with GenAI. We are entering an era where distinguishing between real and fake is becoming increasingly difficult. As a result, critical thinking is becoming ever more important (Last & Sprakel, 2024).

Make use of the publication "Wise with Technology", a guide with practical methods for ethical reflection on the impact of technology that you can immediately apply in your teaching and research (Last & Sprakel, 2024).

Meanwhile, in addition to digital literacy, there is now discussion about AI literacy: the competencies needed to critically evaluate AI technologies and to communicate and collaborate effectively with them, both at home and in the workplace, so that students are prepared for a world full of AI (Last & Sprakel, 2024). Read more about AI literacy in the publication "Why Generative AI Literacy, Why Now and Why it Matters in the Educational Landscape? Kings, Queens and GenAI Dragons" (Bozkurt, 2024).

In 2009, Professor of Educational Sciences Brand-Gruwel and colleagues created a model describing five core competencies for digital information skills (solving information problems in the digital age, one of the domains within digital literacy). The advent of generative AI necessitated an update to the digital information skills model. Teacher and PhD researcher in the didactics of digital information skills Josien Boetje defines the core competencies in the GenAI information literacy model, her update of Brand-Gruwel's model (Boetje, 2023).

These competencies enable students to use AI as a tool rather than a replacement for their own thinking. Throughout the entire process, monitoring and adjusting are important. This includes reformulating a prompt when receiving an unsuitable answer; selecting the appropriate AI tools for the problem and desired output, and knowing when to use or not use AI. Prerequisites for effectively applying the aforementioned core competencies are:

  • Knowledge of the world, for example, to critically evaluate output.
  • Practical ICT skills, for instance, to know how to technically handle certain tools.
  • General language proficiency: for composing prompts and processing generated information.

Finally, there is the Digital Competence Framework for Citizens (DigComp) - developed by the Joint Research Centre, a research body of the European Union (Daniels, 2023). The framework describes the digital competencies for citizens for learning, living and working in a digital society. It can serve as a basis for the development of teaching materials and curricula. DigComp has served as the starting point for the European Union's policy plans in the area of high-quality, inclusive and accessible digital education, as described in the Digital Education Action Plan 2021-2027. Additionally, commissioned by HAN University of Applied Sciences, a digital literacy self-assessment for students has been developed based on DigComp (Dutch document). See also the DigComp website.

The conceptual reference model of DigComp.

Legal implications

Applicable regulations concerning copyright, GDPR, and security also apply to AI usage.

When developing and using AI, there is a tendency to collect as much data as possible. The assumption is that the more data used to train the systems, the better these systems become. This means that a perverse incentive may arise to collect, store, and further process too much data, for too long, and unnecessarily. This includes the data entered by the user of AI. Therefore, be extremely cautious when using AI and do not disclose any personal data or commercially sensitive information. Follow the rules when using AI and algorithms.

The AI Act is on its way: the world's first comprehensive law on artificial intelligence. The act will be implemented in phases and will be fully in force by mid-2027. Some AI systems are likely to be banned as early as the end of 2024. The AI Act sets out the rules for the responsible development and use of AI by companies, governments, and other organisations. Find out more about the AI Act through the following sources:

  • The EU AI Act page on the Dutch Personal Data Authority website
  • The AI Act page on digitalgovernment.nl 

It is somewhat unclear who actually holds the copyright for AI output. According to AI terms of use, users have the right to reproduce the output for any purpose. However, AI output is not always unique. This can lead to potential legal issues if the same output is used by different users. The terms of use for AI also indicate that users are legally responsible for the content of such outputs, meaning users may be liable if they reproduce output containing copyrighted material. However, it is unclear how users can know if this is the case, as AI cannot provide accurate citations or other source references. As a user, you must be aware of these copyright implications. Use AI-generated output as a source of inspiration rather than reproducing it verbatim (Scharwächter, 2023). Search for scientific references yourself and analyse source documents on your own. Interpret, analyse and process the information obtained; do not simply copy it.

When working with GenAI, be aware of the presence of biases in the data used (for example: the tendency to seek, interpret, and remember information that confirms our own beliefs; prejudices based on someone's gender or background; negative stereotyping), which find their way into the output. Realise that AI cannot consciously detect these biases; it is unable to determine what is right or wrong, ethical or unethical. Also realise that the responsibility to prevent and remove bias lies partly with AI suppliers and developers, but also partly with the end user.

The quality of the input (the prompt) is your responsibility as a user and partly determines the biases that find their way into the output. It is therefore essential that you critically analyse the results produced by GenAI to recognise and correct biases. Adjust your prompt and make it more specific. For example, specifically ask for a female CEO: if you ask an image generation tool for a CEO of a company in the Netherlands, there is a significant chance you will be presented with a man.

GenAI may also generate inaccurate content alongside discriminatory content. Users who adopt this inaccurate information may suffer reputational damage or, in extreme cases, even be sued for defamation. It is therefore important to verify the accuracy of AI output against a reliable source and to think critically about the risk of bias for each topic. "These systems are designed to give plausible answers based on statistical analysis - they are not designed to answer truthfully," explains AI expert Carissa Véliz of Oxford University to New Scientist.