
Artificial intelligence: What is AI?

A guide on the use of AI (artificial intelligence) in education and research at the green universities of applied sciences

The basics

In short, artificial intelligence (AI) is a collective term for algorithms and methods that perform tasks once thought to require human intelligence (Rijksinspectie Digitale Infrastructuur, n.d.). Artificial intelligence refers to systems that exhibit intelligent behaviour by analysing their environment and, with a degree of independence, taking action to achieve specific goals. It is not just about computational power, but about the ability to learn and make decisions (independently). Learning ability is thus typical of artificial intelligence. In doing so, AI uses rules formulated by humans or derived by the algorithm from the data, and trains itself with data (Europees Parlement, 2020).

Generative AI (GenAI) is a form of artificial intelligence that automatically generates text, images, audio and other content. With GenAI, the user issues a command or asks a question; the input is almost always text-based. An AI model interprets the command and generates content based on it. The basis for that created content is a very extensive set of data with which the generative model has been trained, so the outcome or answer resembles that training data (Kennisnet, 2023).

This library guide focuses mainly on generative AI, and specifically on its applications in teaching and research at green universities of applied sciences, and builds on the guidelines about AI released by universities of applied sciences. Until recently, there was little focus on the application of AI at universities of applied sciences, mainly because AI applications and tools were still in their infancy and/or only available for a fee (Ding et al., n.d.). The way GenAI works, the critical assessment of its output and the right way of asking questions (prompts) to get the best output mean that teachers, students and researchers alike have to pay attention to this in education and research.

Below you will find a video and a podcast that give you a brief introduction to AI.

What is artificial intelligence? Explained in 5 minutes:

In 2022, ChatGPT introduced the general public to the possibilities of generative AI. What is it, and what will be the next step for this technology in education and research? And will we know in the future what is real and what is not? Get to know the distinction between 'pre-2022' and 'post-2022' for the internet, and make a wish list. Damian Podarianu, Machine Learning Team Lead at SURF, explains (in English):

How does it work?

Generative AI starts with a learning journey (Basten, 2023). It gains access to immense data sets, ranging from books and articles to musical pieces and images. This collection of data is called training data. One of the breakthroughs of GenAI models is their ability to exploit different machine learning methods, such as unsupervised or semi-supervised learning. At the core of this learning lie neural networks, which AI models use to identify patterns and structures in the training data. A neural network is a complex system of algorithms designed to function in a way similar to the human brain. Such networks are able to learn and improve as they process more information. When these networks have multiple layers and can perform deeper analysis, we refer to this as deep learning.

After the AI model has been sufficiently trained, it can start creating new content. It uses the patterns it has internalised to create something somewhat similar to the training data, yet completely unique. The exact way this works can vary, but in general the model 'anticipates' the next step, so to speak, based on the knowledge gained earlier. Take text generation, for example. The AI can start a sentence with 'It was a dark and stormy night...' and then predict which words are likely to follow, based on patterns it has internalised from reading thousands of novels. The result? A unique text passage that has not occurred before.

So we are dealing with a language model, specifically a large language model, that estimates the probability of the next word. And it makes connections: it translates words into numbers and starts calculating with them.
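
To make this concrete, here is a minimal, purely illustrative sketch in Python of next-word prediction using a toy bigram model. Real large language models use neural networks with billions of parameters, but the basic idea of turning words into numbers (here, counts) and estimating the likeliest next word is the same; the tiny training text is made up for this example.

```python
# Minimal sketch of next-word prediction with a toy bigram model.
from collections import Counter, defaultdict

training_text = (
    "it was a dark and stormy night and the wind was cold "
    "it was a quiet night and the rain was soft"
)

# "Training": count how often each word follows another word.
counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    followers = counts[word]
    total = sum(followers.values())
    best, freq = followers.most_common(1)[0]
    return best, freq / total

print(predict_next("was"))   # e.g. ('a', 0.5) -- based purely on the training text
print(predict_next("the"))   # e.g. ('wind', 0.5)
```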

Models are also trained on training data for image, video or audio generation. For video, the model learns to understand the relationship between consecutive frames and to model time-dependent patterns. This eventually allows AI to generate new videos based on the learned patterns and sequences. Likewise, AI can create new and unique sounds after learning the characteristics of rhythms and melodies through pattern recognition.

Terms

Some AI terms explained (Devoteam, n.d.).

An algorithm is a set of clear and logical steps (instructions or rules) that a computer follows to perform a specific task or solve a problem (Blue, 2019). 
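
As a simple illustration, the steps of an algorithm can be written out in code. The example below (our own, not from the cited source) finds the largest number in a list by following fixed instructions, with no learning involved.

```python
# A simple algorithm written out as explicit steps: find the largest
# number in a list. The computer follows these instructions exactly.
def largest_number(numbers):
    largest = numbers[0]          # Step 1: assume the first number is the largest.
    for number in numbers[1:]:    # Step 2: look at every remaining number.
        if number > largest:      # Step 3: if it is bigger than the current largest,
            largest = number      #         remember it instead.
    return largest                # Step 4: report the result.

print(largest_number([3, 41, 12, 9, 74, 15]))  # 74
```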

Machine learning enables computers to recognise and learn patterns without explicit instructions (i.e. without a programmer formulating the rules one by one). Like chefs who test and refine their recipes, computers teach themselves how to perform complex tasks ever better (Last & Sprakel, 2024).
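
A minimal sketch of this idea, using the scikit-learn library and made-up example data: instead of writing the decision rule ourselves, we give the computer labelled examples and let it derive the pattern.

```python
# Sketch of machine learning: the model learns the rule from examples.
# Illustrative only; the data below is made up (petal length in cm -> species).
from sklearn.tree import DecisionTreeClassifier

# Examples: [petal length], label (0 = species A, 1 = species B)
X = [[1.4], [1.3], [1.5], [4.7], [4.5], [5.0]]
y = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier()
model.fit(X, y)                 # The model derives the decision rule from the examples.

print(model.predict([[1.6]]))   # [0] -- predicted species A
print(model.predict([[4.8]]))   # [1] -- predicted species B
```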

Deep learning is a specific form of machine learning in which layered neural networks learn from large amounts of data. The neural network consists of the input layer (training data), the hidden layers (black box, the learning process itself: learning features through pattern recognition) and the output layer (generating new content) (Last & Sprakel, 2024).
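
The layered structure described above can be illustrated with a few lines of Python. The sketch below uses random weights purely to show how data flows from the input layer through a hidden layer to the output layer; in real deep learning, these weights are adjusted during training on large amounts of data.

```python
# Minimal sketch of a layered network: input layer -> hidden layer -> output layer.
import numpy as np

rng = np.random.default_rng(0)

input_data = rng.random(4)            # input layer: 4 features
W_hidden = rng.random((4, 8))         # weights from input to hidden layer
W_output = rng.random((8, 3))         # weights from hidden to output layer

hidden = np.maximum(0, input_data @ W_hidden)   # hidden layer with ReLU activation
output = hidden @ W_output                      # output layer: 3 values

print(output)
```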

Neural networks mimic the human nervous system with a web of digital ‘neurons’ and allow computers to learn through experience. Just as a young chess player learns from his moves, computers refine their knowledge and skills by repeatedly learning from the input they receive (Last & Sprakel, 2024).

Large language models are generative AI models that work with human language, using both text and other inputs in the prompt and generating textual output. By combining natural language processing (see below) with the transformer architecture (see below), AI models have taken a huge step forward in their capabilities (Last & Sprakel, 2024).

Natural language processing is a technique that allows machines to understand, interpret, and even generate human language (Last & Sprakel, 2024).

Transformer is a type of neural network architecture that is central to AI models (specifically language models) (Basten, 2023). It enables the model to understand which words are important in a sentence, how words relate to each other, and thus estimate the context across multiple sentences and even entire paragraphs (Last & Sprakel, 2024).
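
The core of the transformer is the attention mechanism. The illustrative sketch below uses random vectors as stand-ins for learned word representations, showing how each word in a sentence is weighted against every other word to build up context.

```python
# Minimal sketch of (scaled dot-product) attention: each word computes
# how much it should "pay attention" to every other word in the sentence.
import numpy as np

rng = np.random.default_rng(0)
n_words, dim = 5, 16                   # a 5-word sentence, 16-dimensional vectors

queries = rng.random((n_words, dim))
keys = rng.random((n_words, dim))
values = rng.random((n_words, dim))

scores = queries @ keys.T / np.sqrt(dim)                               # relevance of each word to each other word
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)   # softmax per word
contextualised = weights @ values                                      # each word becomes a weighted mix of the others

print(weights.round(2))   # each row sums to 1: the attention one word pays to the others
```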

Generative Pretrained Transformer (GPT) is an AI model that can generate language (generative), has been trained in advance (pretrained) on training data and by humans, and works with the transformer architecture. ChatGPT therefore means that you can chat with this language model.

Retrieval-Augmented Generation (RAG) is a technique in which a large language model (LLM) is linked to an (external) knowledge base, providing users with better and more up-to-date answers. The model has access to various sources, such as documents, files, APIs, and databases (similar to how a search engine like Google works). When a prompt is entered, the model retrieves information from these sources and generates a relevant answer. The difference with ChatGPT, for example, is that the model has access to real-time data, while ChatGPT does not (Pot, 2024).
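
A minimal sketch of the RAG idea: first retrieve the most relevant passage from a small knowledge base, then place it in the prompt for a language model. The retrieval here is deliberately naive (word overlap), the documents are made up, and `ask_language_model` is a hypothetical placeholder for whatever LLM you would actually call.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
def retrieve(question, documents):
    """Very naive retrieval: pick the document sharing the most words with the question."""
    question_words = set(question.lower().split())
    return max(documents, key=lambda doc: len(question_words & set(doc.lower().split())))

documents = [
    "The library is open on weekdays from 09:00 to 17:00.",
    "Greenhouse sensors are calibrated every Monday morning.",
    "Students can borrow physical books for three weeks.",
]

question = "How long can students borrow books?"
context = retrieve(question, documents)

prompt = (
    f"Answer the question using only the context below.\n"
    f"Context: {context}\n"
    f"Question: {question}"
)

# answer = ask_language_model(prompt)   # hypothetical LLM call
print(prompt)
```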

Advantages: what you can do with AI

Artificial intelligence can have advantages. The recognition of speech, images, and patterns, self-driving systems, translation machines, walking robots, and question-answering systems can help in terms of convenience, user experience, and efficiency (Mediawijsheid, n.d.). Algorithms can recognize patterns that humans would not see (Van Belkom, 2020).

Generative AI technology is often flexible and can be deployed for multiple tasks instead of specializing in just one task. This creates opportunities to explore its use in a wide range of contexts (Hussaarts, 2023).

This technology can make all processes related to generating text or other content (for example, writing emails, planning projects, creating images) much more efficient (Hussaarts, 2023).

When you're at a dead-end, you can use AI to brainstorm (KU Leuven, 2023b). This can generate new ideas that may lead you in a completely different direction.

Example activities for which you can use AI (Ding et al., n.d.):

  • Answering questions
  • Providing information
  • Creating creative content
  • Offering language and grammar assistance
  • Making translations
  • Creating/improving code
  • Answering general knowledge questions
  • Making product recommendations
  • Offering planning and organization suggestions
  • Conducting conversations

Generative AI also has numerous advantages for education and research, such as creating new, original content, enhancing learning processes, accelerating data analysis, and reducing workload.

One of the applications of GenAI in education is the development of AI-powered learning materials and platforms. In this context, GenAI offers the possibility to adapt learning material to the level of individual students. A specific application of GenAI is the use of AI-powered language tools that provide immediate feedback and suggestions to refine and strengthen students' language skills. AI-created simulations and virtual environments offer students the opportunity to visualize and understand complex concepts.

The use of AI for question-and-answer systems promotes self-directed learning, and when used in the classroom, it enhances interactivity. AI makes interaction with technology more human-like and intuitive. On page 16 of the report "Artificial Intelligence and the Future of Teaching and Learning", you'll find a table with examples of future interaction possibilities that AI can bring.

AI can also be a powerful tool to support and enrich the learning process. The article "Assigning AI: Seven Approaches for Students, with Prompts" proposes seven approaches to using AI in the learning process: as a tutor, coach, mentor, teammate, tool, simulator, and student.

Finally, AI offers various possibilities to reduce the workload in education. A significant part of the burden comes from administrative or repetitive processes, such as preparing lessons, grading assignments, and general management tasks. With the use of AI, many of these time-consuming tasks can be automated.

A survey conducted by the editorial board of Onderzoek, NWO's relationship magazine, among scientists in various disciplines at Dutch knowledge institutions shows that four out of five Dutch scientists foresee that AI will radically change society. AI makes global collaboration easier by removing language barriers. The power of AI also lies in accelerating data analysis, which allows researchers to work more efficiently with large datasets. Advanced ML models can be quickly applied to collected data, eliminating time-consuming data processing. AI can also perform complex calculations that were previously unthinkable, and it learns from the data it is trained on, leading to continuous improvements.

AI, as a generic technology, will eventually find its application in all business sectors and societal challenges (Bytesnet, n.d.). There are also many opportunities to use AI in the green sectors to work more efficiently and sustainably. Examples include the use of AI by meteorologists (Scragg, 2024), its use in greenhouses, indoor farms and commercial horticulture (Higgins, 2024), and AI farming technology for crop monitoring and plant disease detection (Onome, 2024).

In the other sections of this library guide, you will find information about responsible use and examples of applications and working methods of AI in education and research.

Limitations: what you can't do with AI

Generative AI is powerful and developing rapidly, but it's certainly not perfect yet. It's important to understand where GenAI excels and where it still struggles. These are the limitations and challenges you need to consider when using GenAI.

AI is not always right
Although GenAI can deliver impressive results in some situations, it is also prone to errors and can produce incorrect or misleading information. Read "ChatGPT is wrong a lot when it answers programming questions, study says" for some examples. An AI chatbot will not readily question the validity of the query. The output of image and video generators can also contain illogical details: eyes that do not blink, unrealistic hands, strange shadow shapes, odd light reflections in eyes; AI regularly struggles with the laws of nature (Ding et al., n.d.).

Take the 'AI or REAL?' quiz on britannicaeducation.com to practice evaluating whether GenAI output contains errors.

AI does not monitor its own quality
GenAI is based on complex mathematical calculations and patterns in large datasets. These models have no understanding of the meaning behind their generated output and therefore do not know whether the output is correct. They also do not always grasp the full context of a topic, which can lead to nonsensical or overly literal answers. Finally, the quality of the answers also strongly depends on the quality of the input (prompts) entered into the chatbot. Good use of ChatGPT requires skill in 'prompting' (the way questions are formulated).

Biases can appear everywhere
Although AI models are designed to understand and generate human text and language, they can unintentionally reflect biases and discrimination embedded in the training data. This phenomenon is often referred to as bias in AI. The term bias refers to a systematic tendency or prejudice in the way information is collected, interpreted, or presented, which can lead to inaccurate or unfair results. Much of the training data for AI models comes from the internet, where a wide range of perspectives and opinions are reflected. This can lead to the adoption of stereotypes, prejudices, and limited worldviews in the generated output (Basten, 2023). Read "Google pauses AI image generator after 'problems'" and, for more information, "UNESCO finds 'pervasive' gender bias in generative AI tools".

Intellectual property rights are a point of concern
Many AI companies are not transparent about the training data they use. As a result, the copyright of creators of copyrighted material included in the training data is not properly respected. Copyright, in short, is the legal right of the creator of an original 'work' to decide how their work is used and distributed (Auteursrecht.nl, 2020). The creator also has the right to be credited and have the source mentioned. However, in general, GenAI tools cannot tell which sources they use for their output.

And privacy is also a point of concern
In addition to not being transparent about the training data used, AI companies are often also not transparent about the data you input when using AI. It is not clear what happens to prompts and ChatGPT output. It is therefore not advisable to enter privacy-sensitive or confidential information when using AI.

It's also good to know that it's possible to prevent the data you input into a Generative AI system like ChatGPT from being used to train the model. This can be done by choosing to use a temporary chat, also known as an incidental session. In this case, the data you input is not stored and used for future training purposes of the model. When you use such a temporary session, privacy is better protected. This is a good option when you don't want the information to be stored or used for further development of the AI. However, it remains important to always check how the AI platform you're working with handles data, and to carefully review the privacy guidelines or terms and conditions.

Commercial interests may be at play
Much of AI is in commercial hands, which can conflict with values that are important in education and research, such as ownership, authenticity, transparency, and privacy. AI may create a profile of you based on your usage. Additionally, there may be a risk of inequality of opportunities when the use of AI involves costs: not everyone can afford this.

Generative AI can consume a lot of energy
Energy is needed to train and maintain AI. AI creators are becoming increasingly aware of the need to reduce their CO₂ footprint, but there is still a long way to go (Adobe Firefly, n.d.-b). Also read this article about the invisible costs of AI (Gothoskar, 2024).

Lack of human insight
Although AI is capable of formulating coherent answers to specific prompts or questions, a chatbot is not human. AI can only mimic human behavior; it does not have human experiences (George & Merkus, 2023). AI lacks emotional intelligence and does not recognize or respond to sarcasm, irony, or humor. It does not always recognize idioms, regional expressions, or jargon, and might interpret an expression like "it's raining cats and dogs" literally. AI has no senses and cannot see, hear, or communicate with the world as humans do. As a result, AI cannot understand the world from direct experience, only from sources.

AI often answers questions in a rather mechanical way, making it clear that the output is machine-generated and frequently based on a template. AI does not understand subtext, so it cannot "read between the lines" or take sides. Although neutrality is often a good thing, some questions require taking a position. AI has no practical experience or common sense and cannot understand, or respond appropriately to, situations that require this kind of knowledge. AI can summarize and explain a topic, but cannot provide unique insight: people need knowledge to offer a new perspective, but experiences and opinions are also crucial for this process.

And AI also cannot (yet) do this: 

  • Provide information from after the date its training was completed. The output may therefore lag behind current events.
  • Provide sensitive information: AI does not have access to personal, confidential, or specific information about individuals unless it is publicly available.
  • Give medical advice: for medical advice, it's always best to consult a professional doctor or healthcare provider.
  • Provide legal advice.
  • Understand emotions.
  • Perform complex reasoning and analysis: answers are based on patterns in the training data and can sometimes be superficial.

AI tools are not search engines. AI can process large amounts of information, but does not understand the meaning of the information itself.

The size of the training data encourages AI to approach a topic from multiple angles and answer questions in all possible ways. The tendency to explain a lot can make AI output unnecessarily formal, cumbersome, and long-winded.

Know more

Publications in the library and on Greeni

We offer a wide collection focused on artificial intelligence, both physical and digital. We closely monitor developments in this field and adapt our collection accordingly where possible.

Through Greeni Global Search, you can search our entire collection, both digital (e.g. databases, publishers) and physical. If you only want an overview of our books about AI, you can find it via this link. Physical books can be reserved or borrowed directly from your library. Discover our collection and stay informed about the latest developments on this topic.

EDUCAUSE is a nonprofit association whose mission is to lead the way in advancing the strategic use of technology and data to further the promise of higher education. Below you will find posts from EDUCAUSE about artificial intelligence.


Stay ahead in the realm of AI in education with Kangaroos' leading AI for Teachers blog.
