ChatGPT and other Generative AI Systems with an Impact on Ghent University Education

Latest update: 14 May 2024 

Read this Education Tip to find out more about ChatGPT and other generative AI (GenAI) systems. Take a look at the possibilities outlined below to integrate these tools in your education practice, and discover how you can foster AI literacy.  Find out more about Ghent University’s 2023-2024 guidelines on generative AI as well as the guidelines we are currently preparing for the upcoming academic year.  

What is Gen(erative) AI? 

A subfield of artificial intelligence (AI), generative AI creates new content (e.g. texts, images, sounds) based on patterns it has learnt from existing data. The newly generated content closely resembles existing contents. Generative AI uses machine learning models like deep learning neural networks to learn what specific content looks like and how it works. Subsequently, it generates new content that is very comparable to the content that was used to train the model. The fields of application are manifold, ranging from the arts, entertainment and communication, all the way to academic research. 

The paragraph above was written by ChatGPT, a generative AI tool (disclaimer: it was translated by a non-generative human being). In addition to producing texts, GenAI systems offer a wide range of other applications: they can also generate audio, computer code, slides and images. For an overview of existing tools, take a look at the website Generative AI Tools: Types and Models.  

Have you never used generative AI tools such as ChatGPT before? These experimentation guidelines will lead first-time users on a step-by-step journey through this “Brave New World” of GenAI.   

ChatGPT and Other AI Writers 

ChatGPT (Generative Pre-trained Transformer) is the best-known generative AI system to date. The system was launched on 30 November 2022 by the American research lab OpenAI. ChatGPT is an open-access artificial intelligence (AI) writer that answers questions or instructions, so-called prompts. Within seconds, the system generates an answer to your question in the form of a (brief) text. If you are dissatisfied with the result, you simply ask the system for a new answer, which will be different from the first. In addition, the system can also summarise, translate, restructure and correct texts in different languages. You can ask it to copy the references you used in your text, or to adopt any other reference style you prefer. Finally, the bot can also generate computer code with correct syntax in several programming languages, such as JavaScript and Python, and in frameworks such as React. 

The free version of the chatbot relies on GPT-3.5, a language model that has been trained on billions of words from various sources. This language model allows the chatbot to predict the most likely next word in a given context. The creators of ChatGPT fed their chatbot examples of human dialogue to teach it to chat, and then fine-tuned the results based on human feedback. 
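This next-word prediction can be illustrated with a toy example. The sketch below is plain Python with no real language model involved: the three "training" sentences and the function names are made up for illustration. It counts which word most often follows each word in a tiny corpus and predicts the most frequent follower, which is, in miniature, what a large language model does across billions of words and with far richer context.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for every word, which words follow it and how often."""
    followers = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            followers[current_word][next_word] += 1
    return followers

def predict_next(model, word):
    """Return the most frequent next word, or None if the word is unknown."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# A tiny made-up "training set"; real models are trained on billions of words.
corpus = [
    "the chatbot answers the question",
    "the chatbot generates a text",
    "the chatbot answers the prompt",
]
model = train_bigram_model(corpus)
print(predict_next(model, "chatbot"))  # prints "answers": it follows "chatbot" twice, "generates" only once
```

Real systems such as GPT-3.5 do not count word pairs but use deep neural networks that take the whole preceding text into account; the principle of predicting a likely continuation, however, is the same.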

There are many other AI writers besides ChatGPT. Certain existing search engines now have built-in chatbots, e.g. Gemini (Google) and Copilot (Microsoft). Microsoft has integrated Copilot into its other applications, like Word and PowerPoint, although only under specific licences. In the context of academic education and research, the following AI writers are currently the most interesting ones: Consensus, Elicit, Scite_, Research Rabbit and ChatPDF. Be sure to give them all a try to find out what they are capable of. 

Generative AI in the Teaching Practice  

The applications of AI tools in the teaching practice are plentiful. Below you will find an overview containing a number of specific examples as well as possible tools you might use. 

Role | Action | Possible tools
ideas generator | brainstorming, working your way around a writer's block, ... | ChatGPT, BingChat, Bard, Perplexity, ...
advanced search engine | searching online sources | in general: BingChat, Bard, Perplexity, ...; academic sources: Consensus, Research Rabbit, SciSpace, Scite_, Elicit, ...
writing assistant | checking the grammar and spelling of a text, revising texts towards a more academic style, paraphrasing, ... | ChatGPT, BingChat, QuillBot, ...
code generator | generating code in several programming languages | ChatGPT, ...
designer of learning materials | making differentiated exercises, finding examples, writing syllabi (chapters), making slides, converting slides to a syllabus, ... | text: ChatGPT, BingChat, Perplexity, ...; slides: SlidesAI, ...; images: Adobe Firefly, DALL-E, Midjourney, ...; audio: AudioCraft, Soundful, ...
giver of feedback | giving feedback on specific ideas, on student assignments, ... | ChatGPT, ...
assessment designer | making rubrics, (multiple choice) exams, ... | ChatGPT, MagicSchool, ...
translator | translating texts | DeepL, Google Translate, ...
data analyst | analysing data | GitHub Copilot, ChatGPT (paid version), ...
text analyst | analysing text | ChatPDF, UPDF, ...

Several online references, including some of those listed at the end of this Education Tip, offer more extensive overviews of specific practical applications of AI. 

Fostering AI Literacy 

Education has a major role to play in fostering AI literacy. Think, for instance, of the ability to use AI critically, or the ability to reflect on the use of AI, to interact with it correctly, and to be aware of its risks and limitations. 

Study programmes, in other words, should contain teaching and learning activities that foster and assess AI literacy. For that purpose, they can incorporate the UFORA learning path Generatieve AI: van leren tot creëren (in Dutch) into lecturers' teaching materials. This implementation urges lecturers to be(come) well-versed in AI, too. Familiarise yourself with GenAI by going through the UFORA learning path Generatieve AI: over leren, creëren en doceren (in Dutch).

ChatGPT and its Limitations 

As indicated above, ChatGPT is currently the most frequently used generative AI tool. Although it yields some impressive results, it still has some limitations, too. Most of these limitations manifest themselves in the free version of the app. As of 13 May 2024, a new free version of ChatGPT, GPT-4o, has been made public. It partly makes up for some of the limitations described below. 

  1. In terms of content, the information it generates is unreliable, a feature that in AI-speak is called “hallucination”. You can see that clearly in the use of references. ChatGPT does not have a database of references but will add references to a text in a reference style of your choice if you ask it to do so. These references, however, do not always exist: the system can make them up as it goes along, and will imitate their typical patterns in order to make them seem real.
  2. This lack of references means that users have no idea where the information was retrieved from. They are therefore unable to credit the original author, and may unconsciously commit a form of intellectual theft.
  3. The free version GPT-3.5 does not tap directly into the internet, but has been trained on a fixed dataset. The data collection stopped in January 2022, which means that the system contains (and thus generates) no information on events from February 2022 onwards. The new free version GPT-4o was trained on data up to October 2023, so it will not generate information on events after that date.
  4. The makers store all the data you input and are less than transparent on what happens with these data afterwards. Most likely, the data are used to train the system further, which means that you unknowingly and unwillingly make your texts available for free.
  5. The generated text may contain unnatural language and/or grammatical errors, and often relies on the same language patterns. English texts contain fewer mistakes than texts in other languages.   

Using Generative AI: Risks 

The limitations described above apply less to other generative AI systems. Just like the free version of ChatGPT, however, they bring in their wake a number of risks and ethical implications. 

  1. The use of GenAI may entail an invasion of your privacy. Never use sensitive data as input: feeding such a system data governed by the GDPR is punishable by law unless the system has explicit rules and regulations on data processing.
  2. The tools can spread disinformation, not only because they hallucinate (cf. above) but also because they are able to generate fake images, photos, etc. These, in turn, create fake news which then contaminates the internet.
  3. The answers may also contain biases, because the system has been trained on “coloured” references that have been fed into it by trainers with specific prejudices.
  4. Normally, the developers build in safety mechanisms to prevent the system from producing answers that are clearly ethically reprehensible. However, these safety mechanisms can be bypassed easily by giving certain commands.
  5. In order to purge their systems of biases as well as of ethically reprehensible answers, the companies behind these tools recruited people from all over the world to give feedback on the generated answers. The working conditions of these people are highly unclear, a fact that has already been denounced in multiple media reports.
  6. Another ethical implication of GenAI is its impact on research integrity. It is the responsibility of the students and the researchers using these tools to ensure the quality of their research. Among other things, this means not presenting false findings and being transparent about the provenance of their references.
  7. At first sight, it may seem that GenAI has the potential to eradicate inequality: all students have access to the tools, which means that e.g. relying on a private tutor for written assignments is no longer a prerogative of the rich. However, the developers are increasingly inclined to launch paid versions of their tools, which perform better and more efficiently than the free versions. If anything, this enhances inequality.
  8. Another aspect that must not be underestimated, is the ecological footprint of using GenAI tools. The additional IT infrastructure, data storage and data transmission cause a significant increase in energy consumption.
  9. Finally, there is the risk of anthropomorphism: the computer seems to think and talk like a human being. This may result in a decline of human interaction. It is important to understand that the tools may have learnt to use a number of thought patterns from texts, but they contain no explicit logic and are in fact very limited in their reasoning. 

ChatGPT and GenAI Policy at Ghent University   

The easy access to AI systems forces study programmes and lecturers to think about the implications for their intended learning outcomes, teaching and learning activities, and assessment. What is more, the professional field and society at large demand that we teach our students to use GenAI tools appropriately.  

The 2023-2024 Guidelines (current academic year) 

In the current academic year, Ghent University has asked its study programmes to determine their own guidelines. In close consultation with the lecturers, study programme management assesses the consequences of GenAI, and determines whether, and if so how, students are allowed to use it.  

An important point to consider is that, according to the Education and Examination Code (Art. 78, §2), the submission of a text or a product created or revised by a generative system is considered to be a form of plagiarism. Settling plagiarism disputes is the prerogative of the Examination Board. The Board decides whether or not there is sufficient evidence of plagiarism, and whether or not to impose disciplinary measures. 

This approach will be maintained until the end of the 2023-2024 academic year.  


Unauthorised Use? 

If you suspect that ChatGPT or another GenAI system has been used where it was formally banned, immediately notify the chair of your study programme's Examination Board. After consultation, the Board will decide whether to initiate disciplinary exam proceedings, determine whether there is sufficient evidence, and decide whether to impose disciplinary measures. 

Proving whether students have availed themselves of GenAI, however, will not always be a straightforward matter. Existing detectors, e.g. OpenAI's own detector and GPTZero, are unreliable. And although the anti-plagiarism software StrikePlagiarism has also activated a functionality that could detect AI-generated texts, for the time being the detector only works on longer English-language texts. Since the detection is still far from reliable and transparent, this functionality has been temporarily disabled at Ghent University. This way, we avoid giving lecturers a false sense of security, and we prevent students from (possibly) being wrongfully accused. 

In the context of online exams, too, it is still very difficult to detect the use of GenAI. In the case of online on-campus exams, physical surveillance is still highly recommended. Shutting down internet access is not an option, either: any student with a modicum of IT skills will be able to go online on their mobile phone or by using previously installed plug-ins on their laptop. In the case of remote online exams, proctoring may be an option, albeit not a watertight one either. 

Preparing for the 2024-2025 Academic Year 

The 2024-2025 Guidelines 

These guidelines have been drawn up in close consultation with our Directors of Studies and/or our faculties’ education support staff. 

Ghent University  

  • ...opts for a responsible use of GenAI tools in the teaching practice, with a special focus on

o its impact on the student’s learning;
o the validity of the assessment;
o the ethical implications;
o preparing our students for the professional field and a society with GenAI. 

  • ...chooses to explicitly allow the responsible use of GenAI tools in the context of the Master’s dissertation from the 2024-2025 academic year onwards;
  • ...chooses to encourage the responsible use of GenAI tools in other (written) assignments from the 2024-2025 academic year onwards, and only to ban it if doing so is feasible and necessary for the assessment of the competencies/learning outcomes. 


In preparation of the 2024-2025 academic year, all faculties, study programmes and lecturers will have to: 

  1. review the Master’s dissertation, the corresponding course sheet and the Master’s dissertation regulations;
  2. review every written assignment in the curriculum.  

The reason for this is that, currently, Master's dissertations and written assignments still assume (in part or entirely) that students do not use GenAI tools. This means that the validity of the assessment is no longer guaranteed. What is more, the Master's dissertation and other written assignments are an excellent opportunity to teach students how to use these tools in a responsible manner (as part of AI literacy). 

Read up on how to review the Master's dissertation and/or other written assignments in these extensive guidelines. 

The emergence of GenAI has an impact on more than the Master's dissertation and written assignments alone. In the academic years ahead, study programme managements will have to consider the impact of GenAI on the curriculum and the course competencies/learning outcomes, the teaching activities and the assessments. 

In these GenAI times it will be key: 

  1. to reflect critically on programme competencies/learning outcomes, to determine which basic competencies/learning outcomes students have to acquire without (the help of) AI, to review and adjust learning outcomes, teaching activities and learning materials accordingly, and to consider how the validity of the assessments can be guaranteed;
  2. to invest in the digital literacy (including AI and GenAI literacy) of lecturers and students throughout the study programme. 


On-demand workshops are possible, to be planned in consultation with the faculty support staff:

  • at faculty level,
  • at study programme level. 

Want to Know More? 

This Education Tip is the result of consultations among Ghent University’s AI experts and educationalists, and based on information from the references below. If you have any further (support) questions, please get in touch with


Adams, J., Brophy, L., Ediger, J., Herry, L. & Zumpano, N. (2022). ChatGPT through an education lens.

Anseel, F. (2022, 8 December). De meest onderschatte vaardigheid. De Tijd.

Cardona, M. A., Rodríguez, R. J., & Ishmael, K. (2023). Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations.

Clark, D. (2022). Donald Clark Plan B.

Goethals, P. (2023, 24 January). De grootste intellectuele hold-up uit de geschiedenis. De Standaard.

Liu, D. (2023, 11 May). Care and Connection: Assessment in the Age of AI and Analytics. JISC Connect More.

Miller, M. (2022). Ditch That Textbook.

Molenaar, I. (2022). Towards hybrid human-AI learning technologies. European Journal of Education, 00, 1–14.

Monash University (2023, 21 August). Generative AI and assessment.

Rubens, W. (2022). Blog over ChatGPT.

SURF (2023, 12 January). Impact ChatGPT op onderwijs [webinar].

UNESCO (2023). Guidance for generative AI in education and research.

Van Deyzen, B. (2023). ChatGPT-verzameling bronnen.

Van Gorp, S. (2023, 10 January). Moedig studenten aan de intelligente software ChatGPT te gebruiken. De Tijd.

VU Amsterdam (2022). Hoe ga je als docent om met ChatGPT?

Watkins, R. (2022). Update your course syllabus for ChatGPT.


Last modified June 17, 2024, 4:42 p.m.