AI tools in the context of teaching, learning and exams

Guidance, contacts and further information

At the University of Konstanz, we keep a keen eye on current developments and foster avid discussion on the opportunities and challenges that artificial intelligence (AI) tools pose for teaching, studying and exams. This includes language-based text-generating tools such as ChatGPT. Because the number of such tools and their potential applications are expected to grow, and the quality of the texts and analyses they produce to improve, the university is actively looking for meaningful uses of AI in university teaching while also critically assessing the implications for teaching and exam settings.

Last update: 27.10.2023

Guidance

Because there are so many different tools and fields of application, it is neither sensible nor possible to ignore AI tools or to impose a general ban on them. Instead, students should actively discuss and learn responsible ways to address the opportunities and risks associated with AI tools. Where suitable, they can practice and reflect upon the responsible use of digital tools – both in the context of university study and in academia in general.

Teachers and examiners face new challenges now that students have easy access to a wide variety of tools and the technologies continue to develop very rapidly. The following information aims to help teachers discuss the use of AI tools in courses and explain when they can or cannot be used.

Students can find a solid foundation here for thinking about whether generative AI can be used in specific scenarios, as well as guidance for understanding the background of university regulations.

AI tools and how they work

What kind of AI tools do we mean?

This page focuses on generative AI used to create new content, e.g.:

  • tools like ChatGPT that are based on large language models and go much further than long-standing tools such as spell check
  • tools that search available databases and create summaries on the basis of prompts (research assistant tools such as Elicit, Perplexity)
  • tools that generate images on the basis of machine learning (diffusion models such as Midjourney, Stable Diffusion)
  • tools students can use to create individualized learning material or get quizzed on course content (recommender systems and learning analytics tools like ChatPDF, TutorAI)

An overview of AI tools for the university context is available (in German) from VK KIWA.

How does text-generating AI work?

Text-generating AI is based on large language models that employ probabilities. The next word of a text is sampled from a group of candidate words within a certain probability range. This random element causes different texts to be generated from the same input. To make the generated texts sound as much as possible like texts written by humans, these language models are trained on a huge amount of data and take into account word meanings as well as contexts.

Almost everyone has heard about ChatGPT. The underlying language model, GPT-3.5, is very easy to use via a chat interface, and it can process and answer requests in natural language. ChatGPT aims to generate texts that appear human-generated and that people are highly likely to find plausible. The content of these texts is not necessarily correct.

For example, consider arithmetic tasks. Since ChatGPT is based on a language model, you cannot use it like a calculator. However, in some cases, it can produce surprisingly correct results for math problems. It uses statistical probability to complete the phrase "17 + 12 =" with the correct answer "29".
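A minimal toy sketch in Python of this sampling mechanism (the candidate words and probabilities are invented for illustration; real models operate on subword tokens with vocabularies of tens of thousands of entries):

```python
import random

# Invented next-token distribution for the prompt "17 + 12 =".
# A real model derives such probabilities from its training data.
next_token_probs = {
    "29": 0.80,          # the statistically most likely continuation
    "30": 0.08,
    "28": 0.07,
    "twenty-nine": 0.05,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample one continuation according to its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "17 + 12 ="
print(prompt, sample_next_token(next_token_probs))
# Usually prints "17 + 12 = 29", but occasionally a wrong answer:
# the model completes text by probability, it does not calculate.
```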

The system has also been trained to generate plausible-sounding answers, even if it has no relevant information whatsoever. This means that it "invents" answers that seem plausible on the surface and are written correctly. Such "hallucinations" occur because – at least in the current version – there is no mechanism in place to fact-check answers beforehand. This also applies to publications mentioned in texts, some of which sound very convincing although they do not actually exist. Although the system is continually learning and sometimes states that it cannot answer a question, such "hallucinations" still happen often. Using good prompts can improve the quality of AI-generated texts.

More information (in German) about how ChatGPT works: Krapp, KI in Schule und Uni (Video), Weßels, Was ist ChatGPT? (Video) and Arnold, ChatGPT für Nicht-Informatiker (Video)

Reproducibility, black box and bias

Because of the stochastic algorithms used in AI language models, the results are not reproducible. If you enter the same prompt repeatedly, you will get different results. The availability of computing power and the quality of data transmission also affect the output, i.e. the quality of the results can differ significantly depending on current traffic and load balancing.
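A minimal sketch of this non-reproducibility, with invented words and weights: sampling twice from the same distribution can yield different continuations, and hosted AI tools generally do not let users fix the underlying randomness.

```python
import random

# Invented candidate continuations and probabilities for one prompt.
candidates = ["bank", "shore", "meadow"]
weights = [0.5, 0.3, 0.2]

# Two runs with the same "prompt" can differ:
print(random.choices(candidates, weights=weights, k=1)[0])
print(random.choices(candidates, weights=weights, k=1)[0])

# Fixing a seed would make this toy sampler reproducible, but hosted
# AI tools generally do not expose such control to end users.
```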

A major problem is that commercial language models do not disclose their training data and algorithms – they are a "black box" to users. The training data used by ChatGPT come mostly from western countries or the northern hemisphere. Training data inherently encode the corresponding cultures and values. As a result, AI reproduces these inputs when generating texts. This means that the texts also contain biases and unintentionally reinforce the status quo.

Sustainability and ethical considerations

Training language models on vast amounts of data and operating the corresponding server farms is very energy-intensive and problematic in terms of sustainability and global warming. Studies on the energy consumption of different language models produce varying results, but they nevertheless show that operating and using these tools involves a very high energy demand. According to researchers from Google and the University of California, Berkeley, the training of OpenAI's GPT-3 required 1,287 megawatt hours of energy. That is the equivalent of what 320 four-person households use in one year.
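As a rough plausibility check of this equivalence (the household figure is our assumption, not from the study): a four-person household consumes roughly 4,000 kWh, i.e. 4 MWh, per year, and 1,287 MWh ÷ 4 MWh per household ≈ 320 households.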

From an ethical standpoint, there is the issue of bias as well as questions about the working conditions for staff training the AI models. Some of the training work is outsourced to subcontractors, and there is often no publicly available information about the names of these companies or the working conditions of their staff.

One widely discussed example that the public has learned about is the situation of about three dozen employees in Nairobi working for an American company that OpenAI hired to label data for the training of ChatGPT. The staff's job was to classify "negative examples", e.g. descriptions of violence, sexualized violence or hate speech, so that the AI could later be trained to prevent this type of output.

Options and limitations for using AI

There are four potential scenarios for using AI tools on coursework and performance assessments:

1. No generative AI tools permitted

Do students need to acquire basic competencies by completing cognitive tasks themselves? Some of these tasks can, of course, be completed faster or better by generative AI, so you need to explain to your students how important it is to learn and practice these basic competencies themselves. It is helpful to have students practice and discuss these tasks with each other during on-campus classes.

At the beginning of the course, please clearly communicate that generative AI cannot be used for coursework and performance assessments. It could also be advisable to let students take a supplementary or alternative on-campus exam (in a written, oral or practical format), as long as this option is permitted in the examination regulations and by the department. Please ensure that honest students are not put at a disadvantage when completing tasks where using AI tools would provide a large advantage.

2. Only certain types of generative AI tools permitted

Given the wide range of generative AI tools available, it may be useful to explicitly focus on certain types of generative AI in a course and allow students to use them. You could also take time in class to reflect on these AI tools (see item AI in teaching).

If you would like to restrict the use of generative AI on coursework and performance assessments, you can state this fact in the course description: "You may use the following generative artificial intelligence (AI) tools when completing coursework/performance assessments: [insert name or type of AI tool]. You are required to state any use of generative AI (or, depending on the case, You do not have to state...). By using AI-generated content, you, as the author, assume responsibility for the accuracy of the content."

3. Limitations on how AI can be used

Another way to limit the use of AI is to regulate how and when AI tools can be used. For example, such tools may be allowed for brainstorming or text editing, but not for generating portions of text for a final document.

To restrict the use of generative AI in coursework and performance assessments, you can state this fact in the course description: "Generative artificial intelligence (AI) can be used for the following tasks when completing coursework/performance assessments: [insert permitted types of use]. You are required to state any use of generative AI (or, depending on the case, You do not have to state...). By using AI-generated content, you, as the author, assume responsibility for the accuracy of the content."

4. Unrestricted use of AI tools

Even if there are no restrictions on students' use of AI tools on coursework and performance assessments, this information, too, should be clearly communicated to students. You can use a text like this to do so: "There are no restrictions on the use of generative artificial intelligence (AI) tools when completing coursework/performance assessments. You are required to state any use of generative AI (or, depending on the case, You do not have to state...). By using AI-generated content, you, as the author, assume responsibility for the accuracy of the content."

AI in teaching

Skills for using AI

AI tools can complete certain cognitive tasks for people, which means that some tasks can be completed faster or better. This can be helpful in many situations in academia and the workplace, in teaching, learning and everyday life. In order to benefit from AI tools, you do, however, need to know which ones are suitable for the task at hand, understand how they work, evaluate their output and reflect on the conditions for using these tools (e.g. data protection matters). You also need to practice giving effective prompts to AI tools in order to steer them in the right direction and get useful results.

University teaching and study are also places to learn how to use AI tools responsibly and effectively, and university students need to develop their AI literacy. In the context of the Advanced Data and Information Literacy Track and elsewhere at the university, students can learn how to reflect on, test and evaluate new digital options while using and expanding their critical thinking skills. Since AI tools have only been around for such a short time, the emphasis is less on teaching established AI skills and more on giving students and teachers the opportunity to work together to explore options for using AI tools, recognize the limitations and dangers they pose and gain or expand their AI competencies along the way.

Testing AI tools with students

Some ways to test and assess AI tools in courses are:

  • Explore how a tool like ChatGPT can be used to get started writing more easily or to support the stylistic revision of a text.
  • Think about when a tool like ResearchRabbit can simplify and support bibliographic research.
  • Find out how research assistants like Elicit can help with the structuring of research tasks.

There are data protection requirements that apply when students are required to use AI tools in courses. Alternatively, classes can also work together as a group with the tool, or teachers can present the output of an AI tool for students to reflect upon in class.

Equality of opportunity

In the context of using AI in teaching, equality of opportunity is also an important aspect. On the one hand, AI tools that are freely available or have been made available to an entire group of students can balance out inequalities and provide opportunities for inclusion and individualization. On the other hand, concerns are being raised that high-performing students may benefit more from AI tools than their lower-performing peers, so that inequalities may actually be reinforced (Matthew effect). By examining AI tools together in courses, all students have the chance to learn and reflect on how to use these tools, and not just students with a digital affinity.

Preparing course material

When used appropriately, text-generating AI tools like ChatGPT can be quite helpful for teachers.

  • Generating material: A tool generates texts for students to continue working on (e.g. foreign language texts at a specific level of competency). This makes it easy to adjust texts to individual needs – as long as the tool is provided with the relevant information. AI tools can also be used to create multiple choice questions and quizzes or to make suggestions for course plans (see the sketch after this list).
  • Assessing performance: Since ChatGPT was released, many teachers have shown that such tools can produce plausible results when evaluating texts if they are given the corresponding criteria beforehand. However, this does not work flawlessly: repeated evaluations of the same content can differ, e.g. due to load balancing, traffic or the account being used. ChatGPT can also direct teachers' attention to relevant passages and thus shorten the time needed to assess students' work. In the end, of course, the assessment itself remains the teacher's responsibility.
  • Deconstruction: Teachers can have generative language models create texts of varying complexity and quality in order to have students evaluate them based on Bloom's taxonomy – which is especially useful for practising evaluation and analysis.
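A minimal sketch of the material-generation idea from the first item above. The OpenAI Python client, model name, topic and prompt are illustrative assumptions, not university recommendations; comparable tools work similarly, and the notes in item Data protection still apply.

```python
# Minimal sketch: drafting multiple-choice questions with a chat model.
# Assumes the OpenAI Python client (openai>=1.0) and an API key in the
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

topic = "photosynthesis, introductory university level"  # hypothetical topic
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are an assistant for university teachers."},
        {"role": "user", "content": (
            f"Draft three multiple-choice questions on {topic}. "
            "Give four answer options each and mark the correct one."
        )},
    ],
)
print(response.choices[0].message.content)
# The output is a draft only: the teacher remains responsible for
# checking factual accuracy before using it in class.
```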

Data protection

A number of data privacy issues arise that need to be considered in connection with using AI tools. AI tools process personal data, often on servers in the USA. Sometimes it is not clear which data is being processed.

There is a difference between the personal data required for users to log in and use AI tools and personal data entered by users in a prompt for an AI tool.

  • Registered users of ChatGPT provide their given name and family name, email address and phone number as well as the IP address of their computer. If course participants are required to register with their own personal data in order to use an AI tool in a course, this must be voluntary for all participants in accordance with the General Data Protection Regulation (GDPR). It is considered to be voluntary consent if the course is not required but is classified as a required elective or a supplementary course. Additional information is available from the university's Division of Legal Affairs. Contact the Communication, Information, Media Centre (KIM) for information and advice on actively using AI tools in courses in line with data protection requirements.
  • We do not recommend entering the personal data of living persons in a prompt, e.g. for ChatGPT, since you must usually have this person's consent to do so. This personal data includes any information that can be used to identify an individual person. It also includes different pieces of information that, when combined with each other, can be used to identify a specific person. Unlike other searches for people (e.g. via Google), ChatGPT does not just provide information it already has, but processes data input using unknown algorithms, too. OpenAI, the operator of ChatGPT, also uses this data to train and improve the AI. This is why you should avoid entering confidential data (e.g. from research interviews) or current research data.
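One practical precaution is to strip obvious personal identifiers from text before entering it in a prompt. The following Python example is a minimal illustration of this idea; its regex patterns only catch simple cases such as email addresses and phone numbers and do not guarantee GDPR-compliant anonymization.

```python
import re

# Minimal sketch: strip obvious personal identifiers before prompting.
# The patterns are illustrative; they do NOT guarantee anonymization
# and are no substitute for consent or a proper data protection review.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s/()-]{6,}\d"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a placeholder like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

interview_note = "Contact Jane Doe at jane.doe@example.org or +49 7531 880."
print(redact(interview_note))
# -> "Contact Jane Doe at [EMAIL] or [PHONE]."
# Names still pass through: manual review remains necessary.
```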

AI for studying

Support for the learning process

Generative AI tools can be used to support the learning process, since you can communicate with them using natural language. For example, they can be used to create individualized learning material. Another option is to use them to correct and improve your answers to practice exercises. This means AI tools can be used as an adaptive feedback system (sparring partner) to reinforce your individual learning path.

Writing skills

In the area of writing skills, it is important to go beyond merely evaluating the quality, factuality and linguistic correctness of AI-generated texts. You need to consider what role the writing process plays in your own learning and thinking process. Text-generating AI tools like ChatGPT can assist with writing, but humans will still need to acquire and maintain extensive writing skills. This is especially true in the context of university study and teaching, where complex (subject-specific) scientific thinking and writing skills are indispensable.

Discussing course content and scientific topics in writing as well as trying to express complex issues in one's own words not only expands students' academic writing skills but also helps them understand and learn study content. The Writing Centre can advise you on how and when you can use AI tools for this purpose.

Large language models versus expert systems

Language models can be used to generate texts that often sound convincing and are largely free of linguistic errors. At first, this can give the impression that the produced content is correct and complete. However, if you take a closer look, you often notice that the generated text is very superficial in places and contains factually incorrect content or completely fabricated statements. It is important to know that such texts can also contain made-up sources that often appear plausible (see How does text-generating AI work?).

When you use AI tools, it is important to understand how they work and what knowledge basis they rely upon. A language model is not an expert system, which is why users need to act as the experts and take responsibility for checking the correctness of AI-generated content themselves. The lack of transparency about the sources consulted makes this fact-checking process more difficult.

Deciding when to use AI tools

Although AI tools can be used in a variety of ways for learning and academic work, there can also be good reasons to decide against using them, e.g. in order to stay independent of AI tools or to avoid limiting your own learning process. Especially when it comes to basic competencies that an AI tool can handle very well, it can still make sense for your learning process to practice these skills first in order to really grasp them and only then, later on, to manage the process as well as check and assess the results when using AI. Please also keep in mind that thinking and writing go hand in hand: using AI tools can indeed be a useful aid in the writing process, but it can also hamper your ability to gain a deeper understanding of content by thinking about how to formulate your ideas.

For these reasons, it is valuable to decide ahead of time whether AI tools support your learning process or whether they simply make completing a task as easy as possible. For orientation, please check whether the use of AI tools is permitted for the tasks you have in mind (e.g. exams).

Your decisions should also factor in the topics discussed in the items Data protection and Sustainability and ethical considerations.

AI and exams

Labelling AI-generated texts in student assignments

The following questions provide orientation about how and when students need to disclose that they have used generative AI (as permitted) in performance assessments (see item Options and limitations for using AI).

  • Is the use of AI part of the research method, and is information about its usage thus required for the transparency of the research process? E.g. were research assistant tools such as Elicit or Perplexity used for literature research and evaluation?
  • Is AI-generated material used as a primary source, and must it thus be cited like any other (primary) source? E.g. were linguistic analyses performed on AI-generated texts?
  • Is the use of generative AI part of the course content, and are students expected to reflect upon it? E.g. is it part of the performance assessment for students to critically reflect on and edit AI-generated summaries of subject-specific texts?
  • Are the assigned tasks that can be completed with the assistance of an AI tool relevant for the competencies being tested? E.g. are translation tools being used in a translation exam by students of a language, or are they simply being used by students who need to work in a second language while studying their subject?
  • Should disclosure of AI usage be considered (positively or negatively) in the grading process?

Aside from didactic and examination requirements, users may be obligated by licensing/terms of use to disclose use of an AI tool.

Citing AI-generated text

Students can identify AI-generated content either by providing a citation or by stating the use of AI tools in the text itself or in the corresponding declaration of independent work.

  • If it makes sense and is necessary to mark individual text passages or other AI-generated materials as a direct or indirect citation, think about how this can be implemented in the citation style used. The options for indicating the use of AI tools are constantly evolving, so common citation styles do not always include every type of use. For more information, see the APA Style Blog (7.4.2023) and the MLA Style Center (17.3.2023).
  • In some cases, it is sufficient to indicate the use of generative AI in a suitable place (e.g. in the introduction, in the methods section of a research paper or in a list of resources used). The declaration of independent work that students are required to submit can state which information is required (type of AI tool used, usage type, input/prompts).

If you are required to state the input/prompts used for an AI tool, it is now possible with ChatGPT to create a stable link to the chat history so that this information can be provided relatively easily.

Using AI tools in coursework

As a teacher, you can test how generative AI tools would complete your existing assignments. Then you can think about how you and your students could critically reflect on the use of AI tools.

  • How can you discuss the opportunities and risks of using generative AI?
  • Can you demonstrate the use of generative AI in your seminar in order to show and reflect upon these opportunities and risks?
  • What are the advantages and disadvantages of using generative AI?
  • What ethical aspects speak for or against the use of generative AI?
  • Do you teach basic skills that exclude the use of AI, or can you expect the students to already have these skills?
  • Which options are there to make not only the end product but also the process of generating content transparent and assessable?
  • Are there scenarios where using generative AI would enable students to spend more time completing more valuable assignments?
  • What impact does using these AI tools have on your learning objectives?

AI and good scientific practice

The University of Konstanz's Statutes to Ensure Good Scientific Practice define what is considered to be scientific misconduct. All of the university's academic staff members and students are required to follow the principle of academic integrity. When using generative AI, please consider the following questions:

  • In which cases might the use of generative AI be classified as scientific misconduct?
    This includes the question of who can be considered the author of AI-generated material – the person working with the AI tool, the AI tool itself or other people? Closely related is the question of whether AI-generated material must be cited in order for it not to be considered plagiarism. AI-generated content might also not meet the requirements for the reproducibility and verifiability of research results. Since the legal positions on such questions are still being discussed, we recommend you use AI tools very cautiously and deliberately.
  • Does AI-generated content include other people's texts without citing them?
    Since AI tools draw on different sources, some of which are copyrighted, it is possible for AI to generate passages of text that would need to be cited. For example, ChatGPT uses a wide range of sources (websites, PDF files, publicly available databases and more), including material that has already been published by specific persons. Since ChatGPT, for now at least, does not provide sources for its responses, you should generally assume that at least parts of AI-generated texts can come from sources that need to be cited. Even if, in many cases, this will not be true because of the way ChatGPT works, users must be aware that they may face the risk of being accused of plagiarism. To avoid infringing on intellectual property rights and potentially facing liability for copyright claims, you will need to research the corresponding sources and cite them in your text. As the author, you bear full responsibility for the academic integrity of your work, i.e. the text's factuality as well as the correct labelling of sources used in your text.

AI tools and cheating

If students attempt to influence the result of coursework or performance assessments through fraud (e.g. plagiarism) or the use of aids that are not permitted, the corresponding coursework or performance assessment is deemed to have failed (5.0). Teachers need to decide on and inform their students about whether AI tools are considered to be "permitted aids". We describe four potential scenarios in item Options and limitations for using AI.

The University of Konstanz has created template texts expressly stating the use of text-generating AI tools. The departments can add these text passages to the declaration of independent work that students submit along with their work. For more information, please contact Natascha Foltin and Heike Meyer.

When teaching staff provide students with information about coursework and performance assessments, they must also state in writing which aids are permitted. It is also helpful to discuss with students in which ways and to what extent it is both possible and useful to use a (specific) AI tool. As a point of orientation, the use of AI is not usually considered attempted cheating as long as it does not go beyond comparable help from other persons (e.g. a conversation for brainstorming ideas or help with understanding course content).

Software for detecting AI-generated content

Text-generating AI tools like ChatGPT create new answers and texts based on a randomized algorithm. These tools do not simply pick suitable sections from a pool of old texts. Each text is unique, which makes it impossible to use classic plagiarism detection software on them.

At the moment, it is virtually impossible to use software to reliably identify texts created by generative language models. OpenAI's AI Classifier, for example, correctly identified only 26% of AI-generated texts (the rest were false negatives), and it also mislabelled human-written texts as AI-generated (false positives). Companies such as OpenAI are now working hard to significantly improve the accuracy rate. More information about the rate of accuracy in detecting AI-generated texts.
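A worked example of why this matters, combining the 26% detection rate quoted above with two assumed figures: a 9% false positive rate (which OpenAI reported for its classifier at the time) and a guessed 10% share of AI-written submissions. Under these assumptions, Bayes' rule implies that most flagged texts are actually human-written.

```python
# Bayes' rule illustration: how trustworthy is a "flagged as AI" label?
# The 26% true positive rate is quoted above; the 9% false positive
# rate and the 10% base rate of AI-written submissions are assumptions.
p_ai = 0.10                # assumed share of AI-written submissions
p_flag_given_ai = 0.26     # detector sensitivity (quoted above)
p_flag_given_human = 0.09  # assumed false positive rate

p_flagged = p_flag_given_ai * p_ai + p_flag_given_human * (1 - p_ai)
p_ai_given_flag = p_flag_given_ai * p_ai / p_flagged

print(f"P(text is AI-written | flagged) = {p_ai_given_flag:.0%}")
# ≈ 24%: roughly three out of four flagged texts would be human-written.
```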

The biggest problem with detection tools is that they produce so many false positives. Such tools can indicate a suspicion, but it is quite possible that the corresponding text was actually written by a person. Classic plagiarism detection software cannot be used here, as there are no original texts to compare against in order to determine whether an author has attempted to cheat. But how does one respond once a suspicion has been raised?

As a first step, one good method is to use common sense and consider: Does the text have an individual tone of voice? Has the text been composed rigorously, and is the argumentation coherent? Is there evidence of "hallucination", or does the text seem to be the output of a "stochastic parrot"?

Contacts at the University of Konstanz

Working group "AI in teaching", general matters

The working group aims to promote networking within the university and the use of materials and results that have been developed elsewhere. If you would like to share your results and experiences with other university members active in this area, please contact KI-Lehre@uni.kn. The working group is coordinated by Alexander Klein, Academic Staff Development/University Didactics.

If you have questions or concerns about using AI in the teaching context, you are also welcome to email KI-Lehre@uni.kn.

Legal matters

Questions about data protection issues: Anuschka Haake-Streibel and Ralph Kraemer

Questions regarding examination regulations and academic integrity: Natascha Foltin

Questions related to copyrights: Karin Günther

Using AI in teaching and exams

The Academic Staff Development (ASD) team supports you with using AI tools in teaching:

  • Development of AI competencies and AI literacy
  • Ethical considerations for using AI tools
  • Testing of AI tools for teaching
  • Exam design in the context of AI tools

To find out more about current discussions on AI tools in the teaching context, visit the FON website (in German). There you can read about the bi-weekly meetings in which you can suggest AI tools you would like to test, discuss and analyze with others. For more information, please email Alexander Klein.

Academic writing for students

The Writing Centre advises students on completing written assignments:

  • Development of basic competencies (without the help of AI tools)
  • Critical reflection and use of AI tools in the writing process

An overview of available support in the area of AI and writing is available on the Writing Centre website. For additional information, please contact Stefanie Everke Buchanan and Stephanie Kahsay, or email the Writing Centre.

AI-based research tools and plagiarism detection software

Please contact the KIM specialists for your subject area for further information.

Functionalities and compliance with data protection regulations

Academic Staff Development (ASD) and the Communication, Information, Media Centre (KIM) can assist you with using AI tools in teaching and exams (see the contact details above).

For technical advice and guidance on using AI tools in line with data protection requirements, contact Fabian Stöhr in the KIM team.

AI tools for future school teachers and students

On 20.06.2023, the latest lecture in the series Schule aktuell focused on using AI at schools and universities. The lecture recording and presentation are available for download (4.7.2023).