It is easy to get started – all that is required is to create an account on their website. Once in, the chatbot is ready for conversation:
“How does ChatGPT work? Explain it for the readers of LUM.” A few seconds later, the answer appears:
“ChatGPT is a deep learning model that is trained on large volumes of text data. It uses transformer architecture to understand the context of text and generate answers or solutions. The user interacts with the model and feedback is used to improve the answers.”
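For readers who want to see what “the user interacts with the model” looks like in practice, the same kind of question can also be sent to a chat model programmatically rather than through the website. What follows is a minimal sketch only, assuming the official openai Python package and a personal API key (which the web chatbot itself does not require); the model name is purely illustrative.

# Minimal sketch: posing the LUM question to a chat model via the OpenAI Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name; any available chat model works
    messages=[{"role": "user",
               "content": "How does ChatGPT work? Explain it for the readers of LUM."}],
)

print(response.choices[0].message.content)  # the model's reply as plain text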
Rephrasing the question
The language is a bit stilted and the description is difficult for a layman to understand. A rephrasing of the question is in order:
“How does ChatGPT work? Explain it to a child.” The answer comes back:
“ChatGPT is like a smart person in the computer who has read many books and texts. It has learned to understand language and can answer questions or help perform tasks that require an understanding of language. When you ask ChatGPT a question, it thinks back on everything it has read and gives you an answer based on what it has learned. It is like having a super smart friend in the computer who can help you with everything that involves words and languages!”
This was easier. But how smart is this friend in the computer? It calls itself super smart but writes questionable Swedish.
Not a friend anymore
Friend or not – the chatbot is given another chance, this time with the question, “Explain ChatGPT to a beginner.” But when the answer begins with “ChatGPT is a smart computer program...” and uses the wrong gender for the Swedish noun, the friendship is over.
The feeling that remains after all the questions – and quick replies – is something akin to a bad conscience. Should we stop being friends just because the chatbot does not know Swedish grammar? When the questions are posed in English, the answers are much better. Which raises the bigger question – are there any limits to how it can be used?
One use that worries lecturers is the possibility of cheating. Over the course of autumn 2022, Lund University launched a pilot project to investigate how the University can prevent cheating in examinations. The project is led by Elin Bommenel, senior lecturer in service studies.
“One way to reduce anxiety is to get to know the chat tool,” she says, and goes on to list a few things teaching staff should think about: What kinds of questions elicit usable answers? At what level are students answering the test questions? Is there a pattern in the answers that makes it possible to spot when they come from a bot? Does the student realise when the language model has been creative and invented fictional answers?
"The teacher's salvation"
Elin Bommenel emphasises that if the lecturer asks for a human understanding of something, this is beyond the AI’s capabilities. She calls it the “teacher’s salvation” as it becomes harder to cheat using the chatbot when the questions require analysis or a critical perspective:
“The chatbot usually answers in a very general way to complex questions and produces strange mistakes when it is required to invent in the absence of available information. When the bot is used in English, the structure and grammar are often superhumanly good – people rarely write that well.”
But lecturers may still need to think about how to run their examinations, she argues.
Håkan Eftring, senior lecturer in rehabilitation engineering, agrees. He has thought a lot about the user perspective – for teaching staff and students – and has come to the conclusion that he needs to work in a new way with examination questions because he does not want to forbid students from using AI.
"Who am I examining?"
“If I prohibit the use of chatbots, will I be able to detect whether they have used AI when answering the questions?”
He recognises that he needs to ask students more for their own examples, and that one of the consequences will be more oral exams and more examinations on campus.
“I want to know who it is I am examining. That it is a student, not a chatbot,” he says.