Fair Comment: This was not written by ChatGPT


By Dr Wayne Holmes, MA, MSc (Oxon), PhD (Oxon)
University College London, UCL Knowledge Lab (Associate Professor of AI and Education)

Over the last few months, the application of Artificial Intelligence in education (AIED) has burst into the public spotlight. While researchers have been studying AIED for more than forty years, and commercial organisations have been deploying it in schools for more than a decade, such innovations had largely gone unnoticed by the general public. That all changed with the launch last November of the AI application ChatGPT.[1]

Within days, ChatGPT, from the AI research lab OpenAI,[2] became the fastest-growing online sensation ever, thanks to its ability to generate, in response to a prompt, impressively human-like text within seconds. News outlets across the Commonwealth immediately seized on its potential use by students to cheat, especially in essay writing: “Cheating with ChatGPT? Controversial AI tool banned in these schools in Australian first” (SBS News[3]); “ChatGPT: New AI tool raises education concerns globally” (Punch Nigeria[4]); and “Why AI Tools Like ChatGPT Will Widen the Education Gap” (Global Indian Times[5]).

In contrast to this predictable response, some educators have cautiously welcomed the arrival of ChatGPT and similar tools (new ones, such as Google’s Bard,[6] seem to be announced every day). For example, a school in Germany expects its students to use AI when writing their essays and then to critically examine the AI-generated text.[7] Meanwhile, my own institution (University College London) has released guidance that says, of tools like ChatGPT, “rather than seek to prohibit their use, students and staff need to be supported in using them effectively, ethically and transparently”.[8] This is not to suggest that educators should ignore students using AI to cheat, but rather to acknowledge that these tools are now widely available, are likely only to become more sophisticated, and have both negative and positive potential.

I have used ChatGPT to support my writing and teaching. For example, when tasked with writing a paragraph on something new to me, I used ChatGPT to generate a first draft; while I chose not to use any of the sentences it suggested verbatim, it definitely inspired what I went on to write, helping me overcome my writer’s block. Meanwhile, the Internet is awash with novel ideas for how to use ChatGPT to inform teaching and learning, such as using it to suggest lesson plans, generate ideas, summarise texts, or simplify difficult ideas. Similar tools are also being used to automatically generate new images, music and even computer code.
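
For readers who would rather script that drafting step than type into the chat interface, the same workflow can be expressed in a few lines of Python. The sketch below is illustrative only: the topic, the prompt wording and the model name are my assumptions, and it presumes OpenAI’s current Python client with an API key set in the environment.

```python
# A minimal sketch of requesting a "first draft" from ChatGPT via OpenAI's
# Python client. The topic, prompt wording and model name are illustrative
# assumptions; OPENAI_API_KEY must be set in the environment for this to run.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

topic = "the history of intelligent tutoring systems"  # hypothetical topic

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any available chat model would do
    messages=[
        {
            "role": "system",
            "content": "You are a drafting assistant. Produce a rough first "
                       "draft that the author will rewrite in their own words.",
        },
        {
            "role": "user",
            "content": f"Write a 150-word first-draft paragraph about {topic}.",
        },
    ],
)

# The draft is a starting point to critique and rewrite, not to copy verbatim.
print(response.choices[0].message.content)
```

The point of the system message is exactly the working practice described above: the output is treated as raw material to be rewritten, not as finished prose.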

However, before we become too enamoured, we need to recognise some fundamental facts about how these tools work. In essence, ChatGPT and its peers identify correlations between words (or images, or lines of code) in huge amounts of data scraped from the Internet (including all its errors, biases and falsehoods) and then generate an output that is simply another example of those correlations. As a consequence, although the output might appear human-like, unlike humans these tools actually understand nothing. They also often generate nonsense (errors, biases and falsehoods: “garbage in, garbage out” still holds), especially when prompted about something on which opinions are divided. As for the potential for cheating, industry is already launching tools[9] that, they claim, detect when a piece of text has been written by AI. However, such an approach is likely only to lead to an unwinnable arms race, with each generation of detector being leap-frogged by the next generation of generator, and so the cycle repeats.
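
To make the “correlations between words” point concrete, here is a deliberately tiny sketch: a bigram model that counts which words follow which in a made-up corpus, then generates text by sampling from those counts. Real systems such as ChatGPT use vastly larger neural networks and corpora, but the underlying principle, predicting a plausible next word from observed patterns without any grasp of meaning, is the same. The corpus and starting word here are invented for illustration.

```python
# A toy illustration of generating text from word correlations: a bigram
# model. It counts which words follow which in a tiny, made-up corpus,
# then produces new text by repeatedly sampling a plausible next word.
# It "understands" nothing: it only reproduces observed patterns.
import random
from collections import defaultdict

corpus = (
    "ai tools can help students learn . "
    "ai tools can also help students cheat . "
    "students learn by thinking critically ."
).split()

# For each word, record every word observed to follow it.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by sampling each next word from observed successors."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("ai"))  # e.g. "ai tools can also help students learn ."
```

Notice that the model happily produces fluent-sounding sequences it has never “checked” for truth, which is precisely why errors and falsehoods in the training data resurface in the output.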

So, what are the takeaways? First, that these technologies are only going to become more available and sophisticated over time. Second, that they can be used in multiple ways, including inspiring ideas as well as generating texts that might be indistinguishable from student-written essays. Third, to avoid an unwinnable arms race, we need to rethink how we assess students, perhaps beginning with setting tasks that require understanding and critical thinking, neither of which can (yet?) easily be replicated by AI. Finally, we need to think carefully about how these technologies can be used ethically and transparently, promoting and not undermining fundamental human rights, while effectively supporting student agency and learning.

Wayne Holmes
(with a little help from ChatGPT)


[1] https://chat.openai.com/chat

[2] https://openai.com

[3] https://www.sbs.com.au/news/article/cheating-with-chatgpt-controversial-ai-tool-banned-in-these-schools-in-australian-first/817odtv6e

[4] https://punchng.com/chatgpt-new-ai-tool-raises-education-concerns-globally

[5] https://www.globalindiantimes.com/globalindiantimes/chatgpt-education

[6] https://blog.google/technology/ai/bard-google-ai-search-updates

[7] https://the-decoder.com/a-teacher-allows-ai-tools-in-exams-heres-what-he-learned

[8] https://www.ucl.ac.uk/teaching-learning/assessment-resources/ai-education-and-assessment-staff-briefing-1

[9] https://writer.com/ai-content-detector
