By Dr Jako Olivier
Adviser: Higher Education
Over the past year, generative AI has taken the education world by storm. Amid rapid developments in Large Language Models (LLMs), such as ChatGPT, Professor Michalinos Zembylas, holder of the Commonwealth of Learning (COL) Chair at the Open University of Cyprus (OUC), an honorary position dedicated to promoting innovative research in education, recently undertook a research study to bring students’ perspectives to the forefront. Professor Zembylas stated, “We often hear about university regulations, educators’ perspectives, etc., but we rarely hear students’ voices, how students themselves perceive these rapid developments.”
The study, led by Professor Zembylas and Dr Eleni Christodoulou, a senior researcher at the Program of Educational Sciences at OUC, addressed this gap by conducting four focus groups with undergraduate students at OUC. It investigated how the students think about, feel about and engage with LLMs, and asked for their perspectives on the role of LLMs in future higher education contexts.
Three categories of students emerged from the analysis: enthusiasts, sceptics and rejectionists. The enthusiasts fully support using LLMs for their benefits, such as time-saving, better comprehension, essay structuring and exam preparation, and prefer LLMs over search engines because they give direct responses. Sceptics support LLMs conditionally, advocating protective mechanisms against unethical use, and believe that while LLMs can help with essay structuring, the rest of the work should be done independently. Rejectionists avoid LLMs altogether owing to ethical and pedagogical concerns, emphasising the importance of engaging in every step of learning for its holistic benefits and viewing LLM use for academic tasks as unfair and unethical. These beliefs shaped students’ views on others’ use of LLMs and their opinions on university regulation of generative AI.
While students use LLMs for various purposes, they express a significant need for more information and training on their ethical and pedagogical use. This need is underscored by their lack of awareness of tutors’ beliefs, university policies and the ethical use of LLMs; none of the participants was aware of the university’s AI policy. The frustration they experience from mixed signals sent by tutors and the university further highlights the urgency of addressing this issue. They call for a consistent university policy and for tutors to be better informed about the risks and benefits of generative AI.
According to a report of the study prepared by Professor Zembylas and Dr Christodoulou, “The results highlighted the importance of engaging with students’ fears, ethical and pedagogical concerns; the variation of perspectives in the student cohort (the ‘enthusiasts’, ‘sceptics’ and ‘rejectionists’) and the multiple and creative ways in which students use such programmes for their learning and assessments.”
The study also included recommendations for stakeholders, particularly in response to the identified student needs. The publication based on this research is currently being finalised.