COL has released a new publication as part of its Knowledge Series, Authentic Assessment in a Changing World: Considerations for Open and Distance Learning, offering timely guidance for institutions, educators, policymakers, and governments rethinking assessment in a context shaped by generative AI, widening social inequalities, and rapid change.
The authors argue that authentic assessment is no longer simply an alternative to conventional examinations, but a practical necessity. In Open and Distance Learning (ODL), assessment must account for both academic integrity and the application of knowledge in complex, real-life situations. Rather than treating AI as wholly positive or wholly harmful, the authors propose a more balanced approach in which institutions consider the conditions that enable AI to be used responsibly, with human judgment, ethics and accountability remaining central.
Professor Jane-Frances Agbu, Adviser: Higher Education at COL, highlights the updated working definition of authentic assessment in the toolkit. “Authentic assessment is framed as the design of tasks that prepare learners to navigate complex, messy, ambiguous, unpredictable situations with informed integrity, while also making appropriate choices about whether and how to use AI tools.”
Informed integrity is revisited throughout the publication. In addition to producing answers, students should demonstrate how they arrived at them. AI use should not prevent students from interrogating their knowledge claims or being accountable for their work.
Dr Jako Olivier, Adviser: Higher Education at COL, further emphasises the publication’s value in providing practical guidance. “Beyond theory, the publication identifies three implementation issues relevant to ODL: scalability, the use of generative AI, and the diversity of learners’ languages, cultures and lived realities.”
By making tasks more context-specific, using clear assessment rubrics, asking learners to self-assess against criteria, and breaking large assignments into sequenced parts across a term, the authors propose, institutions can strengthen authenticity and fairness in assessment design.
The publication also offers broader guidance on the role of AI in assessment. It warns against over-reliance on AI-generated content and notes that blanket institutional bans are neither practical nor advisable. Instead, institutions should provide clear policies or guidance on acceptable uses, such as planning, idea development or structured feedback. The authors also caution educators against handing final assessment decisions entirely to AI systems, as the risks currently outweigh the benefits.
Another key takeaway is that the familiar language of “21st-century skills” may no longer be sufficient on its own. The publication calls for a deeper orientation towards critical awareness, truth-seeking and understanding the power relations shaping society, technology and knowledge. In this view, authentic assessment should help prepare learners not only for work but also for ethical participation in a fast-changing world.
As part of COL’s Knowledge Series, this publication shares thoughtful, practical resources for those seeking to redesign assessment for relevance, equity and integrity. For ODL institutions in particular, it offers a clear reminder: assessment must evolve together with the world learners are entering.
Download the publication here: https://hdl.handle.net/11599/6110