
How universities can examine their students in the age of AI

A humanoid robot using a laptop for big data analysis. [Getty Images]

After a heavy week of attending physical and online debates on Artificial Intelligence (AI) use for teaching and learning, particularly at the university level, I am persuaded that we need to review how we measure student learning.

At the university level, particularly from the third year to the postgraduate level, traditional assessment practice is now being questioned in light of AI capabilities. Generative AI has not eliminated the value of examinations or term papers, but it has complicated what these formats can reliably evidence when the work is produced outside a supervised setting.

Where the final script can be substantially shaped by tools, the question becomes less about intent and more about measurement: What exactly are we validating when we grade a product whose authorship and process are partly opaque?

One school of thought argues that stronger proctoring, tighter controls, and more rigorous enforcement can preserve standards. That position deserves respect because it is motivated by legitimate concerns: The protection of academic credibility, fairness to diligent students, and the integrity of qualifications.

Yet the practical constraints are increasingly visible. When assessment depends heavily on proving non-use of AI, the institution risks shifting attention away from learning outcomes toward compliance.

In addition, the reliability of AI detection remains contested, given the rapid advance of "humaniser" tools designed to evade both AI detectors and anti-plagiarism software.

A second perspective emphasises that universities should not rush to abandon familiar assessment forms. Proctored exams, when carefully designed, can still test recall, conceptual understanding, and problem-solving under time constraints.

Likewise, a well-supervised research project can cultivate inquiry, synthesis, and scholarly communication. The point, therefore, is not to frame traditional approaches as obsolete, but to ask where they remain fit for purpose and where complementary formats might better capture advanced competencies such as reasoning, judgment, and methodological fluency.

Within this debate, the key analytical shift is from authorship policing to competency demonstration. Quality assurance guidance is increasingly encouraging institutions to assume that students will have access to generative AI and to design assessments in ways that continue to elicit authentic evidence of learning. The emphasis is on sustainable assessment: Tasks and processes that are fair, scalable, and aligned to learning outcomes without requiring lecturers to carry an unrealistic burden of proof about what tools were or were not used.

Oral assessment is reemerging in this context because it provides direct evidence of ownership and understanding. An oral exam or short viva allows a student to explain choices, defend an argument, clarify methods, and respond to probing questions.

The literature indicates that oral assessment can be credible and reliable when supported by clear rubrics, standardised prompts, and moderation practices. Properly designed, it is not an adversarial encounter, but a structured academic conversation that makes thinking visible.

A related approach is to retain written work while adding a brief oral defence and a transparent AI use statement. Under this model, AI is neither prohibited nor ignored. Students disclose how they used tools, and then demonstrate that they understand the claims, evidence, and limitations within their own submission.

This approach has two strengths. It encourages academic honesty by design, and it shifts the examiner’s task from forensic investigation to evaluating whether the student can account for the work with intellectual clarity and integrity.

Continuous assessment can be strengthened similarly by focusing on process evidence. Rather than weighing a single product heavily, lecturers can request staged deliverables such as annotated bibliographies, reading syntheses, concept maps, research logs, drafts with revisions, data analysis outputs, and short in-class applications.

These artefacts can accommodate AI support while still requiring the student to show progression, judgment, and reflective control. Over time, the learner’s trajectory becomes the evidence, which is often a more valid indicator of competence at advanced levels than a standalone text.

I hold the view that universities can reduce the burden of examiner policing and extend greater student freedom to use AI or not by aligning assessment with demonstrable competencies.

Dr. Mokua is the Executive Director of the Loyola Centre for Media and Communication
