The transition from pen-and-paper exams to an entirely online model is becoming the norm post-pandemic. Earlier this year, Jisc published its report on the future of assessment, in which the idea of all-digital exams was floated as an ambition for 2025. This was recently followed by the announcement that Qualifications Wales is consulting on the future of GCSEs from 2026, stating that e-assessment seems like an ‘inevitable next step’.
Both Jisc and the Chair of Qualifications Wales recognise the logistical difficulties of achieving this type of wholesale transformational change in a summative assessment context. These difficulties will certainly not be resolved quickly, but there are many interim technological milestones we can reach to help us work towards that future.
In its report, Jisc stated that we need to automate only where appropriate – an assertion I strongly agree with. Auto-marking for multiple-choice assessments has been deployed for some time now, with countries such as the US using it extensively. While its time-saving benefits are clear, multiple choice is simply not an appropriate method of assessment for everything.
So, what might other examples of automation look like? Artificial intelligence could be used to assist with current pen-and-paper assessment. At RM, we have piloted the use of AI for auto-marking short (one- to three-word) handwritten exam responses with several awarding organisations.
The technology has proved successful and could be applied to longer responses – to fact-check sentences, for example.
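To make the idea concrete, here is a minimal, purely illustrative sketch of short-answer auto-marking. The pilot described above involves handwriting recognition and machine-learned models; nothing below reflects RM's actual system. The mark scheme, function names and confidence threshold are all hypothetical, and the key design point is the last one: anything the matcher is not confident about is referred to a human marker rather than marked automatically.

```python
# Toy auto-marker for short (one- to three-word) responses.
# Illustrative only: the mark scheme, names and threshold are invented.
from difflib import SequenceMatcher


def normalise(text: str) -> str:
    """Lower-case and collapse whitespace so trivial differences don't count."""
    return " ".join(text.lower().split())


def auto_mark(response: str, accepted_answers: list[str], threshold: float = 0.9):
    """Compare a response against the accepted answers on a mark scheme.

    Returns ('award', score) for a confident match, or ('refer', score)
    to route the response to a human marker for professional judgement.
    """
    best = max(
        SequenceMatcher(None, normalise(response), normalise(answer)).ratio()
        for answer in accepted_answers
    )
    return ("award" if best >= threshold else "refer"), best


# A clean match is awarded automatically; an ambiguous or wrong
# answer falls below the threshold and is referred to a human.
decision, score = auto_mark("  Photosynthesis ", ["photosynthesis"])
```

In practice the thresholding is what keeps the human in the loop: automation handles the unambiguous bulk, and professional judgement is reserved for the cases that genuinely need it.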
It’s important that automation isn’t seen as a replacement for professional judgement. Instead, automation should augment human intelligence, acting as a form of quality assurance and removing the more mundane elements of marking. This could reduce the time spent marking and deliver faster results.
“Using methods of testing that might be more inherently natural and familiar to learners, who will use this type of technology elsewhere, is arguably a better way of assessing skills”
Digital delivery opens up possibilities to redesign assessments for more real-world contexts, using rich multimedia and interactive formats, whilst maintaining their authenticity by ensuring they continue to be grounded in sound pedagogy. E-testing platforms can be used to innovate testing, incorporating simulative, interactive and live problem-solving scenarios. They allow teachers, assessors and awarding organisations to build their own content and, with multiple delivery methods, can be completely customisable. Using methods of testing that might be more inherently natural and familiar to learners, who will use this type of technology elsewhere, is arguably a better way of assessing skills.
The practical barriers to ending pen-and-paper exams within our current summative assessment systems are difficult to overcome, and what this would look like in practice still needs to be determined. Access to equipment is one such barrier – securing 300 computers at one time is harder than setting out 300 desks with exam papers and answer sheets. Whether or not wide-scale adoption of relatively low-cost devices such as Chromebooks, or bring-your-own-device schemes, is part of the solution, there will be questions to answer regarding equity, cost and technical support.
The crucial thing is that we do not wait for summative assessment to lead digital transformation. By digitising formative assessment now, teachers and learners will be fully ready to embrace summative transformation when it comes – whether that be in 2025 or before, given the impact of the pandemic – having already built up confidence, experience and evidence of positive impact.