Putting assessment to the test

Nicola Yeeles investigates the contentious topic of assessment, and looks at how the feedback cycle is being eased and influenced by technology, from blogs to artificial intelligence

Remember and regurgitate, or develop skills for life? Getting students to engage meaningfully with a school, college or university’s assessments can be a challenge. But technology can be a game changer, allowing educators to mirror what students might have to do in their working lives. Blogging is one such skill, and it is increasingly popular as a method of assessment in universities. University of Edinburgh lecturers Nina Morris and Hazel Christie looked at how geology and geography students responded to blogging as part of their assessment, and noted that “the continuous nature of blogging compels students to engage more, not just in individual classes, but also across the course as a whole. As a result, students are more able to make connections between course themes and make evaluations based on a broader subject knowledge base.” Meanwhile, they concluded that tutors were given a more personal insight into what their students were learning. In the further education and schools sectors, a similarly open-minded approach to learning outcomes has led many to experiment with online tools like SeeSaw, through which students can create drawings, voice recordings and videos to show what they know in the way that works best for them.

Saving teachers time

It’s clear that it’s not just the learners being assessed who benefit from technology, but also their educators. Haylie Taylor has 13 years’ teaching experience, a decade of which was spent in primary schools, and she reflects on the main challenges for teachers: the first is simply finding assessments. After all the printing and queuing for photocopiers associated with traditional paper assignments, she says that the appeal of online teaching and learning tools like EducationCity, for which she now works as a consultant, is that “all assessments for the core subjects can be housed in one place. They can be set in just a few clicks and students can simply log in on a computer or tablet to complete the assessment.”

Secondly, she recalls the heavy marking load: “Marking one 20-question assessment for one student might take 15 minutes. However, marking a whole class’s worth of English and maths assessments can take a weekend, and that’s if you’re quick! Then comes the data collection and analysis – this can be very time-consuming and labour-intensive, especially if your assessment and data platforms are separate and you have to transfer this information between the two.” Online systems often mark automatically, providing teachers with the data they need instantly – often presented in a simple visual report to help them identify gaps and misconceptions at a glance.

Finally, Taylor notes that a lot of time can be spent on closing the final part of that feedback loop. She says: “After getting to grips with the data, I’d need to think about how I’m going to support my class with those gaps and misconceptions that have been identified. If I already have the resources I need to re-teach or allow my students to practise, then great, but if not, more time will be needed to either source or create relevant, curriculum-linked resources.” This is a further area where technology can assist, by providing a central hub with assignments tagged to relevant content. She says: “In some cases, the curriculum content can even be assigned automatically based on assessment results, saving teachers a significant chunk of time.”


SPONSORED: Maintaining academic integrity with Urkund

By Neil Walker, senior account manager, Nordics

Technology has changed the way we do business, live our lives and interact as human beings. One part of that change can be observed in the classroom, where mobile phones are now omnipresent and information is available a few taps away, presenting new challenges and opportunities. One of those challenges is handling the information overflow and the simple copy-pasting of sources, legitimate or not. According to recent studies, cheating at top UK universities went up a staggering 40% between 2015 and 2018.*

That isn’t all that surprising. More and more universities and schools are opting for technical solutions such as Urkund, which ultimately leads to a higher rate of exposed academic misconduct. Instead of having educators scroll endlessly through documents trying to single out sources, a fully automated system that does the work for them can be a true lifesaver. Besides saving time, it also checks sources that are behind paywalls or were previously submitted by another student – in our experience, this is where 80% of all plagiarism can be found. A further upside, confirmed by teachers over and over again, is that the simple fact of having a plagiarism checker at your institution helps prevent plagiarism in the first place: it enables a conversation around the topic, how to avoid it and why it is a danger to academic integrity.

*(https://www.theguardian.com/education/2018/apr/29/cheating-at-top-uk-universities-soars-by-30-per-cent)


Collaboration between schools

In order to look afresh at these and other challenges, Church Cowley Saint James Church of England Primary School in Oxfordshire recently brought together 14 previously unlinked primary schools to assess writing tasks by Year 6 children using technology called RM Compare. This shows teachers two anonymous pieces of work side-by-side on screen, and the teacher judges which of the two best meets the simplified assessment criteria. The system then uses an algorithm to intelligently select and pair similarly ranked pieces of work for the next judgement. Headteacher Steve Dew says: “The main success of using the technology is the opportunity to collaborate with other schools on what good/great writing looks like. From that our teachers have a good idea of what good writing looks like from a sample of 415 children’s writing, and not just that within our school – this sort of information and analysis has not been available to us previously. This has had a positive impact on how we plan and teach lessons, has raised our bar of expectation and given us some great exemplar material to support the children’s understanding too.”
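For readers curious about the mechanics, the sketch below shows in broad strokes how a pairwise-judgement tool of this kind can turn a series of “which is better?” decisions into a ranking. It is a minimal illustration using an Elo-style rating update and a simple “pair neighbours” selection rule; the script names, starting ratings and update constant are illustrative assumptions, not RM Compare’s actual algorithm.

```python
import random

def update_ratings(ratings, winner, loser, k=32):
    """Elo-style update: the piece judged better gains rating points, the other loses them."""
    expected_win = 1 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += k * (1 - expected_win)
    ratings[loser] -= k * (1 - expected_win)

def next_pair(ratings):
    """Pair pieces with similar current ratings so each judgement is as informative as possible."""
    ordered = sorted(ratings, key=ratings.get)
    i = random.randrange(len(ordered) - 1)
    return ordered[i], ordered[i + 1]

# Five anonymised scripts, all starting from the same rating (hypothetical names).
scripts = {f"script_{n}": 1000.0 for n in range(1, 6)}

# Each round a judge sees two scripts and picks the stronger one;
# here the judgement is simulated with a coin flip.
for _ in range(20):
    a, b = next_pair(scripts)
    winner, loser = (a, b) if random.random() < 0.5 else (b, a)
    update_ratings(scripts, winner, loser)

# Final ranking, strongest first.
print(sorted(scripts.items(), key=lambda item: -item[1]))
```

With real judgements in place of the coin flip, repeated comparisons of this sort converge on a shared rank order across all the submitted work, which is what lets the participating schools compare standards between cohorts.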

As Haylie Taylor indicated, the assessment cycle continues after a mark is decided, and can usefully inform educators on how best to teach a given cohort. Dew cites standards in Years 3 and 4 as an example. He says the school felt it had a handle on attainment within these classes – but there was a surprise to come when staff used RM Compare to assess both year groups (120 children) together. He says: “The crossover of literacy attainment was an eye-opener. For the first time we were able to show that at least 10 children in Year 3 attained in the top 25% of children in Year 4; this helped our conversations with teachers to radically rethink the provision for these children in class.” As educators move towards evidence-based practice, technology is being used to give them the edge: teachers are making better decisions and are able to intervene quickly to improve learning outcomes.

Integrity in HE

Of course, at university level, integrity remains a focus. With the rise of the internet has come an increase in contract cheating, whereby students pay others to create work they later submit as their own, so institutions have to work hard to ensure that the student submitting the work is honest about their contribution. They do this by using the Turnitin plagiarism detection service and, increasingly, by promoting academic integrity more widely. But with all this intelligent technology to hand, will there come a time when educators’ role in conducting assessment is actually minimal? Sir Anthony Seldon, vice-chancellor of the University of Buckingham, believes so. He says that once authorship is established, the main concerns are that tutor comments are “formative, constructive, personalised and useful for the students’ further learning, and that staff time is not excessively taken in this process.”

But he claims that artificial intelligence “is a complete game changer on all three fronts because the AI technology will know the student intimately, it will detect at once when the work submitted is not the student’s own, it will be able to give detailed, personalised and constructive feedback tailored to optimise the learning by each individual student, and it will eliminate almost totally the need for academic and administrative staff to give their own hard-pressed time to assessments.”

If learners are getting this kind of feedback, there can be no doubt that their study time will be spent more purposefully.

But perhaps the most exciting consequence is how teachers and lecturers might use their new-found time to create an even richer experience for their learners.


Pros and cons of technology-enabled assessment (TEA), from the University of Reading

Pros

Improves authenticity and alignment with learning outcomes

Helps to clarify marking criteria

Spreads the assessment load for staff and students

Improves student engagement and promotes deeper learning

Cons

Finances and staff time

Accessibility issues

Large-scale introduction requires a significant level of institutional buy-in

Sense of isolation

Source: https://www.reading.ac.uk/engageinassessment/using-technology/eia-pros-and-cons-of-using-technology.aspx

