AI came, AI saw, but has AI conquered?

Where are the robot teachers? Is facial recognition still just down to a beady-eyed head? And just what is 5G doing to that algorithm? AI in education might not be glaringly obvious – but rest assured it’s everywhere and growing. So, how do we make sure it’s working for us, and not us working for it?

The Institute for Ethical AI in Education (IEAI) launched from the University of Buckingham in 2018 with a brief to encourage regulation of a prevalent, but little understood, technology that seems to have a mind of its own.

At the helm were renowned educationalists Sir Anthony Seldon, UCL Professor Rose Luckin, and Priya Lakhani OBE, the founder and CEO of education software platform CENTURY.

“AI could transform every aspect of education,” wrote Sir Anthony, optimistically waving the flag for AI that could tackle teacher shortages and “offer personalised learning, automated setting and marking of work, and more detailed feedback. Traditional testing and exams will gradually fade away and be replaced by real-time reports that give far more accurate appraisals.”

But AI’s benefits, he and the IEAI acknowledged, come with a lot of caveats. AI is only as beneficial as the intentions of the businesses that create it, and the institutions that use it.

At the launch, Professor Luckin said that the solution was at our fingertips, but, “we must ensure that the ethical vacuum of much of today’s commercial AI development is filled with practices, moral values and ethical principles, so that society in all its diversity will benefit. Ethics must be ‘designed in’ to every aspect of AI.”

Hence the aim of the two-year project: to develop a framework – and lobby for its national and international take-up – that would enable everyone in education to benefit from AI, while also being protected against the known risks the technology presents.

The research examined the assumptions about human behaviour that underlie current AI development and drew on insights from a series of roundtables and forums at Buckingham and the 2020 Global Summit on the Ethics of AI.

The pandemic, and the mass uptake of AI-driven blended and home learning that followed, added extra urgency to the Institute’s mission. Before it ended its tenure at Buckingham in 2021, it released a flat-pack manifesto for educationalists and governments to build that apparatus into their policy on AI.

The nine points it made were:

  • AI should be used to achieve well-defined educational goals based on strong societal, educational or scientific evidence that is for the benefit of learners
  • AI should be used to assess and recognise a broader range of learners’ aptitudes
  • AI should boost institutions’ abilities while simultaneously respecting human relationships
  • AI systems should promote equity between different groups of learners
  • AI should be used to increase the control learners have over their own academic development
  • Institutions must strike a balance between privacy and the legitimate use of data to drive well-defined and desirable academic goals
  • Humans are ultimately responsible for educational outcomes and should, therefore, have an appropriate level of oversight of how AI systems operate
  • Both learners and educators should have a reasonable understanding of AI’s implications
  • AI resources should be designed by people who understand the impacts of the technology.

AI development is unstoppable, and new machine-learning innovations launch continuously in education. The media like nothing better than to warn that classrooms risk losing human teachers to voice- and face-recognising software.

But, says Dr Harin Sellahewa, dean of the Faculty of Computing, Law and Psychology at the University of Buckingham, things haven’t moved on quite as fast as media coverage and public perception of AI would suggest.

Soft(ware) policy

If we really are living in the oft-mooted fourth Industrial Revolution, we’re talking cotton gins, not the synthetic humans of Blade Runner (which, a human pedant points out, was set in 2019 – so they’re already two years late for class).

Professor Sellahewa, who was involved with the research and focus groups established by the IEAI, says one of the fastest-growing – “and useful” – applications of AI in education today is software that can recognise patterns in students’ performance: scanning individual and class data, past and present, for strengths, weaknesses and gaps in a student’s knowledge, then alerting human teachers to them.

That kind of algorithm potentially frees up time for teachers to focus on individuals.
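The pattern-spotting the professor describes can be imagined in miniature: compare each student’s per-topic marks against the class average and surface the gaps for a teacher to review. Everything below – the topics, marks and the 75% threshold – is invented purely for illustration.

```python
# Toy sketch: flag topics where a student sits well below the class average.
from statistics import mean

def flag_knowledge_gaps(scores, threshold=0.75):
    """Return {student: [topics]} where a student's mark falls below
    threshold * the class average for that topic."""
    topics = {t for s in scores.values() for t in s}
    class_avg = {t: mean(s[t] for s in scores.values() if t in s) for t in topics}
    gaps = {}
    for student, s in scores.items():
        weak = [t for t, mark in s.items() if mark < threshold * class_avg[t]]
        if weak:
            gaps[student] = sorted(weak)
    return gaps

scores = {
    "A": {"loops": 80, "recursion": 40, "sorting": 75},
    "B": {"loops": 85, "recursion": 70, "sorting": 78},
    "C": {"loops": 90, "recursion": 65, "sorting": 30},
}
print(flag_knowledge_gaps(scores))  # {'A': ['recursion'], 'C': ['sorting']}
```

A real product would use far richer signals than raw marks, but the output – a shortlist of students and topics for a human to act on – is the time-saving idea.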

“There’s a problem with AI, especially when it comes to education, because it could stand in the way of students and lecturers forming the sort of human interaction and relationship that’s hugely beneficial to learning” – Professor Sellahewa, University of Buckingham

The professor is all for that; but he sounds a note of caution: “There’s a problem with AI, especially when it comes to education, because it could stand in the way of students and lecturers forming the sort of human interaction and relationship that’s hugely beneficial to learning. And, of course, you’re using past data to predict future outcomes.”

Algorithm & blues

Every teacher knows a student is capable of improvement. Harin worries that relying on AI’s performance analysis could lead to students falling through the net. “I think there’s an ethical question; should you judge a current student’s potential based on their past performance and somebody else’s performance?

“Where it’s also becoming prevalent is in the use of data analytics for assessment and grading purposes. It’s going to be increasingly useful for assessments and looking at progression rates and trying to understand dropout rates.”

Computer science undergraduates at Buckingham have been doing their own research into this. They recently took part in a project that tried to predict a test student’s degree outcome after they had completed the first exam set in their course. “For context, we offer a two-year degree programme, and we have exams every six months, so we created models based on past student exam data and the data from new students after they had taken their first exams after six months on the course. Based on those results, we tried to predict what they might get at the end of the degree.” 

Using that data, the project argued, AI should – in principle – be able to alert tutors to areas where students at the beginning of their undergraduate trajectory are struggling, and which might otherwise go unnoticed, resulting in a lower-than-expected final degree.

“The idea of the research is about when teachers should make an intervention and assist the student in areas where they struggle and help them to up their final grades. That’s actually very hard to do manually, sifting through all the papers and work looking for patterns and weaknesses.    

“The research our students carried out only had a small set of data – our cohort sizes are very small – but it achieved more than 50% accuracy in its predictions.” The plan was twofold: first, to try to predict the degree class, and then to predict the average mark the hypothetical student would get. The results gave, says the dean, “a fairly accurate picture”.

“So, say somebody was predicted to get a first-class degree, but in reality, they only got a 2:1. Well, we’ve also predicted their average, and if that average is a high 60, then we know it’s within the first-class or upper second-class band. Having those two pieces of information increases the accuracy of the model.”
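The twofold approach the dean describes – one model for the degree class, a second for the average mark, cross-checked against each other – might be sketched as below. The exam figures and class labels here are synthetic stand-ins; the Buckingham project worked from real past cohort data.

```python
# Hedged sketch: pair a classifier (degree class) with a regressor (average mark).
from sklearn.linear_model import LinearRegression, LogisticRegression

# First-exam averages for past students (features), with their final outcomes.
X_past  = [[42], [55], [61], [68], [74], [80]]
y_class = ["2:2", "2:2", "2:1", "2:1", "1st", "1st"]  # final degree class
y_avg   = [45, 56, 63, 67, 72, 78]                    # final average mark

clf = LogisticRegression().fit(X_past, y_class)
reg = LinearRegression().fit(X_past, y_avg)

# A new student six months in, averaging 70 on their first exams.
new_student = [[70]]
predicted_class = clf.predict(new_student)[0]
predicted_avg = float(reg.predict(new_student)[0])
print(predicted_class, round(predicted_avg, 1))
```

With both outputs in hand, a tutor can sanity-check one against the other – a predicted “1st” paired with a predicted average in the high 60s signals a borderline case, exactly as in the dean’s example.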

“Is it right to predict someone’s future based on someone else’s past data?” – Professor Sellahewa, University of Buckingham

But a fundamental question remained, and, admits Harin, is in need of some deep thinking. “Is it right to predict someone’s future based on someone else’s past data?”

Rage against the machine learning

The intention of the Buckingham students’ research was pastoral; to identify those who might need extra help to lift their grades.

In the case of last year’s A-level and Scottish equivalent results debacle, grades were determined by a similar process of personal data-gathering matched to wider historic data – but this time for real, and with potentially life-path-determining consequences. The results were controversial, to say the least; perhaps summed up by the cries and memes of “F*** the algorithm!” chanted by 18-year-olds across the nations.


“Obviously ours was an academic exercise, but we still had to consider the consequences if one were to put this into practice. What if a student is predicted to get a first-class degree at a very high mark? Do we then pay less attention to that student? That could happen – the student is neglected, nobody checks on them because the results say they’re doing fine, and then suddenly they start to perform poorly. And what would happen if they were predicted to get a lower grade? Would that knock their confidence and mental health? Would it demotivate them or urge them on? The risk is they might feel the system has determined they will fail, so why bother? The human factors and the psychological aspects have to be considered.”

Taken as a whole, you can’t just rely on what the AI predicts from the data it consumes. It’s about how that message – the prediction – is couched and mitigated.

Data of reckoning

The physical restrictions caused by the pandemic have resulted in a surge of useful data.

“Because they’re online, students leave lots of digital footprints. In a physical environment it’s very difficult to capture data. But now I can go onto my teaching-learning platform and see data about students; did they open a lecture slide? Did they watch the video? Did they complete the quiz? I can find out that student A was not engaged in a project for two weeks and I can quickly get a picture of what’s happening in my class. I can get an indication if someone is unhappy, or at a risk of dropping out, and with that I can make an intervention – class sizes meant that, in the past, teachers simply wouldn’t be able to do that.”
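The digital footprints he lists – opened slides, watched videos, completed quizzes – amount to timestamped event logs, and the “not engaged for two weeks” check is a simple query over them. The student names, dates and two-week cutoff below are invented for the example.

```python
# Minimal sketch: flag students with no platform activity in the last fortnight.
from datetime import date, timedelta

# Invented activity logs: each entry is the date of some platform event
# (opening a slide, watching a video, completing a quiz, and so on).
events = {
    "student_a": [date(2021, 3, 1), date(2021, 3, 2)],
    "student_b": [date(2021, 3, 1), date(2021, 3, 14), date(2021, 3, 15)],
}

def inactive_students(events, today, days=14):
    """Return students whose most recent event predates the cutoff."""
    cutoff = today - timedelta(days=days)
    return sorted(s for s, dates in events.items() if max(dates) < cutoff)

print(inactive_students(events, today=date(2021, 3, 20)))  # ['student_a']
```

The point of such a flag is only to prompt a human check-in – which is exactly where the professor’s caveat about incomplete data, below, comes in.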

But the data isn’t definitive. Teachers might be able to review a student’s course-reading progress from the e-books they’ve borrowed, but that student might be using a different, unlinked resource or they might be using platforms like YouTube for additional learning.

“So that’s a weakness in the data gathering – you have to be wary of the information; it will never be the full picture. You can’t jump to conclusions, you’re looking at thin strips of data, gaps in the information that teachers receive don’t necessarily translate as a student disengaging.”

“You can’t jump to conclusions, you’re looking at thin strips of data, gaps in the information that teachers receive don’t necessarily translate as a student disengaging” – Professor Sellahewa, University of Buckingham

Harin doesn’t think it will be long before this kind of machine learning becomes common – but hopes it comes after a lot more testing.

“Any kind of innovation needs a period of piloting, and somebody has to be the ‘guinea pigs’, if you like, whether it’s an AI innovation or not AI, right? Because any new ideas have to be tested before they can be rolled out.”

Computer says 'no'

Given the rapid advance of AI over the last decade, Harin thinks the question of where the technology will be in 20 years’ time is almost impossible to answer – and one that plays into people’s fears by concentrating on the negative aspects of the tech already in view.

“I do understand people’s fears about AI. We’re already seeing quite a lot of fearful media stories about AI take-up in China, where systems not only monitor performance, as we’ve discussed, but use facial recognition to monitor student behaviour in the classroom.”

Added to that, Australian universities are under fire from students for adopting AI proctoring systems. This software – examples include Examity and Proctorio, both described by critics as ‘invasive’ – records students’ computer screens and monitors their eye movements to stop them cheating in exams.

Nonetheless, the professor is confident, citing debate in the EU and UK parliaments and the work by the IEAI and Jisc, that good practice and legislation will be firmly established as a barrier to a dystopian future.

Don't fear the reaper

Staying with nearer-future gazing, Harin hopes to see AI become advanced enough to actually mark student essays, at least partially.

“Some subjects will be easy for an AI to mark, and already are; for instance there’s already lots of program code-marking software on the market and coming through. That’s a pretty straightforward thing for an AI to test, and it’s a very valid time-saving function for teachers.”

But beyond strings of manually inputted code, things become more complicated, not least because AI can’t appreciate, let alone encourage and nurture, nuance, opinion or even humour. These are all human qualities that indicate a student can demonstrate original thinking – surely the core purpose of education? 

It might be that, one day, AI will be able to identify, as Harin puts it, “out-of-the-box thinking and draw a teacher’s attention to it. But we’re not quite there…yet.

“Marking a history paper, or any creative writing, is different and I think, really, expecting AI, at this time, to understand nuance is actually a question about our expectations and imaginations, not the capability of AI.”
