No technology is more hyped, more maligned – or more misunderstood – than AI. It is a rare public figure who does not have an opinion on the subject. There are the true believers who celebrate its potential, and there are the heretics who decry its implications for jobs, data security and corporate monopolies.
Currently, AI zealots are outnumbered by unbelievers. Techno-evangelists like Ray Kurzweil, Google’s Director of Engineering, who preach the ‘singularity’ – the point at which machine intelligence outsmarts human intelligence – are in the minority. Far more typical is the response of Robert Halfon, Chair of the Education Select Committee: “I think if we don’t reskill, the potential threats are enormous, and we’re woefully underprepared for it. It has massive implications.”
The issue is particularly acute in education. By its nature, education aims to shape the future – how and what children are taught has profound bearings on the success, and make-up, of society. It is an obvious point perhaps, but one that lies at the heart of the debate over AI in education. Sir Anthony Seldon, a notable commentator and author of The Fourth Education Revolution, has founded the UK’s first Institute for Ethical AI in Education in response.
The institute will examine how AI is used in education, and how an ethical framework might be developed to provide oversight. “We are sleepwalking into the biggest danger young people face,” Sir Anthony said. “AI could be a considerable boon if we get the ethical dimension right but with each passing month we are losing the battle.”
But what are its impacts on education? And what might they look like in the future? These questions are difficult to disentangle from the hype and hyperbole, but clarity is critical. “There’s a phenomenal amount of ignorance out there,” cautioned Dr Kaska Porayska-Pomsta, an education scientist at UCL. “A lot of buzz words are thrown around without understanding the granular details of what AI is.”
The present: the future
Start with the current impacts. Today, AI in education falls into three broad categories: learner-facing tools, teacher-facing technologies, and system-facing applications.
AI could be a considerable boon if we get the ethical dimension right but with each passing month we are losing the battle.
–Sir Anthony Seldon
The first category is perhaps the best known. It is the most marketable and fits best with the sci-fi trappings of AI. In learner-facing tools, AI monitors a pupil’s progress through education software, tailoring it to the needs of the student. It can make questions harder or easier, and aims to stretch the pupil’s knowledge without frustrating them. Carnegie Learning, for instance, uses AI-driven applications to teach maths, providing both one-to-one tuition and education resources for schools.
Another version of these technologies teaches the student through what Wayne Holmes, an AI expert at The Open University, described as a “Socratic dialogue: engaging you in conversation and guiding you towards the correct answer.” Importantly, Wayne noted, “some systems deliberately allow students to fail, before nudging them towards the right answer.” Thus the idea of ‘learning through failure’, a key pedagogical concept, is baked in.
Teacher-facing tools are less eye-catching, but no less important. Virtual learning environments (VLEs) or dashboards have been around for a while in higher education, where corralling the work and research of sometimes hundreds of students is simplified through one dedicated platform like Blackboard or OpenAthens. Increasingly though, VLEs are being adopted in schools. The benefits, especially when combined with AI systems, can be significant. “Teachers can use their time in a more targeted way,” said Toby Baker of Nesta, a charity. “The aim is to relieve pressure on teachers.”
The third category, system-facing tools, could hold the greatest possibility – and attract the most controversy. Ofsted, for instance, has been trialling an analytics tool which crunches data, including location, exam results and previous inspections, to predict which schools stand a higher chance of failing. This allows it to direct stretched resources to the schools in greatest need.
For many though, such systems, while useful, have more than a shiver of Orwellian surveillance about them, especially when they are aimed at emotive areas like education. People who might be unfazed by self-driving cars are, nonetheless, cautious about the prospect of similar technologies being levelled at their child’s learning. These qualms are understandable, said Dr Porayska-Pomsta. But they must be overcome, because “considering the education system as an ecosystem” holds the most potential for practical – and ethical – deployment of AI. “It’s not about replacement; it’s about enhancement,” she said. “Using AI shifts the level of discussion. It promotes education as more than a box-ticking exercise.”
AI: the great disruptor
And the future? The furore around AI is partly driven by concern at the pace of its development, and the sense, as Wayne put it, that big questions about our futures are being decided by “one clique of white, male, middle-class technologists [in Silicon Valley].” And given the whiplash advancements in the field, firm predictions seem foolish. But avenues of progress can be seen. AI-driven collaborative learning, for instance, an area where it currently lags behind flesh-and-blood teachers, is sure to improve.
AI is not the problem, it’s the system itself. We need to decide what we want the system for, what we want education to be about.
– Dr Kaska Porayska-Pomsta
Another area where AI could make a huge difference, said Wayne, is AI teaching assistants: “Teachers could carry all the AI they need on their phones. They would have access to all their students’ successes, failures and difficulties instantly.” These tools, which would combine the in-depth analytics of dashboards with the accessibility of a smartphone app, could be used in combination with a further innovation: the AI learning companion. Wayne envisions this as an ‘educational Fitbit’: “It’s your portal to the knowledge of the world, monitors your progress, and could replace exams. When you apply for a job, you could give them a temporary key to your educational history,” he explained. This functionality could be protected by blockchain-style technology, ensuring data remains private.
Two roads diverged
So much for the future, but what about the present? As the narrative around AI indicates, many worry an infatuation with its head-spinning promise will lead us, somnambulantly, towards disaster. “As humans, we’re lazy,” warned Porayska-Pomsta. “If we can surrender a task to AI we will. The question is: what facilities are we willing to surrender?”
The urgency of this question should mean fewer glib headlines about teachers being replaced by robots, and more hard-nosed scrutiny of the pressing concerns over data ownership and the ethics of algorithms, which the wider debate over AI has thrown up.
A lack of transparency in the industry is a particular problem. ‘Black-box algorithms’ – whereby the user is ignorant of how their data is used – are common.
And a universal worry is the biases that are being hard-wired into AI technologies – and therefore inadvertently into our education systems. “AI is just an advanced classification tool,” said Porayska-Pomsta. “And the data we feed into it is inherently representative of our human biases.” She said this problem can be turned on its head, though. “AI magnifies to us what is wrong with the educational system. It’s an opportunity to re-examine and reflect. AI is not the problem, it’s the system itself. We need to decide what we want the system for, what we want education to be about.”
Fixing the system, then, will buy us time to reckon with the seismic impacts of AI. But decisions must be made fast. Two roads diverge before us; we must decide which one we shall travel by.