How to make AI in education ethical

AI is exciting, terrifying, and game-changing for education. But how can we make sure it’s ethical? Hazel Davis finds out…

Education 4.0 and the fourth industrial revolution

Education 4.0 can help deliver an adaptive, personalised learning experience

AI is starting to be used in education in a variety of positive ways, with the potential to revolutionise how we do things entirely. Education 4.0 (that is, education in the fourth industrial revolution) can help deliver a personalised, adaptive learning experience, assist teachers in conducting assessments, give teachers and institutions the ability to use predictive analytics for early intervention, and play a role in wellbeing. In the future, says Harin Sellahewa, interim dean of the school of computing at the University of Buckingham: “I would expect personalised, adaptive learning systems to be used in primary education as content is fairly standard, especially mathematics and science.

“Some aspects of language could be taught using AI and I believe that AI-based automated assessment tools will be used more widely.”


Predictive analytics will make an impact on secondary and higher education, says Sellahewa: “AI will enable schools and universities to make interventions on an individual student basis to help them achieve their full potential.” He also believes that AI and robotics will be used to develop language and social skills, with robots and chatbots playing an increasing role in wellbeing. Sellahewa is positive about AI’s benefits: “Overall, AI will empower teachers to spend more time on individual learners, and those learners will be able to learn at their own pace using the most effective medium and resources.”

What is ethical AI?

There is currently no universally or nationally accepted definition of, or framework for, ethical AI. “Should there be such a definition, and who should be responsible for it,” says Sellahewa, “are important questions.” The University of Buckingham has launched an Institute for Ethical AI in Education to tackle issues around how AI can be designed ethically.

Broadly speaking, ‘ethical AI’ can be described as AI for good. “That is, AI that will have a positive impact on society, AI to solve major problems for a sustainable future,” says Sellahewa. “What might be seen as positive for some may not be positive for others. National security is a good example.”

Ethical AI can include anything from making sure algorithms are developed by a diverse team of people to avoid ingrained bias, to teaching students on AI courses about the ethical considerations of when to use and how to deploy AI systems.
There are a few basic principles that organisations from the World Economic Forum to Microsoft have used to describe what they mean by ethical AI. These include: fairness; inclusivity and diversity; absence of bias; reliability; accountability; transparency and clear liability; clear privacy and data protection processes; and security.
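
To make one of those principles concrete: a basic fairness audit can compare a model’s outcomes across demographic groups. The sketch below is illustrative Python – the data, group labels and function names are invented for this article – computing a crude ‘demographic parity’ gap, the difference in positive-outcome rates between groups. Demographic parity is only one of several competing fairness definitions, so a small gap is no guarantee of fairness, but a large one is a prompt to investigate.

    from collections import defaultdict

    def selection_rates(predictions, groups):
        """Positive-outcome rate per demographic group.

        predictions: 0/1 model outputs (e.g. 'recommend advanced course')
        groups: group label for each prediction, in the same order
        """
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += pred
        return {g: positives[g] / totals[g] for g in totals}

    def demographic_parity_gap(predictions, groups):
        """Largest difference in selection rate between any two groups."""
        rates = selection_rates(predictions, groups)
        return round(max(rates.values()) - min(rates.values()), 3)

    # Invented example: predictions for ten students in two groups, A and B.
    preds = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    print(selection_rates(preds, groups))         # {'A': 0.8, 'B': 0.2}
    print(demographic_parity_gap(preds, groups))  # 0.6 – worth questioning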

We’re seeing AI tools that can take lots of data around career pathways and make recommendations about what students can study. So we’ve got AI informing decisions made by young people.
– Toby Baker, Nesta

Toby Baker works in Nesta’s education team and is the author of a number of reports, including Educ-AI-tion Rebooted: Exploring the future of artificial intelligence in schools and colleges. He says: “There are a few issues. One is an issue around bias. There’s bias within the data (that is, data that’s skewed towards particular people and findings) and bias of the people who create the technology. There’s the issue of intelligibility – for example, there are reasons that particular types of AI arrive at certain decisions, but if we can’t see those reasons then it becomes difficult for us to question the decisions and hold anyone to account. In an education setting, if you’re a minor or a parent, you might not even imagine you’d need to think about these things.”

Then there’s accountability: “There are no rules to tell us who’s accountable when things go wrong. In a classroom setting we have Ofsted to hold head teachers to account, but adult learners might be purchasing an AI tool to learn through. If the tool is not very good and the person fails their exam, who do we complain to?” Here, says Baker, “accountability overlaps with intelligibility in quite interesting ways.”
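
Baker’s intelligibility point can be illustrated in code: some model families can show their working, while others cannot. As a minimal sketch – assuming scikit-learn is installed, and using invented toy data and feature names – a decision tree can print the rules behind its predictions, the kind of read-out that lets a teacher or parent question a decision:

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Invented toy data. Features: [homework completion rate, quiz average];
    # label: 1 = 'flag for early intervention', 0 = 'no action'.
    X = [[0.9, 85], [0.8, 78], [0.4, 52], [0.3, 60], [0.7, 70], [0.2, 45]]
    y = [0, 0, 1, 1, 0, 1]

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # The learned decision logic in human-readable form. A deep neural
    # network trained on the same task offers no comparable read-out.
    print(export_text(model, feature_names=["homework_rate", "quiz_avg"]))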

AI in education has its own distinct set of properties, says Baker. There’s the issue of determinism, for one: “Young people are particularly malleable at that age in life and things that happen to them educationally can really set them on a specific pathway.” He adds: “We’re seeing AI tools that can take lots of data around career pathways and make recommendations about what students can study. So we’ve got AI informing decisions made by young people.” The ethical implications of situations like this are live and easier to grasp, says Baker, “so it’s worth trying to develop a counterfactual, looking at what would happen without the AI.” It can be easy to criticise AI tools, he says, “but it’s worth asking whether it’s actually better than what we’ve already got.” In a situation like the one outlined above, Baker says, “we can ask specific questions and get a number of specific answers, such as: ‘Where did this data come from? How was it generated? How representative of the population is it? Am I qualified to understand how this tool should be used?’”
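
The representativeness question, in particular, lends itself to a quick sanity check. In a hedged sketch – the group names and benchmark figures below are invented – we might compare the make-up of a training dataset with known population proportions before trusting a tool’s recommendations:

    from collections import Counter

    def representation_gaps(sample_labels, population_shares):
        """Each group's share of the data minus its share of the population.

        sample_labels: group label for every record in the training data
        population_shares: known real-world share of each group (sums to 1)
        """
        counts = Counter(sample_labels)
        total = sum(counts.values())
        return {
            group: round(counts.get(group, 0) / total - share, 3)
            for group, share in population_shares.items()
        }

    # Invented example: school types in a careers dataset vs. nationally.
    data = ["state"] * 700 + ["independent"] * 300
    benchmark = {"state": 0.93, "independent": 0.07}
    print(representation_gaps(data, benchmark))
    # {'state': -0.23, 'independent': 0.23} – independent schools are heavily
    # over-represented, so the tool's recommendations may skew accordingly.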

Educating educators

The fact is, Baker says, “We’re putting a lot of faith in the ability of technology creators who often have a limited understanding of pedagogy or how classrooms or universities work. We’re putting faith in them developing a tool that gets parachuted into an educational context. That’s where the intelligibility and accountability gap comes from.”

A recent report from BCS, the Chartered Institute for IT, suggested that not enough is yet being done on diversity in AI. The report recommends that the AI sector should put in place evidence-based solutions to diversity issues as a “matter of urgency”.

It recommends developing and maintaining ethical and professional AI standards for MSc graduates contributing to the design, development, deployment, management and maintenance of AI products and services.

It also recommends independent accreditation of AI MSc courses to encourage the embedding of an ‘ethical by design’ approach, as part of a range of incentives for universities to do so.


Sellahewa believes that including ethics in all AI courses is a must, irrespective of the subject area (whether computer science, business, humanities or medicine). Clear understanding is key to this, says Sellahewa: “AI is being used in many sectors, but not everyone has an understanding of how it works and how it arrives at decisions and predictions. This has serious implications when it comes to transparency and accountability.” Therefore, he says, “AI specialists must have technical as well as soft skills to be able to explain AI systems to the wider society. Equally, the non-AI specialists must have some broad awareness of AI’s capabilities as well as its limitations.”

Baker thinks there needs to be a better way of bridging the gap between those who create AI tools and those who use them: “There are a bunch of ethical questions at the point at which we identify data to go into an AI tool, there are a bunch at the point at which we generate the algorithm, and a third bunch of questions which often get overlooked: ‘In this specific circumstance, is it the right thing to use this tech?’ And this question is currently being answered by teachers or students themselves.”