The Institute for Ethical AI in Education (IEAIED) has published new guidance to assist educators in the procurement of artificial intelligence (AI) teaching tools, providing a framework which they hope will help teaching staff maximise the potential of AI safely and securely.
Drawing on insights from a series of roundtables led by the IEAIED over the past year, as well as the Global Summit on the Ethics of AI held in November 2020, the new framework lays out the ‘gold standard’ on the use of AI classroom technologies.
The presence of AI across the education sector has significantly increased in recent years – especially over the last 12 months given the pandemic-driven school shutdown. AI tools have proven their worth in multiple ways throughout this difficult time, with one example being their automation capabilities helping to reduce teachers’ hefty workloads and time spent on marking and assessment. Despite this, however, the IEAIED emphasises that many educators still lack the knowledge and understanding needed to maximise the potential of such technologies.
“The opportunities presented by artificial intelligence must be seized to allow every learner to fulfil their potential” – Sir Anthony Seldon, IEAIED
As such, the Institute – established in 2018 and based at the University of Buckingham – has published the framework setting out nine principles for the ethical use of AI in education, emphasising how education leaders can help their respective institutions achieve these goals. The framework’s propositions are as follows:
- AI should be used to achieve well-defined educational goals based on strong societal, educational or scientific evidence that is for the benefit of learners
- AI should be used to assess and recognise a broader range of learners’ aptitudes
- AI should boost institutions’ abilities while simultaneously respecting human relationships
- AI systems should promote equity between different groups of learners
- AI should be used to increase the control learners have over their own academic development
- Institutions must strike a balance between privacy and the legitimate use of data to drive well-defined and desirable academic goals
- Humans are ultimately responsible for educational outcomes and should therefore have an appropriate level of oversight of how AI systems operate
- Both learners and educators should have a reasonable understanding of AI and its implications
- AI resources should be designed by people who understand the impacts of the technology
On top of this, the guidelines encourage educators to consider how AI can be used to enhance the social skills and wellbeing of learners, and urge them to use AI to improve education delivery without undermining teachers.
Sir Anthony Seldon, former vice-chancellor of the University of Buckingham and co-founder of the IEAIED, commented: “If we are to address the educational inequalities that have intensified as a result of the COVID-19 pandemic, then education cannot simply return to normal. The opportunities presented by artificial intelligence must be seized to allow every learner to fulfil their potential. By guiding educators to use AI ethically, we hope that the framework will accelerate the adoption of AI, whilst also protecting learners from the known risks associated with this innovation.”
Priya Lakhani OBE, founder-CEO of CENTURY Tech and a fellow co-founder of the Institute, said that the framework puts educators “in the driving seat”, empowering them to make informed and thus impactful decisions when it comes to AI technology procurement, enabling them to “shape and steer the market for AI in education”.
“Suppliers therefore need to keep up,” she added, “and ensure their products are designed ethically.”
Lord Tim Clement-Jones, chair of the IEAIED and former chair of the House of Lords Select Committee on AI, warned that the unethical use of AI in education could “hamper innovation” by driving a ‘better safe than sorry’ mindset across the sector. “The Ethical Framework for AI in Education overcomes this fundamental risk. It’s now time to innovate,” Lord Clement-Jones concluded.