The ‘unethical’ use of artificial intelligence (AI) risks progress being “stifled by inevitable backlashes” and puts learners at risk of “unnecessary suffering”, warns a new report from the Institute for Ethical AI in Education (IEAIED).
While emerging technologies such as AI and algorithmic systems can undoubtedly be harnessed to empower 21st-century learners, the report, published yesterday, warns that stakeholders must reach a consensus on how to ensure the ethical implementation of these technologies across the education sector.
Priya Lakhani OBE, founder and CEO of AI company CENTURY Tech and co-founder of the IEAIED, said: “Artificial intelligence is adept at processing large quantities of data, automating complex tasks and personalising learning for students. But that does not make it a panacea. Learners, educators, technologists, academics and many others with relevant expertise need to collaborate to decide how AI should be used in an ethical, beneficial way, and also where and when AI is not the answer to learners’ needs.”
The report warns that the sector is already experiencing the negative repercussions of these technologies; the controversial use of algorithms in this year’s A-level exam fiasco is testament to this, leading to the unfair treatment of disadvantaged students and learners being stripped of their agency. The public uproar was to be expected, and is evidence that the sector cannot afford to make the same mistakes again.
The IEAIED report hopes to focus and inform the debate on how AI should be used ethically, and invites anyone who stands to be affected to take part in the development of an ethical framework for AI in education. This information will support a further report by the IEAIED, which will lay out the ethical framework and make recommendations on further means by which the ethical use of AI in education can be promoted and facilitated. The final report will be published in March 2021.
“The [IEAIED] has great ambitions for learners, and we are realistic in our ambitions,” commented Sir Anthony Seldon, co-founder of the Institute and vice-chancellor of the University of Buckingham – where the IEAIED is based. “In isolation, we cannot hope to decree how AI should be used to advantage learners across the world, nor when and where the use of AI should be discouraged. These decisions are not ours to make alone. We hence very actively seek input on what it means to use AI ethically, and on how ethical AI can be achieved in practice. By pooling our expertise, together we can work towards a comprehensive, inspiring vision of ethical AI in education.”
The report, Developing a Shared Vision of Ethical AI in Education: An Invitation to Participate, presents 14 critical questions for stakeholders to address. Drawing on commentary from a range of expert interviews, the report also explains how AI can benefit learners, the risks attached to using this technology, and the approaches that should be taken to ensure students can harness AI to its maximum potential.
The ‘critical’ questions include:
- How can AI be used to narrow, rather than widen, educational divides?
- What rights should learners have over how their data is collected, processed and shared?
- Are certain educational contexts too high stakes for the use of AI to be justified?
The report also states that ‘kite marks’ should be used to motivate ethical practice, and suggests that data-ownership models should be explored to give learners optimal levels of control over their own data. It further recommends that stakeholder collaboration should help to uncover the benefits and risks of using the technology, proposing training for software developers, continuous professional development (CPD) for teachers, and the inclusion of AI in the curriculum for all students.
“We therefore need educators, learners, advocates and interested members of society to engage in dialogue with us to understand more about AI…” – Professor Rose Luckin
Co-founder Professor Rose Luckin commented: “All members of society deserve to be educated about AI so that they can be discerning users, not unknowing subjects. We also need robust standards for ethical AI and these cannot be formulated in a vacuum. Education about AI can and should include understanding the ethical implications of AI. We therefore need educators, learners, advocates and interested members of society to engage in dialogue with us to understand more about AI and tell us how they feel so we can ensure AI is being used ethically in education.”
The IEAIED will carry out a series of roundtables this autumn, in which young people will be actively encouraged to have their say on the ethical use of AI in education. Any consensus reached will directly inform the final report, which will be published next March.