Beyond the national lockdown, there is a real possibility that remote education will continue in some form for months to come, and this will have a significant impact on young people’s educational experience and attainment. Research from organisations such as the Education Endowment Foundation suggests school closures “are likely to reverse progress made to close the gap in the last decade since 2011”. It’s clear the effect of coronavirus on children’s progress will be the subject of much academic research over the next decade.
In the meantime, schools must continue to make decisions about how best to manage COVID-safe schooling with limited experience, research and evidence of what works in such circumstances. Educators are taking every opportunity to use technology to support online and blended learning, but are doing so somewhat blind to whether these technologies deliver real impact. While the majority of schools have adapted well, switching fluently between in-person and online education remains an ongoing challenge.
Education technology providers are rushing to solve these challenges with innovative products that may help mitigate a short-term issue but often have no basis in evidence. As a result, there is a risk that quick-win implementations will fail to deliver long-term impact.
‘Suffering from an evidence issue’
Even before coronavirus, the edtech sector was suffering from an evidence issue. Teachers were regularly sold barely-tested solutions to problems that the providers just didn’t understand. As for independent, verifiable evidence of impact – in many cases, it simply didn’t exist.
A recent survey of edtech suppliers, conducted by Edtech Impact, found that just 7% used randomised controlled trials (RCTs) to garner evidence of impact. Third-party certification was used by 12%, while 18% engaged in academic studies. However, the most frequently cited benchmark of evidence was simply customer quotes and basic school case studies. While there is merit in gathering feedback from end-users, and case studies can provide helpful insight, they should form part of a broad mix of evidence. On their own, they are far from a gold-standard benchmark and not enough on which to base informed decisions.
As Michael Forshaw, founder and CEO of Edtech Impact explained: “The main reasons why so few edtech companies have conducted randomised control trials (RCTs) and academic studies are firstly, it costs a lot of money, and with increasing competition in the marketplace, companies are prioritising sales over building their evidence base. Unfortunately, there is a lack of understanding within edtech companies about what outcomes their product improves, and the conditions for success. For those who wish to pursue trials there is a lack of choice of who can evaluate their product. And, there’s also a lack of demand from schools to see more robust evidence as part of their procurement process, though this is slowly changing.”
‘An issue of priorities’
It’s clear there’s an issue of priorities when it comes to edtech evidence, and more needs to be done to establish an evidence-first culture. However, a lack of edtech evidence doesn’t signal poor intentions across the board; many edtech innovators want to prove their products work, but the logistical difficulties of running studies in educational settings can hamper the best of intentions.
Unlike other areas of research, such as science and medicine, securing a large sample of schools is challenging and often simply not possible. Ideally, you would need access to many schools to make a viable comparison. Even assuming you can access these schools and adjust for the main demographic variables, the resulting data can be ‘noisy’ – clouded by any number of confounding variables. Take an at-home learning product: data from students’ use of it won’t tell us whether they were using a calculator or Google. Were they asking parents or friends for help? Were they watching TV while working? Some of these variables can be controlled in in-class product testing, but of course, the current crisis has put an end to any in-class research previously underway.
So, how can we overcome these logistical challenges of gathering evidence with limited budgets to ensure we have more than anecdotal feedback and something more robust? Sparx is a founding member of the Edtech Evidence Group and, together we have produced advice on how edtech companies can cost-effectively provide useful evidence about their products.
- Existing academic literature – there is a wealth of academic evidence that explains the impact of various pedagogical methods. A good starting point for edtech companies is to immerse themselves in the existing literature and consider where their solution is supported by those studies.
- Engage with existing users to conduct research – many schools will take part in studies that help to identify a hypothesis and measure an effect. Does a product have a specific impact on workload or benefit a particular type of student? These are most powerful when an independent partner manages the research.
- User data and control groups – most edtech companies collect a lot of data about how their product is used. Usage data alone is useful, but studies are far more robust if you can have a control group for comparison, and that group needs to match the test sample as closely as possible. Control groups do raise ethical issues, however; for instance, it’s important to build in processes for closing a trial quickly so that if an intervention proves positive, everyone can benefit.
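To illustrate the control-group idea above, here is a minimal sketch of how usage data from a test group might be compared against a matched control group using a simple permutation test. The scores, group sizes and effect are entirely hypothetical, invented for illustration; a real study would involve matched schools, pre-registration and independent analysis.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Hypothetical score improvements (percentage points) for two matched
# groups of students -- invented numbers, not data from any real study.
test_group = [6, 4, 7, 5, 8, 3, 6, 7, 5, 6]      # used the product
control_group = [3, 2, 4, 5, 3, 1, 4, 3, 2, 4]   # matched peers, no product

observed_diff = statistics.mean(test_group) - statistics.mean(control_group)

# Permutation test: repeatedly shuffle the group labels and count how
# often a difference at least this large arises by chance alone.
pooled = test_group + control_group
n_test = len(test_group)
n_permutations = 10_000
extreme = 0
for _ in range(n_permutations):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n_test]) - statistics.mean(pooled[n_test:])
    if diff >= observed_diff:
        extreme += 1

p_value = extreme / n_permutations
print(f"Observed difference: {observed_diff:.1f} points, p = {p_value:.4f}")
```

A small p-value here would suggest the gap between the groups is unlikely to be chance, but only if the control group genuinely matches the test group; an unmatched control makes the comparison meaningless, which is exactly why matching matters.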
Edtech evidence matters because it nurtures a sustainable partnership between providers and schools, and supports development of edtech based on what works, rather than what sounds good or what sells.
Should we put edtech evidence on the back burner in an increasingly long list of priorities? I would certainly argue that, rather than dimming the light on edtech evidence, the spotlight needs to be focused more sharply than ever before.