Use evidence to make edtech purchase decisions: A practical guide

As school leaders, it is vital that we interpret the evidence presented to us – challenging bias within it and being clear on what it might mean for our school, our teachers and, most importantly, for our students, says award-winning teacher Fiona Aubrey-Smith

As school leaders we are undoubtedly becoming better at using research evidence to inform our decision-making, both individually and collectively. However, 42% of buying decisions are still based on informal word-of-mouth recommendations from other schools (NFER, 2018), which suggests we still have a long way to go.

As we begin 2022, there are an increasing number of sources of evidence to draw upon when making buying decisions about edtech. Whilst historically many suppliers have produced case studies from advocate schools and soundbites from enthusiasts, many now recognise the need for more robust evidence of impact. School leaders deserve to know what value an edtech product adds to existing teaching and learning experiences.

Edtech suppliers are increasingly working in partnership with academic researchers to undertake objective analysis – identifying precisely how their products directly improve teaching and learning. Suppliers are also following the online retail trend of providing customer ratings: ventures such as EdTech Impact have been set up where suppliers list their products and existing customers provide validated reviews against pre-determined criteria. Meanwhile, sources of support such as Educate provide schools with comprehensive guidance about what to consider.

As school leaders, it is vital that we interpret the evidence presented to us, challenging bias within it and being clear on what it might mean for our school, our teachers and, most importantly, for our students.

Every school has its own unique flavour – a combination of size, catchment, strategic priorities, characteristics of teaching and learning, improvement or innovation priorities, policies, experience and expertise of staff, and a great many other variables. Even within the same school, a department, phase, year group or class can have a very different personality to its neighbour. These variables shape the relationship between a particular product and a particular school – and, more specifically, between the product, the teachers and children using it, and the context in which it is used (Aubrey-Smith, 2021).

So when receiving recommendations, whether from other schools, comparison websites or supplier marketing materials, you are encouraged to ask:

  • What proportion of staff and students are using the product – and why are those staff and those students the ones using it? This will help to surface the other influences affecting its successful use.
  • What prompted the decision to use this particular product, and which others were considered? This will help to surface whether it’s the general concept of the product that is perceived as successful – such as automated core subject quizzes – or whether it is the specific product itself.
  • How long has the product been in use – and, if it has been renewed, what informed that decision? This will help surface how embedded the product is.
  • Since the product was introduced to the school, what other improvement strategies have been implemented – either whole-school or within this particular subject/phase/department? This will help surface whether any improvements seen relate to the product, to other teaching and learning strategies, or to a combination of both.
  • Once students are used to using the product, what evidence is there that shows their learning translates into the same levels of mastery in other contexts? If they score ‘x’ or do ‘y’ when using this product, can you be confident that they would later score ‘x’ or do ‘y’ when applying the same skill in an unrelated context? Are the attainment increases about the child’s knowledge, or the child’s familiarity with the product?
  • What evidence is there of students’ long-term knowledge or skill retention – over a week, a term, a year and beyond? This is not the same as progression through units of work; it is about retaining knowledge over time. Is the product securing long-term knowledge, or targeting short-term test preparation or skill validation?


Part of a school becoming an effective professional environment for all staff is everyone engaging meaningfully with the available evidence, and embedding specific types of strategic thinking and evaluative focus into practice (Twining & Henry, 2014). In other words, it is about using robust evidence to inform our thinking, and being clear on how we use that evidence meaningfully to make future decisions.

There are three key lines of enquiry that will help you to challenge evidence meaningfully:

  1. Correlation is not the same as causation. In other words, just because a school using a product saw improved attainment outcomes, increased engagement, reduced workload or improved accountability measures, it does not mean that the product led to those gains. Most schools implementing a new product do so as part of a broader strategy focused on specific improvement priorities, so one would expect improvements regardless of which product was chosen, simply because of the strategic attention given to the issue. Instead, focus on how the product changes behaviours (e.g. increased precision within teaching and learning dialogue) – this is where meaningful impact will be found.
  2. For every research finding that argues for one approach, there will be research elsewhere arguing for something different. Your role is to identify which research relates most closely to your specific context. You can do this by asking:
    1. Who produced the material that I am reading? What bias might they have? Have they acknowledged that bias and shown how they have mitigated it?
    2. What evidence led to their recommendations? What data are the findings based on – are these large-scale but surface-level, or smaller-scale but probed more meaningfully?
    3. What is their vision for teaching and learning, and how does this align with the vision of what good learning and good teaching look like in our own school? Bear in mind that there are at least 23 different types of bias that we all bring to our decision-making (Hattie & Hamilton, 2020, pp. 6-9).
  3. Plan for impact before you commit to investing. A vital part of decision-making is planning from the outset how you will evaluate what works and why. This keeps you forensically focused on what matters most to your school throughout procurement, implementation and review, and enables you to identify and recalibrate when ideas do not work as intended, so that future practice improves. Guskey (2016) encourages us to think about impact through five levels: reactions to something, personal learning about it, consequent organisational change, embedding ideas within new practices, and finally creating a positive impact on the lives of all those involved. These levels apply to both teachers and students (as well as leaders, parents and other stakeholders, depending on the product). Embedding meaningful review of the impact of your product choice connects your intentions to the lived experiences of the students whose needs and futures you are serving. The two vital questions to ask yourself and your team are:
    1. What evidence is there that our intentions for this product are being lived out in reality by our young people?
    2. What evidence is there that our provision (through this product) is making a tangible difference to how students view themselves, their learning and their future?


Finally, any decision made in school should always be rooted in improving the quality of teaching and learning. This can easily be lost amongst conversations about requirements and procurement.

To help with this, identify three to five ‘personas’ – short descriptions of the people the product is ultimately intended to support. For example:

  • High attaining pupil premium students
  • KS3 girls disengaged with STEM
  • Children with EAL in KS1


At every point, keep coming back to these personas: how would each product, feature, piece of research, impact finding or sample of evidence relate to those specific students?

That way, we keep a forensic eye on what matters most: our students and their learning.

All of these issues will be debated and unpacked by school leaders, suppliers and academics in the ‘Making evidence informed decisions about edtech’ panel at The World Education Summit (21-24 March 2022).

