Research published today (September 10) into the use of machine learning in children’s social care concludes that none of the four models assessed could accurately predict levels of risk to young people.
‘Machine Learning in Children’s Services’ is the culmination of 18 months of work by What Works for Children’s Social Care (WWCSC). It saw machine learning models tested by four local authorities in England, with each making predictions of risk levels based on each authority’s historical data.
None of the models reached WWCSC’s pre-defined level of ‘success’, falling well short of the effectiveness required for real-world use: the models missed four out of five children at risk and, conversely, when they did assess a child as being at risk, they were wrong 60% of the time.
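In evaluation terms, those two headline figures correspond to recall (the share of genuinely at-risk children the model flags) and precision (the share of flagged children who are genuinely at risk). The sketch below is purely illustrative; the counts are hypothetical, chosen only to reproduce roughly 20% recall and 40% precision, and are not taken from the WWCSC study.

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Compute precision and recall from confusion-matrix counts."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical counts: the model correctly flags 20 at-risk children (TP),
# misses 80 who are at risk (FN), and wrongly flags 30 who are not (FP).
precision, recall = precision_recall(
    true_positives=20, false_positives=30, false_negatives=80
)

print(f"recall = {recall:.0%}")     # 20% recall: four out of five missed
print(f"precision = {precision:.0%}")  # 40% precision: wrong 60% of the time
```

A model can score well on one of these measures while failing badly on the other, which is why WWCSC assessed both.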
“Data science can provide objective evidence, at scale, that makes professional judgement easier” – Anne Longfield, Children’s Commissioner for England
The research was not undertaken to establish whether machine learning could ever work effectively in children’s social care, says the WWCSC. Instead, the report underlines its current limitations, and adds technical detail to the review published in January which found fundamental barriers to machine learning’s ethical use. Local authorities already using predictive analytics are being urged by WWCSC to be open about the challenges they have experienced.
“It is clear that in our research the promise of machine learning has not been realised,” said Michael Sanders, chief executive of WWCSC.
“Far greater transparency about the effectiveness of these models, and how well they actually work, is sorely needed, not just in children’s social care but in any area where predictive models could be used to make decisions.”
In her foreword to the summary report, Anne Longfield, the Children’s Commissioner for England, is clear that the door should not be closed on machine learning’s role in social care.
‘Nobody is suggesting that data science or algorithms can ever replace professional judgement,’ she writes. ‘But what they can do is provide objective evidence, at scale, that makes professional judgement easier.
‘We will have all seen, in the context of this year’s A-level results, the issues caused by so-called ‘decision-making by algorithm’. It is an important warning that we must all heed. But it is not a reason not to use data science or algorithms; rather, it is a reason to use them carefully, understand their limitations, and test and refine them, while continuing to treat people as individuals.’