Experts create machine learning tool to tackle fake news

By applying the principles of machine learning to domain registration data, the tool was able to correctly identify 92% of false information domains set up in relation to the 2016 US election

Academics from University College London (UCL) and other respected institutions have collaborated to create a machine learning tool that identifies new domains designed to spread false information.

Real-Time Prediction of Online False Information Purveyors and their Characteristics is a working paper co-authored by Anil R. Doshi (UCL School of Management), Sharat Raghavan (University of California, Berkeley) and William Schmidt (Cornell University), which strives to tackle ‘fake news’ domains before the information proliferates.

With fake news travelling six times faster than real news, any attempt to counter the spread has to be fast. In light of this, Anil Doshi and his peers produced an early detection system to identify domains that were most likely to be bad actors. Details contained in the registration information, for example, whether the registering party is kept private, are used to identify the sites.
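The idea of scoring a domain from its registration record can be sketched in a few lines. The feature set and weights below are illustrative assumptions for a toy logistic scorer, not the authors' actual model or data:

```python
import math

def extract_features(record):
    """Turn a WHOIS-style registration record into binary signals.
    These hypothetical features echo the paper's intuition that
    purveyors prefer to stay hidden (e.g. privacy-protected records)."""
    return [
        1 if record.get("privacy_protected") else 0,            # registrant hidden
        1 if record.get("registration_years", 0) <= 1 else 0,   # short commitment
        1 if record.get("registrar_tier") == "budget" else 0,   # low-cost registrar
    ]

def risk_score(record, weights=(1.5, 1.0, 0.8), bias=-1.2):
    """Logistic score in (0, 1); higher means more likely a purveyor.
    Weights here are hand-set for illustration; a real system would
    learn them from labelled registration data."""
    z = bias + sum(w * x for w, x in zip(weights, extract_features(record)))
    return 1 / (1 + math.exp(-z))

suspect = {"privacy_protected": True, "registration_years": 1,
           "registrar_tier": "budget"}
benign = {"privacy_protected": False, "registration_years": 5,
          "registrar_tier": "standard"}

print(round(risk_score(suspect), 3))  # ≈ 0.891, well above 0.5
print(round(risk_score(benign), 3))   # ≈ 0.231, well below 0.5
```

Because these signals are set at registration time, a score like this can flag a domain before it publishes anything, which is the early-warning property the researchers emphasise.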

Doshi explained: “Many models that predict false information use the content of articles or behaviours on social media channels to make their predictions. By the time that data is available, it may be too late. These producers are nimble and we need a way to identify them early. By using domain registration data, we can provide an early warning system using data that is arguably difficult for the actors to manipulate. Actors who produce false information tend to prefer remaining hidden and we use that in our model.”


After applying machine learning principles to domain registration data, the tool correctly identified 92% of the false information domains, and 96.2% of the non-false information domains, set up in relation to the 2016 US election, before they began operating.

The authors of the study propose that their tool should be used to help regulators, platforms and policy makers run an escalating process: increase monitoring of flagged domains, send warnings or impose sanctions, and ultimately decide whether a domain should be taken down. The researchers have urged social media companies to invest more money and effort into quelling the spread of fake news.

Doshi added: “Fake news which is promoted by social media is common in elections and it continues to proliferate in spite of the somewhat limited efforts of social media companies and governments to stem the tide and defend against it. Our concern is that this is just the start of the journey. We need to recognise that it is only a matter of time before these tools are redeployed on a more widespread basis to target companies; indeed, there is evidence of this already happening. Social media companies and regulators need to be more engaged in dealing with this very real issue, and corporates need to have a plan in place to quickly identify when they become the target of this type of campaign.”

