The Report Harmful Content (RHC) helpline successfully secured the removal of 92% of escalated harmful online content, its annual report has revealed.
Bullying and harassment were the most commonly reported issues, with online harassment and abuse disproportionately affecting women and the perpetrators often being ex-partners.
Overall, the report has highlighted three common trends in online harmful content:
- A combination of impersonation (catfishing), bullying/harassment and privacy violation
- A combination of abuse, threats and hate speech (most commonly racism/xenophobia)
- Clients inadvertently viewing harmful content (e.g. violence or pornography)
Nineteen percent of RHC clients reported an incident or content deemed criminal, and their cases were escalated to law enforcement. However, 47% of those clients got back in touch with the helpline, often reporting that the police had dismissed them and incorrectly told them that their issue was not criminal.
While some online material did not meet legal or platform thresholds for removal, a client's cultural or religious background often meant that it was extremely harmful to them. RHC highlighted various difficulties in securing the removal of this type of content, which limited its ability to safeguard clients. On top of this, responses from industry platforms often lacked clarity about what online material would actually be removed.
Further to this, 32% of RHC clients reported negative mental health impacts as a result of being exposed to, or being a victim of, harmful online content, with 13% reporting suicidal ideation.
The helpline assisted clients in a number of ways, securing the removal or restriction of content, or regaining access, in 92% of actioned cases. In cases deemed criminal, RHC provided emotional support and practical assistance, with practitioners offering advice and onward signposting.