How we discovered the true depth of online hate.
YouGov conducted a survey of more than 4,000 people to give a nationally representative view of abuse across private channels (Facebook, Instagram, TikTok, Snapchat and others). As well as gauging the general public’s perception of online abuse, the survey explored their behaviours; where, how and why they see or experience abuse; the type and severity of that abuse; and their attitudes towards it.
All figures, unless otherwise stated, are from YouGov Plc. Total sample size was 4,573 adults, of whom 4,135 were regular social media users. Fieldwork was undertaken between 1st–6th April 2021. The survey was carried out online. The figures have been weighted and are representative of all GB adults (aged 18+).
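To illustrate what “weighted” means in practice, the short Python sketch below shows how a weighted figure differs from a raw headcount. The respondent data, column names and weights are invented for the example; YouGov’s actual weighting scheme is not reproduced here.

```python
# A minimal sketch of how weighted survey figures can be computed.
# The data and weights below are illustrative only; YouGov's actual
# weighting scheme is not described in this report.
import pandas as pd

# Hypothetical respondent-level data: each row is one adult, with a
# demographic weight and a yes/no answer to "have you seen online abuse?"
responses = pd.DataFrame({
    "weight": [0.8, 1.2, 1.0, 1.1, 0.9],
    "seen_abuse": [True, False, True, True, False],
})

# Unweighted share of respondents who have seen abuse
unweighted = responses["seen_abuse"].mean()

# Weighted share: each answer counts in proportion to its weight,
# so the result better reflects the target population (GB adults 18+)
weighted = (
    responses.loc[responses["seen_abuse"], "weight"].sum()
    / responses["weight"].sum()
)

print(f"Unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")
```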
A bespoke AI Abuse Tracker was created to understand and categorise abuse in real time. The tracker was trained with language-detection models to accurately identify and classify online abuse.
Posts and comments were labelled as ‘abuse’ using several classification signals:
- Abusive Language – when a post or comment contains language that is clearly intended to be abusive. The degree of aggression then generates an abusiveness score.
- Emotion & Emojis – the tone of each post was analysed to identify different forms of emotion, for example posts that seem ‘abusive’ but are in fact ironic.
- Sentiment – whether the post was positive or negative, extracted by deep-learning models using Google and Azure technology.
Together, these signals helped achieve the best possible accuracy for each post and reduce the potential for unintended bias.
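The tracker’s code isn’t published here, but for readers who want a feel for how the three signals above could be combined, the sketch below shows one simplified approach in Python. The lexicon, irony check, sentiment placeholder and thresholds are illustrative assumptions only; the real tracker relies on trained language, emotion and sentiment models (including Google and Azure deep-learning services) rather than keyword matching.

```python
# Illustrative sketch of combining the three signals described above.
# The keyword list, irony check and thresholds are assumptions for the
# example only; the real tracker uses trained deep-learning models.
from dataclasses import dataclass

ABUSIVE_TERMS = {"idiot": 0.4, "loser": 0.3, "hate you": 0.7}  # toy lexicon

@dataclass
class PostScores:
    abuse: float      # 0-1, from abusive-language detection
    irony: bool       # True if emotion/emoji analysis suggests irony
    sentiment: float  # -1 (negative) to +1 (positive)

def score_abusive_language(text: str) -> float:
    """Degree of aggression based on matched abusive terms (toy version)."""
    text = text.lower()
    return min(1.0, sum(w for term, w in ABUSIVE_TERMS.items() if term in text))

def detect_irony(text: str) -> bool:
    """Crude stand-in for emotion/emoji analysis: laughing emoji softens tone."""
    return "😂" in text or "/s" in text

def score_sentiment(text: str) -> float:
    """Placeholder for a call to a Google or Azure sentiment model."""
    return -0.6 if score_abusive_language(text) > 0 else 0.2

def classify(text: str) -> tuple[PostScores, bool]:
    scores = PostScores(
        abuse=score_abusive_language(text),
        irony=detect_irony(text),
        sentiment=score_sentiment(text),
    )
    # A post is flagged when abusive language coincides with negative
    # sentiment and the emotion signal does not suggest irony.
    is_abusive = scores.abuse >= 0.3 and scores.sentiment < 0 and not scores.irony
    return scores, is_abusive

print(classify("I hate you, loser"))        # flagged as abusive
print(classify("you loser 😂 great game"))  # softened by the irony signal
```

In this simplified version, the emotion signal acts as a veto: a post that scores highly for abusive language is still left unflagged if the tone appears ironic, which mirrors the role the Emotion & Emojis check plays in the tracker described above.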