The challenge of online hate speech: Can AI really help?

09 March 2022

Hate speech (HS) is a growing concern online. It can inflict harm on targeted individuals and stir up social conflict, yet it has proven difficult to stop its spread and mitigate its harmful effects. In many cases, there is a real lack of agreement about what hate is and at what point it becomes illegal — problems compounded by differences across countries, cultures, and communities. Further, there is little consensus on how protecting people from hate should be balanced with protecting freedom of expression.

A key challenge with online hate speech is finding and classifying it: the sheer volume of hate speech circulating online exceeds the capabilities of human moderators, creating a need for increasingly effective automation. The pervasiveness of online hate speech also presents an opportunity. These large volumes of data could serve as indicators of spiraling instability in specific contexts, offering the possibility of early alerts and intervention to stem real-world violence. AI is now the primary method that tech companies use to find, categorize, and remove online hate speech at scale.

The session will create a direct conversation between five key stakeholder groups (Private, Civil, Technical, Intergovernmental, and Youth) who all work to tackle online abuse but are rarely brought into contact, with the aim of establishing a shared understanding of challenges and solutions. In particular, we anticipate the articulation of a global, human rights-based critique of data science research practices in this domain, helping to formulate constructive ways to better shape the use of AI to tackle online hate speech.

For more information and registration, click here.