Hate speech is a contextual phenomenon. What offends or inflames in one setting may incite violence in another time, place, and cultural landscape. Theories of hate speech, especially Susan Benesch’s concept of “dangerous speech” (hateful speech that incites violence), have focused on the factors that cut across these contexts. However, the existing scholarship focuses narrowly on situations of mass violence or societal unrest in the United States and Europe.

This paper discusses how online hate speech may operate differently in a postcolonial context. While hate speech affects all societies, the global South—Africa in particular—has been sorely understudied. I posit that in postcolonial circumstances, the interaction of multiple cultural contexts and social meanings forms concurrent layers of interpretation that are often inaccessible to outsiders. This study expands the concept of online harms by examining the political, social, and cultural dimensions of data-intensive technologies.

The paper’s theories are informed by fieldwork that local partners and I conducted in Kasese, Uganda, in 2019–2020, focusing on social unrest and lethal violence in the region following the 2016 elections. The research, completed with assistance from the Berkeley Human Rights Clinic, included examining the background and circumstances of the conflict; investigating social media’s role in the conflict; designing a curriculum around hate speech and disinformation for Ugandan audiences; creating a community-sourced lexicon of hateful terms; and incorporating community-based feedback on proposed strategies for mitigating hate speech and disinformation.

I begin with a literature review of legal theory around hate speech, with a particular focus on Africa, and then turn to the legal context around hate speech and social media use in Uganda, examining how the social media landscape fueled past conflicts. I then explain my Kasese fieldwork and the study’s methodology before describing initial results. I follow with a discussion of applications to industry, specifically how hate speech is defined and treated by Meta’s Facebook, the dominant social media platform in Kasese. The paper then turns to the implications of the study’s results and offers legal and policy recommendations for technology companies stemming from these findings.

Importantly, I apply the research findings to expand existing scholarship by proposing a new, sixth “hallmark of dangerous speech” to augment Benesch’s paradigm. Adding “calls for geographic exclusion” as a qualifier for dangerous speech stems from the particular characteristics of postcolonial hate speech. Examples from the Kasese study illustrate how this phenomenon upends platforms’ expectations of hate speech—which may not consider “Coca-Cola bottle” to be an epithet. Applying this new hallmark will create a more inclusive understanding of hate speech in localized contexts.

This paper’s conclusions and questions may challenge platforms that must address hate speech and content moderation at global scope and scale. The paper also examines the prevalence and role of social media platforms in Africa, and how these platforms have provided resources to and engagement with civil society in the region.