Publication
In the digital age, hate speech poses a threat to the functioning of social media platforms as spaces for public discourse. Top-down approaches to moderating hate speech encounter difficulties due to conflicts with freedom of expression and issues of scalability. Counter speech, a form of collective moderation by citizens, has emerged as a potential remedy.
Here, we aim to investigate which counter speech strategies are most effective in reducing the prevalence of hate, toxicity, and extremity on online platforms. We analyze more than 130,000 discussions on German Twitter starting at the peak of the migrant crisis in 2015 and extending over 4 years.
We use human annotation and machine learning classifiers to identify argumentation strategies, ingroup and outgroup references, emotional tone, and different measures of discourse quality. Using matching and time-series analyses, we discern the effectiveness of naturally observed counter speech strategies on the microlevel (individual tweet pairs), mesolevel (entire discussions), and macrolevel (over days). We find that expressing straightforward opinions, even if not factual but devoid of insults, results in the least subsequent hate, toxicity, and extremity across all levels of analysis.
This strategy complements currently recommended counter speech strategies and is easy for citizens to engage in. Sarcasm can also be effective in improving discourse quality, especially in the presence of organized extreme groups. Going beyond the one-shot analyses of smaller samples prevalent in most prior studies, our findings have implications for the successful management of public online spaces through collective civic moderation.
J. Lasser, A. Herderich, J. Garland, S.T. Aroyehun, D. Garcia, M. Galesic, Collective moderation of hate, toxicity, and extremity in online discussions, PNAS Nexus 4(11) (2025) pgaf369.