Analysis of Q3 2024 Toxic Bot Comments
The dataset of toxic bot comments detected between July 1 and September 29, 2024, comprised 1,133 English messages (n = 1,133). The average toxicity score was relatively low at 0.20, but a notable peak occurred on July 9, 2024. Several key events that day may have influenced this spike, such as Israel's bombardment of Gaza, the conflict between Israel and Hezbollah, and the ongoing Russian invasion of Ukraine (here is a full list of possible events on that date).
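For illustration, the quarter-level figures and the July 9 peak could be reproduced with a few lines of pandas, assuming the detected messages were exported to a CSV with timestamp and toxicity_score columns; the file name and column names below are assumptions, not a description of the actual pipeline.

```python
import pandas as pd

# Hypothetical export of the detected bot comments; the file and column names are assumptions.
df = pd.read_csv("toxic_bot_comments_q3_2024.csv", parse_dates=["timestamp"])

# Restrict to the reporting window (July 1 through September 29, 2024).
q3 = df[(df["timestamp"] >= "2024-07-01") & (df["timestamp"] < "2024-09-30")]

print(f"Messages: {len(q3)}")                                   # expected: 1,133
print(f"Average toxicity: {q3['toxicity_score'].mean():.2f}")   # expected: ~0.20

# Average toxicity per day, which makes spikes such as July 9 easy to spot.
daily = q3.set_index("timestamp")["toxicity_score"].resample("D").mean()
print(daily.idxmax(), round(daily.max(), 2))
```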
Despite the low overall toxicity score, the dataset contained problematic and harmful content with toxic keywords like evil Jews, piece of shit, Jews, genocidal, moron, deep state, treasonous, scumbag, the white man, and disgusting. The content often focused on antisemitic narratives and conspiratorial themes.
Prominent toxic categories:
Politics (10%)
Racism, Ridicule, and/or Religion
Threatening and Disinformation content: about 3% of the messages fell into these categories.
Most frequent combination: Politics + Racism (click here for the explanation of the different categories)
Below is a visualization of the top 50 toxic keywords from the third quarter of 2024. The size of the words reflects their frequency relative to others. Notably, Trump was the most frequently used keyword, followed by black, ho, racist, genocide, and Indians.
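As a rough sketch, a word cloud like this can be generated with the Python wordcloud library once keyword frequencies have been extracted; the frequency values below are placeholders for illustration, not the actual Q3 counts.

```python
import matplotlib.pyplot as plt
from wordcloud import WordCloud

# Placeholder frequencies; in practice these come from the Q3 keyword-extraction step.
keyword_freq = {"Trump": 120, "black": 85, "ho": 70, "racist": 64, "genocide": 58, "Indians": 51}

# Keep the 50 most frequent keywords and scale each word by its relative frequency.
wc = WordCloud(width=1200, height=600, max_words=50, background_color="white")
wc.generate_from_frequencies(keyword_freq)

plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
```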
The table below highlights the various toxic categories detected during Q3 of 2024 and their respective frequencies (represented by black dots and explained further below). The most prominent categories were Ridicule, Racism, Sexism, Politics, Religion, Threat, and Untruth. However, the category with the highest toxicity score was content involving Jews, with an average score of 0.75 (on a scale from 0 to 1, where 1 indicates the highest level of toxicity).
The keyword analysis further supports the observed focus of toxic bot content. Categories like Untruth, Religion, and Politics were frequently linked to antisemitic narratives, as reflected in the use of terms associated with conspiracy theories and hate speech targeting Jewish communities.
The bar chart below shows that the most prominent categories within the toxic bot content during Q3 of 2024 were Profanity, Ridicule, and Politics. Of these, Politics accounted for the largest share of the content, indicating that a significant portion of the bot activity focused on influencing political discussions and narratives (see the explanation of the different categories here).
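A minimal sketch of how the per-category frequencies, average toxicity scores, and most frequent category combination reported above could be derived, assuming each message carries a list of category labels and a toxicity score; the DataFrame layout and toy values below are assumptions about the underlying data, not the actual tooling.

```python
from collections import Counter
from itertools import combinations

import pandas as pd

# Hypothetical layout: one row per message, with the list of categories assigned to it.
df = pd.DataFrame({
    "categories": [["Politics", "Racism"], ["Ridicule"], ["Politics", "Racism", "Religion"]],
    "toxicity_score": [0.31, 0.12, 0.58],
})

# Frequency and average toxicity per category (a message can count toward several).
per_category = (
    df.explode("categories")
      .groupby("categories")["toxicity_score"]
      .agg(["count", "mean"])
      .sort_values("count", ascending=False)
)
print(per_category)

# Most frequent pairwise combination of categories (e.g. Politics + Racism).
pair_counts = Counter(
    pair for cats in df["categories"] for pair in combinations(sorted(set(cats)), 2)
)
print(pair_counts.most_common(1))
```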
The keyword network below visualizes the main focus of toxic bot content detected during Q3 of 2024. Different colors represent clusters of words frequently used together, highlighting key narratives. A significant focus is on the American elections, which were in full swing during this period. This is reflected in the frequent appearance of names such as Joe, Vivek, Biden, Kamala, and Trump.
Vivek Ramaswamy, a Republican presidential candidate, had dropped out of the race in January 2024 and endorsed Trump. He was later named, alongside Elon Musk, to lead the newly created Department of Government Efficiency (DOGE).
The term genocide, connected to Jews, likely references the ongoing Israel-Gaza conflict, a significant source of tension and bot-driven narratives during this period.
This network showcases the blending of domestic U.S. political narratives and international conflicts in the spread of toxic bot content.
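As a rough sketch, a keyword co-occurrence network like the one above could be built as follows, assuming per-message keyword lists are available; networkx community detection stands in here for whichever clustering method produced the colored clusters in the original figure, and the sample messages are placeholders.

```python
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical per-message keyword lists extracted from the Q3 dataset.
messages = [
    ["Trump", "Biden", "Kamala"],
    ["Trump", "Vivek"],
    ["genocide", "Jews", "Gaza"],
    ["Biden", "Kamala", "Trump"],
]

# Build a weighted co-occurrence graph: an edge for every pair of keywords that
# appear in the same message, weighted by how often they co-occur.
G = nx.Graph()
for keywords in messages:
    for a, b in combinations(sorted(set(keywords)), 2):
        weight = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=weight)

# Cluster the graph; each community corresponds to one color in the network figure.
for i, community in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"Cluster {i}: {sorted(community)}")
```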
Trends
The analysis of toxic bot content from Q1 through Q3 of 2024 reveals several consistent patterns and similarities across all three quarters:
Main Similarities
Dominant Narratives:
American Elections: The toxic content frequently revolved around key political figures such as Trump, Biden, Vivek Ramaswamy, and Kamala Harris, indicating a persistent focus on influencing U.S. political discourse.
Israel-Gaza Conflict: The conflict remained a central focus, often linked to antisemitic narratives involving terms like genocide, Jews, and Zionism.
Clear Antisemitic Narrative:
Across all quarters, antisemitic content was prominent, with keywords such as evil Jews, genocide, zionists, and Jewish frequently appearing. These narratives were often tied to conspiracy theories, disinformation, and hateful messaging.
The category Jews consistently received some of the highest toxicity scores, underscoring the severity of this issue.
Low Average Toxicity Scores:
The average toxicity score remained relatively low at 0.20 across all quarters, indicating that most individual messages scored low even though the dataset still contained problematic content.
However, pockets of extreme toxicity were detected, with some messages having scores above 0.8, particularly within antisemitic and threatening content.
Similar Category Scores and Frequent Combinations:
The categories Politics, Ridicule, Racism, and Religion were prominent throughout the year, with Politics + Racism being a frequent combination.
Threatening and disinformation content consistently made up around 3-5% of the messages each quarter.
Keyword Overlaps:
Across the word clouds and keyword networks, terms like Trump, Biden, genocide, black, racist, and Jews frequently appeared, illustrating the overlapping nature of bot-driven narratives.
Conclusion:
While the data from 2024 shows continuity in toxic narratives—particularly in relation to political discourse and antisemitism—it also reflects the bot networks' adaptability in exploiting ongoing events, such as election campaigns and international conflicts, to amplify their messaging. The consistent focus on key political figures and events suggests that these bots were designed to influence public opinion and exacerbate divisions on both domestic and international fronts.