Bot Comments Report: June 2024
The dataset of toxic bot comments detected in June 2024 (June 1–30) included 836 English messages. The average toxicity score of the bot content remained relatively low at 0.20. However, a significant peak in toxic activity was observed on June 26, 2024, potentially linked to Israel’s bombing of the Gaza Strip (see the full list of events here).
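As a rough illustration of how such monthly figures might be derived, the sketch below computes the average toxicity score and per-day message counts with pandas. The file name and the "date"/"toxicity" column names are assumptions made for illustration, not the actual data pipeline.

```python
import pandas as pd

# Hypothetical export of the June 2024 dataset; the file name and the
# "date"/"toxicity" column names are assumptions for illustration only.
df = pd.read_csv("bot_comments_june_2024.csv", parse_dates=["date"])

# Overall average toxicity score for the month (reported above as ~0.20).
print("Average toxicity:", round(df["toxicity"].mean(), 2))

# Messages per day, to spot activity peaks such as the one on June 26.
daily_counts = df.set_index("date").resample("D").size()
print(daily_counts.sort_values(ascending=False).head())
```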
Despite the low average toxicity, several messages contained problematic and harmful language, featuring toxic keywords such as idiot, scum, fake news, new world order, freak, immigration, puppets, sexist, fascists, and fucking.
Toxicity frequently involved:
- Ridicule (15%), Politics, Hatred of Jews, and/or Contempt.
- Threatening content: about 4% of the messages were classified under this category.
- Disinformation: approximately 2% of the messages involved misleading or false information.
- Most frequent category combination: Politics + Threatening (see category explanations here); a small tallying sketch follows this list.
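A minimal sketch of how category shares and the most frequent category combination could be tallied, assuming each message carries a list of category labels; the hard-coded messages below are placeholders, not the report's data.

```python
from collections import Counter

# Placeholder per-message category labels; in practice these would come from
# the classifier output rather than being hard-coded.
messages = [
    {"categories": ["Politics", "Threatening"]},
    {"categories": ["Ridicule", "Contempt"]},
    {"categories": ["Politics", "Disinformation"]},
]

category_counts = Counter()
combo_counts = Counter()
for msg in messages:
    cats = sorted(set(msg["categories"]))
    category_counts.update(cats)
    # Count unordered pairs to find the most frequent category combination.
    combo_counts.update((a, b) for i, a in enumerate(cats) for b in cats[i + 1:])

total = len(messages)
for cat, n in category_counts.most_common():
    print(f"{cat}: {n / total:.0%} of messages")
print("Most frequent combination:", combo_counts.most_common(1))
```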
The word cloud below highlights the 50 most frequently used toxic keywords in June 2024. The size of each word reflects its frequency relative to other terms. Notably, black was the most frequently used keyword, followed by Africa, racist, corruption, and fighting.
These words form the top 50 keywords used in toxic bot comments in June 2024.
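For readers who want to reproduce a similar visual, the sketch below generates a word cloud from keyword frequencies using the wordcloud and matplotlib packages; the frequency values shown are placeholders rather than the actual June 2024 counts.

```python
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Placeholder keyword frequencies; the real counts come from the June 2024 data.
keyword_freq = {"black": 120, "Africa": 95, "racist": 80, "corruption": 60, "fighting": 55}

# Word size is proportional to relative frequency, as in the figure above.
cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(keyword_freq)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```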
The table below highlights the different toxic categories detected within the June 2024 dataset, along with their respective frequencies (represented by black dots and further explained below). This month, no single category was significantly more prominent than the others. However, content categorized as Disinformation had a notably high toxicity score of 0.40, and content targeting Jews had an even higher score of 0.50 (on a scale from 0 to 1, where 1 represents the highest level of toxicity).
The bar chart below shows that the categories with the largest number of toxic messages in June 2024’s dataset of toxic bot comments were Politics and Profanity, followed by Ridicule. The majority of the toxic bot content fell under the Politics category, indicating that much of the bot activity was aimed at influencing political discussions (see the explanation of the different categories here).
Number of toxic messages by category
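As a sketch of how such per-category counts and average toxicity scores could be computed, assuming a hypothetical long-format table with one row per message-category pair (the values below are placeholders, not the report's data):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical long-format table: one row per (message, category) pair.
df = pd.DataFrame({
    "message_id": [1, 1, 2, 3],
    "category": ["Politics", "Threatening", "Disinformation", "Profanity"],
    "toxicity": [0.30, 0.30, 0.40, 0.25],
})

# Number of toxic messages and average toxicity score per category.
summary = df.groupby("category").agg(
    messages=("message_id", "nunique"),
    mean_toxicity=("toxicity", "mean"),
).sort_values("messages", ascending=False)

print(summary)
summary["messages"].plot(kind="bar", title="Number of toxic messages by category")
plt.show()
```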
The keyword network below illustrates the main focus of toxic bot comments detected in June 2024. Different colors represent clusters of words frequently used together, revealing the bots’ primary narratives. In this case, keywords like he, racist, and immigrants frequently appeared together, indicating a narrative centered on racial issues and immigration.
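As a rough illustration of how such a keyword network could be built, the sketch below constructs a co-occurrence graph with networkx and groups keywords into clusters using greedy modularity community detection; the keyword lists are placeholders rather than the report's data, and the clustering method is an assumption, not necessarily the one used for the figure.

```python
import itertools
import networkx as nx
from networkx.algorithms import community

# Placeholder keyword lists, one per message; real input would be the
# extracted toxic keywords for each June 2024 bot comment.
keyword_lists = [
    ["he", "racist", "immigrants"],
    ["racist", "immigrants", "black"],
    ["corruption", "politics"],
]

# Build a co-occurrence graph: an edge between two keywords gains weight
# each time they appear together in the same message.
G = nx.Graph()
for kws in keyword_lists:
    for a, b in itertools.combinations(sorted(set(kws)), 2):
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Clusters of frequently co-occurring keywords, analogous to the colored
# groups in the network visualization.
clusters = community.greedy_modularity_communities(G, weight="weight")
for i, cluster in enumerate(clusters, start=1):
    print(f"Cluster {i}:", sorted(cluster))
```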