Analytics

A bot detection algorithm was created and customised to find harmful bot content on the mainstream social media platform TikTok. To this end, Textgain built a GDPR-compliant social media monitoring pipeline that uses a list of keywords covering the most potentially polarising topics, and further analysed the resulting database. The findings are presented below.
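The keyword-based filtering stage of such a pipeline can be sketched roughly as follows. This is a minimal illustration only: the keyword list and comments are invented placeholders, not Textgain's actual data or code.

```python
# Minimal sketch of a keyword-based comment filter.
# Keywords and comments below are hypothetical examples,
# not Textgain's actual monitoring data.

def matches_keywords(text: str, keywords: set[str]) -> bool:
    """Return True if the comment mentions any monitored keyword."""
    tokens = {token.strip(".,!?#").lower() for token in text.split()}
    return not tokens.isdisjoint(keywords)

# Hypothetical polarising-topic keywords.
KEYWORDS = {"election", "migration", "vaccine"}

comments = [
    "Great recipe, thanks!",
    "The election was stolen!!!",
    "#migration is destroying everything",
]

# Retain only the comments that touch a monitored topic.
flagged = [c for c in comments if matches_keywords(c, KEYWORDS)]
```

In practice the retained comments would then be passed on to the bot detection and toxicity classification stages.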

On this page, one can find the number of suspected bot comments, their classification, and a word cloud of the words used in toxic bot comments. Second, one can find information about the hashtags used by bots and a word cloud of those hashtags. Finally, one can adjust the desired time period to filter the corresponding information.

  • A bot is a software programme that automates repetitive tasks, mimicking or replacing human actions at a faster pace.

    Bots can be used for various purposes, including web crawling, chat interactions, but also to spread disinformation and hate.

  • 0 - APPROPRIATE: no target

    1 - INAPPROPRIATE: contains terms that are obscene or vulgar, but the text is not directed at any specific person - has no target

    2 - OFFENSIVE: includes offensive generalisations, contempt, dehumanisation and indirect offensive remarks

    3 - VIOLENT: the author threatens, indulges in, desires or calls for physical violence against a target; it also includes calling for, denying or glorifying war crimes and crimes against humanity

  • A hashtag is a word or phrase preceded by the “#” sign. Hashtags are used on social media to tag posts as part of a larger conversation or topic. Hashtags are searchable and serve a similar role to keywords.
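The four-point toxicity scale above can be expressed in code. The enum name and helper below are illustrative sketches, not part of Textgain's actual software:

```python
from enum import IntEnum

class ToxicityLabel(IntEnum):
    """Four-point scale used to classify suspected bot comments."""
    APPROPRIATE = 0    # no target
    INAPPROPRIATE = 1  # obscene/vulgar terms, no specific target
    OFFENSIVE = 2      # generalisations, contempt, dehumanisation
    VIOLENT = 3        # threats or calls for physical violence

def is_toxic(label: ToxicityLabel) -> bool:
    """Treat anything above APPROPRIATE as toxic (an assumption
    for illustration; the dashboard may draw the line differently)."""
    return label > ToxicityLabel.APPROPRIATE
```

An ordered `IntEnum` fits the scale well, since the labels escalate in severity and can be compared directly.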

Network visualization with data since August 1st, 2024

The circles represent hashtags and the lines represent connections between hashtags that are used by the same bot(s). Hashtag circles that often appear together attract each other, and vice versa, resulting in a grouping of those hashtags. The bigger the circle, the more often the hashtag appears in the data. The colour of the circles represents community, which is calculated using the Louvain method. A community is a hub in which the circles interact significantly more with each other than with circles outside of the community. The network as a whole displays bot message interaction among TikTok posts, whereby the hashtags represent bot-targeted topics.
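The co-occurrence structure behind such a network can be sketched as follows. The bot-to-hashtag data here is invented for illustration; in practice, the community colouring would come from a Louvain implementation in a graph library such as networkx (`nx.community.louvain_communities`), which is omitted here to keep the sketch self-contained:

```python
from collections import Counter
from itertools import combinations

# Hypothetical data: hashtags used by each suspected bot account.
bot_hashtags = {
    "bot_a": {"#election", "#fraud", "#truth"},
    "bot_b": {"#election", "#fraud"},
    "bot_c": {"#vaccine", "#truth"},
}

# Node size: how often each hashtag appears across bots.
node_size = Counter(tag for tags in bot_hashtags.values() for tag in tags)

# Edge weight: how many bots use both hashtags of a pair together.
# Heavier edges pull their hashtags closer in the visualization.
edge_weight = Counter()
for tags in bot_hashtags.values():
    for pair in combinations(sorted(tags), 2):
        edge_weight[pair] += 1
```

The node sizes and edge weights computed this way are exactly what the force-directed layout and the Louvain community detection would consume.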

This data is sourced in collaboration with Textgain. Textgain was founded in 2015 as a spin-off of the University of Antwerp (Belgium). They specialize in the development of Artificial Intelligence that automatically detects and monitors harmful online societal trends and tensions, such as hate speech and disinformation. In 2016, Textgain gained significant attention for its efforts to detect jihadist propaganda on social media, and the company has since expanded its software stack to detect online signs of radicalization in all its aspects, including extreme-left and extreme-right rhetoric. In 2021, Textgain became the coordinator of the European Observatory of Online Hate, an initiative to monitor online hate speech across the European Union.

Textgain, as technological partner of IMSyPP, is tackling hate speech in a multidisciplinary fashion, combining machine learning, computational social science and linguistic approaches to support a data-driven approach to hate speech regulation, prevention and awareness-raising. The goal of this initiative is automated detection and sustainable monitoring of hate speech. To this end, they developed near real-time hate speech detection models tuned to language, culture and legislation, taking into account the context of the message.

Overview of the analysis on toxic bot comments per quarter

  • Q1 2024

  • Q2 2024

Monthly reports with more in-depth data

  • January 2024

  • February 2024

  • March 2024

  • April 2024

  • May 2024

  • June 2024

  • July 2024

Glossary