Monitoring political misinformation and disinformation on TikTok


TikTok is one of the five largest social media platforms in the world this year.

In Southeast Asia, 198 million people, about 29% of the region’s population, used TikTok last year. It is not an exaggeration to say that the platform has become one of the centers of ideas and opinion for the people in the region.

Like many other researchers, my team was interested in TikTok. Specifically, we wanted to examine how information, including political misinformation and disinformation, flows on the platform. The difference between the two forms of false information is that disinformation is deliberately and maliciously misleading.

During our eight months of research, we found it quite difficult to track down political misinformation and disinformation on TikTok. This was despite the fact that in 2020 the platform, in collaboration with independent fact-checking organizations, launched a fact-checking program to help “review and evaluate the accuracy of the content” on the platform.

As part of this program, TikTok passes potential misinformation to its partners. This may include videos that have been flagged as misinformation by TikTok users, or videos related to COVID-19 or other topics “that spread misleading information”.

However, we still struggled to track misinformation and disinformation on the platform, owing to challenges such as fact-checking audiovisual content and identifying foreign languages and terms.

Fact-checking audiovisual content

It’s difficult to review audiovisual content on TikTok.

To track misinformation/disinformation effectively, all content has to be carefully observed and understood in its local context. Ensuring a correct assessment required many hours of human observation and video analysis (of language, non-verbal cues, terms, images, text and captions).

This is why fact-checkers worldwide rely on public participation to report misleading content, and why human fact-checkers focus mostly on checking viral content.

AI technology can help verify some of these reported posts. However, fact-checking audiovisual content still relies heavily on human judgment to confirm its accuracy.

To date, audiovisual content is arguably one of the most difficult formats to fact-check. Other social media platforms face the same challenge.

In our research, we found that much of the content we were monitoring contained no verifiable claims. This meant it could not be objectively confirmed or debunked and labeled as misinformation.

In order to determine which videos or comments contained inaccurate claims, we developed a misinformation framework based on the criteria for determining verifiable claims used by VERA Files in the Philippines and Tirto.id in Indonesia. Both organizations are signatories of Poynter’s International Fact-Checking Network.

We also considered the 10-point list of warnings and tips for identifying misinformation provided by Colleen Sinclair, an Associate Professor of Clinical Psychology at Mississippi State University.

We based our misinformation framework on the criteria for determining verifiable statements used by VERA Files in the Philippines and Tirto.id in Indonesia. Photo credit: Nuurrianti Jalli (2021)

On the basis of this misinformation framework, we found that the majority of the monitored videos and corresponding comments contained only subjective statements (opinions, calls to action, speculation) or were otherwise difficult to verify because they lacked checkable claims.

Examples included comments on Indonesia’s controversial new labor law, known as the Omnibus Law; debates about the inappropriateness of rape jokes in schools that sparked the #MakeSchoolASaferPlace movement in Malaysia; arguments about poor government policies in Malaysia amid COVID-19, which launched another online campaign, #kerajaangagal (“failed government”); and the Philippines’ Anti-Terrorism Act. These comments were deemed unverifiable because they were emotionally driven and based on users’ views on the issues. Therefore, they could not be marked as containing, or possibly containing, misinformation/disinformation.

These results could be different if content creators and commenters incorporated assertions of fact, or “checkable claims”, that we could match against credible and authoritative sources.
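To make this triage step concrete, here is a minimal sketch, in Python, of how statements might be sorted into checkable and non-checkable categories before any verification is attempted. The ClaimType categories, the Annotation class and the sample statements are assumptions made for this illustration; they are not the actual coding scheme used by VERA Files, Tirto.id or our team.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative claim categories. The real framework, based on VERA Files'
# and Tirto.id's criteria, is more detailed than this sketch.
class ClaimType(Enum):
    CHECKABLE_FACT = "checkable fact"   # can be matched against authoritative sources
    OPINION = "opinion"                 # subjective view, not verifiable
    CALL_TO_ACTION = "call to action"   # exhortation, not verifiable
    SPECULATION = "speculation"         # prediction or guess, not verifiable

@dataclass
class Annotation:
    video_id: str
    statement: str
    claim_type: ClaimType

    @property
    def eligible_for_fact_check(self) -> bool:
        # Only statements of fact move on to verification against
        # credible and authoritative sources.
        return self.claim_type is ClaimType.CHECKABLE_FACT

# Example: a human coder records one statement per video or comment.
annotations = [
    Annotation("vid001", "The new law abolishes the minimum wage.", ClaimType.CHECKABLE_FACT),
    Annotation("vid002", "This government has failed us!", ClaimType.OPINION),
    Annotation("vid003", "Everyone should boycott the vote.", ClaimType.CALL_TO_ACTION),
]

to_verify = [a for a in annotations if a.eligible_for_fact_check]
print(f"{len(to_verify)} of {len(annotations)} statements proceed to fact-checking")
```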

Identifying different languages, slang and jargon on TikTok

Some fact-checkers and researchers have previously found that different languages and dialects in the region have made fact-checking difficult for local authorities.

In this study, we also found that slang makes it harder to track political misinformation/disinformation on TikTok, even when analyzing content uploaded in our native language.

Factors such as generational differences and a lack of awareness of trending slang and jargon used by content creators and users should not be underestimated when reviewing content on the platform. Undoubtedly, this will also be an issue for AI-powered fact-checking mechanisms.
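As a rough illustration of the problem, the sketch below shows one way an automated pipeline might at least flag comments whose language it cannot identify confidently, so they can be routed to a human reviewer familiar with local slang. It uses the open-source langdetect package; the confidence threshold and the sample comments are assumptions made for this example, not data from our study.

```python
# pip install langdetect
from langdetect import DetectorFactory, detect_langs
from langdetect.lang_detect_exception import LangDetectException

DetectorFactory.seed = 0  # make detection deterministic across runs

def needs_human_review(text: str, min_confidence: float = 0.90) -> bool:
    """Flag text for human review when automated language detection
    is not confident, as often happens with short, slang-heavy or
    code-mixed comments."""
    try:
        best_guess = detect_langs(text)[0]  # most probable language
    except LangDetectException:             # e.g. emojis only, too little text
        return True
    return best_guess.prob < min_confidence

comments = [
    "The new policy will reduce unemployment by five percent next year.",
    "fr no cap this one bussin lah",  # slang-heavy, likely low confidence
]
for comment in comments:
    print(needs_human_review(comment), "-", comment)
```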

Difficult for everyone

During our research, we found that tracking down misinformation on the platform can be quite a challenge, both for research teams and for ordinary users.

Unless you are a data scientist who can write Python code against an API to collect data, scraping data from TikTok requires manual labor.

For this project, our team opted for the latter, because most of our members did not have data science training. We tracked misinformation on the platform by manually searching relevant hashtags using TikTok’s search function.
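For readers curious what the automated alternative might look like, here is a minimal sketch of a collection pipeline in Python. TikTok does not offer a simple public research API, so fetch_videos_by_hashtag is a hypothetical placeholder for whatever scraper or unofficial library a data scientist might plug in; the field names and example record are illustrative only. In our project, this step was done entirely by hand.

```python
import csv

# Hypothetical collector: a stand-in for a scraper or unofficial library.
# It is NOT a real TikTok endpoint; in our project these records were
# copied by hand from TikTok's search results into a spreadsheet.
def fetch_videos_by_hashtag(hashtag: str) -> list[dict]:
    return [
        {
            "video_id": "example123",
            "hashtag": hashtag,
            "upload_date": "2020-10-05",
            "views": 120000,
            "likes": 9800,
            "comments": 450,
            "shares": 300,
        }
    ]

def collect(hashtags: list[str], out_path: str = "tiktok_sample.csv") -> None:
    """Gather video metadata for each hashtag and save it as a CSV
    that can later be filtered, sorted and fact-checked offline."""
    rows = []
    for tag in hashtags:
        rows.extend(fetch_videos_by_hashtag(tag))
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)

# Hashtags mentioned in this article, used here purely as examples.
collect(["kerajaangagal", "MakeSchoolASaferPlace"])
```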

All TikTok videos were manually extracted and organized for fact-checking. The fact-checking framework for this project was developed based on the framework used by VERA Files and Tirto.id. Photo credit: Nuurrianti Jalli (2021)

One downside we found with this strategy is that it can be time-consuming due to the limitations of the search functionality.

For one, TikTok’s Discover tab allows users to sort results based on relevance and/or total number of likes only. You cannot sort results by total number of views, shares and/or comments.

It also allows results to be filtered by upload date, but only within the last six months. This makes it difficult to find older data, as we needed in our case.

As a result, we had to manually search through the listings to find relevant videos with the most views or the highest number of interactions uploaded within our chosen monitoring period.

This made the process pretty overwhelming, especially for the hashtags that resulted in thousands (or more) of TikTok videos.
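Once the relevant videos have been copied into a spreadsheet, however, the missing ranking and filtering can be done locally. Below is a minimal sketch that reuses the tiktok_sample.csv layout assumed in the earlier sketch; the monitoring window and column names are illustrative, not our project’s actual settings.

```python
import csv
from datetime import date

# Illustrative monitoring window, not the project's actual dates.
START, END = date(2020, 6, 1), date(2020, 12, 31)

def interactions(video: dict) -> int:
    # Total interactions: likes + comments + shares.
    return int(video["likes"]) + int(video["comments"]) + int(video["shares"])

with open("tiktok_sample.csv", newline="", encoding="utf-8") as f:
    videos = list(csv.DictReader(f))

# Keep only uploads inside the monitoring window...
in_window = [
    v for v in videos
    if START <= date.fromisoformat(v["upload_date"]) <= END
]

# ...then rank by views and total interactions, which TikTok's own
# Discover search does not allow.
ranked = sorted(in_window, key=lambda v: (int(v["views"]), interactions(v)), reverse=True)

for video in ranked[:10]:
    print(video["video_id"], video["views"], interactions(video))
```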

TikTok should consider improving its platform so that users can better filter and sort videos in search results. In particular, they should be able to sort by the number of views and/or interactions and by a custom upload date range. Interested individuals and fact-checkers could then track political misinformation/disinformation more efficiently.

This would help lighten TikTok’s burden of false information, as more people would be able to monitor misinformation/disinformation efficiently. It could complement the existing efforts of TikTok’s own fact-checking team.


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: “Mission impossible?”: Tracking political misinformation and disinformation on TikTok (2021, December 21), retrieved December 21, 2021 from https://techxplore.com/news/2021-12-mission-impossible-tracking-political-misinformation.html

This document is subject to copyright. Apart from fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
