By its very nature, TikTok is harder to moderate than many other social media platforms, according to Cameron Hickey, project director at the Algorithmic Transparency Institute. The brevity of the videos, and the fact that many combine audio, visual, and textual elements, makes human judgment even more necessary to decide whether something violates the platform’s rules. Even advanced artificial intelligence tools, like using speech-to-text to quickly identify problem words, are more difficult “when the audio you’re dealing with is also accompanied by music,” says Hickey. “The default mode for people creating content on TikTok is also to embed music.”
It becomes even more difficult in languages other than English.
“What we generally know is that platforms are better at dealing with problematic content where they are based or in the languages that the people who created them speak,” says Hickey. “And there are more people making bad things than there are people in these companies trying to get rid of bad things.”
Many of the disinformation videos Madung found were “synthetic content”: videos created to look like old news broadcasts, or built around screenshots that appear to come from legitimate news outlets.
“Since 2017, we have noticed that there was an emerging trend at the time to appropriate the identities of mainstream media brands,” says Madung. “We’re seeing widespread use of this tactic on the platform, and it seems to work exceptionally well.”
Madung also spoke with former TikTok content moderator Gadear Ayed to better understand the company’s moderation efforts more generally. Although Ayed did not moderate TikToks from Kenya, she told Madung that she was often asked to moderate content in languages or contexts she was unfamiliar with, and that she would not have had the context to tell whether a piece of media had been manipulated.
“It’s common to find moderators being asked to moderate videos in languages and contexts different from the ones they understand,” Ayed told Madung. “For example, at one point I had to moderate videos that were in Hebrew even though I didn’t know the language or the context. All I could rely on was the visual image of what I could see, but I couldn’t moderate anything that was written.”
A TikTok spokesperson told WIRED that the company prohibits election misinformation and the promotion of violence, and is “committed to protecting the integrity of [its] platform and ha[s] a dedicated team working to protect TikTok during Kenya’s elections.” The spokesperson also said the company was working with fact-checking organisations, including Agence France-Presse in Kenya, and planned to roll out features to connect its “community with authoritative information on Kenya’s elections in our app.”
But even if TikTok removes the offending content, Hickey says that may not be enough. “A person can remix, duet, reshare someone else’s content,” says Hickey. This means that even if the original video is deleted, other versions may live on undetected. TikTok videos can also be uploaded and shared on other platforms, like Facebook and Twitter, which is how Madung first encountered some of them.
Several of the videos flagged in the Mozilla Foundation report have since been removed, but TikTok did not respond to questions about whether it removed other videos or if the videos themselves were part of a coordinated effort.
But Madung suspects they might be. “Some of the more egregious hashtags were things I would find by looking for coordinated campaigns on Twitter and then I would think, what if I searched for this on TikTok?”