TikTok bad actors are using AI to churn out political misinformation, new report shows
TikTok view bait has gotten a boost from AI over the last year, including content spreading political falsehoods ahead of the election, according to new data from misinformation watchdog NewsGuard.
In a new report detailing the use of AI tools among TikTok’s bad actors, the organization identified at least 41 TikTok accounts posting false, AI-enhanced content in English and French. Between March 2023 and June 2024, the accounts posted 9,784 videos totaling over 380 million views, with accounts averaging one to four AI-narrated videos each day. Many of the videos used identical scripts, hinting at a coordinated effort. Several of the accounts also qualified to monetize their videos under TikTok’s Creator Fund.
Much of the content consisted of essay- or fun-fact-style videos pushing false narratives about U.S. and European politics and the Russia-Ukraine war, NewsGuard explained, such as the false claims that NATO had deployed combat troops to Ukraine and that the U.S. was behind the March Crocus City Hall terrorist attack in Moscow.
The research was conducted in partnership with AI detection tool TrueMedia.org.
Last year, NewsGuard documented the rise of a small network of TikTok accounts using AI text-to-speech tools to spread celebrity conspiracy theories. The accounts used AI to instantly generate narration and additional voices for a mass of short-form videos sharing false information. In total, the accounts garnered 336 million views and 14.5 million likes in a span of three months.
The latest report shows a substantial increase in similarly AI-boosted content on the app, this time with political motivations. The trend points to a growing, increasingly incentivized ecosystem of AI content farms on the app, which the organization defines as “entities that generate large volumes of low-quality content, typically to attract views and ad revenue.”
The spread of AI-facilitated misinformation and disinformation hasn’t gone unnoticed by TikTok, with the platform pledging to more effectively label, and even watermark, content made with generative AI. But political misinformation, backed by the efficiency of AI features, has continued to proliferate.
Meanwhile, on July 9, the Justice Department announced it had identified and taken down an AI-powered Russian bot farm running at least 1,000 pro-Kremlin accounts on X (formerly Twitter), backed by Russia’s KGB successor, the FSB. A few months prior, OpenAI announced it had terminated the accounts of confirmed foreign state actors attempting to use its AI technology to support potential cyber attacks.
The U.S. itself has harnessed AI technology and bot farms to spread counternarratives and even run outright disinformation campaigns, including a 2020 initiative that sought to curb foreign influence by spreading misinformation about COVID-19.
As the election nears, and voter turnout and candidate trust come under question, watchdogs and advocates continue to monitor both targeted disinformation and AI-boosted misinformation spreading on social media platforms.