Researchers say that Facebook and TikTok approve ads that contain ‘blatant’ misinformation regarding midterm voting

According to a report by Global Witness and the Cybersecurity for Democracy (C4D) team, both Facebook and TikTok failed to block ads containing “blatant” misinformation about the US midterm elections and the integrity of the voting process.

In the experiment, researchers submitted 20 advertisements containing false claims to Facebook, TikTok, and YouTube. The ads targeted battleground states such as Arizona and Georgia. According to the report, YouTube detected and rejected every test submission and suspended the channel that posted them. The other platforms did not fare as well.

The researchers found that TikTok approved 90% of the ads containing blatantly false or misleading content. According to the report, Facebook also approved a significant number of ads with misleading or outright false claims, though fewer than TikTok did.

The ads were submitted in both English and Spanish. They included false claims that voting days would be extended and that social media accounts could serve as voter verification. Other ads were designed to discourage voter participation, including claims that election results could be hacked or that the outcome had already been decided.

The researchers deleted the ads once they had gone through the approval process, so none of the approved ads containing misinformation were ever displayed to users.

“YouTube’s performance in our experiment shows that it is possible to detect damaging election disinformation,” said Laura Edelson, co-director of NYU’s C4D team, in a statement accompanying the report. “All the platforms we examined should have received an A on this assignment. We urge Facebook and TikTok to stop false information about elections from reaching voters.”

A spokesperson for Meta, Facebook’s parent company, said the tests were based on “a very small sample” of ads and were not representative of the many political ads the company reviews every day across the globe. He added: “Our ads review process involves multiple layers of analysis and detection, both before and after an advertisement goes live.”

A TikTok spokesperson said the platform is a place for authentic and entertaining content, which is why it bans and removes paid political advertising and election misinformation. The spokesperson added that the company appreciates feedback from academics and other experts, which helps it continuously improve its policies and processes.

Google stated that it has developed “extensive measures to combat misinformation on its platforms, including false claims regarding voting procedures.” It also said: “We know it is important to protect our users against this type of abuse – especially ahead of major elections such as those in Brazil and the United States – and we continue investing in and improving our enforcement systems to better detect and eliminate this content.”

Although the scope of the experiment is limited, it raises concerns about what some of the largest social platforms are doing to combat misinformation, not only about candidates and issues but also about the voting process itself, with just weeks to go before the midterms.

TikTok’s influence in US politics, and the scrutiny it faces, have grown over recent elections. In August, according to a company blog post, it launched an Elections Center to “connect people who engage in election content to authoritative info.” This includes guidance on where to vote and additional labels to identify content related to the midterm elections.

TikTok took additional steps to protect the integrity of political content in the lead-up to the midterm elections. The platform began requiring “mandatory authentication” for accounts based in the United States and imposed a blanket ban on all political fundraising.

Blake Chandlee, TikTok’s president of global business solutions, said in a blog post: “As we have laid out before, we will continue to develop policies that foster and promote an environment that brings people together, not divide them. We do this by trying to keep misleading misinformation off the platform and prohibiting political advertising.”

Meta said in September that, as part of its midterm plans, it would remove false claims about who can vote and how, as well as calls for election-related violence. However, Meta stopped short of banning claims that elections are fraudulent or rigged; the company told The Washington Post that it would not remove such claims from content related to the 2020 election. Under company policy, Meta does ban US advertisements that “call into doubt the legitimacy of an ongoing or upcoming election,” which includes the midterms.

Google also took additional steps in September to guard against election misinformation, elevating trustworthy information and displaying it more prominently across Search and YouTube.

Social media giants rely on a combination of artificial intelligence systems and human moderators to review the enormous volume of posts they receive. The study shows that even platforms with similar goals and approaches can vary widely in how they enforce their policies.

Researchers found that the only ad TikTok rejected was one claiming that voters must be vaccinated against Covid-19 in order to vote. Facebook, however, approved that same submission.
