Social media platforms Facebook, TikTok and Twitter did not live up to their election integrity pledges during Kenya's August elections, according to a new study by the Mozilla Foundation. The report says content labeling failed to stop misinformation, while political advertising amplified propaganda.
The study found that hours after voting ended in Kenya, these platforms were awash with mis- and disinformation about candidates purported to have won the elections, and that labeling by Twitter and TikTok was spotty and failed to stop the spread of these falsehoods. It says the inconsistent labeling of posts calling the elections ahead of the official announcement affected some parties more than others, making the platforms seem partisan.
Facebook fared worst on this front, displaying no "visible labels" during the elections and allowing propaganda to spread — like claims of the kidnapping and arrest of a prominent politician, which had been debunked by local media houses. Facebook only recently put a label on the original post making those claims.
"The days following Kenya's federal election were an online dystopia. More than ever, we needed platforms to fulfill their promises of being trustworthy places for election information. Instead, they were just the opposite: places of conspiracy, rumor, and false claims of victory," said Odanga Madung, the Mozilla Tech and Society Fellow who conducted the study and previously raised concerns over the platforms' inability to moderate content in the lead-up to Kenya's elections. Mozilla found similar failures during the 2021 German elections.
"This is especially disheartening given the platforms' pledges leading up to the election. In just a matter of hours after the polls closed, it became clear that Facebook, TikTok and Twitter lack the resources and cultural context to moderate election information in the region."
Prior to the elections, these platforms had issued statements on the measures they were taking, including partnerships with fact-checking organizations.
Madung said that in markets like Kenya, where trust in institutions is low and contested, there was a need to study how labeling as a solution (which had been tested in Western contexts) could be applied too.
Kenya's general election this year was unlike any other, as the country's electoral body, the Independent Electoral and Boundaries Commission (IEBC), released all results data to the public in its quest for transparency.
Media houses, the parties of the main presidential contenders — Dr. William Ruto (now president) and Raila Odinga — and individual citizens conducted parallel tallies that yielded varying results, which further "trigger[ed] confusion and anxiety nationwide."
“This untamed anxiety found its home in online spaces where a plethora of mis- and disinformation was thriving: premature and false claims of winning candidates, unverified statements pertaining to voting practices, fake and parody public figure accounts…”
Madung added that platforms implemented interventions too late and ended them soon after the elections. This is despite knowing that in countries like Kenya, where results have been challenged in court in each of the last three elections, more time and effort is required to counter mis- and disinformation.
The study also found that Facebook allowed politicians to advertise within 48 hours of election day, violating Kenya's law, which requires campaigns to end two days before the polls. It found that individuals could still purchase ads, and that Meta applied less stringent rules in Kenya than in markets like the U.S.
Madung also identified several ads containing premature election results and announcements — something Meta said it did not allow — raising questions about safety.
"None of the ads had any warning labels on them — the platform (Meta) simply took the advertiser's money and allowed them to spread unverified information to audiences," the report said.
“Seven ads may hardly be considered to be dangerous. But what we identified along with findings from other researchers suggests that if the platform couldn’t identify offending content in what was supposed to be its most controlled environment, then questions should be raised of whether there is any safety net on the platform at all,” said the report.
Meta told TechCrunch that it "relies on advertisers to ensure they comply with the relevant electoral laws" but said it has measures in place to ensure compliance and transparency, including verifying the identities of people posting ads.
"We prepared extensively for the Kenyan elections over the past year and implemented a number of measures to keep people safe and informed, including tools to make political ads more transparent, so people can scrutinize them and hold those responsible to account. We make clear in our Advertising Standards that advertisers must ensure they comply with the relevant electoral laws in the country where they want to issue ads," said a Meta spokesperson.
Mozilla is calling on the platforms to be transparent about the actions they take on their systems to uncover what works in stemming dis- and misinformation, to initiate interventions early enough (before elections are held), and to sustain those efforts after the results have been declared.