In the 2017 election, social media, especially Facebook, played a huge role. Much of that influence flowed through the infamous British company Cambridge Analytica, which was accused of widespread manipulation.
Five years later, Facebook’s role has dramatically increased. Since 2017, the number of Kenyans with a Facebook account has nearly doubled, from 7.1 million to more than 13 million. That means one in four Kenyans online is on Facebook.
According to the Reuters Digital News Report 2022, online sources and social media are now the most-accessed sources of news for urban and educated Kenyans. The same report also indicates that Facebook is the most popular social media platform for news.
So, for the majority of city-dwelling Kenyans, Facebook is a key battleground for their vote. As academic George Ogola wrote in The Conversation, even before 2017, misinformation at election time was hardly novel. What has changed is the sheer scale and speed with which Facebook allows false information to be fabricated and disseminated.
In recent years, Kenya has witnessed highly contested elections fraught with violence. Candidates have often weaponised tribal politics to turn communities against one another. These tensions have been heightened in an era when big tech companies have allowed large-scale disinformation campaigns to spread hate speech like wildfire.
Even without the now-defunct Cambridge Analytica, studies by Mozilla have revealed a thriving disinformation-for-profit industry in Kenya ready to swing into action ahead of August 9.
What has Facebook done in the last five years to better protect Kenyans this time round? It has produced a pamphlet which, frankly, gives little detail on the specific measures to deal with harmful content.
Happily, Facebook’s content moderation hub for much of East Africa is actually based in Nairobi which would seem, on the surface, like good news.
Less happily, they are currently being sued by a former content moderator from the Nairobi office, on charges of wrongful termination, union-busting, forced labour and human trafficking. Facebook’s defence is that they do not operate in Kenya, which seems rich, given they have produced a glossy booklet about their intentions here.
Instead, they are throwing their outsourcing company Sama, which runs the Nairobi content moderation office for them, into the path of the lawsuit brought by former moderator Daniel Motaung. But every line of software Sama uses was written by Facebook (now Meta). And the policies the moderators implement? Sama didn’t write those either. Meta also defines the process and metrics for deciding whether moderators did a good job. It raises the question: how can Meta guarantee the fairness and safety of an entire election when it can’t even be trusted to protect the safety of its staff in Kenya?
As a judge considers tomorrow whether to accept Meta’s jurisdiction argument, we should be concerned about the seriousness with which the social media giant treats our election, and our lives.
Even more worrying is the company’s contemptuously neo-colonial attitude: its main defence in the Labour Court is the transparent untruth that it does not trade in Kenya at all, never mind the revenue it generates from Kenyans and its impact on our democracy.
Facebook has only one criterion for an ad to be posted on the platform: a paying customer.
For Kenyans, that’s an election health warning we cannot ignore.
-The writer is Executive Director of the Kenya Human Rights Commission. The views here are his.