Social media platforms must regulate misinformation and hate speech

There has been an explosion of misinformation, disinformation and hate speech on Kenyan social media over the past two weeks, coming from politicians, bloggers, supporters and others who have blatantly misrepresented their own performance, their opponents' alleged conduct, and the way the Independent Electoral and Boundaries Commission (IEBC) has been conducting the elections. To alert users to potential misinformation, Twitter flagged false information sources with a disclaimer.

The number of active Facebook users in Kenya is 9.95 million, representing 18 per cent of the population. YouTube, Instagram, LinkedIn, Snapchat and Twitter also have substantial user bases, ranging from 1.35 million to 9.26 million each. The market dominance of these corporations means they control massive amounts of information, unilaterally dictating what content may be displayed and regulating online speech through opaque processes.

Online activity and freedom of expression are regulated under the Constitution and statutes. Among them is Article 33 of the Constitution, which enshrines freedom of speech while banning hate speech, advocacy for hatred, vilification, incitement to violence, and discrimination.

Other laws include the Penal Code, the controversial Computer Misuse and Cybercrimes Act and the National Cohesion and Integration Commission (NCIC) Act which outlaw hate speech, dissemination of false publications and incitement to violence. It is noteworthy that critics of the law on false publications argue that it is overly subjective and wrongfully makes the State the arbitrator of truth.

There is growing concern that social media platforms threaten democracy, freedom of choice, national cohesion and other human rights. Ahead of the 2016 United States elections, Cambridge Analytica harvested the personal data of 87 million individuals to manipulate politics through micro-targeted messaging: personal information was used to build personality profiles, on the basis of which political messaging was curated. Cambridge Analytica was also involved in Kenya's divisive 2017 election.

The Institute for Strategic Dialogue reported in June 2022 that Islamic State and Al Shabaab are using Facebook to recruit in Eastern Africa, including Kenya. Another report revealed that online influencers were paid to attack activists and judges over proposed constitutional changes.

In the global south, platforms such as Facebook and WhatsApp are disproportionately relied upon as the primary means of accessing the Internet, so their actions have a profound effect on freedom of expression and access to information. Although the platforms make millions in countries like Kenya, they do not invest in content moderation as much as they do in Europe or the Americas.

Consequently, there are fewer, insufficiently trained content moderators and fewer customised Artificial Intelligence programmes available to flag, review and remove inappropriate content. A further risk is that reviewers and AI systems may not understand local languages and contexts, which could lead to further harm.

While domestic laws protect against defamation, harassment, incitement, obscenity, terrorist recruitment and child abuse, social media companies enforce only their own terms of service. It is therefore imperative to ensure transparency regarding the rules, tools, standards and actions of social media platforms in Kenya and Africa.

Perhaps we should set up remedial and grievance mechanisms that are legitimate, accessible, predictable, equitable, rights compatible and transparent. Recently, the Council for Responsible Social Media, a non-partisan group of experts, CSOs and eminent Kenyans, called upon these platforms to invest more in content moderation and to publicly commit to a transparent code of practice.