On March 20, 2024, Interior Cabinet Secretary Kithure Kindiki announced a government plan to limit the use of TikTok (a Chinese-owned social media platform) by government officials. He further disclosed that the Office of the Data Protection Commissioner had written to TikTok demanding confirmation of compliance with the Data Protection Act, 2019.

Earlier, in February, the European Commission banned the use of TikTok by its employees, while in March the United States Congress voted in favour of banning it.

Last year, the same app was hit with a fine of over two billion shillings for breaching data privacy laws in the UK.

Bigger threat

The fines and bans demonstrate governments' goodwill in safeguarding privacy. However, the spotlight on social media platforms risks overlooking a bigger threat to data privacy, one that existing laws cannot cure: Artificial Intelligence (AI).

When the Data Protection Act, 2019, became law in Kenya, it was hailed as landmark legislation for data privacy. However, barely five years after its enactment, the Act is facing a tough challenge from the rapid evolution of AI.

Firstly, the law requires that the data subject (the individual whose personal data is collected) be made aware that their data is being collected and consent to its use. However, unlike platforms such as TikTok, which collect user data explicitly, AI systems gather vast amounts of data from different sources without the subject's knowledge.

For instance, AI will analyse your purchase history, record the places you visit and monitor your browsing habits. It will then combine these data and develop patterns about you. Not only will this be done without your consent; you will not even be aware that the data is being collected and processed.

Secondly, the law guarantees the individual's right to withdraw consent and have their personal data erased. However, AI does not come with an opt-out option. The very nature of machine learning challenges the law's grip on the principle of withdrawal of consent: as algorithms learn and evolve, they create new data points and patterns that are not covered by the initial consent.

Thirdly, no one takes ultimate responsibility for AI-driven outcomes. While the data law requires data controllers to register, disclose intended use and hold data only for as long as necessary, with AI we do not know the intended use of the data, nor when it is being collected. Worse still, we do not know who is collecting and holding it. Simply put, it is impossible to single out malpractice for any data controller to take responsibility.

Stricter regulations

Instead of blanket bans on social media platforms, stricter data privacy regulations that apply to all forms of data use, including AI, are the way to go. Additionally, investment in ethical AI would make regulation easier. For instance, AI systems that allow users to understand the logic behind algorithmic decisions would provide an avenue for users to grant and withdraw consent.

A ban affecting select TikTok users might deal with a single source of concern. It will not, however, solve the bigger problem: regulation is not moving as fast as technology. The TikTok cases present an opportunity to enact stronger data privacy regulations.