Kenya's AI regulatory model must consider global trends, human rights

Last year, the Robotics Society of Kenya (RSK) proposed a draft bill to the National Assembly.

The bill suggests the creation of the Kenya Robotics and Artificial Intelligence Society, a professional body that would regulate the use of robotics, AI, and the Internet of Things (IoT).

The petitioner's aim is to establish a governing body for robotics and AI, similar to the Law Society of Kenya, which regulates the legal profession.

The proposed objectives include promoting responsible and ethical development and usage of robotics and AI technologies in Kenya, as well as encouraging collaboration and knowledge exchange among robotics and AI practitioners, researchers, and stakeholders.

However, the introduction of licensing for practitioners and the prohibition on working in robotics and AI without membership of the society are causes for concern.

There has been a significant outcry among Information Technology (IT) professionals over the proposed bill, which appears to lack an understanding of the varied roles and responsibilities involved in AI. The bill seeks to regulate, license, and control the field, which experts argue is impractical given the diversity of roles it covers, from machine learning engineers and developers to project managers and data-entry personnel.

Unlike regulated professions such as law, medicine, engineering, and accountancy, the AI field cannot be neatly categorised by education, certification, and continuous professional development.

Kenya lacks a regulatory framework for AI. This is a challenge faced by many countries, including acknowledged leaders in AI such as the USA, China, the UK, Israel, Japan, Brazil, and the EU bloc, which are only now drafting regulations.

Due to the rapid expansion of the technology, countries have found it difficult to legislate. The cross-border nature of AI also demands an international consensus that does not yet exist. Furthermore, the vast array of applications across industries and the limited technical expertise of legislative bodies add to the complexity of AI regulation, as does the risk of stifling innovation.

Kenya is among the African leaders in AI, along with South Africa, Nigeria, and Egypt. We are utilising AI on an unprecedented scale, and Kenyan workers have helped train cutting-edge tools such as ChatGPT. The UNCTAD Technology and Innovation Report 2021 shows that, relative to per capita income, Kenya is far ahead of some developed economies in using Fourth Industrial Revolution technologies to solve various challenges.

Internationally, AI regulation is focused on safeguarding the human rights and dignity of those who interact with AI systems. The European Union's AI Act attempts to regulate by categorising AI systems according to the level of risk they pose to society. In future, regulations will likely require AI providers to give users information about their products, including explanations of how a system reaches particular decisions or outcomes.

Users will be able to contest AI decisions or request human intervention, particularly where a decision could significantly impact them, as in self-driving cars, hiring, criminal justice, credit evaluation, or biometric identification.

Under these emerging frameworks, AI developers are required to conduct thorough risk assessments before deploying AI systems, with greater attention given to ‘high-risk’ applications such as healthcare, biometric identification, criminal justice, and credit scoring. Ultimately, a model must be established that makes developers responsible for any damage caused by their AI systems, with those developing high-risk products held to an even higher standard of liability.

Kenya must develop a regulatory model that accounts for the dynamic nature of AI, global trends, the protection of human rights, and the promotion of innovation, while preventing harmful AI systems from being developed and deployed.