Why regulators are wary of new wave of 'thinking computers'
Facebook is still dealing with the fallout of the Cambridge Analytica scandal, in which data from tens of millions of people was used to develop hyper-targeted ads that exploited users' psychographic traits. These cases have fuelled calls from policymakers for more scrutiny of AI systems that are finding ever more application in everyday life.

Kenya's rapid adoption of mobile technology and easy access to high-speed broadband have seen the country emerge as one of the leading African economies where the use of artificial intelligence systems is quickly taking root. Millions of Kenyans use Facebook, Twitter and Google every day, while fintechs (financial technology firms) employ data analytics and machine learning to determine the risk of lending to each individual mobile borrower. Last week, Safaricom launched Zuri, an AI chatbot that its millions of subscribers can interact with on Facebook Messenger and Telegram.
Mobile subscribers can ask Zuri for help in making airtime top-ups, checking M-Pesa and airtime balances and reversing transactions almost as seamlessly as with a real customer care agent.

Experts now warn that the regulatory approach adopted in policing sub-sectors of the telecommunications industry is not adequate for artificial intelligence. The AI Now Institute, a research unit at New York University led by Microsoft researcher Kate Crawford and Google researcher Meredith Whittaker, says there is a growing accountability gap in AI.

"The technology scandals of 2018 have shown that the gap between those who develop and profit from AI and those most likely to suffer the consequences of its negative effects is growing larger, not smaller," states AI Now in its 2018 report.

The report argues that the lack of governmental regulation, insufficient governance structures in tech firms and power asymmetries between companies and the people they serve are widening this gap. "These gaps are producing growing concern about bias, discrimination, due process, liability, and overall responsibility for harm," says the report in part.

Part of the cause of these gaps is attributed to the corporate secrecy said to dominate the development of the AI industry. "Many of the fundamental building blocks required to understand AI systems and to ensure certain forms of accountability – from training data, to data models, to the code dictating algorithmic functions, to implementation guidelines and software, to the business decisions that directed design and development – are rarely accessible to review, hidden by corporate secrecy laws," say the researchers in part.

This is part of the reason regulators are often steps behind technology firms and thus have limited capacity to prescribe remedies for problems they have little insight into.
It was only after Facebook revealed that Russian operatives had bought misleading ads and created false pages to manipulate online debate that the scale of Russian involvement in the 2016 US election became clear.

"Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit and monitor these technologies by domain," explains the study in part. The study further says that while other sectors such as health and education have developed regulatory frameworks and histories over time, establishing the same overarching approach to AI regulation might not work. "A national AI safety body or general AI standards and certification model will struggle to meet the sectoral expertise requirements needed for nuanced regulation," notes the study.

This means that rather than have the Communications Authority assume the role of regulating AI applications in Kenya across all sectors, policymakers in each sector should establish regulatory codes for AI use in their respective fields. The report also says the Government and technology companies should ensure users are not robbed of the right to reject the application of these technologies.