
Inside Artificial Intelligence, regulation and business

Opinion
Robot humanoid using a laptop for big data analysis. [Getty Images]

For decades, artificial intelligence (AI) was the engine of high-level science, technology, engineering, and mathematics (STEM) research.

Many of us became aware of the technology's power and potential through internet platforms such as Google, Facebook, and the retailer Amazon.

Today, with ChatGPT and other generative AI programs raising questions about the future of, well, everything, it suddenly feels as if AI is everywhere, all at once.

The technology is advancing at breakneck speed, with uncertain implications and outcomes.

Its game-changing promise to improve efficiency, bring down costs and accelerate research and development has lately been tempered by worries that these complex, opaque systems may do more societal harm than economic good.

With very little oversight, private companies use AI software to make determinations about health and medicine, employment, creditworthiness, and even criminal justice, without having to answer for how they're ensuring that programs aren't encoded, consciously or unconsciously, with structural biases.

Unprecedented capabilities

Its growing appeal and utility are undeniable. Worldwide, business spending on AI is expected to hit $98 billion (Sh12.3 trillion) this year and $160 billion (Sh20.1 trillion) by 2024.

AI is itself becoming more present in our lives, offering unprecedented capabilities to help solve our most complex and important problems.

Equally, ethical questions around AI will naturally arise.

AI presents three major areas of ethical concern for society: privacy and surveillance; bias and discrimination; and, perhaps the deepest and most difficult philosophical question of the era, the role of human judgment.

In 2018, Microsoft CEO Satya Nadella highlighted the need for greater diversity in the data used to train AI systems, as well as more transparency and accountability in the decision-making processes.

This was his remedy for potential bias: AI systems are only as unbiased as the data they are trained on, so any biases in that data will be reflected in the system's decision-making.

This can lead to discrimination against certain groups, such as people of colour, women, and other marginalised communities.

Consider an AI system that is used to approve or deny loan applications.

One question to ask is whether the system disproportionately approves loans for applicants of a certain race or gender, for no reason other than their race or gender.
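
A hypothetical illustration of that check: the sketch below compares approval rates across groups in a decision log and applies the "four-fifths" rule of thumb used in some fair-lending reviews. The data, column names and threshold are illustrative assumptions, not a regulatory standard.

```python
# A minimal sketch of a disparate-impact check on loan decisions.
# The decision log, column names and the 80% threshold below are
# illustrative assumptions, not drawn from any real system.
import pandas as pd

# Hypothetical decision log: one row per applicant.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0, 1, 0],
})

# Approval rate per group (a demographic-parity comparison).
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Four-fifths rule of thumb: flag if any group's approval rate
# falls below 80 per cent of the highest group's rate.
if rates.min() < 0.8 * rates.max():
    print("Potential disparate impact: review the model and its training data.")
```

A disparity flagged this way is not proof of discrimination, but it is a signal that the model and the data it was trained on deserve closer scrutiny.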

In his book Hit Refresh, Nadella further shares his vision for the future of technology and how AI can be a force for good, but only if it is developed and deployed in a responsible and ethical manner.

He also discusses ethical dilemmas brought about by AI, such as job displacement, privacy, responsibility and oversight, and the ethical use of the technology.

These are just some of the many ethical concerns related to AI. So, what do we mean when we refer to ethical AI?

At its core, ethical AI is about considering the impact the use of AI systems will have on real people.

One framework for approaching this topic is to start by identifying who might be impacted, and how, and then taking steps to mitigate any potential adverse impact. Another question to consider is how the system arrives at its output. Understanding the basis for its output would help us check for bias.

Oversight and regulation

Given its power and expected ubiquity, some argue that the use of AI should be tightly regulated. But there has been little consensus on how that should be done and who should make the rules.

Thus far, companies that develop or use AI systems largely self-police, relying on existing laws and market forces, like negative reactions from consumers and shareholders.

As I read through the EU's Ethics Guidelines for Trustworthy AI, my overall impression was of aspiration rather than enforcement, leaving a lot more questions and glaring gaps in oversight, such as:

Privacy and data ethics

AI models are trained on huge datasets built and stored over long periods of time. How, then, do we ensure that appropriate consent has been obtained for any personal data used to train a model?

The 2019 Kenya Data Protection Act makes only passing reference to this kind of technology: Section 35 provides data subjects with safeguards against automated individual decision-making, including AI-assisted profiling:

"Every data subject has a right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning or significantly affects the data subject."

Security

What security controls need to be put in place to protect the model from hacking? More importantly, should we think of insuring our AI models against cyber threats?

Bias and fairness

Does the model output discriminate on the basis of a protected class?

How do we ensure that the datasets that the models are trained on are sufficiently representative and diverse?
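
One hedged way to approach the representativeness question is to compare group shares in the training data against a reference population, as in the sketch below; the group labels, the assumed population shares and the 20 per cent tolerance are all illustrative.

```python
# A minimal sketch comparing group shares in a training set against
# a reference population, to spot under-representation. The labels,
# population shares and tolerance are illustrative assumptions.
from collections import Counter

training_groups = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
population_share = {"A": 0.60, "B": 0.30, "C": 0.10}  # assumed census figures

counts = Counter(training_groups)
total = sum(counts.values())

for group, expected in population_share.items():
    observed = counts.get(group, 0) / total
    if observed < 0.8 * expected:  # flag a shortfall beyond 20 per cent
        print(f"Group {group} is under-represented: "
              f"{observed:.0%} of the data vs {expected:.0%} of the population")
```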

Transparency

Do we understand how the system reached its output? Can we offer a plain-language explanation to individuals who are impacted by the system's output?

Are the individuals who interact with the system aware that they are interacting with an AI system?

Human oversight and accountability

Is there a "human in the loop" who oversees and approves the model output using informed judgment? Is there a system of logging and documentation to track the system's output over time?
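
As a sketch of what such logging might look like, the snippet below appends each model output and the reviewer's final decision to a simple audit file; the field names, reviewer identifier and file format are hypothetical.

```python
# A minimal sketch of an audit log pairing the model's output with
# the human reviewer's final decision. The field names and the
# append-only JSON-lines format are illustrative assumptions.
import json
from datetime import datetime, timezone

def log_decision(model_output: dict, reviewer: str, approved: bool,
                 path: str = "decision_log.jsonl") -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_output": model_output,  # what the system recommended
        "reviewer": reviewer,          # the human in the loop
        "human_approved": approved,    # the final, accountable decision
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: a reviewer overrides the model's recommendation to deny.
log_decision({"applicant_id": "X123", "recommendation": "deny", "score": 0.41},
             reviewer="j.mwangi", approved=True)
```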

Performance and safety

Before an AI system is deployed, how do we ensure that an appropriate level of testing and validation has been performed, so that the model's output is sufficiently accurate?

Is there a plan in place for ongoing testing and monitoring to ensure the model continues to function properly over time?
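
One way to picture both checks is a simple accuracy gate that runs before release and can be re-run on fresh, labelled samples on a schedule; the 90 per cent threshold and the toy classifier below are illustrative assumptions, and a real deployment would use metrics suited to the task.

```python
# A minimal sketch of a pre-deployment accuracy gate. The metric and
# the 90% threshold are illustrative assumptions, not a standard.
def accuracy(model, features, labels):
    predictions = [model(x) for x in features]
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

def release_gate(model, holdout_x, holdout_y, threshold=0.90):
    score = accuracy(model, holdout_x, holdout_y)
    if score < threshold:
        raise RuntimeError(
            f"Blocked: holdout accuracy {score:.1%} is below the {threshold:.0%} gate")
    return score

# Toy stand-in classifier and a labelled holdout set.
toy_model = lambda x: x > 0.5
xs = [0.2, 0.7, 0.9, 0.4]
ys = [False, True, True, False]
print(release_gate(toy_model, xs, ys))  # 1.0 clears the gate

# Re-running release_gate on newly labelled production samples on a
# schedule turns the same check into an ongoing monitoring step.
```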

Sustainability

Is the potential environmental impact of the energy required to train the model being taken into account?

The ethical considerations surrounding AI are complex and multifaceted and will continue to evolve as the technology itself advances.

However, it is clear that a collaborative, data-driven approach that places the needs and interests of all stakeholders at the centre is essential.

By embracing responsible AI and putting ethics and regulation at the forefront, organisations can harness the power of AI in a way that benefits society as a whole, while minimising the risks and challenges that arise from its use.

Ultimately, the success of AI will depend on our ability to manage its impact on society in a responsible and ethical way.

The writer is the Group CEO of Jubilee Insurance
