Ethics should guide use of artificial intelligence

Artificial Intelligence (AI) refers to the theory and development of computer systems able to perform tasks that normally require human intelligence.

AI makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks.

Using these technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognising patterns in the data.
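
To make that concrete, here is a minimal sketch, assuming Python and the scikit-learn library (neither is named in this article), of a computer being "trained" to recognise patterns in data:

```python
# An illustrative sketch of "training on data": the model is shown labelled
# examples and learns patterns it can apply to examples it has never seen.
# Python and scikit-learn are assumptions for illustration only.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 small images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0
)

model = LogisticRegression(max_iter=5000)  # a simple statistical model
model.fit(X_train, y_train)                # "learning": fitting patterns in the data
print(f"Accuracy on unseen digits: {model.score(X_test, y_test):.2f}")
```

The point is not the particular model but the pattern: the system's competence comes from data and statistical fitting, not from hand-written rules.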

AI's recent rise has been driven by increased data volumes, advanced algorithms, and improvements in computing power and storage.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn’t that scary – or quite that smart. Instead, AI has evolved to provide specific benefits in every industry.

The proliferation of personal computers, laptops and cell phones has changed our lives, but largely by replacing or augmenting systems that were already in place.

Email supplanted the post office; online shopping replaced the local supermarket; digital cameras and photo sharing sites pushed out film and bulky, hard-to-share photo albums. AI presents the possibility of changes that are fundamentally more radical.

Fear of a mythical “evil AI” derived from reading too much sci-fi won’t help. The debate should be about the values instilled in the people and institutions creating this technology.

In his book Machines of Loving Grace, John Markoff writes, ‘The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems.’ What those values are is an intriguing question, and one that our society must discuss and answer collectively.

What are our values? And what do we want our values to be? Ethics is about having an intelligent discussion, not about answers; it’s about having the tools to think carefully about real-world actions and their effects, not about prescribing what to do in any situation. Discussion leads to values that inform decision-making and action.

The word “ethics” comes from “ethos,” which means character. “Morals” comes from “mores,” which basically means customs and traditions. If you want rules that tell you what to do in any situation, that’s what customs are for.

If you want to be the kind of person who exercises good judgement in difficult situations, that’s ethics. Doing what someone tells you to do is easy. Exercising good judgement in difficult situations is a much tougher standard.

Exercising good judgement is hard, in part, because we assume that the right answer has no bad consequences; but that is not the kind of world we have.

We’ve damaged our sensibilities with medical pamphlets that talk about effects and side effects. There are no side effects or unintended consequences; there are just effects and consequences.

All actions have effects and consequences. The only question is whether the negative effects or consequences outweigh the positive ones.

That’s a question that doesn’t have the same answer every time, and doesn’t have to have the same answer for every person. And doing nothing because thinking about the effects makes us uncomfortable is, in fact, doing something.

More important

The effects of most important decisions aren’t reversible. You can’t undo them. The myth of Pandora’s box is right: once the box is opened, you can’t put the stuff that comes out back inside.

But the myth is right in another way: opening the box is inevitable. It will always be opened, if not by you then by someone else.

Therefore, a simple “we shouldn’t do this” argument is always dangerous, because someone will inevitably do it, for any possible “this.”

You may decide not to work on a project, but any ethics that assumes people will stay away from forbidden knowledge is a failure.

It’s far more important to think about what happens after the box has been opened. If we’re afraid to do so, we will be the victims of whoever eventually opens the box.

Finally, ethics is about exercising judgement in real-world situations, not contrived situations and hypotheticals.

The latter are of very limited use, if not actually harmful. Decisions in the real world are always more complex and nuanced.

We should be completely uninterested in whether a self-driving car should run over the grandmother or the baby.

An autonomous vehicle that can choose which pedestrian to kill surely has enough control to avoid the accident altogether.

The real issue isn’t whom to kill, since either option forces an unacceptable judgement about the relative value of human lives, but how to prevent accidents in the first place.

Above all, ethics must be realistic, and in our real world, bad things happen.

Mr Wanjawa teaches at the School of Humanities and Social Sciences, Pwani University. [email protected]