Don't ignore warnings over potential dangers of artificial intelligence


On Wednesday, more than 1,000 tech industry leaders signed an open letter urging Artificial Intelligence (AI) labs to halt the development of advanced forms of AI. To many, this comes as no surprise: the theoretical physicist Stephen Hawking, one of the most prolific geniuses of the last century, warned that AI could eventually end the human race if it were ever developed to the extent that it could take off on its own and redesign itself at an ever-increasing rate.

Recently, several companies have deployed AI systems, such as Microsoft's Bing, Google's Bard and OpenAI's ChatGPT, that can hold humanlike conversations, write essays on almost any subject and generate code. AI is also becoming ubiquitous as automated decision-making is adopted by governments, corporations and businesses to make crucial decisions that have real-life consequences for human beings, their lives and their enjoyment of rights.

The increasing availability of large datasets, improved algorithms, improved computer hardware and substantial corporate investment have all accelerated progress in recent years, and there are few signs that progress will slow or stall in the near future.

As with any advanced technology, there are potential dangers associated with AI, especially the unintended consequences of systems that can make decisions and take actions on their own. This scenario is described as the "black box effect", in which AI models arrive at conclusions or decisions without revealing how they were reached.

Even the engineers who developed such systems often cannot explain how they reached certain conclusions. Policymakers and concerned industry leaders are proposing regulations that would permit only explainable AI, built so that a typical person can understand its logic and decision-making process: literally, the antithesis of black box AI. Another issue is that AI systems are trained to make inferences that may reflect the biases of their engineers or of the data they were trained on.

If the data contains biases, those biases can be amplified by the AI system, leading to discrimination and the perpetuation of unfair practices. For example, job recruitment systems have been found to be biased against minorities. Sadly, algorithmic discrimination often goes undetected, and in most cases the victims are unaware that they are being discriminated against.

A system could learn to make unfair decisions if its training data is not inclusive and balanced enough, which may explain why many facial recognition systems struggle to accurately identify racial minorities, especially black people.

Another concern is that as AI becomes more sophisticated, it could replace human workers in many industries. This could lead to widespread job losses and economic disruption, rendering entire professions redundant. Especially at risk are white collar jobs, such as computer coders, writers and journalists, who are already witnessing machines that can perform some of their tasks.

Some experts also warn that AI-powered autonomous weapons could trigger the next arms race. These weapons could make decisions on their own and potentially cause unintended harm. AI is also problematic because it is data-driven and may be used in ways that infringe on privacy and other human rights. Corporations, such as social media companies, can now collect seemingly benign data about their users, allowing them to make inferences about their political ideology, health conditions and other sensitive matters.

States must adopt and enforce a smart mix of rights-based policies and legal measures to govern the development, deployment and use of new technologies like AI before it is too late.