Task force on guidelines for artificial intelligence use by media timely


On Monday, the Media Council of Kenya (MCK) inaugurated a task force of experts mandated to develop guidelines for the use and application of artificial intelligence (AI) in Kenya's media landscape.

According to David Omwoyo, CEO of MCK, the guidelines will create a mechanism to ensure the appropriate and ethical integration of AI, social media and data in professional journalism.

AI can be described as algorithms, programmes or machines capable of completing tasks that would otherwise require cognition. In essence, computers perform tasks once thought to require human intelligence, without possessing that intelligence themselves.

Since the launch of ChatGPT and the Large Language Models (LLMs) that followed, the world has come to grips with generative AI that is accessible to everyone, including journalists. These deep-learning systems, trained on enormous datasets drawn from books, articles, websites and other text-based sources, can produce high-quality, previously unseen content, including text, imagery, audio and synthetic data. What does this mean for journalism in Kenya?

Like any technological transformation in history, AI is a double-edged sword that builds and destroys. It is developing so rapidly that understanding its ramifications, let alone regulating it, is difficult.

AI benefits media houses by analysing large quantities of data, enabling journalists to uncover hidden insights that support in-depth investigative reporting. It can also transcribe audio, generate image descriptions and convert text to speech.

On the flip side, there are concerns that automatically generated news articles and reports may fail to meet standards such as objectivity and authenticity. AI-based article writing or video editing could also reduce the demand for human media professionals, displacing them.

AI can generate compelling deepfake videos and images that spread misinformation and disinformation. Public opinion can thus be manipulated, discord sown, and trust in media and information sources undermined. In volatile, conflict-prone areas, a single story, video or image about what a leader has done or said can spark conflict.

Under the law, media organisations must provide accurate and trustworthy information to the public. There is, however, a risk of inaccurate or biased information being disseminated when AI is used to generate content. AI-generated content therefore needs robust editorial oversight and verification: editors and journalists must ensure it is accurate and meets journalistic standards.

Decisions made by AI algorithms are opaque and complex, shaped by the training data, the layers of the algorithms and the perspectives of their creators, making it difficult to understand how the systems work or why they reach particular conclusions. In the media, this opacity raises concerns about bias, discrimination and a lack of accountability.

Because AI relies on data, including personal data, there are concerns about user privacy: personal data might be used without consent, leading to breaches that violate our data protection frameworks. By drawing profound correlations from seemingly innocuous bits of data, AI can uncover some of our most intimate secrets, violating our privacy if they are disclosed or needlessly accessed.

AI innovations are raising new questions about how copyright law will apply to content created or used by AI, including authorship, infringement and fair use. Determining the intellectual property rights in AI-created content is a legal challenge for the media industry.

The task force has its work cut out for it: it must grapple with these questions while weighing global efforts and frameworks, rapid technological developments, and the domestic context and laws.
