Establishing and enforcing AI regulation is as complex as the tech itself 

Used in the public interest, generative AI has the potential to increase access to information, enhance freedom of expression, and expand knowledge about healthcare, education, agriculture, transportation, and other issues. 

However, recent studies show a dramatic rise in AI-generated content presented as authentic news in Africa, fuelling an explosion of misinformation and disinformation. Discussions of AI and related risks often feature calls for regulation. However, establishing and enforcing AI regulation is as complex as the technology itself. 

The World Economic Forum's (WEF) 2024 Global Risks Report flags AI-powered disinformation as a threat to democracy and a polarising force that poses serious risks to economies. The report ranks misinformation and disinformation as the most severe risks over the next two years, highlighting how rapid technological advances create new problems or worsen existing ones. 

In March this year, the Africa Center for Strategic Studies (ACSS) reported that the proliferation of disinformation poses a fundamental challenge to stable and prosperous African societies. Disinformation campaigns for political purposes are increasing, with 189 documented campaigns in Africa, nearly quadruple the number reported in 2022. Given the opaque nature of disinformation, this figure is surely an undercount, the ACSS report noted. 

In a recent interview with the Voice of Africa, however, Philip Thigo, Kenya's special envoy on technology and a member of the UN Secretary-General's advisory body on AI, called on users to embrace artificial intelligence and the opportunities it has created, particularly in the employment sector, to improve lives. 

Indeed, AI has the potential to do profound good for the world, including Africa: to advance human rights and dignity, develop medicines to treat and cure diseases, improve agricultural production, and help with planning and disaster response. 

However, to harness AI’s potential effectively, Africa must establish mechanisms to address its socioeconomic challenges. This requires more collaboration and sustainable engagement between government, industry, academia, and civil society. 

The Malabo Convention, the African Union (AU) legal framework for data protection that entered into force in 2023, serves as a baseline for AI policy in Africa. The African Union Commission is developing a continental AI strategy to outline the potential benefits of the emerging technology for African development and the legal and regulatory safeguards needed to protect users and societies. 

There has been other progress in AI regulation in Africa, too. Earlier this year, the AU published a White Paper titled 'Regulation and Responsible Adoption of AI for Africa towards Achievement of AU Agenda 2063'. The White Paper is expected to bring greater policy coherence and provide frameworks for an AI regulatory regime aimed at ensuring data safety, security, and protection to promote the ethical use of AI. 

The 2023 ‘State of AI in Africa Report’ by the Centre for Intellectual Property and Information Technology Law (CIPIT) reveals, however, that Africa still lags in creating an environment conducive to a responsible AI ecosystem. This includes financial support, AI enablers, and other incentives. 

However, it’s not all negative. African countries are increasingly developing, or looking to develop, national AI strategies to guide the technology's adoption. Countries such as Mauritius, Egypt, Zambia, Tunisia, and Botswana have created national AI programmes, and others like South Africa, Nigeria, Ghana, and Kenya have approved data privacy legislation that could govern AI technology. Yet these policy frameworks are nascent, leaving AI deployment largely unregulated. 

African countries will have to develop the enabling environment and incentives essential for AI growth and ensure an ethical and responsible ecosystem similar to those in leading jurisdictions such as China, the United States, and the European Union (EU). The EU Artificial Intelligence Act (EU AI Act), passed on March 13 this year, is the first comprehensive legal framework on AI worldwide and is viewed positively across the global AI landscape. 


But, as noted at an event co-hosted in March this year by Global Affairs Canada and the International Development Research Centre in Nairobi, Africa should be cautious about the “Brussels effect” — the extent to which EU regulations shape global norms because companies doing business with the bloc adopt them elsewhere. Speakers said African countries should not be forced to align with global regulations like the General Data Protection Regulation (GDPR) if they do not fit Africa’s communal considerations or recognise the nuances of marginalised communities. It is crucial to tailor AI policies to local realities. 

That said, African countries must accelerate the rate of initiating AI policy and regulatory frameworks, particularly those addressing responsible AI. Strategic policies that address the responsibility aspect of AI are deficient. This need has been discussed at various forums, including the Africa AI Conference in Rwanda last year and the Connected Africa Summit in Kenya this year. 

There are undeniable challenges arising from regulatory gaps. However, regulators must also weigh censorship concerns, balancing freedom of expression against the need to tackle harmful misinformation. In an interview with Africa in Fact, Natasha Karanja, a tech policy researcher, emphasised the need for inclusive conversation in developing regulations to tackle AI-generated misinformation. 

“Inclusive conversations in developing AI strategies, considering voices from marginalised groups, are essential, and so is the need for a multi-stakeholder approach to inform policy,” says Karanja. “Policy and strategy development should be driven by a clear understanding of specific objectives, challenges, and opportunities AI presents in the local context.” 

Research ICT Africa’s policy brief, ‘Navigating the Intersection of Artificial Intelligence and Economic Development in Africa: Policy Requirements and Implications’, published in April, shows that Africa faces significant disadvantages and disparities in adopting and using AI technologies compared to the Global North. For example, the capital investment landscape in AI is heavily concentrated in North America and Asia, with comparatively low investment in Africa. 

The development of AI technology relies heavily on large datasets, which can perpetuate existing inequalities and biases, especially in regions like Africa, where data may not be adequately representative or regulated. Notably, AI tools make decisions based on the datasets they are trained on, yet information sourced from African countries forms only a small part of the data used by AI models. 

The Research ICT Africa policy brief underscores the need for an enabling environment that mitigates risks. It recommends robust regulatory frameworks to address AI-related harms: holding providers accountable, nullifying liability exclusions, mandating algorithm disclosure for high-risk systems, and establishing a presumption of fault where AI causes harm. It also calls for distinguishing the legal position of developers from that of service providers and granting qualified immunity to compliant researchers. 

The UN University Centre for Policy Research’s policy brief, ‘Artificial Intelligence-Powered Disinformation and Conflict’, highlights how disinformation on social media has fuelled political conflicts in sub-Saharan Africa. The phenomenon has become more aggressive with the advent of generative AI, allowing false and dangerous content to spread rapidly, even to those without internet access. A key recommendation is that disinformation-related efforts should work within a multilateral system across global, regional, and national initiatives to govern AI and digital spaces. 

The EU AI Act and the UN General Assembly resolution on AI, the latter adopted in March this year, are seen by some as models for Africa to develop and adopt its own regulatory framework. Some experts argue that Africa needs such a framework to prevent AI manipulation from undermining election integrity, information integrity, and democracy. 

But Giovanni De Gregorio, PLMJ Chair in Law and Technology at Catolica Global School of Law and Catolica Lisbon School of Law, speaking to Africa in Fact, cautions against following Europe’s path on AI regulation without adaptation. He notes that effective AI regulation in Africa must consider the underrepresentation of African datasets, which perpetuates biases in global AI systems. At the same time, effective oversight should account for local context and enforcement challenges. 

“While government regulation is necessary to control hate speech and disinformation, some laws might threaten online freedom of expression and access to information,” notes De Gregorio. 

Landry Signe, co-chair of the World Economic Forum regional action group for Africa, agrees that Africa is lagging in investment and regulation. He emphasises that AI’s complexity makes holistic governance challenging and advocates for strategies to leverage AI’s benefits rather than just prevent harm. 

Mulle Musau, the national coordinator of the election observer group in Kenya, in an interview with Africa in Fact, proposes regulatory frameworks to govern the use of AI in generating and spreading disinformation. He believes collaboration between tech giants, government institutions, and civil society is crucial for combating disinformation while upholding freedom of expression. 

“Comprehensive, clear, and enforceable regulatory frameworks are essential,” says Musau. “The African Union’s task force on AI is a positive direction that can be cascaded to regional blocs and individual countries. Transparency and accountability are vital for AI’s ethical deployment.” 

Tech experts like Shain Rahim, Cisco’s Country Manager for Kenya, argue against stringent AI regulations, fearing they might stifle innovation. Instead, they propose regulatory sandboxes and innovation hubs to facilitate experimentation with and testing of AI applications. 

During the Connected Africa Summit 2024 held in Nairobi in April this year, discussions favoured government support over strict regulation to foster seamless AI adoption and promote the fourth industrial revolution. Clear regulatory frameworks, robust data protection, and privacy regulations are essential to safeguarding individual rights and promoting trust in AI systems. 

Raphael Obonyo is a public policy analyst. He’s served as a consultant with the UN Department of Economic and Social Affairs (UNDESA). An alumnus of Duke University, he has authored and co-authored numerous books, including Conversations about the Youth in Kenya (2015). He is a TEDx fellow and has won various awards.




© 2023 Africa In Fact. All Rights Reserved.