Countering fake news and disinformation is complicated, the solution elusive 

Two decades back, a minor story emerged from Khartoum. A mysterious West African man had allegedly been visiting local merchants and offering a friendly handshake that would cause the merchant’s penis to wither. Panic ensued, with warnings passed on by SMS. The disjunct between the message and the medium was glaring. 

As the commentator Mark Steyn wrote: “The telling detail of the vanishing penis hysteria is that it was spread by text message. You can own a cellphone yet still believe that foreigners are able, with a mere handshake, to cause your penis to melt away.” 

The point is that the march of technology has never implied greater rationality or understanding of the world, whether in the past or now. The Egyptian pharaohs were loath to record their setbacks, hoping to communicate a legacy befitting their divinity. Radio broadcasts could fortify dictators and – as the Rwandan genocide showed – serve as a tool to encourage mass murder. As digital access grows, both the possibilities and the threats must be assessed. 

And growing it is. Digital technology has long been seen as a step-changing opportunity for Africa. It represents new, cost-effective means of connecting a continent whose infrastructural deficiencies have long retarded its development. The key to this has been the mobile phone. 

According to the International Telecommunication Union, in 2023 some 523 million people, or 63% of the continent’s population, owned a mobile phone. The proportion of Africa’s population covered by a mobile network reached 93.3% that year, while coverage by at least a 3G network stood at 83.6%, up from 22.2% in 2010. Perhaps most importantly, 37% of Africa’s population used the internet, up from a mere 6% in 2010. 

Isabel Bosman of the South African Institute of International Affairs notes the expansive use of this technology in both commercial and socio-political spaces. “Civil society,” she remarks, “has been using social media as a platform for sharing ideas, civic engagement and education, and increasingly as a tool to hold government accountable.” 

There is an underside, however. Bosman adds: “It is also interesting to note that with these increases in mobile phone and internet usage, the internet and social media often become the first avenues authoritarian regimes seek to close when protests or other major political events occur.” Technology is not always the companion of liberty and democracy. 

During the early years of the AIDS epidemic, Soviet intelligence seeded a hoax in the media of the developing world that the disease was a US bioweapon. It was intended to play on tropes of imperialism, racism, and genocide, and for those sharing this outlook, it was a convincing line. One proponent was the exiled South African communist intellectual Jabulani Nobleman Nxumalo, who advanced the claim in a 1988 article, ‘AIDS and the Imperialist Connection’. 

Years later, some would point to it as the origin of AIDS denialism in the government of President Thabo Mbeki. Certainly, the narratives, involving racist assumptions about Africans and profiteering corporations, showed a great deal of continuity. 

To understand this, recall the prominent phrase of the past decade: “fake news”, the manufacture and spread of false information masquerading as the truth. Fake news is not merely empirically incorrect; it is deliberately produced and disseminated. In its more benign form, this may be done for simple amusement; such “urban legends” are common in Africa. 

But as the case of AIDS illustrates, information has a long pedigree as a weapon. Strategically placed, it can influence decision-makers. Spread within a suitable information ecosystem, it can shift public opinion. And where information is misleading, it can manipulate policy and public action. 

Africa in Fact spoke to Steven Boykey Sidley, author, technology entrepreneur and Professor of Practice at the Johannesburg Business School, an institution specialising in the digital economy. Politics and elections, he notes, are intertwined with communication, which is now a function of technology. “Elections are always accompanied by spin,” he says, “and this is often ugly. This gets uglier as polarisation rises, and the spin itself gets uglier.” 

Nevertheless, where channels for distributing information were limited – as they were historically, when a story needed space in a printed newspaper – the information environment was at least contained, and differing views were typically rooted in a common understanding of basic facts. As news and information have moved online, these limits no longer apply. 

Online platforms gave millions of people an unmediated voice; with minimal investment, new media “outlets” could be set up to cater to distinct political perspectives. Established media groups had to adapt to these new realities. Straight, factual reporting came under pressure, partly to satisfy increasingly polarised audiences. Narrative creation assumed heightened importance in a fragmented and contested information space. 

Social media platforms, most prominently Twitter/X, accelerated this process. Designed for short, appealing messages, they were particularly useful for spreading visual and audio content – like the ubiquitous meme and video clip – that required scant reflection by the user. 

Hence the centrality of the internet-connected mobile phone. It is a technology that fits a planet’s worth of information into the user’s hand, allowing virtually up-to-the-minute interaction, typically with those sharing one’s outlook. Frustration, indignation, and partisanship are constantly on tap. Naturally, this can be applied to politics. 

Examples of this are legion. In South Africa during Jacob Zuma’s presidency, a network of compromised, politically connected businesspeople pushed a narrative presenting themselves as victims of “white monopoly capital”. Spread by a combination of fake websites and social media accounts, automated applications, and some cooperative activists, this was designed to sow societal division, stirring up historically attuned bitterness. 

In the 2021 election in Uganda, social media accounts sprang up promoting the candidacy of the incumbent president, Yoweri Museveni, and his National Resistance Movement. Some of these were run directly by government institutions, while others were purportedly independent. A particular target was the opposition presidential candidate Bobi Wine, whom the campaign depicted as a homosexual, implying that Wine was fundamentally opposed to the values espoused by the society he aspired to lead. 

During last year’s election in Nigeria, the electorate was hit by videos of Hollywood celebrities, along with business mogul Elon Musk and former US president Donald Trump, applauding Labour Party presidential candidate Peter Obi. These were weighty endorsements for a candidate with a particular appeal to aspirant young people. They were also bogus. Each of these was produced by artificial intelligence and only genuine to the extent that they impersonated their subjects. 

Deepfakes seem to have emerged around 2017. Their initial application was in the adult entertainment space, with images of mainstream celebrities manipulated into explicit footage. Fakes of celebrities and historical personalities have existed for decades, typically as “photoshopped” still images. AI, however, allowed this to be done with a degree of intricacy and in a format that had never been possible before. 

For Ivo Vegter, freelance journalist and technology aficionado, this is the nub of the problem. “It is now possible to use and alter video footage to lip-synch. During the opening phase of the Ukraine war, there was a video showing Volodymyr Zelenskyy telling Ukrainian forces to lay down their weapons. The way the body moved made it obvious that this was a fake, but it raises questions about the future. Technology only gets better.” 

Indeed. Sidley points out that in the 19th century the photograph became the “gold standard” for communication, able to reproduce and convey information in a manner that was hitherto unimaginable. Video footage took that to a new level, and AI’s ability is to simulate reality itself. “Audio is there already,” he remarks, “and video is getting there with existing apps.” 

“AI is a completely different kettle of fish from the fakes we’ve seen in the past; it has changed the game,” he says. “AI algorithms can target individuals as individuals, and they do this at an exponential rate. Also, AI can produce content autonomously, at scale and hyper-realistically. The possibilities for propaganda are enormous. Think of it as a factory that churns things out at a rate that was never possible before.” 

AI-generated deepfakes have the potential to create alternative realities tailored to the predilections of almost any person, indistinguishable from the genuine articles. Indeed, since AI is itself able to learn, it can constantly improve its effectiveness in an extraordinarily short space of time. So, the future might not just be about fake footage of riots or press conferences that never took place – it could be an entire virtual world with its own invented celebrities and commentators, faux media houses and academic journals, and even AI-generated university websites for added credibility. 

So far, politically motivated deepfakes have been rare in Africa, but there is plenty of evidence of their uptake. Following the coup in Burkina Faso in 2022, videos began to do the rounds on social media in which enthusiastic pan-Africanists endorsed the military’s actions and called on Burkinabe to do likewise. Yet the pronunciation was off, and the video images were poorly aligned with the words. 

These were made using software developed by the AI firm Synthesia; the “pan-Africanists” were avatars. It was unclear who was behind the fakes, though they seemed to be pushing a line about pernicious western influence and African sovereignty, along with the dispensability of democracy. There has been much speculation that these were Russian disinformation operations, a legacy of the expertise developed during the Cold War. 

In 2019, President Ali Bongo of Gabon delivered his customary New Year’s broadcast. There had been speculation about his health for some time, and his appearance in the video encouraged suspicion that it was not really him at all. The footage was alleged to be a deepfake, and parts of the army mutinied in an attempt to seize power. 

Bongo remains alive (he left office in 2023). This vignette illustrates the knock-on effects of deepfake technology: how it poisons the communication and information ecosystem. This is what the American academic Danielle Citron has termed “the liar’s dividend”: such technology can call into question virtually all claims of fact, empowering those seeking to stoke division and conflict. Reality becomes increasingly subjective. 

Vegter comments that fake information has found a willing audience in more developed and technologically engaged societies – sophistication and familiarity with technology have offered little protection. In fact, it may even aggravate the problem as more activity is carried out in these spaces. A degraded information environment seems capable of sweeping all before it, irrespective of context. 

For Africa, the threat to democracy is especially acute. For one thing, despite widespread support for democracy as an ideal, its actual practice is uneven, sometimes amounting to no more than elections of dubious probity. Indeed, the continent has suffered some dreadful reverses, including the return of the coup. Challenged societies are at particular risk of polarisation and politically charged violence. 

With global tensions on the rise, Africa has become a site of contestation for global influence: authoritarian powers, such as Russia and China, have offered kindred governments on the continent not just economic and security support, but also ideological legitimation. They have also been active in trying to shift public opinion through information operations. 

Africa’s vulnerability is highlighted by a recent report by the information technology security firm KnowBe4. Surveying adults across five countries – Mauritius, Egypt, Botswana, South Africa, and Kenya – it found that 51% said they were aware of deepfakes, a further 21% were aware of them but without much understanding, and 28% were not aware of them at all. Around three-quarters admitted to having been duped by one. Expect this number to rise as the technology improves and ill-intentioned users become more adept at deploying it. 

Malleable communications and a toxic information environment are frightening resources for opportunistic politicians, identitarian hustlers, and malign external actors. It is hard to see how a competitive political system can be sustained under these conditions. Just as communication was foundational to civilisation, rendering it incoherent would bring its very endurance into question. 

If the problem is complicated, the solution is elusive. Fake news and AI-generated fake media are inserted into heated and divided environments, spinning narratives that are difficult to counter. Not only can they constitute a convincing simulation of reality, but the very act of challenging them requires that they be referenced and described, perhaps by linking to the images or video material in question. Unfortunately, this provides another avenue for their spread, or at least allows a trace of their influence to endure. 

Fact-checking services are one response. The Real411 initiative in South Africa – driven primarily by the NGO Media Monitoring Africa – encourages people to report disinformation (and other pathologies); reports are then checked and rated. Recognising the danger disinformation poses to a democracy, the initiative has partnered with the country’s electoral commission and has dealt with an extraordinary volume of allegations. 

However, the weaknesses here are obvious. Not only is it unclear who is conducting the assessments (there may be biases at work), but it is likely that AI’s capacity to produce such content will soon simply outstrip fact-checkers’ capacity to process it. 

Observers and media experts are generally at a loss as to how to counter all of this with any confidence. The deployment of technology would be an essential countermeasure. Vegter says that using digital signatures to verify the provenance of information is a good start. A consortium of companies and NGOs, the Content Authenticity Initiative, is working on this, producing open-source tools to assist in proving that content is legitimate.
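
As a rough illustration of the principle Vegter describes, the sketch below shows, in Python, how a publisher might sign a piece of content at the point of creation and how anyone holding the publisher’s public key could later confirm that it has not been altered. It is a minimal sketch of digital signatures for provenance, not the Content Authenticity Initiative’s actual tooling; the function names and workflow are illustrative assumptions, built on the widely used “cryptography” library.

```python
# Minimal sketch: signing content at creation and verifying it later.
# Assumptions: the publisher's key pair is trusted and distributed separately;
# function names are illustrative, not part of any CAI/C2PA specification.

from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature


def sign_content(private_key: ed25519.Ed25519PrivateKey, content: bytes) -> bytes:
    """The publisher signs the raw bytes of an image or video when it is created."""
    return private_key.sign(content)


def verify_content(public_key: ed25519.Ed25519PublicKey,
                   content: bytes, signature: bytes) -> bool:
    """Anyone with the publisher's public key can check that the bytes are
    unchanged since signing; any edit to the content invalidates the signature."""
    try:
        public_key.verify(signature, content)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    publisher_key = ed25519.Ed25519PrivateKey.generate()
    original = b"newsroom photo bytes plus capture metadata"
    signature = sign_content(publisher_key, original)

    print(verify_content(publisher_key.public_key(), original, signature))            # True
    print(verify_content(publisher_key.public_key(), original + b"edit", signature))  # False
```

In practice, provenance schemes of this kind aim to embed signatures and capture metadata in the media file itself and to chain them through each edit; the harder problems are distributing trusted keys and persuading consumers to check them at all.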

However, even solutions like this come up against people’s desire to accept information that confirms their views and that they deem credible. Sorting through deception, distortion, and disinformation is tedious and time-consuming, and beyond the interests of many media consumers. A more vigilant media culture must be cultivated to meet this threat, and whether this is possible is, at best, uncertain. 

Can it be done? Reflecting on the dangers posed by a toxic information environment – the liar’s dividend writ large – Professor Sidley is deeply concerned. “I’ve got nothing for you,” he sighs. 

Terence Corrigan is an independent researcher, political consultant, writer, editor and illustrator. He is currently a research fellow at the South African Institute of International Affairs (SAIIA) in its Governance and African Peer Review Mechanism Programme and a policy fellow at the Institute of Race Relations (IRR).
