On 3 November 2021, Professor Meareg Amare, a respected chemistry academic at Bahir Dar University in Ethiopia’s Amhara region, was gunned down outside his home by men wearing special forces uniforms. He lay dying in the street for seven hours while his killers warned onlookers they too would be shot if they offered medical assistance. His crime? Being Tigrayan in a region consumed by ethnic violence, and having his photograph, address, and fabricated accusations against him spread virally on Facebook weeks earlier. 

What news outlets like TIME and CNN documented in their coverage of Professor Meareg’s murder was Facebook’s algorithmic amplification of hate speech that led to a real-world killing. What they didn’t uncover – and what my research with the Data Workers’ Inquiry reveals – is the hidden labour force that trained these deadly algorithms in the first place. In digital sweatshops across Nairobi, Accra, and Gulu (Uganda), African workers earning as little as $1.50 per hour have unknowingly taught AI systems to recognise faces, moderate content, and analyse behaviour patterns. 

This is digital colonialism at its most insidious: the extraction of African labour to build systems that ultimately surveil, target, and destroy African communities while serving foreign interests. I call this surveillance colonialism, the process by which foreign powers extract data and labour from African populations to build AI systems that ultimately police, repress, and destabilise those very populations. Unlike historical colonialism, which relied on boots and bullets, surveillance colonialism operates through algorithms, platforms, and biometric contracts, outsourcing control while entrenching dependency. 

Professor Meareg’s death illuminates a broader, largely invisible pattern transforming conflicts across the continent. From Pegasus spyware hunting Rwandan dissidents to Chinese facial recognition tracking Zimbabwean protesters, a new kind of mercenary force is reshaping African conflicts. 

My interviews with data workers in Nairobi, Kenya, through the Data Workers’ Inquiry, revealed a devastating truth about Professor Meareg’s death: the system that failed to protect him was broken by design. During the height of the Ethiopian conflict, there was a critical shortage of Tigrinya and Amharic-speaking content moderators in Facebook’s Nairobi hub. Despite being marketed as cutting-edge AI, automated moderation systems proved nearly useless at detecting hate speech in the primary languages of the conflict itself. 

Nuredin Ali, a PhD candidate at the University of Minnesota and research intern at the Distributed AI Research Institute (DAIR), explains the deadly consequences: “During conflict times, hateful content and misinformation spread widely. In the specific case of the Tigray war, platforms left unmoderated genocidal content [online] such as calls for civilians to be killed, dehumanising content, and campaigns denying well-documented massacres.” More damning still, Ali notes that “during the conflict, the platforms knew they were amplifying harmful content that contributed to the war, stating they were working to address the situation. However, they didn’t do enough. This shows the ethical disconnect between knowing that their algorithms are spreading hate and failing to do enough to stop it. While they have enough resources, they ignore users in these countries.” 

Yet Meta continued operating in Ethiopia, deploying fundamentally inadequate systems in a region where algorithmic failures cost lives. This exposes the central lie of AI colonialism. These systems aren’t deployed because they work well in African contexts, but because African lives are deemed expendable testing grounds for broken technology. 

Milagros Miceli, who leads the Data Workers’ Inquiry, explains: “These companies extract expertise from data workers – be it language expertise, be it knowledge of a specific terrain – but the products created with that expertise are not made thinking of their well-being. This is not a bug in the system; it is a feature.” Syrian refugees in Lebanon label satellite images that may monitor their homeland. Venezuelan workers train facial recognition systems that are later deployed against protesters in their communities. African workers process thousands of surveillance images daily, teaching algorithms to recognise protest patterns and identify dissidents, rarely understanding the ultimate purpose of their labour until it’s too late. 

Foreign surveillance companies have thus emerged as a new kind of mercenary force, selling tools that reshape rather than resolve African conflicts. These digital forces deploy algorithms, spyware, and facial recognition systems that can destabilise entire societies while extracting valuable data as spoils of war. Israel’s NSO Group exemplifies this model. Their Pegasus spyware has infiltrated phones across Rwanda, Uganda, and Ethiopia, officially marketed for counter-terrorism but predominantly targeting journalists, activists, and political opponents. In Rwanda, Pegasus enabled the government to monitor exiled dissidents like Paul Rusesabagina, extending political conflicts beyond national borders and enabling transnational repression. 

Zimbabwe’s surveillance infrastructure reveals how these digital mercenaries operate. In his comprehensive study of Chinese AI surveillance in Zimbabwe, published in the global studies journal Transcience last year, researcher L. Travers documents how the 2018 CloudWalk deal marked a watershed moment: the first time a Chinese company had entered Africa with AI surveillance technology. The agreement required Zimbabwe to turn over vast amounts of biometric data to the Chinese firm, enabling CloudWalk to train its facial recognition systems on African faces. 

Despite several years of operation, Travers found that Zimbabwe’s Chinese-supplied surveillance cameras had not resulted in “one public conviction of a criminal,” undermining official claims that the systems serve public safety rather than political control. Instead, these systems enable what one of his research informants described as the government’s desire to “keep citizens, especially dissenting voices, in check”. 

The evolution of digital mercenaries was on full display during Kenya’s 2024 Gen Z protests against the Finance Bill. According to the Kenya Human Rights Commission, Safaricom, Kenya’s leading telecom, unlawfully shared customers’ location data with security forces, enabling them to track and detain protesters. Combined with CCTV analysis, this digital dragnet led to what Human Rights Watch documented as at least 82 enforced disappearances, with 29 people still missing as of December 2024. 

The bitter irony is that this surveillance unfolded in the same city, Nairobi, where data workers moderate content for global platforms, believing they’re protecting online safety, even as their own government deploys digital tools to hunt down citizens. Odanga Madung, a tech and society researcher who covered the protests, explains the lasting impact: “Surveillance breeds a lot of paranoia within populations. The idea that big brother is watching and that you don’t really know who exactly is monitoring your movements injects fear into a population, making them manifest very significant behavioural changes in how they communicate.” This fear has driven measurable changes: “Kenya has had a very high uptick in VPN usage over the past year, and the usage of apps like Signal and the movement to encrypted messaging has really increased across many different demographics.” 

These surveillance deployments create new forms of dependency that undermine African governance and amplify conflicts. Beyond individual transactions, Travers’s research reveals how Zimbabwe became what he terms a “testing ground for Chinese technological advancements”, establishing a pattern where African states trade sovereignty for surveillance capabilities. 

African governments purchase surveillance systems without understanding their full capabilities or long-term implications, becoming locked into relationships where foreign entities control both the technology and the extracted data. The geopolitical implications are profound. When Ethiopian authorities relied on Chinese sentiment analysis tools during the Tigray conflict, they weren’t just buying technology but outsourcing critical governance functions to foreign entities with their own strategic interests. 

The algorithms that determined which social media posts were flagged as “ethnic incitement” were trained by data workers in Kenyan annotation centres, creating a global supply chain of oppression that connects struggling workers across continents. This dependency extends beyond individual contracts to reshape governance itself. When governments become reliant on foreign surveillance capabilities to monitor their populations, they lose the ability to respond to legitimate citizen grievances. Surveillance becomes a substitute for genuine engagement, transforming legitimate calls for transparency and reform into security threats. 

Digital surveillance also creates new forms of violence that extend far beyond physical harm. In Zimbabwe, Travers documented what he calls a “chilling effect” – the deterrence of people from exercising fundamental rights because of surveillance fears. Citizens attending protests, journalists investigating corruption, and activists organising communities modify their behaviour when they believe they’re being watched. This psychological warfare is particularly effective because of the opacity surrounding these systems. Citizens don’t know which cameras are operational, what data is being collected, or how it might be used against them. The mere possibility of surveillance becomes a form of control. 

Yet resistance is emerging from unexpected quarters. According to Amnesty International, the $1.6 billion lawsuit against Meta in Kenya represents a new form of legal resistance, seeking not just compensation but systemic changes to how algorithms operate in conflict zones. The case demands that Meta stop its algorithms from recommending violent content and create meaningful victims’ funds. Through the Data Workers’ Inquiry, we’ve documented how content moderators are developing informal networks to resist harmful annotation tasks. At the same time, the African Union’s emerging data governance frameworks offer hope for regulatory solutions. 

Professor Meareg’s death represents more than individual tragedy – it symbolises what happens when Africa becomes a testing ground for inadequate foreign technology deployed without regard for local consequences. His murder was enabled by algorithms trained on African labour but designed to serve foreign platform interests, content moderation systems that couldn’t understand the languages and contexts they were meant to protect, and surveillance infrastructure that prioritised data extraction over human safety. 

The choice facing Africa is stark: continue as a data colony providing cheap labour for surveillance tools that serve foreign interests, or assert digital sovereignty by developing governance frameworks that prioritise African lives over foreign profits. This requires moving beyond procurement decisions to understanding how surveillance systems transform social relationships and governance. 

Proper security comes not from omnipresent monitoring but from legitimate governance that responds to citizens’ needs. Surveillance systems that undermine social trust ultimately destabilise the very societies they claim to protect. The workers in Nairobi’s data centres, the activists demanding accountability in Kenya’s courts, and the technologists building privacy-preserving alternatives all point toward a different digital future – one where African labour serves African interests. 

Adio-Adet Dinika is a writer, researcher and affiliated PhD Fellow at the Bremen International Graduate School of Social Science (BIGSSS). His areas of interest are Digitalisation and the Future of Work. He has published opinion pieces on Digitalisation and socio-economic development in several print and online publications, and his first unpublished novel, They like us dead, was long listed for the 2021 James Currey Prize for African Literature. He is currently based in Bremen, Germany.


© 2023 Africa In Fact. All Rights Reserved.