
When Google's Predictive Policing Meets Lagos: Who Pays for AI's 'Justice' in Africa?

Everyone's celebrating the promise of AI in criminal justice, but I have questions. When algorithms decide who is a suspect or how long they stay in jail, especially in places like Nigeria, we must ask: whose justice is this, and who truly benefits?


Nkirukà Ezenwà
Nigeria · May 12, 2026
Technology

The sirens wail, not just on the streets of Lagos, but in the digital corridors where algorithms are being trained to predict crime, to assess risk, even to suggest sentences. It sounds like progress, doesn't it? A more efficient, data-driven justice system. A future where human error and bias are supposedly eradicated by the cold, impartial logic of artificial intelligence. But let me tell you, from where I sit in Nigeria, this vision of AI in criminal justice feels less like a utopian dream and more like a carefully constructed nightmare waiting to unfold.

Unpopular opinion, perhaps, but the global rush to implement predictive policing and algorithmic sentencing tools, often championed by tech giants like Google and Palantir, is not a step towards universal justice. It is, instead, a dangerous leap into a new era of digital colonialism, where the biases embedded in Western datasets are imported, amplified, and then weaponized against already marginalized communities, particularly here in Africa. We are being sold a solution to problems that AI itself often exacerbates, and the terms of this deal are deeply unsettling.

Consider predictive policing. The idea is simple: feed an AI historical crime data, and it will tell you where and when future crimes are likely to occur. On the surface, it sounds brilliant. But what if that historical data is inherently biased? What if it reflects decades of over-policing in certain neighborhoods, leading to more arrests for minor offenses, which then feeds the algorithm to predict more crime in those same areas? This creates a vicious, self-fulfilling prophecy. As Dr. Ruha Benjamin, a Princeton professor and author, eloquently puts it, "Technology is not neutral; it is shaped by human hands and reflects human intentions, biases, and desires." Her work consistently highlights how new technologies can reinforce old injustices, a sentiment that resonates deeply with our experiences here.
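To make that feedback loop concrete, here is a minimal simulation in Python. It is a deliberately simplified sketch, not any vendor's actual system: two districts offend at exactly the same underlying rate, but one starts out over-policed, and patrols are reallocated each year in proportion to recorded arrests.

```python
import random

random.seed(42)

# Two districts with the SAME underlying offence rate.
TRUE_OFFENCE_RATE = {"District A": 0.05, "District B": 0.05}

# Historical over-policing: District A starts with more patrols.
patrols = {"District A": 70, "District B": 30}
recorded_arrests = {"District A": 0, "District B": 0}

for year in range(10):
    # Arrests scale with patrol presence, not with true crime:
    # the system can only record what officers are there to see.
    for district, n_patrols in patrols.items():
        for _ in range(n_patrols):
            if random.random() < TRUE_OFFENCE_RATE[district]:
                recorded_arrests[district] += 1

    # The "predictive" step: next year's 100 patrols are allocated
    # in proportion to recorded arrests. This is the feedback loop.
    total = sum(recorded_arrests.values()) or 1
    patrols = {
        district: max(1, round(100 * arrests / total))
        for district, arrests in recorded_arrests.items()
    }

print("Recorded arrests:       ", recorded_arrests)
print("Final patrol allocation:", patrols)
# Both districts offend at the same rate, yet District A's initial
# over-policing is locked in: the model keeps "predicting" crime
# where the data was generated, and the data keeps being generated
# where the model sends the patrols.
```

Researchers such as Kristian Lum and William Isaac have documented exactly this runaway feedback effect in real predictive policing software: the model learns where the police have been, not where crime is.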

In Nigeria, our criminal justice system is already fraught with challenges: corruption, inadequate resources, and a deeply entrenched class divide. The introduction of AI, without stringent ethical oversight and localized, unbiased data, would not fix these issues. It would merely automate and accelerate them. Imagine an algorithm, trained on data from a country with a vastly different socio-economic landscape and legal framework, being deployed in a city like Kano or Port Harcourt. The outcomes would be catastrophic, leading to even more disproportionate targeting of the poor and vulnerable. We already see how technology, like facial recognition, can be misused; adding predictive policing to that mix is like pouring petrol on a smoldering fire.

Everyone's celebrating the efficiency, the promise of reduced crime rates, the allure of a 'smarter' system. But I have questions. Who collects the data? Who owns it? Who audits the algorithms? And crucially, who is held accountable when these systems make mistakes, or worse, perpetuate systemic injustice? These are not abstract concerns. They are real, tangible threats to the rights and freedoms of our people. The Nigerian Bar Association, for instance, has repeatedly called for reforms to ensure fair trials and due process. Introducing opaque AI systems into this delicate balance without clear regulatory frameworks would undermine decades of hard-won legal battles.

Proponents of these technologies, often from the very companies developing them, argue that AI can be a force for good, that it can reduce human bias. They claim that algorithms, unlike humans, do not harbor racial or socio-economic prejudices. This is a naive, if not deliberately misleading, assertion. As Timnit Gebru, a prominent AI ethics researcher, has consistently pointed out, "The data reflects the world as it is, and if the world is biased, the data will be biased." The idea that you can simply 'de-bias' an algorithm without fundamentally addressing the societal biases reflected in its training data is a fantasy. It's like trying to clean a dirty pot with dirty water. It simply doesn't work.
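A small experiment makes the point sharper. The sketch below uses entirely synthetic data and hypothetical features: the model is trained with the sensitive attribute removed, the textbook 'de-biasing' move, yet a correlated proxy (here, a neighbourhood variable standing in for residential segregation) carries the bias straight through.

```python
# Synthetic data, hypothetical features: a toy demonstration that
# dropping the sensitive attribute does not remove the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)  # sensitive attribute (0 or 1)

# 'neighbourhood' is a proxy: it matches 'group' 90% of the time,
# the statistical echo of residential segregation.
neighbourhood = np.where(rng.random(n) < 0.9, group, 1 - group)
income = rng.normal(0.0, 1.0, n)  # an unrelated feature

# Historical arrest labels reflect where the police patrolled
# (the neighbourhood), not underlying behaviour.
arrested = rng.random(n) < np.where(neighbourhood == 1, 0.30, 0.10)

# "Fairness through unawareness": train WITHOUT the group column.
X = np.column_stack([neighbourhood, income])
model = LogisticRegression().fit(X, arrested)
risk = model.predict_proba(X)[:, 1]

print(f"Mean predicted risk, group 0: {risk[group == 0].mean():.3f}")
print(f"Mean predicted risk, group 1: {risk[group == 1].mean():.3f}")
# The model never saw 'group', yet its risk scores still split
# sharply along group lines: the bias travelled through the proxy.
```

This is why 'fairness through unawareness' is widely regarded as inadequate. The dirty water is still in the pot.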

Furthermore, the concept of algorithmic sentencing, where AI suggests or even determines the length of a prison term, is terrifying. It strips away the nuance, the empathy, and the individual consideration that a human judge, for all their flaws, can bring to a case. It reduces a person's life, their circumstances, and their potential for rehabilitation to a series of data points and probabilities. The very notion of justice is rooted in human dignity and the right to be seen as an individual, not merely a statistic. We cannot outsource our moral responsibilities to machines, no matter how sophisticated they appear.

Let's talk about what nobody wants to discuss: the potential for these systems to be used for social control. In countries with less robust democratic institutions, predictive policing and surveillance technologies can quickly morph into tools of oppression. We've seen examples globally where technology initially presented as a crime-fighting tool becomes a means to monitor dissent, suppress protests, or target political opponents. The allure of a 'safer' society can easily mask a slide into a surveillance state, especially when the technology is controlled by external entities with their own agendas or by local powers with questionable human rights records.

So, what is the alternative? Do we simply reject technology? No, that's not my argument. My argument is for caution, for critical engagement, and for prioritizing human rights and local context above technological expediency. We need to demand transparency from companies like Google and Microsoft when they propose these solutions. We need independent audits of their algorithms. We need to invest in local data infrastructure and expertise, ensuring that any AI deployed in our justice system is trained on relevant, unbiased, and ethically sourced data, and that its development is led by African voices who understand the complexities of our societies.
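To be concrete about what an independent audit could check, here is a minimal sketch, assuming the auditor is handed the system's decision log and the demographic group of each subject. It computes per-group flag rates, false positive rates, and a disparate impact ratio; the 0.80 threshold echoes the US 'four-fifths' rule and is used purely as an illustrative benchmark, and the field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Case:
    group: str        # demographic group of the subject
    flagged: bool     # did the system flag this person as "high risk"?
    reoffended: bool  # ground-truth outcome, where available

def audit(cases: list[Case]) -> None:
    groups = sorted({c.group for c in cases})
    flag_rates = {}
    for g in groups:
        members = [c for c in cases if c.group == g]
        flag_rates[g] = sum(c.flagged for c in members) / len(members)

        # False positive rate: flagged, but did not in fact reoffend.
        negatives = [c for c in members if not c.reoffended]
        fpr = sum(c.flagged for c in negatives) / max(len(negatives), 1)
        print(f"group {g}: flag rate {flag_rates[g]:.0%}, "
              f"false positive rate {fpr:.0%}")

    # Disparate impact ratio: lowest flag rate over highest.
    ratio = min(flag_rates.values()) / max(flag_rates.values())
    verdict = "FAILS" if ratio < 0.80 else "passes"
    print(f"disparate impact ratio {ratio:.2f}: {verdict} the 0.80 benchmark")

# Illustrative records only; a real audit runs on the full decision log.
audit([
    Case("A", True, True),   Case("A", True, False),
    Case("A", False, False), Case("A", False, False),
    Case("B", True, True),   Case("B", True, False),
    Case("B", True, False),  Case("B", False, False),
])
```

A real audit would go much further, examining error rates on local data, the provenance of the training set, and the human processes around the model. But even this bare minimum is more scrutiny than most deployed systems currently receive.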

Instead of blindly importing solutions from Silicon Valley, we should be asking how AI can truly serve our unique needs, not just replicate Western models of policing and punishment. Can AI help streamline court processes, reduce case backlogs, or identify systemic inefficiencies? Perhaps. But predictive policing and algorithmic sentencing, as currently conceived and implemented, carry too great a risk of deepening existing inequalities and eroding fundamental rights. We must resist the temptation to embrace these technologies without a thorough, critical examination of their real-world implications, especially for the most vulnerable among us. Our justice system, however imperfect, must remain accountable to the people, not to an algorithm.

The conversation around AI ethics is global, and Africa must be at its forefront, not merely on its receiving end. We need to build our own frameworks and our own safeguards, and ensure that technology serves humanity, not the other way around. This isn't just about crime; it's about our sovereignty, our dignity, and the very fabric of our society. The stakes are too high to get this wrong.
