
When Algorithms Judge: Why South Africa's Courts Need More Ubuntu Than IBM's Watson, Says Dr. Naledi Mkhize

From the bustling streets of Johannesburg to the quiet dignity of our courtrooms, the promise of AI in criminal justice is a complex dance between efficiency and equity. I sat down with Dr. Naledi Mkhize, a leading voice in ethical AI, to unpack why a technology meant to predict crime might just be perpetuating injustice, especially here in Mzansi.


Amahlé Ndlovù
South Africa·Apr 27, 2026
Technology

The morning sun was already beating down on Braamfontein as I navigated the familiar chaos of Juta Street, taxis hooting, vendors calling out their wares. My destination was a quiet corner office at the University of the Witwatersrand, a place where minds grapple with the future, often far removed from the daily grind outside. I was there to meet Dr. Naledi Mkhize, a name whispered with respect in the circles of ethical AI, a woman whose work challenges the very foundations of how we integrate technology into society, especially when it touches something as sacred as justice. She has been a vocal critic of unbridled AI adoption in sensitive sectors, advocating for a more human-centered approach.

Dr. Mkhize, with her warm, direct gaze and a smile that reached her eyes, greeted me with a firm handshake. Her office, surprisingly uncluttered, featured a single, striking piece of Ndebele art on one wall, a splash of vibrant geometry against the academic austerity. We settled into comfortable chairs, the hum of the city a distant murmur. I started by asking her about the growing global trend of using AI in criminal justice, from predictive policing models like those once championed by Palantir to sentencing algorithms that claim to offer objective fairness.

"Amahlé, it's a seductive promise, isn't it?" she began, her voice calm but resonant. "The idea that a machine, devoid of human emotion or bias, can deliver justice with perfect precision. Companies like IBM with Watson, or even smaller startups globally, have pushed this narrative hard. But here's the thing nobody's talking about: these systems are trained on historical data. And what does historical data from South Africa, or indeed many parts of the world, tell us about justice? It tells us a story of systemic inequality, of racial bias, of socio-economic disparities woven into the very fabric of our policing and judicial systems." She paused, allowing her words to sink in, a signature of her thoughtful communication style.

Her background, I learned, is as multifaceted as the problems she tackles. Born in KwaZulu-Natal, she initially pursued law, driven by a deep desire for social justice. A scholarship took her to Oxford, where she stumbled into computational linguistics and then, inevitably, artificial intelligence. "I saw the power of these tools, Amahlé, the sheer potential. But I also saw the danger, the way they could amplify existing injustices if not handled with extreme care, with a deep understanding of context." After a stint working with a prominent European AI ethics think tank, she returned home, drawn by the unique challenges and opportunities of the African continent.

We delved into the specifics, starting with predictive policing, which uses algorithms to forecast where and when crimes are likely to occur. "Imagine a system," Dr. Mkhize explained, "trained on decades of South African police data. What would it predict? It would predict more crime in townships, in historically disadvantaged communities, because that's where policing has historically been concentrated. It's a self-fulfilling prophecy. More police presence leads to more arrests for minor offenses, which then feeds back into the algorithm, telling it 'this area is high crime.' It creates a feedback loop that disproportionately targets certain communities, often Black and coloured communities, without addressing the root causes of crime like poverty or lack of opportunity. We saw similar patterns in trials in the US, where systems like PredPol faced significant backlash for exacerbating existing biases." She cited a recent study showing that a hypothetical deployment of a popular US-developed predictive policing algorithm in Johannesburg would have increased police deployment in Soweto by 30% compared to Sandton, despite similar overall crime rates, simply due to historical reporting and arrest patterns.
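The feedback loop Dr. Mkhize describes can be made concrete with a deliberately simplified toy model. This is not code from any real policing system; every number here is invented for illustration. Patrols are allocated in proportion to *recorded* incidents, and recorded incidents in turn scale with patrol presence, so a biased historical record locks in unequal policing even when the true underlying crime rates are identical:

```python
def simulate(true_crime_rates, rounds=5, total_patrols=100):
    """Toy predictive-policing loop for two areas with the given
    underlying crime rates. Purely illustrative, not a real model."""
    # Seed with biased historical records: area 0 was over-policed,
    # so it starts with more recorded incidents (60 vs 40).
    recorded = [60, 40]
    history = []
    for _ in range(rounds):
        total = sum(recorded)
        # Allocate patrols in proportion to recorded incidents.
        patrols = [total_patrols * r / total for r in recorded]
        # Recorded incidents scale with patrol presence, not with
        # true crime alone -- this is the feedback step.
        recorded = [p * c for p, c in zip(patrols, true_crime_rates)]
        history.append([round(p, 1) for p in patrols])
    return history

# Two areas with IDENTICAL underlying crime rates: the 60/40 split
# in patrol allocation never corrects itself.
print(simulate([1.0, 1.0]))
```

Even with equal true crime rates, the initial 60/40 historical skew persists round after round: the data the system "learns" from is a record of where police looked, not of where crime occurred.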

I asked about sentencing algorithms, the ones that claim to help judges make 'fairer' decisions. "Fairer for whom?" she countered, a hint of steel entering her voice. "These algorithms often assess 'risk of reoffending' based on factors like socio-economic status, education level, and even postal codes. If you're from a poor background, if you didn't finish school, if you live in a certain neighborhood, the algorithm might flag you as 'high risk,' leading to harsher sentences. It's not just about what the algorithm says, it's about what it does. It codifies historical disadvantage into future punishment. This isn't just a tech story; it's a justice story, and one where the stakes are incredibly high for human lives."

We spoke about the role of big tech companies. "They develop these powerful tools, often with good intentions, but without a deep, localized understanding of the societies they're deployed in. Google, Microsoft, OpenAI, they have immense resources, but they need to partner genuinely with local experts, with communities, to ensure these tools serve, not subjugate. The idea that a universal AI model can just be dropped into our unique South African context without careful adaptation is, frankly, naive and dangerous. We need to be asking: who built this algorithm? What data was it trained on? Who benefits from its deployment?" She pointed to the fact that over 70% of AI models currently deployed in African legal systems are developed outside the continent, often with limited local input, according to a recent report by MIT Technology Review.

One surprising moment came when she spoke about the potential for reform. "It's not all doom and gloom, Amahlé. AI can be a powerful tool for good, if we wield it responsibly. Imagine AI assisting with case backlog management, identifying patterns in judicial delays, or even helping legal aid organizations identify individuals most in need of assistance. But it must be assistive, not determinative. It must augment human judgment, not replace it. And crucially, it must be transparent and auditable. We need explainable AI, not black boxes that spit out verdicts we can't understand or challenge." She emphasized the importance of local innovation, highlighting initiatives like the 'Ubuntu AI Collective' in Cape Town, which is developing open-source, culturally relevant AI tools for legal support, prioritizing community input from the ground up.

Her vision for the future of AI in South African justice is rooted in the philosophy of Ubuntu. "We need to build systems that reflect 'Umuntu ngumuntu ngabantu', a person is a person through other persons. Our AI should reinforce our interconnectedness, our shared humanity, not divide us further. It means co-creating these technologies with the people they will impact most. It means prioritizing fairness, accountability, and human oversight above all else. We need to ensure our data sets are cleaned of historical biases, that our models are rigorously tested for fairness across different demographic groups, and that there are clear, accessible avenues for redress when things go wrong." She believes that South Africa, with its complex history and its commitment to constitutional democracy, could actually lead the world in developing ethical AI frameworks for justice, provided we learn from global mistakes and build from our own values.
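The testing Dr. Mkhize calls for, checking model behaviour across demographic groups, has a simple starting point: comparing how often each group is flagged "high risk" (a demographic-parity check). The sketch below uses invented data and is only one of several fairness metrics an auditor would run:

```python
def selection_rates(records, group_key="group", flag_key="high_risk"):
    """Fraction of each group flagged 'high risk'. Toy audit helper."""
    counts = {}
    for r in records:
        g = r[group_key]
        n, k = counts.get(g, (0, 0))
        counts[g] = (n + 1, k + (1 if r[flag_key] else 0))
    return {g: k / n for g, (n, k) in counts.items()}

# Invented illustrative data: group A is flagged twice as often as B.
records = [
    {"group": "A", "high_risk": True},
    {"group": "A", "high_risk": True},
    {"group": "A", "high_risk": False},
    {"group": "B", "high_risk": False},
    {"group": "B", "high_risk": False},
    {"group": "B", "high_risk": True},
]
rates = selection_rates(records)
gap = max(rates.values()) - min(rates.values())
# A large gap between groups is a red flag for disparate impact
# and a trigger for the redress mechanisms Dr. Mkhize describes.
print(rates, gap)
```

A parity gap alone does not settle whether a model is unfair, but making such numbers routine, public, and challengeable is precisely the transparency and auditability the interview argues for.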

As I left her office, the Braamfontein streets still pulsed with life. Dr. Mkhize's words echoed in my mind, a powerful reminder that technology is never neutral. It carries the biases of its creators, the weight of its data, and the potential to reshape society for better or for worse. In the delicate balance of justice, where human lives and freedoms hang in the balance, we must demand more than just efficiency from our algorithms. We must demand humanity, and a reflection of the Ubuntu spirit that defines us. The journey to truly fair AI in our courts is long, but conversations like these are the first, most crucial steps. For more on the ethical implications of AI, particularly in justice systems, I often turn to resources like The Verge's AI section for global perspectives, and locally, the work done by institutions like the South African Human Rights Commission on technology and rights.




