The morning sun, as it filters through the jacaranda trees in my neighborhood, often brings a sense of hope, a vibrant promise of a new day. But lately, when I read about the latest advancements in artificial intelligence, particularly its creep into our justice systems, that hope feels shadowed. We are told that AI will make policing more efficient, that it will deliver impartial justice. But from where I stand, here in Mexico, I see a different, more troubling picture, one where the cold logic of algorithms threatens to entrench injustice rather than dismantle it.
Let us be clear: AI in criminal justice, with its predictive policing models and sentencing algorithms, is not some abstract, distant problem. It is a very real, very present danger to families across Latin America and to the fabric of our societies, particularly for those who have historically been marginalized and underserved by the justice system. The glossy presentations from the companies selling these systems, showcasing advanced machine learning models for crime prediction, often omit the human cost, the very real lives these tools can irrevocably alter. They speak of data, efficiency, and optimization, but they rarely speak of equity, dignity, or the deep-seated biases that these tools can amplify.
Consider predictive policing. The concept is simple enough: use historical crime data to forecast where and when future crimes are likely to occur, deploying resources accordingly. Sounds logical, right? But what if that historical data is inherently biased? What if it reflects decades of over-policing in certain neighborhoods, often those inhabited by our Indigenous communities, our working-class families, or our migrant populations? If the data says crime is higher in a specific colonia, it might not be because more crime is happening there, but because more police are looking for it there. An algorithm fed this data will simply learn to send more police to that same colonia, creating a vicious cycle of surveillance, arrests, and disproportionate criminalization. It is a self-fulfilling prophecy of injustice.
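To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python. Every number in it is invented: two hypothetical neighborhoods with an identical underlying crime rate, one of which starts out heavily patrolled, and a naive "predictive" rule that deploys patrols wherever the recorded data says crime is.

```python
# A minimal, invented model of the predictive-policing feedback loop.
# Nothing here reflects any real system or real data.

TRUE_RATE = 0.05  # identical underlying crime rate in both neighborhoods

# Historical bias: colonia_a starts out heavily patrolled.
patrols = {"colonia_a": 8.0, "colonia_b": 2.0}
recorded = {"colonia_a": 0.0, "colonia_b": 0.0}  # the "crime data"
TOTAL_PATROLS = 10.0

for day in range(365):
    for place in patrols:
        # Key assumption: police only record the crime they are present
        # to see, so observed crime scales with patrol presence.
        recorded[place] += patrols[place] * TRUE_RATE
    # The "predictive" step: deploy patrols in proportion to recorded crime.
    total = sum(recorded.values())
    for place in patrols:
        patrols[place] = TOTAL_PATROLS * recorded[place] / total

print(recorded)  # colonia_a shows ~4x the "crime" of colonia_b
print(patrols)   # and keeps ~4x the patrols, though true rates are equal
```

The arithmetic is trivial; the structure is the point. The model never observes ground truth, only its own deployment decisions reflected back at it as "data," so the initial bias is ratified indefinitely.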
We see this playing out across the border in the United States, where systems like PredPol and HunchLab have been deployed. While these exact systems may not yet be widespread in Mexico, the underlying technology and the philosophical approach are already being discussed and piloted here. The allure of a technological silver bullet for complex social problems is powerful, especially in nations grappling with high crime rates. But we must resist this siren song. As Professor Ruha Benjamin, a leading scholar on race and technology, has eloquently stated, “We have to understand that technology is not neutral, it is not objective. It is shaped by the values, priorities, and biases of the people who create it and the societies in which it is embedded.” Her words resonate deeply with me, reminding us that these tools are not magic; they are reflections.
Then there are sentencing algorithms, tools designed to assist judges in determining bail, parole, or even the length of prison sentences. These systems often claim to reduce human bias, to bring a cold, hard objectivity to judicial decisions. But again, the data they are trained on is not objective. It reflects past sentencing patterns, which themselves are often riddled with racial, ethnic, and socioeconomic disparities. An algorithm that learns from these patterns will simply replicate them, often with an added layer of opacity. When a judge relies on a 'risk score' generated by a black-box algorithm, how can we truly scrutinize the decision? How can we appeal a judgment rendered by a mathematical model whose inner workings are proprietary secrets?
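To see how a risk score can launder past disparity into a veneer of objectivity, consider this deliberately oversimplified sketch. The 'training data' is invented: two groups whose members behave identically, but one was historically detained at three times the rate, and the 'model' does nothing more than learn detention base rates from that record.

```python
# A deliberately oversimplified "risk model" that learns base rates from
# historical records. All data below is invented for illustration.

# Each record is (group, was_detained). Group B was over-policed, so its
# members appear in the record as detained more often for identical conduct.
history = [("A", 1)] * 10 + [("A", 0)] * 90 + \
          [("B", 1)] * 30 + [("B", 0)] * 70

def train(records):
    """Learn P(detained | group) from the historical record."""
    scores = {}
    for group in {g for g, _ in records}:
        outcomes = [d for g, d in records if g == group]
        scores[group] = sum(outcomes) / len(outcomes)
    return scores

risk = train(history)
# Two defendants, identical in every respect except the group label:
print("defendant from group A:", risk["A"])  # 0.1 -> "low risk"
print("defendant from group B:", risk["B"])  # 0.3 -> "high risk"
# The enforcement disparity in the training data reappears as an
# "objective" score, now hidden behind a number that is hard to contest.
```

Real risk instruments use far more features and far more sophisticated models, but the dynamic is the same: when the labels encode biased enforcement rather than true behavior, a better model only learns that bias more precisely.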
I spoke recently with Dr. Ana Paula Hernández, a human rights lawyer and advocate based in Mexico City, who has been vocal about the ethical implications of AI. She told me, “The idea that an algorithm can fairly assess the complex social and economic factors that lead to someone’s involvement in the justice system is not just naive, it is dangerous. It strips away human dignity and the possibility of true rehabilitation.” Her concern is valid. Our justice system, for all its flaws, must still strive for individual consideration, for empathy, for the possibility of redemption. Algorithms, by their very nature, generalize and categorize, often reducing individuals to data points.
Some will argue that these systems are still in their early stages, that with more data and better design, they can be made fair. They will say that human judges and police officers have biases too, and that the algorithms may even be less biased. This is a tempting argument, a narrative that seeks to absolve us of responsibility by shifting the blame to technology. But it is a false equivalency. The biases of a human can be challenged, debated, and reformed. The biases embedded deep within a complex algorithm, often hidden behind layers of proprietary code and opaque methodologies, are far more insidious and difficult to root out. They scale injustice at an unprecedented rate, affecting thousands, even millions, with a single flawed line of code.
Moreover, the very premise that AI can solve deeply rooted societal issues like crime and inequality is a distraction. It diverts attention and resources from the real, systemic changes that are needed: investing in education, creating economic opportunities, strengthening social safety nets, and reforming policing practices from the ground up. La tecnología es para todos (technology is for everyone), yes, but it must be used to uplift, not to oppress. It must serve justice, not undermine it.
Mexico's AI story is only beginning to be told. We are a nation rich in culture, resilience, and ingenuity. We have a unique perspective on justice, shaped by our history and our diverse communities. We must not simply import technological solutions from other nations without critical examination. We must demand transparency, accountability, and ethical oversight for any AI system deployed in our public services, especially those that touch upon fundamental human rights. We need to develop our own ethical frameworks, informed by our values and our experiences, ensuring that technology serves our people, not the other way around.
The path forward is not to reject technology outright, but to harness it responsibly and ethically. It means investing in local AI talent, fostering research that prioritizes fairness and cultural relevance, and empowering civil society to scrutinize these powerful tools. It means asking the hard questions: Who benefits from these systems? Who is harmed? And are we truly building a more just society, or simply automating our prejudices? The answers to these questions will define not just our technological future, but the very soul of our justice system.