
The EU AI Act is Here. Will Google and Microsoft Finally Listen to the Global South?

The EU AI Act's enforcement begins, a moment many celebrate as a step toward responsible AI. But from Colombia, I see a familiar pattern: regulations shaped by the powerful, for the powerful. It is time for tech giants to truly hear the voices beyond Brussels and Silicon Valley.


Valentinà Lopèz
Colombia · Apr 27, 2026
Technology

The news hit us like a tropical storm, not unexpected but still powerful: the European Union's AI Act is officially beginning its enforcement. For many in the global North, this is a moment of triumph, a beacon of regulatory foresight in the wild west of artificial intelligence. And yes, it is a significant step, a necessary one even. But from my vantage point here in Bogotá, looking out at the Andes, I cannot help but feel a familiar unease. Will this monumental legislation truly foster global equity, or will it simply solidify the power of a few tech giants, leaving countries like Colombia to navigate the wake?

My opinion is clear: the EU AI Act, while well-intentioned, risks becoming another example of regulatory imperialism if its principles are not adopted with a truly global, inclusive lens. It is not enough for OpenAI or Google simply to comply with European standards. They must understand that their algorithms affect communities far beyond the EU's borders, communities with unique histories, vulnerabilities, and aspirations. This is about more than technology; it is about justice.

Think about it. The Act categorizes AI systems by risk, with 'unacceptable risk' systems banned and 'high-risk' systems facing stringent requirements. This sounds good on paper, right? But who defines these risks? Whose societal values are embedded in these definitions? Historically, these frameworks have been designed by and for the global North, often overlooking the specific contexts and potential harms experienced in places like Latin America. For instance, an AI system used for predictive policing might be deemed 'high-risk' in Europe due to privacy concerns, but in a nation grappling with post-conflict challenges, such a system could have entirely different, and potentially more severe, implications for human rights and social cohesion. The nuances are lost when the conversation is not truly global.

Consider the impact on data. The EU Act emphasizes data governance and quality. This is crucial. But where does the data come from that trains the foundational models of companies like Meta and Anthropic? Often, it is scraped from the internet, a vast ocean of information that disproportionately reflects the cultures, languages, and biases of wealthier nations. If an AI system is trained primarily on English-language data, how will it accurately understand the complexities of Colombian Spanish dialects, indigenous languages, or the subtle cultural cues essential for effective communication in our diverse society? We have seen how facial recognition algorithms struggle with non-Caucasian faces, and how natural language processing models perpetuate stereotypes about non-Western cultures. Without diverse data inputs and rigorous testing in varied contexts, these 'compliant' AI systems will continue to fail those outside the dominant cultural sphere, or worse, to harm them.

I spoke recently with Dr. Elena Ramirez, a leading AI ethicist at the Universidad Nacional de Colombia. She told me,


