
When AI's Algorithms Fail in Colombo: Who Pays the Price, Microsoft or the Local Enterprise?

As Sri Lankan businesses increasingly adopt AI, the question of liability for algorithmic failures looms large. This investigative report uncovers the growing anxieties among local enterprises and workers, scrutinizing whether global tech giants or their regional implementers will bear the burden when the promises don't match the reality.


Ravi Chandrasekharàn
Sri Lanka · May 7, 2026
Technology

The fluorescent hum of the data center at LankaTech Solutions, a prominent IT services firm in Colombo, usually signifies progress. Yet a palpable tension has settled over its operations in recent months. A new AI-powered inventory management system, implemented with much fanfare and built on Microsoft Azure's cognitive services, recently misclassified a critical shipment of medical supplies meant for a district hospital in Jaffna. The delay, though eventually rectified, caused significant disruption and, more importantly, raised an uncomfortable question: who is responsible when the algorithms falter?

This is not an isolated incident. Across Sri Lanka, as enterprises from apparel manufacturing to financial services embrace artificial intelligence, the specter of AI liability is becoming increasingly real. The promises of efficiency, cost reduction, and enhanced decision-making are alluring, but the practical implications of AI gone awry are only now beginning to surface. I've been tracking this for months, watching cautious optimism give way to genuine concern among business leaders and the workforce alike.

Globally, the adoption of AI continues its relentless march. Reports from McKinsey indicate that over 50% of organizations worldwide have adopted AI in at least one business function, a figure that has more than doubled since 2017. While specific data for Sri Lanka is still emerging, anecdotal evidence suggests a similar, albeit slower, trajectory. Local conglomerates like John Keells Holdings and Hemas Holdings have publicly announced initiatives to integrate AI into their logistics, customer service, and healthcare operations. The potential for return on investment, or ROI, is often cited as the primary driver, with some early adopters reporting efficiency gains of 15-20% in specific processes.

However, the enthusiasm is tempered by the nascent legal and ethical frameworks surrounding AI. When an AI system makes a faulty diagnosis in a hospital, or a predictive policing algorithm disproportionately targets certain communities, or an automated trading system triggers a flash crash, the lines of accountability blur. Is it the developer of the foundational model, the integrator who customized it, the enterprise that deployed it, or the operator who relied on its output? This ambiguity is particularly acute in a developing nation like Sri Lanka, where regulatory bodies are still grappling with the basics of digital governance.

Consider the case of a major apparel exporter in Katunayake. The firm invested heavily in an AI-driven quality control system, touted by its vendor, a regional partner of a global AI platform provider, as capable of identifying fabric defects with unparalleled accuracy. For weeks, the system performed admirably, reducing manual inspection time by 30%. Then a batch of garments, deemed flawless by the AI, was shipped to a European buyer, only to be rejected over subtle yet pervasive stitching errors that the system had inexplicably missed. The financial cost of recall and reprocessing was substantial, but the damage to reputation was arguably greater. The exporter is now locked in a dispute with the vendor, each pointing fingers at the other, while the global AI provider maintains that its foundational models are merely tools and that responsibility lies with their implementation.

Here's what the data actually shows: a recent survey conducted by a local industry body, albeit with a small sample size, found that nearly 60% of Sri Lankan enterprises adopting AI have not clearly defined internal protocols for addressing AI-induced errors or failures. Furthermore, only 15% reported having specific insurance policies covering AI-related risks. This lack of preparedness suggests a dangerous complacency, perhaps fueled by the relentless marketing of AI as an infallible solution.

Workers, too, are caught in this evolving landscape. While AI is often framed as a job creator or enhancer, it also introduces new vulnerabilities. An employee whose performance is judged by an AI algorithm, or whose job requires interacting with an AI system, faces a unique set of challenges. If the AI makes a mistake, is the human operator held accountable? This question weighs heavily on the minds of many, particularly in sectors like customer service and data entry, where AI tools are rapidly being integrated. “We are told to trust the system, but when the system fails, it’s our job on the line, not the algorithm’s,” remarked a call center agent at a leading telecommunications company in Colombo, who wished to remain anonymous for fear of reprisal.

Experts are urging a more proactive approach. Dr. Sanath Alahakoon, a legal scholar specializing in technology law at the University of Colombo, emphasizes the need for clear legislative frameworks. “The current legal landscape, largely based on tort law and contract law, was simply not designed for autonomous intelligent systems,” Dr. Alahakoon stated in a recent symposium. “We need to develop new principles of accountability that consider the unique characteristics of AI, such as its opacity, its probabilistic nature, and its capacity for independent learning. Without this, we risk stifling innovation or, worse, creating a Wild West scenario where victims of AI harm have little recourse.” His sentiment echoes similar calls from international bodies and legal scholars, highlighting a global gap in governance, one that smaller nations like Sri Lanka can ill afford to ignore.

The European Union, with its ambitious AI Act, is attempting to provide a blueprint, categorizing AI systems by risk level and imposing stricter obligations on high-risk applications. While not without its critics, this approach offers a starting point for nations like Sri Lanka. The challenge, however, lies in adapting such comprehensive regulations to a local context, given the country's specific economic conditions, technological infrastructure, and legal traditions.

Looking ahead, the onus will likely fall on a combination of stakeholders. Technology providers, particularly those offering foundational models like OpenAI's GPT or Google's Gemini, will face increasing pressure to build in safety mechanisms and offer clearer indemnification clauses. Integrators will need to strengthen their due diligence, ensuring proper testing and validation of AI systems before deployment. And enterprises, the ultimate deployers, must develop robust internal governance structures, including clear lines of responsibility, human oversight mechanisms, and comprehensive risk assessments. The Sri Lankan government, through institutions like the Information and Communication Technology Agency of Sri Lanka (ICTA), has a critical role to play in fostering dialogue, developing guidelines, and eventually enacting legislation that protects both consumers and businesses without stifling innovation. If the potential for harm is ignored, the promises will continue to outpace the reality.

As AI continues to embed itself into the fabric of our daily lives and economy, the question of who is responsible when AI causes harm will only grow louder. It is a complex puzzle, one that demands urgent attention and collaborative solutions, lest the benefits of this transformative technology be overshadowed by unforeseen liabilities and eroded trust. The time for proactive measures is now, before the next algorithmic misstep leads to irreversible consequences for our local enterprises and the people who depend on them. For further reading on the broader implications of AI in enterprise, one might consult articles from MIT Technology Review.
