Walk into any café in Belgrade these days, and you will see people glued to their phones, streaming, calling, working. We take our connectivity for granted, a constant hum in the background of our lives. But behind every flawless video call and every speedy download, there is an increasingly complex dance orchestrated by artificial intelligence. This is not some far-off Silicon Valley fantasy; it is happening right now, in Serbia, in Europe, and it brings with it a set of risks we are only beginning to understand.
Let us talk about what is actually working, and what is making us nervous. Telecommunication companies, including our own Telekom Srbija, Yettel (formerly Telenor), and A1, are deploying AI for everything: network optimization, predictive maintenance, customer service chatbots, and even strategic planning for the rollout of 5G and future 6G networks. The promise is efficiency, lower costs, and better service. The reality, however, is more complicated, particularly when we consider the potential for systemic failures and unintended consequences.
Consider a scenario: a major telecommunications provider, let us say Telekom Srbija, relies heavily on an AI system for dynamic network optimization. This system, perhaps powered by a sophisticated model from a vendor like Ericsson or Huawei, is designed to reroute traffic, allocate bandwidth, and predict outages based on real-time data. It is brilliant at its job, reducing latency by 15% and improving uptime by 8% in pilot programs, according to internal reports. But what if this AI develops a subtle bias, perhaps due to skewed training data, that consistently prioritizes certain types of traffic or certain geographical areas over others? Imagine a situation where, during a peak demand event, the AI deprioritizes traffic from a less affluent neighborhood, or even a critical service, to ensure optimal performance for a high-value business district. This is not a malicious act, but an algorithmic decision based on its programmed objectives and the data it has learned from.
The technical explanation here is rooted in the nature of machine learning. These network optimization algorithms are often deep reinforcement learning models. They learn by trial and error, adjusting their parameters to maximize a reward signal, such as network throughput or user satisfaction. The problem arises when the reward function is incomplete, or when the training data does not fully represent the complexity and ethical considerations of the real world. If the model is primarily optimized for raw efficiency or revenue generation, without explicit constraints or penalties for social equity or critical service access, it can make decisions that are technically optimal but ethically questionable. Dr. Elena Petrović, a senior researcher in AI ethics at the University of Belgrade's Faculty of Electrical Engineering, explains, “These systems are incredibly powerful, but their 'intelligence' is narrow. They optimize for what they are told to optimize for. If we do not explicitly build in safeguards for fairness, resilience, and human oversight, they will simply find the most efficient path to their goal, regardless of the human cost.”
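The misaligned-objective problem Dr. Petrović describes can be made concrete with a toy example. The sketch below (all zone names, demands, and revenue figures are hypothetical, and a greedy allocator stands in for a trained reinforcement learning policy) shows how an objective built purely on revenue per megabit starves a low-revenue zone during peak demand, and how an explicit fairness floor changes the outcome:

```python
# Minimal sketch, not any operator's actual system: a bandwidth allocator
# that maximizes a revenue-weighted objective, the kind of policy a model
# can converge to when the reward function omits equity constraints.

def allocate(zones, capacity, min_share=0.0):
    """Greedily assign bandwidth (Mbps) to the zones with the highest
    revenue per unit, optionally guaranteeing each zone a minimum share."""
    # First, guarantee every zone its floor (min_share=0.0 reproduces
    # the naive revenue-only policy).
    alloc = {z["name"]: min(min_share, z["demand"]) for z in zones}
    remaining = capacity - sum(alloc.values())
    # Then spend the rest purely on reward per unit, highest first.
    for z in sorted(zones, key=lambda z: z["revenue_per_mbps"], reverse=True):
        extra = min(z["demand"] - alloc[z["name"]], remaining)
        alloc[z["name"]] += extra
        remaining -= extra
    return alloc

zones = [
    {"name": "business_district", "demand": 80, "revenue_per_mbps": 5.0},
    {"name": "residential_area", "demand": 60, "revenue_per_mbps": 1.0},
]

# Naive objective: the residential area is left with 20 of its 60 Mbps.
print(allocate(zones, capacity=100))
# With an explicit fairness floor, both zones keep a usable minimum.
print(allocate(zones, capacity=100, min_share=40))
```

The point is not the arithmetic but where the constraint lives: fairness here is a design decision encoded in the objective, not something the optimizer discovers on its own.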
The expert debate on this is lively, even here in Serbia. On one side, proponents argue that AI is essential for managing the sheer scale and complexity of modern networks. “Without AI, we simply cannot handle the data volumes and dynamic demands of 5G and 6G,” states Marko Jovanović, Chief Technology Officer at one of Serbia's leading mobile operators. “The benefits in terms of efficiency and service quality are undeniable. We are talking about reducing operational costs by 20% and improving customer experience across the board.” He points to the success of AI-powered chatbots, which now handle over 60% of routine customer inquiries, freeing up human agents for more complex issues. This is a common refrain heard across the industry, echoed by major players like Google and Microsoft who are heavily invested in providing AI solutions for telecommunication providers globally. You can find more on industry trends from sources like TechCrunch.
However, others are more cautious. Professor Dragan Marković, a telecommunications policy expert at the Serbian Academy of Sciences and Arts, raises concerns about accountability. “When an AI system makes a decision that leads to a service disruption, or worse, a critical failure, who is responsible? Is it the developer of the algorithm, the operator who deployed it, or the data scientists who trained it?” He highlights a recent incident in a neighboring Balkan country where an AI-driven network update caused a four-hour outage in a regional hospital's internet service, disrupting critical medical procedures. The incident, while eventually resolved, exposed significant gaps in their incident response protocols and accountability framework.
This brings us to the real-world implications, particularly for a country like Serbia. The Balkans occupy a particular position in the technology supply chain: we are usually adopters rather than primary developers, relying on solutions built elsewhere. This can be a double-edged sword. While we benefit from cutting-edge tools, we also inherit their biases and vulnerabilities. The reliance on foreign vendors for core AI infrastructure in telecommunications means that ethical considerations and regulatory frameworks developed in Belgrade may clash with the design principles embedded in algorithms from Beijing or Silicon Valley. Furthermore, the strategic importance of telecommunications infrastructure, especially with the advent of 5G and 6G, makes these AI systems potential targets for cyberattacks. A compromised network optimization AI could be weaponized to cause widespread disruption, or even to selectively target communications.
“Our digital sovereignty is at stake,” warns Ana Kovačević, a cybersecurity analyst working with the Serbian government's Office for IT and eGovernment. “We need to ensure that the AI systems managing our critical infrastructure are transparent, auditable, and resilient to both accidental failures and malicious intent. This is not just about efficiency; it is about national security and the well-being of our citizens.” She points to the increasing sophistication of state-sponsored cyber threats, which could exploit vulnerabilities in AI-driven network management systems. The need for robust cybersecurity measures around these AI deployments is paramount, a point often stressed by publications like Ars Technica.
So, what should be done? First, there needs to be a stronger emphasis on explainable AI, or XAI, in telecommunications. Operators and regulators need to understand not just what decisions the AI is making, but why it is making them. Black box models, however efficient, are a liability in critical infrastructure. Second, robust regulatory frameworks are needed, perhaps drawing inspiration from the European Union's AI Act, but tailored to the specific context of telecommunications and national security. These regulations should mandate independent audits of AI systems, stress testing for adversarial attacks, and clear accountability mechanisms. Third, investment in local AI talent and research is crucial. Belgrade's tech scene is real, not hype, and we have brilliant minds capable of developing and scrutinizing these technologies. We need to foster an ecosystem where we can build our own secure and ethically aligned AI solutions, or at least deeply understand and adapt those we import. This includes training more engineers and researchers in AI safety and ethics, ensuring a pipeline of expertise to manage these complex systems.
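What explainability means in practice can be illustrated with a small sketch. Here, instead of returning only a score, a (hypothetical) decision function also emits the per-feature contributions behind it, so an operator or auditor can reconstruct why a given link was prioritized or deprioritized. The feature names and weights are invented for illustration:

```python
# Audit-trail sketch, assuming a simple linear scoring model: every
# decision record carries the contribution of each input feature, the
# minimal form of the explainability that regulators could mandate.

def scored_decision(zone, weights):
    """Score a zone from named features and keep per-feature contributions."""
    contributions = {k: weights[k] * zone[k] for k in weights}
    score = sum(contributions.values())
    return {
        "zone": zone["name"],
        "score": round(score, 2),
        "contributions": {k: round(v, 2) for k, v in contributions.items()},
    }

# Hypothetical weights: revenue matters, congestion hurts, and critical
# services (e.g. a hospital uplink) carry an explicit large bonus.
weights = {"expected_revenue": 0.7, "congestion": -0.2, "critical_service": 2.0}
hospital = {"name": "clinic_uplink", "expected_revenue": 1.0,
            "congestion": 3.0, "critical_service": 1.0}

record = scored_decision(hospital, weights)
print(record["score"], record["contributions"])
# The record shows the score is dominated by the critical_service term,
# which is exactly the kind of trace an auditor needs.
```

A real deep model needs heavier machinery (attribution methods rather than direct weight readout), but the contract is the same: no decision without a record of what drove it.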
Finally, public awareness and education are vital. Citizens need to understand how AI is shaping their digital lives, and what the potential risks are. This is not about fear-mongering, but about informed participation in the digital age. As we move towards a future where AI is deeply embedded in the very fabric of our communication, we must proceed with caution, pragmatism, and a clear understanding of both the immense benefits and the profound responsibilities that come with it. The stakes are too high to simply trust the algorithms without question. For further perspectives on AI's broader societal impact, one might consult MIT Technology Review.