The sun was just beginning to paint the eastern sky with hues of orange and pink as Captain Sipho Dlamini settled into the cockpit of the Eswatini Air Embraer 145. Below, the Mdzimba mountains stood sentinel, and the Great Usutu River snaked its way towards Mozambique. For decades, Captain Dlamini, a man whose smile lines tell tales of countless landings and takeoffs, relied on his sharp instincts, the whisper of the wind, and the hum of the engines to guide his aircraft. Today, however, another presence was at his side: an intricate web of artificial intelligence, silently optimizing, predicting, and even advising.
“It’s a different kind of co-pilot now, Thandiwe,” he told me over a cup of strong coffee at Matsapha International Airport a few weeks ago, his eyes twinkling with a mix of awe and a touch of apprehension. “Sometimes I feel like I am just supervising a very smart child, rather than truly flying the plane myself.”
This sentiment, I have found, echoes across the globe, but it resonates particularly deeply here in Eswatini. We say 'a person is a person through other people', and this philosophy extends to our relationship with technology. We embrace progress, but we also ask: what is the human cost? What does it mean for our pilots, our air traffic controllers, and even our ground crew when the machines take on more and more of the cognitive load?
The aviation industry, a sector where precision and safety are paramount, has been fertile ground for AI innovation. Companies such as Google DeepMind and NVIDIA are contributing to advances that promise safer, more efficient air travel. From flight optimization algorithms that calculate the most fuel-efficient routes in real time, to predictive maintenance systems that flag potential component failures long before they occur, AI is now deeply embedded. Air traffic control, too, is seeing a quiet revolution, with AI systems assisting controllers in managing complex airspace and anticipating potential conflicts. Reporting in outlets such as MIT Technology Review has suggested that AI-assisted scheduling could cut flight delays by as much as 15 percent, saving airlines billions annually.
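What does "flight optimization" actually mean in practice? Stripped to its essentials, it is a shortest-path search where the cost of each leg is fuel burned rather than distance flown. The sketch below is a toy illustration only — the waypoint names, winds, and burn rates are all invented, and real flight-planning systems are vastly more sophisticated:

```python
import heapq

# Toy waypoint graph: each edge is (next_waypoint, distance_nm, headwind_kt).
# A negative headwind is a tailwind. All names and numbers are invented.
GRAPH = {
    "MTS":  [("WPT1", 150, -30), ("WPT2", 110, 60)],
    "WPT1": [("MPM", 120, -20)],
    "WPT2": [("MPM", 120, 60)],
    "MPM":  [],
}

CRUISE_KT = 420        # assumed true airspeed
BURN_PER_HR = 1500.0   # assumed hourly fuel burn

def fuel_cost(distance_nm, headwind_kt):
    """Fuel for one leg: time over the ground times hourly burn."""
    groundspeed = CRUISE_KT - headwind_kt
    return distance_nm / groundspeed * BURN_PER_HR

def cheapest_route(graph, start, goal):
    """Dijkstra's algorithm, minimizing fuel instead of raw distance."""
    queue = [(0.0, start, [start])]
    settled = {}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in settled and settled[node] <= cost:
            continue
        settled[node] = cost
        for nxt, dist, wind in graph[node]:
            heapq.heappush(queue, (cost + fuel_cost(dist, wind), nxt, path + [nxt]))
    return None

cost, path = cheapest_route(GRAPH, "MTS", "MPM")
```

In this toy graph the route via WPT1 is 270 nautical miles against 230 via WPT2, yet the tailwinds make the longer route cheaper — exactly the kind of trade-off pilots once weighed with charts and rules of thumb.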
But what does this technological embrace do to the human mind? Dr. Nompumelelo Nxumalo, a cognitive psychologist at the University of Eswatini, has been studying this very question. “We are seeing a phenomenon called ‘cognitive offloading’,” she explained to me during a visit to her campus office, sunlight streaming through the window. “Pilots, for instance, are increasingly relying on AI for tasks that once required intense mental calculation and decision-making. While this reduces immediate workload and stress, it can also lead to a degradation of innate skills and intuition over time.”
Imagine a pilot who has always relied on their spatial reasoning to navigate tricky weather patterns. Now an AI system, crunching real-time data on powerful GPUs, provides a precise, optimized flight path, factoring in wind shear and turbulence with remarkable accuracy. The pilot follows the recommendation. Over months and years, how much of that inherent spatial reasoning stays sharp? Dr. Nxumalo pointed to studies suggesting that over-reliance on automation can reduce vigilance and make it harder to take over manual control during unexpected system failures. “It’s like using a calculator for every sum,” she said. “Eventually, your mental arithmetic gets rusty.”
This concern isn't just theoretical. Air traffic controllers, too, are grappling with this shift. Mr. Themba Maseko, a veteran air traffic controller at Matsapha, shared his perspective. “These AI systems are incredible tools. They can process more data than any human, identify patterns we might miss, and suggest solutions faster than we can think. But there’s a subtle shift. Instead of actively scanning, predicting, and commanding, sometimes I find myself waiting for the AI’s suggestion, then verifying it. The active problem-solving part of my brain feels less engaged.”
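The "pattern-spotting" Mr. Maseko describes often comes down to geometry: project each aircraft's track forward and check whether any pair will pass closer than the separation minimum. Here is a hand-rolled sketch of that closest-point-of-approach check — a textbook exercise, not the logic of any deployed ATC system:

```python
import math

def closest_approach(p1, v1, p2, v2):
    """Time and distance of closest approach for two aircraft assumed
    to fly straight lines at constant speed. Positions in nautical
    miles, velocities in knots."""
    # Relative position and velocity of aircraft 2 with respect to 1.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    dv2 = dvx * dvx + dvy * dvy
    if dv2 == 0:                       # same velocity: the gap never changes
        t = 0.0
    else:                              # minimize |relative position| over t >= 0
        t = max(0.0, -(dx * dvx + dy * dvy) / dv2)
    cx, cy = dx + dvx * t, dy + dvy * t
    return t, math.hypot(cx, cy)

def conflict(p1, v1, p2, v2, minimum_nm=5.0):
    """Flag a predicted loss of the (assumed) 5 nm separation minimum."""
    _, dist = closest_approach(p1, v1, p2, v2)
    return dist < minimum_nm
```

A controller does this instinctively by scanning the scope; the machine does it exhaustively, for every pair of aircraft, every few seconds — which is precisely why a controller can end up verifying suggestions rather than generating them.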
Mr. Maseko’s observation points to a critical psychological impact: the erosion of agency and the potential for a ‘deskilling’ effect. If the AI is always right, or almost always right, what happens to human confidence and the ability to act decisively when the AI falters, or when an unprecedented situation arises? The human element, the ability to improvise, to understand nuance, and to apply wisdom beyond cold data, remains irreplaceable. As a journalist covering technology, I see outlets like The Verge report on each new AI product, but rarely do they delve into these deeper psychological shifts.
Broader societal implications extend beyond the cockpit and control tower. The perception of safety, for instance, could change. If passengers know AI is flying their plane, will they feel more secure, or less? There is a certain comfort in knowing a human, with all their fallibility but also their empathy and adaptability, is at the controls. Mr. Sibusiso Hlophe, a frequent flyer and local businessman, voiced this concern. “I trust our pilots. They are like family, we see them at the market, at church. If a machine is making all the decisions, what happens to that trust? What if the machine makes a mistake I cannot understand?”
This tiny kingdom has big ideas about technology, but we also value human connection above all else. The challenge, then, is not to reject AI, but to integrate it wisely, ensuring it augments human capabilities rather than diminishes them. We need to design AI systems that foster collaboration, allowing humans to retain their critical skills and decision-making authority. This means creating interfaces that are intuitive, providing clear explanations for AI recommendations, and perhaps most importantly, ensuring that training programs evolve to teach pilots and controllers how to effectively partner with AI, not just follow its commands.
Dr. Nxumalo suggests a future where AI acts as a sophisticated assistant, handling routine tasks and flagging anomalies, while the human remains the ultimate decision-maker, honing their skills through simulated challenging scenarios that AI helps create. “We need to cultivate a ‘human-in-the-loop’ approach where the human is not merely a supervisor, but an active participant, constantly evaluating and learning from the AI, and vice versa,” she concluded. “It’s about finding that delicate balance, ensuring that our reliance on AI doesn't dull the very human brilliance it was designed to enhance.”
For Captain Dlamini, the journey continues. He embraces the efficiency AI brings, but he also makes a conscious effort to stay sharp, to question, and to trust his own judgment when the situation demands it. The skies above Eswatini may be getting smarter, but the heart and mind of the human pilot, I believe, will always be their most valuable asset. The future of aviation, particularly here, will depend on how well we learn to fly together, human and machine, in a dance of shared responsibility and mutual respect. After all, sometimes the smallest countries have the biggest vision when it comes to balancing progress with our humanity.