Let us be frank. When I hear the pronouncements from Silicon Valley about Artificial General Intelligence, about machines that will think and reason like humans, or perhaps even surpass us, my first thought is rarely about the technological marvel. My first thought is usually, 'Who benefits from this grand vision, and who is left behind?' It is a question that echoes differently here in Amman, where the daily realities of water scarcity, regional stability, and economic development shape our perspectives far more than abstract philosophical debates about sentient algorithms.
Sam Altman, the face of OpenAI, has become a prophet for this AGI future. He speaks of a world transformed, a new era of abundance. Yet, the very structure of his organization, OpenAI, remains a source of constant bewilderment and, frankly, suspicion. A non-profit parent overseeing a for-profit subsidiary, with a cap on returns for investors, all while pursuing the most powerful technology humanity has ever conceived. It sounds less like a benevolent quest for universal good and more like a carefully crafted mechanism to concentrate power and influence, perhaps even control, in the hands of a select few. The West has it backwards, I often think, focusing on the shiny new toy while ignoring the fundamental questions of equity and control.
Consider the recent drama surrounding Altman's brief ousting and subsequent return. It exposed deep fissures within OpenAI's leadership and raised serious questions about the true nature of its mission. Was it about safety, as some board members claimed, or was it about a power struggle over the pace and direction of AGI development? The opacity of it all is troubling. For us in Jordan, where transparency and accountability are vital for building trust in nascent technologies, this kind of corporate intrigue feels profoundly unsettling. How can we trust a technology that promises to reshape our world when its architects cannot even agree on their own internal governance, or articulate it clearly to the public?
"The pursuit of AGI by a handful of private entities, particularly those with such convoluted governance models, presents a significant risk to global stability and equitable access," stated Dr. Rania Al-Abdullah, a prominent Jordanian AI ethics researcher at the Princess Sumaya University for Technology. "We need international frameworks and public oversight, not just corporate self-regulation, to ensure these powerful systems serve humanity broadly, not just a privileged few." Her words resonate deeply here. Our concerns are not just about whether AI works, but for whom, and under whose terms.
While Silicon Valley obsesses over the next breakthrough model or the latest funding round (OpenAI reportedly sought valuations upwards of $80 billion in recent investor discussions, according to Bloomberg Technology), our focus in Jordan is far more grounded. We are looking at how AI can help manage our precious water resources more efficiently, how it can optimize agricultural yields in a changing climate, or how it can improve healthcare access in underserved communities. These are not futuristic fantasies; these are immediate, tangible problems that AI can address today. Jordan's approach makes more sense than Silicon Valley's, I believe, because it prioritizes practical, ethical application over speculative, potentially dangerous, grand ambitions.
Take, for instance, the work being done by local startups like Mawared AI, which uses machine learning to predict water demand and optimize distribution across our national grid. Or initiatives at the Royal Scientific Society, where researchers are developing AI models to analyze crop health from satellite imagery, providing farmers with actionable insights. These are not about creating god-like intelligences; they are about using intelligent tools to solve real-world problems that directly impact the lives of millions. This is the kind of AI that truly matters, the kind that fosters resilience and sustainability, not just profits and power.
An unpopular opinion from Amman, perhaps, but the constant drumbeat from OpenAI and others about an imminent AGI feels like a distraction. It shifts the conversation away from the very real ethical dilemmas and societal impacts of the AI we have today. Bias in algorithms, job displacement, the spread of misinformation via generative AI models: these are not problems for a theoretical AGI future; they are problems we are grappling with right now. The billions poured into chasing AGI could, in my humble estimation, be far better spent addressing these immediate challenges and building a more robust, ethical AI ecosystem globally.
Moreover, the concept of "safety" as defined by these Western tech giants often feels too narrow. Is safety merely about preventing AI from going rogue, or does it also encompass ensuring that AI development does not exacerbate global inequalities, concentrate wealth, or undermine democratic institutions? The latter, I would argue, is a far more pressing concern, especially for regions like ours that are often on the receiving end of technological shifts initiated elsewhere.
"The narrative around AGI often overlooks the geopolitical implications and the potential for technological colonialism," observed Dr. Omar Al-Khateeb, a professor of international relations at the University of Jordan. "When a single entity, largely controlled by Western interests, holds the keys to such a transformative technology, it creates an imbalance of power that could have profound consequences for nations striving for self-determination and equitable development." This is not a theoretical fear; it is a historical pattern we have seen play out with every major technological revolution.
While OpenAI and its ilk continue their high-stakes gamble on AGI, perhaps it is time for the rest of the world, particularly nations in the Global South, to chart a different course. We should focus on developing AI that is locally relevant, ethically sound, and publicly accountable. We need to invest in our own AI talent, foster local innovation, and build systems that serve our unique needs and values. The notion that we must simply wait for Silicon Valley to deliver its next technological marvel, and then adapt to its consequences, is a dangerous one.
The conversation around AI, especially AGI, needs to move beyond the boardrooms of San Francisco and into the global public square. We need diverse voices, diverse perspectives, and diverse governance models. Otherwise, Sam Altman's vision, however grand, risks becoming yet another chapter in a long history where technological progress serves the few, while the many are left to pick up the pieces. We in Jordan are not content to be mere spectators in this unfolding drama; we are active participants, demanding a seat at the table and a say in our own technological destiny. It is a complex landscape, and our future depends on how we navigate it, together, with our eyes wide open.