
Together AI's Open Frontier: Is the 'Anti-OpenAI' a True Paradigm Shift or Just a Clever Rebranding of Cloud Computing?

The promise of open-source infrastructure for AI models, championed by Together AI, challenges the proprietary dominance of OpenAI. Yet Michèl Lambertè questions whether this movement truly democratizes AI or merely reconfigures the power dynamics within the cloud. Brussels has questions, and so should you.


Michèl Lambertè
Belgium · May 7, 2026
Technology

The narrative surrounding artificial intelligence often oscillates between breathless innovation and existential dread. In this dynamic, a new contender has emerged, positioning itself as the antithesis to the walled gardens of established AI giants. Together AI, a firm gaining considerable traction, champions an open-source infrastructure designed to run any model, ostensibly democratizing access and challenging the proprietary dominance of entities such as OpenAI. But is this truly a paradigm shift, or merely a sophisticated rebranding of cloud computing with a new AI veneer? As a Belgian observer, accustomed to the intricate layers of European policy and the pragmatic skepticism it often engenders, I find myself asking: what is the actual substance beneath this compelling surface?

The historical context of technological shifts offers a valuable lens. We have witnessed cycles where innovation, initially open and collaborative, eventually consolidates into proprietary ecosystems. Think of the early internet, born from academic collaboration, now largely controlled by a handful of corporations. The open-source software movement, while robust, has always contended with this gravitational pull towards commercialization and control. In the realm of AI, the initial burst of academic research and publicly available models, exemplified by projects like Google's Transformer architecture, quickly gave way to the rise of large, proprietary models developed by well-funded private companies. OpenAI, with its GPT series, stands as the most prominent example of this trajectory, transforming from a non-profit dedicated to open AI research into a commercial powerhouse with significant backing from Microsoft.

Together AI enters this arena with a compelling proposition: to provide the infrastructure that allows developers to run, fine-tune, and deploy various open-source large language models and other AI models efficiently and cost-effectively. Their platform aims to abstract away the complexities of GPU management and model serving, offering a unified API for a multitude of models, including those from Meta's Llama family or Mistral AI. This approach directly contrasts with the monolithic, closed-source offerings of companies like OpenAI, where users are largely confined to their specific models and APIs. The company has reportedly raised significant capital, with recent funding rounds valuing it in the billions of dollars, signaling strong investor confidence in this 'anti-OpenAI' thesis. Their pitch resonates with a segment of the developer community wary of vendor lock-in and eager for greater control over their AI deployments.

Data points underscore the growing appetite for open models. Hugging Face, a key enabler of the open-source AI ecosystem, hosts hundreds of thousands of models, with downloads numbering in the tens of millions monthly. This vibrant community demonstrates a clear preference for flexibility and transparency. However, the operational reality of running these models, particularly the larger ones, demands substantial computational resources. This is where Together AI positions itself, offering a managed service that leverages distributed GPU clusters. Their infrastructure aims to make deploying a Llama 3 model as straightforward as calling an API, effectively democratizing access to powerful AI capabilities without requiring individual developers or smaller enterprises to invest in prohibitively expensive hardware.
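The "one API, many open models" pattern described above can be sketched in a few lines. The snippet below is illustrative, not Together AI's actual SDK: it assumes an OpenAI-compatible chat-completions endpoint, and the endpoint URL, model identifiers, and `TOGETHER_API_KEY` variable are assumptions for illustration, so check the provider's documentation before relying on any of them.

```python
import json
import os

# Assumed OpenAI-compatible chat-completions endpoint (illustrative only).
API_URL = "https://api.together.xyz/v1/chat/completions"

def chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build the JSON body for an OpenAI-compatible chat completion call.

    Swapping `model` is the only change needed to target a different
    open model (e.g. a Llama or Mistral checkpoint) behind the same API.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# The same request shape serves two different open-model families
# (model identifiers below are assumed examples):
question = "Summarize the EU Digital Markets Act in one sentence."
llama_body = chat_request("meta-llama/Llama-3-8b-chat-hf", question)
mistral_body = chat_request("mistralai/Mistral-7B-Instruct-v0.2", question)

# Only the model identifier differs; URL, headers, and payload schema stay fixed.
headers = {
    "Authorization": f"Bearer {os.environ.get('TOGETHER_API_KEY', '')}",
    "Content-Type": "application/json",
}
print(json.dumps(llama_body, indent=2))
```

The point of the sketch is the economics the article describes: because every open model sits behind one fixed request schema, switching vendors or model families is a one-line change rather than a re-platforming effort, which is precisely the lock-in pressure release valve Together AI markets.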

Yet, the question remains: does this fundamentally alter the power dynamics, or merely shift the locus of control? "While Together AI offers a crucial service by making open models more accessible, we must critically examine where the true value and control ultimately reside," observes Dr. Isabelle Dubois, a leading AI policy analyst at the European Centre for Digital Rights in Brussels. "If the infrastructure itself becomes a bottleneck, or if the economic model disproportionately benefits the infrastructure provider, then the 'openness' becomes a facade. Brussels has questions and so should you, particularly regarding data sovereignty and competitive fairness." Her point is salient; access to infrastructure, even for open models, can still create new forms of dependency.

Another perspective comes from Professor Jan Van der Velde, a computer science expert at KU Leuven, who highlights the practical advantages. "For many Belgian startups and SMEs, the cost and complexity of managing their own GPU clusters for large models is simply prohibitive," he states. "Platforms like Together AI provide an essential bridge, allowing them to experiment and innovate with cutting-edge open models without the massive upfront investment. This fosters local innovation, which is crucial for Europe's digital sovereignty." Indeed, for a small nation like Belgium, where technological prowess often relies on smart specialization and leveraging global platforms, such services can be transformative. Our local AI initiatives, from Ghent to Liège, could certainly benefit from reduced barriers to entry for advanced model deployment.

However, the long-term implications merit scrutiny. While Together AI allows users to run various models, it still centralizes the compute infrastructure. This means that while the model code might be open, the actual execution and scaling are reliant on a third-party service. This is not entirely dissimilar to how cloud providers like AWS or Google Cloud offer managed services for open-source databases or operating systems. The convenience is undeniable, but it does not eliminate reliance on a central provider. The EU's approach deserves more credit than it gets for anticipating these nuances with regulations like the Digital Markets Act, which aims to prevent gatekeeping in digital services.

The environmental footprint of such large-scale AI infrastructure also warrants consideration. Running myriad large models, even efficiently, consumes substantial energy. While Together AI emphasizes optimized inference and resource sharing, the aggregate demand for compute power continues to climb. As a nation deeply committed to sustainability, Belgium understands the importance of scrutinizing energy consumption in all sectors. The 'anti-OpenAI' movement, while laudable in its aims for accessibility, must also demonstrate a clear path towards sustainable computing practices. Without this, the environmental cost could outweigh the benefits of perceived democratization.

My verdict, therefore, is one of cautious optimism tempered by pragmatic skepticism. Together AI represents a significant and positive force in the AI landscape, pushing back against the trend of proprietary lock-in and making powerful AI more broadly accessible. This is undeniably beneficial for innovation, particularly for smaller players and academic researchers who lack the resources of major tech conglomerates. It aligns well with the European ethos of fostering diverse ecosystems and preventing monopolistic control. The ability to choose between models, fine-tune them with proprietary data, and deploy them without being tethered to a single vendor's ecosystem is a powerful advantage. This is not a fad; it is a necessary evolution in the AI infrastructure stack.

However, we must remain vigilant. The 'open-source infrastructure' model, while offering greater flexibility at the model layer, still centralizes compute resources. The critical question for the future is whether this centralization of compute will eventually lead to a new form of gatekeeping, where access to the underlying hardware and optimized serving becomes the new bottleneck. For now, it offers a compelling alternative, fostering a healthier, more competitive AI environment. But as ever, the devil is in the details of implementation and the long-term economic incentives. We must ensure that the 'anti-OpenAI' does not simply become 'another cloud provider' in disguise, albeit one with a more open philosophy. The journey towards truly democratized AI is long, and every step, while celebrated, must be thoroughly examined. For more insights into the evolving AI landscape, particularly concerning infrastructure, one might consult resources like TechCrunch's AI section or MIT Technology Review for deeper analysis of technological trends.


