The hum of servers, a familiar symphony in the digital age, often masks a more complex reality. In the bustling, yet often opaque, world of artificial intelligence, companies like Together AI are making significant noise. From their headquarters in Mountain View, California, they are not merely building; they are actively dismantling, or so they claim, the walled gardens erected by industry giants such as OpenAI and Google. Their proposition is compelling: an open-source platform that allows developers to run, fine-tune, and deploy large language models without being tethered to a single vendor. For many, particularly in regions like ours, where digital sovereignty and cost efficiency are paramount, this sounds like a liberation.
I’ve been tracking this for months, observing the rapid ascent of companies promising democratized AI. Together AI, founded by a team with deep roots in academia and industry, emerged from stealth in 2022. Their vision, articulated by CEO Vipul Ved Prakash, is to create a 'decentralized cloud for AI,' a network of compute resources that can host and serve a multitude of open models. This is not just about technical infrastructure; it is a philosophical stance against the perceived monopolization of AI by a few well-funded entities. Prakash, a veteran of several startups, including Topsy, which Apple acquired in 2013, has often emphasized the importance of accessibility. "We believe that the future of AI should not be controlled by a handful of corporations," he stated in a recent interview with TechCrunch, "but rather be open and accessible to everyone." This sentiment resonates deeply in places like Colombo, where access to cutting-edge technology often comes with prohibitive costs and restrictive terms.
Together AI's business model is multifaceted, reflecting the complex needs of the AI ecosystem. At its core, they provide a platform for inference and fine-tuning of open-source large language models. Developers pay for compute time and model usage, similar to traditional cloud providers, but with a crucial difference: the underlying models are often open-source, reducing licensing fees and increasing flexibility. They also offer a managed service for enterprises, handling the complexities of model deployment and scaling. Furthermore, Together AI is actively involved in model development, contributing to and releasing open-source work of its own, such as the RedPajama models and datasets. This dual approach of providing infrastructure and contributing to the open-source model ecosystem is central to their strategy. They are, in essence, both selling the shovels and digging for gold, to borrow a common Silicon Valley adage.
However, the promises don't match the reality for every aspiring AI developer. While the open-source ethos is admirable, the computational demands of large language models remain immense. Accessing Together AI's platform, while potentially cheaper than OpenAI's API, still requires significant financial commitment for serious applications. For a startup in Sri Lanka, for instance, even a reduced cost can be a barrier. The company has raised substantial capital, including a reported $102 million Series A round in 2023 led by Kleiner Perkins, valuing the company at over $1 billion. This funding fuels their expansion, allowing them to acquire more GPUs and attract top talent. Yet, the pressure to deliver returns on such investments inevitably clashes with the pure idealism of open source. Venture capitalists, after all, are not known for their philanthropic tendencies.
In the competitive landscape, Together AI finds itself in a curious position. On one flank, they contend with the established giants: OpenAI with its proprietary GPT models and API, Google Cloud's Vertex AI, and Microsoft Azure's AI services. These players offer integrated ecosystems, often with deep enterprise relationships and vast compute resources. On the other flank, they face other open-source focused companies and initiatives, such as Hugging Face, which has built a formidable community around model sharing and collaboration. Together AI's differentiation lies in its focus on inference infrastructure for open models, aiming to be the preferred runtime rather than just a model hub. "Our goal is to make running any model as simple as making an API call," explained a Together AI spokesperson to Reuters last year, highlighting their infrastructure-first approach. This is a subtle but critical distinction, aiming to capture the execution layer of the AI stack.
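That "as simple as making an API call" framing can be made concrete. The sketch below shows what an OpenAI-compatible chat-completion request to such an inference platform typically looks like; note that the endpoint URL, model identifier, and environment-variable name here are illustrative assumptions for the pattern, not specifics taken from Together AI's documentation.

```python
import json
import os
import urllib.request

# Illustrative values: the endpoint and model name below are assumptions
# standing in for whatever the platform actually documents.
API_URL = "https://api.example-inference-platform.com/v1/chat/completions"
MODEL = "meta-llama/Llama-2-7b-chat-hf"  # an open-model identifier, as an example

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble an OpenAI-style chat-completion request for an open model."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Build (but do not send) a request; sending would require a real key.
req = build_request(
    "Summarize open-source LLM inference in one line.",
    os.environ.get("INFERENCE_API_KEY", "demo-key"),
)
print(req.get_full_url())
```

The design point is that the developer only swaps the model string; the request shape stays identical across open models, which is exactly the "preferred runtime" position the company is chasing.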
The team at Together AI, led by Prakash, is often described as technically brilliant and driven by a strong belief in the open-source movement. The company culture, as gleaned from employee reviews and media reports, emphasizes innovation, collaboration, and a fast-paced environment. However, scaling a rapidly growing tech company while maintaining a strong open-source identity is a tightrope walk. The challenges are manifold: securing enough NVIDIA GPUs in a supply-constrained market, attracting and retaining top AI talent, and constantly innovating to keep pace with the breakneck speed of AI research. Moreover, the long-term viability of an open-source business model, particularly one competing against companies with virtually unlimited resources, is always a subject of scrutiny. The market data tempers the optimism: while open models are gaining traction, the vast majority of enterprise AI spending still flows towards proprietary solutions, favored for perceived reliability, support, and ease of integration.
The bull case for Together AI is compelling: as more powerful open-source models emerge, driven by Meta's Llama series and by academic and community efforts, the demand for efficient, cost-effective inference platforms will skyrocket. If Together AI can position itself as the default runtime for these models, its market opportunity is immense. They could become the 'AWS for open AI models,' a critical piece of infrastructure that underpins a vast ecosystem. Furthermore, their contributions to open-source model development could create a virtuous cycle, attracting more users and developers to their platform. The global shift towards digital sovereignty, particularly in Asia, also plays into their hands, as countries seek alternatives to US-centric proprietary AI solutions.
However, the bear case cannot be ignored. The AI landscape is notoriously volatile. Large incumbents could simply replicate Together AI's offerings, leveraging their existing cloud infrastructure and customer bases. The 'open-source' label itself can be a double-edged sword; if a model is truly open, competitors can take it, optimize it, and offer it on their own platforms, potentially undercutting Together AI. Furthermore, the economic realities of running a massive GPU cluster are brutal. The cost of compute, coupled with the need for continuous innovation, requires constant capital infusion. If the market for open-source model inference does not grow as rapidly as anticipated, or if margins are squeezed by intense competition, Together AI could struggle to achieve sustainable profitability. As a journalist from a developing nation, I have seen many promising technologies falter when the economic realities of scale and competition set in.
What's next for Together AI? They will undoubtedly continue to expand their compute capacity, refine their platform, and contribute to the open-source model community. Their success hinges on their ability to build a robust, developer-friendly ecosystem that can withstand the gravitational pull of the AI behemoths. The question remains: can they truly build an 'anti-OpenAI' that remains genuinely open, or will the pressures of growth and profitability inevitably lead them down a similar path of controlled access and proprietary advantage? The answer will have significant implications for the future of AI, not just in Silicon Valley, but for developers and innovators across the globe, from Colombo to California. The battle for the soul of AI, between open access and proprietary control, is far from over, and Together AI is a key player in this ongoing drama. The evolution of such platforms matters well beyond Silicon Valley, especially for economies like Sri Lanka that are striving to leverage AI for national development.