
Investment Notes: Baseten Series-E
We’re delighted to participate in Baseten’s Series-E. This marks our second time supporting Tuhin and Phil (Shape Analytics, 2015) but our first time backing Amir and Pankaj. This investment has been a joyful reminder that no matter the stage, the very best founders think of customers as community. Proximity drives understanding, and understanding refines performance. It’s a culture that starts at the top, and it enables the entire team to win and retain the best customers, even in the most competitive sectors.

An explosion of complexity
Keeping pace with AI innovation right now is, frankly, exhausting. New labs are popping up daily, and we find ourselves checking Hugging Face for the latest small-model releases while peering at Artificial Analysis to grok the ramifications of Kimi2.5.
But beneath that noise, a clear signal is emerging: fragmentation.
First, the intelligence gap between open-weight and closed-source models has become negligible, opening up the option of tuning open models on proprietary data to retain control and consistency while halving your API bill. Second, there’s the proliferation of small, task-specific models.
The future isn't one foundation model; it's a constellation of specialised ones: GLM-4.7-Flash for reasoning, Lucid Origin for image generation, and Supertonic for text-to-speech. Managing this fleet, routing the right query to the right model at the right time, is where the complexity lies.
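To make the fleet-management problem concrete, here is a toy sketch: a registry that maps task types to specialised models. The model names are those mentioned above, but the `route_request` helper and the task labels are our own illustrative assumptions, not Baseten's implementation or API.

```python
# Illustrative model registry: task type -> specialised model.
# Model names come from the post; the routing logic is hypothetical.
MODEL_FLEET = {
    "reasoning": "GLM-4.7-Flash",
    "image_generation": "Lucid Origin",
    "text_to_speech": "Supertonic",
}

def route_request(task: str) -> str:
    """Return the model best suited to the task, or fail loudly."""
    try:
        return MODEL_FLEET[task]
    except KeyError:
        raise ValueError(f"No model registered for task: {task!r}")

print(route_request("reasoning"))  # GLM-4.7-Flash
```

Real routing decisions also weigh latency, cost, and load, which is precisely why platforms exist to handle this orchestration rather than every application team building it in-house.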
This is where Baseten earns its keep. They’ve won the trust of companies like Mercor, Abridge, and Cursor not just by hosting models, but by achieving sublime performance and flawless reliability. As these customers scale diverse, mission-critical workloads, Baseten is the platform of choice to tame the complexity.
Serve any hardware
For the last few years, "hardware" was synonymous with "NVIDIA." But that monopoly is softening.
We are entering an era where hardware updates will improve model-specific inference, and distinct chips will be better suited to distinct tasks. The infrastructure that wins this generation won't be the one that just bets on the right chip; it will be the one that abstracts the choice.
This is the genius of Baseten’s Multi-Cloud Management (MCM). It’s an orchestration layer ready to handle a multi-silicon future. Whether a workload runs best on a Blackwell, an AMD MI350, or a TPU, Baseten will be able to handle it. MCM turns disparate hyperscaler capacity into a unified, reliable reservoir of compute.
This allows Baseten to be beautifully unopinionated about the future of AI while serving category-defining companies. By decoupling orchestration from the underlying silicon, they ensure that as hardware gets faster and cheaper, their customers benefit immediately without having to re-architect their entire stack.
Open-Source adoption is at the starting line
Baseten rocketed through 2025, compounding revenue by growing alongside pinnacle application companies. But the first cohort of AI-native companies is a fraction of the opportunity ahead. Agentic systems and reasoning loops are ballooning inference demands.
This is where the economics of open-source models are impossible to ignore. Specialised models trained on task-specific data will consistently outperform generalist models, while cutting your inference bill with OpenAI or Anthropic by 60-80%. The question isn’t if teams will migrate; it's when, and how much.
To remove any barrier, Baseten acquired Parsed, founded by three Aussies: Charlie, Max and Mudith. We have a sneaking suspicion it will prove to be one of the best acquisitions in this space. Parsed will build tailored models for Baseten’s customers and accelerate their migration onto the Baseten platform.
Despite Baseten’s incredible momentum, we are still in the early innings. AI-native startups are just beginning to adopt inference infrastructure at scale, and enterprise adoption hasn't truly begun. But Baseten are primed to serve both. They provide flexibility and control of a trusted inference stack, distinct from hyperscalers who are tied to a frontier lab.
Crucially, we think that choosing an inference provider will be one of the stickiest decisions of the next decade. Baseten’s product is the architectural bedrock that sits at the core of your customer experience. Customers’ near-insatiable demand for intelligence makes Baseten an essential product for application AI companies building separately from the frontier labs.
Full Circle
We have watched this team evolve for ten years, and they are now operating at the peak of the sector. We are thrilled to renew this partnership as they build a foundational pillar of the new economy.
If you have connections to CTOs at large-scale enterprises grappling with the costs and complexity of deploying bespoke AI models, we would value the opportunity to introduce them to the Baseten team.
Tristan Edwards and Michael Tolo, on behalf of Blackbird









