Portkey open-sources its AI gateway after processing 2 trillion tokens a day
Host A: Welcome to DevTools Radio, I'm your host, and today we're talking about a pretty significant move in the AI infrastructure space — Portkey just open-sourced its AI gateway, and the numbers behind this thing are genuinely staggering.
Host B: Yeah, two trillion tokens processed in a single day — like, how do you even wrap your head around that scale?
Host A: Right, and for listeners who aren't familiar, Portkey positions itself as a control plane for production AI deployments — think of it like an API gateway, but specifically built to manage and monitor all that AI model traffic flowing through an organization.
Host B: So it's basically the traffic cop sitting between your engineering teams and all the various AI models and providers they're hitting?
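For readers following along in text, here's a rough sketch of what "sitting between your teams and the providers" means in practice. It assumes an OpenAI-compatible gateway running locally; the port, endpoint, and model name are illustrative, not Portkey's documented defaults:

```python
# Illustrative only: pointing an OpenAI-compatible client at a locally
# run AI gateway. The URL and key handling are assumptions for the
# sketch, not Portkey's actual configuration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8787/v1",  # hypothetical local gateway endpoint
    api_key="dummy",  # real provider keys live in the gateway's config, not the app
)

# Application code stays provider-agnostic: the gateway decides which
# upstream (OpenAI, Anthropic, a self-hosted model, ...) serves the
# request, and logs and meters the call along the way.
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize today's deploy log."}],
)
print(resp.choices[0].message.content)
```

That's the whole point of the pattern: application code targets one stable endpoint, while routing, credentials, observability, and rate limits move into the gateway's configuration.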
Host A: Exactly. And the big news here is they've made the core gateway fully open-source — their CEO Rohit Agarwal put it pretty bluntly: governance and observability for production AI should just be standard reference architecture, not something every team needs a separate SaaS contract to access.
Host B: That's a bold philosophical stance, and honestly a pretty generous one — though I imagine the cynical read is that open-sourcing the foundation is a classic strategy to drive adoption and then monetize the premium layers on top.
Host A: That's a totally fair point, and to their credit, Agarwal basically admits that openly — the foundation is free, the business value gets built on top of it. But at the scale they're operating, 24,000 organizations and $180 million in annualized AI spend, the foundation clearly needs to be rock solid.
Host B: What really caught my attention though is the MCP gateway piece — can you break that down for listeners who haven't been following the MCP buzz?
Host A: Sure — MCP, or Model Context Protocol, is an open standard for how AI agents connect to and interact with external tools and systems. And Portkey is arguing that as agents go from just generating text to actually *doing things* inside your enterprise, the governance stakes get way higher.
Host B: Which makes total sense — like, there's a massive difference between an LLM writing you a summary and an agent that can actually execute actions inside your internal systems, potentially touching sensitive data or breaking things.
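To make that difference concrete, here's a minimal sketch of an MCP tool call from the client side, using the official Python `mcp` SDK's stdio transport. The server command and tool name are placeholders, and the exact API surface may vary across SDK versions:

```python
# Minimal MCP client sketch (Python "mcp" SDK, stdio transport).
# "server.py" and "get_weather" are placeholders for illustration.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover what the server exposes, then actually invoke a tool:
            tools = await session.list_tools()
            print([t.name for t in tools.tools])
            result = await session.call_tool("get_weather", arguments={"city": "Berlin"})
            print(result)

asyncio.run(main())
```

Note that `call_tool` is an action, not a completion. That's exactly why the governance stakes rise: a gateway sitting in front of it can audit, allowlist, or block every invocation.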
Host A: Agarwal framed it really well — he said you can't have a thousand engineers all routing through an MCP server with no way to shut it down if something goes wrong, and apparently the MCP gateway has been the fastest-adopted thing they've ever shipped.
Host B: That honestly tracks — enterprises aren't trying to block AI agents, they just need a way to trust them, and a kill switch definitely helps build that trust.
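What would such a kill switch actually look like? Below is a self-contained sketch of the pattern; `PolicyGate` is our invented name for illustration, not anything from Portkey's codebase:

```python
# Hypothetical sketch of a gateway-side gate for MCP tool calls.
import threading

class PolicyGate:
    """Central on/off switch plus a per-tool allowlist for agent traffic."""

    def __init__(self, allowed_tools: set[str]):
        self._enabled = threading.Event()
        self._enabled.set()  # traffic flows by default
        self._allowed = allowed_tools

    def kill(self) -> None:
        """Flip the kill switch: every subsequent check fails immediately."""
        self._enabled.clear()

    def check(self, tool_name: str) -> None:
        """Called by the gateway before forwarding each tool invocation."""
        if not self._enabled.is_set():
            raise PermissionError("MCP traffic halted by operator kill switch")
        if tool_name not in self._allowed:
            raise PermissionError(f"tool {tool_name!r} is not on the allowlist")

gate = PolicyGate(allowed_tools={"search_docs", "read_ticket"})
gate.check("search_docs")  # passes
gate.kill()                # operator shuts everything down
# gate.check("search_docs") would now raise PermissionError
```

Because the flag is checked on every call rather than once at session start, flipping it also halts agents that are already mid-conversation, which is the property you want from an emergency stop.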
Host A: Alright, so the bottom line here: Portkey is betting that agentic AI is now mission-critical infrastructure, not just a feature, and the control plane that governs it should be open and accessible to every engineering team building in this space.
Host B: It's a compelling vision, and honestly, given the pace at which AI is hitting production environments, this kind of tooling feels less like a nice-to-have and more like a necessity.
Host A: Couldn't agree more. Alright, that's going to do it for today's episode of DevTools Radio — thanks for tuning in.
Host B: Stay curious, keep building, and we'll catch you next time.