Why pgEdge thinks MCP (not an API) is the right way for AI agents to talk to databases
Host A: Welcome to DevTools Radio, I'm your host, and today we're talking about something that's been generating a lot of buzz in the developer community — how AI agents should actually be talking to databases.
Host B: And this isn't just a theoretical debate, right? There's a real product announcement driving this conversation.
Host A: Exactly. pgEdge just dropped what they're calling a production-ready MCP Server for Postgres, and the core argument they're making is that MCP — the Model Context Protocol — is a fundamentally better interface for AI agents than a traditional API.
Host B: Okay, so for listeners who might be scratching their heads — why not just use an API? That's been the standard for years.
Host A: Great question, and pgEdge co-founder Phillip Merrick has a pretty compelling answer. Without the predefined tools an MCP server provides, LLMs are prone to hallucinating API calls, passing the wrong parameters, or working from outdated versions of an API. They can also burn through far more tokens than necessary.
Host B: Oh, that's a real cost concern — tokens aren't free, and if your AI agent is just... making stuff up about how to query your database, that's a serious reliability problem.
Host A: Exactly. And with Postgres specifically, the alternative to MCP isn't even a proper API — it's dropping down to the psql command line utility, which gives you basically zero guardrails.
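A quick sidebar for readers following along: this is roughly the shape of a "predefined tool" in MCP. Tools are advertised to the model with a name, a description, and a JSON Schema for their inputs, so the model picks from a fixed menu instead of guessing at endpoints. The tool name and fields below are hypothetical, a sketch of the protocol's tool format rather than pgEdge's actual tool definitions.

```python
# A hypothetical MCP tool definition, expressed as a Python dict.
# MCP tools carry a name, a description, and a JSON Schema describing
# their inputs; the model can only call what the server advertises.
# Everything named here is illustrative, not pgEdge's actual API.
run_query_tool = {
    "name": "run_read_only_query",  # illustrative tool name
    "description": "Execute a single read-only SQL query and return rows.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "sql": {
                "type": "string",
                "description": "One SELECT statement to execute.",
            },
            "max_rows": {"type": "integer", "default": 100},
        },
        "required": ["sql"],
    },
}
```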
Host B: So what does pgEdge's MCP server actually bring to the table that makes it better than that raw approach?
Host A: Three big things according to Merrick — built-in security, full schema introspection, and reduced token usage. On security, you get HTTPS and TLS support, token-based authentication, and the server defaults to read-only access, which is a really smart safety net.
Host B: Read-only by default — I love that. That's the kind of "protect developers from themselves" feature that saves someone's production database at two in the morning.
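Sidebar: the read-only safety net Host B is praising is standard Postgres machinery. Here's a minimal sketch of the general technique using psycopg2, with an illustrative connection string; it shows the behavior a read-only session gives you, not how pgEdge's server actually implements it.

```python
# A minimal sketch of Postgres' session-level read-only safety net,
# using psycopg2. General technique only, not pgEdge's code; the
# connection string and table names are illustrative.
import psycopg2

conn = psycopg2.connect("dbname=app user=agent host=localhost")
conn.set_session(readonly=True)  # every transaction on this connection is read-only

with conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM orders")  # reads are fine
    print(cur.fetchone())
    # Any write fails before touching data, e.g.:
    #   cur.execute("DELETE FROM orders")
    # raises psycopg2.errors.ReadOnlySqlTransaction:
    #   cannot execute DELETE in a read-only transaction
```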
Host A: The schema introspection piece is arguably even more interesting though. The LLM doesn't just see table and column names — it sees primary keys, foreign keys, indexes, constraints, the whole picture. That means it can actually reason about your data model instead of blindly poking at it.
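Sidebar: the introspection Host A describes ultimately rests on Postgres' own catalogs. A query like the one below, against information_schema, is the kind of thing such a server can run to map foreign-key relationships; pgEdge's exact queries aren't covered in the episode, and this one is simplified, assuming single-column foreign keys in one schema.

```python
# The kind of catalog query that schema introspection boils down to.
# Postgres exposes keys, constraints, and relationships through
# information_schema; this illustrative query lists foreign keys.
FOREIGN_KEYS_SQL = """
SELECT tc.table_name,
       kcu.column_name,
       ccu.table_name  AS referenced_table,
       ccu.column_name AS referenced_column
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu
  ON kcu.constraint_name = tc.constraint_name
JOIN information_schema.constraint_column_usage ccu
  ON ccu.constraint_name = tc.constraint_name
WHERE tc.constraint_type = 'FOREIGN KEY'
ORDER BY tc.table_name;
"""
```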
Host B: So the AI can write better SQL, suggest schema optimizations, that kind of thing — because it actually understands the relationships between your data?
Host A: Right, and on the token efficiency side, they've switched from JSON to tab-separated values under the hood, combined with pagination and context-window compaction. Merrick says that can cut token usage by thirty to fifty percent, which is genuinely significant at scale.
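Sidebar: a toy illustration of why tab-separated values are cheaper than JSON. JSON repeats every column name on every row, while TSV names the columns once in a header. The thirty-to-fifty-percent figure is Merrick's; this sketch, with made-up data, only shows the mechanism.

```python
# Why TSV beats JSON on size: JSON repeats every key on every row,
# TSV names the columns once. Fewer characters roughly means fewer
# tokens. Data here is fabricated purely for illustration.
import json

rows = [
    {"id": i, "email": f"user{i}@example.com", "created_at": "2025-01-01"}
    for i in range(1, 101)
]

as_json = json.dumps(rows)
header = "\t".join(rows[0].keys())
as_tsv = "\n".join([header] + ["\t".join(str(v) for v in r.values()) for r in rows])

print(len(as_json), len(as_tsv))  # the TSV payload is markedly smaller
```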
Host B: That's not a rounding error — that's real money saved for teams running high-volume agentic applications. And this works with Claude, Cursor, Copilot, the whole modern AI dev toolchain?
Host A: It does — Claude Code, Cursor, Windsurf, VS Code Copilot, models from OpenAI and Anthropic, and even locally hosted models through Ollama or LM Studio. Oh, and the whole thing is open source under the Postgres license, free to download.
Host B: So pgEdge is essentially betting that MCP becomes the standard handshake between AI and data infrastructure — and they're planting their flag early.
Host A: That's exactly the play. Postgres has been around for thirty years and keeps reinventing itself — this feels like its AI-era chapter.
Host B: Alright, that's a wrap on this one. If you're building agentic apps on Postgres, definitely worth checking out the pgEdge MCP Server — link in the show notes.
Host A: Thanks for tuning in to DevTools Radio. We'll see you next time — keep building.
Prefer to listen? Head back to the episode page for the full audio.