Your AI Agents Are Drowning in Bad Context
Airbyte's Michel Tricot demoed a single Gong query that burned 30K extra tokens. Most teams are eating that cost across dozens of queries and never measuring it.
A single Gong query. That's all it took to context-poison his AI agent.
Michel Tricot, co-founder of Airbyte (the open source data integration platform with 600+ connectors and a $1.5 billion valuation), demoed it live for me. Two versions of the same agent task: find all sales calls for a specific rep.
The context store version: 45 calls, about a minute, reasonable token usage.
The raw API version: the agent had to paginate through every call because Gong's API doesn't let you filter by user. Three minutes. 30,000 extra tokens, all from one query.
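A toy simulation of the two paths above makes the token math concrete. The fake paginated endpoint and the per-record token count are invented for illustration; this is not Gong's actual API:

```python
# Toy simulation: client-side vs. server-side filtering of call records.
# The "API" and all numbers here are illustrative, not Gong's real API.

TOKENS_PER_CALL_RECORD = 150  # rough size of one call record in the context window

def fetch_page(all_calls, page, page_size=100):
    """Pretend paginated endpoint: returns one page of every call in the org."""
    start = page * page_size
    return all_calls[start:start + page_size]

def tokens_raw_api(all_calls, rep):
    """No user filter: the agent pages through everything, filtering in-context."""
    tokens, page = 0, 0
    while True:
        batch = fetch_page(all_calls, page)
        if not batch:
            break
        tokens += len(batch) * TOKENS_PER_CALL_RECORD  # every record enters context
        page += 1
    return tokens

def tokens_context_store(all_calls, rep):
    """Pre-filtered view: only the rep's own calls ever reach the model."""
    mine = [c for c in all_calls if c["rep"] == rep]
    return len(mine) * TOKENS_PER_CALL_RECORD

# 270 calls org-wide; 45 of them belong to the rep we care about.
calls = [{"rep": "bob" if i % 6 == 0 else "alice"} for i in range(270)]
raw = tokens_raw_api(calls, "bob")            # 270 records x 150 tokens
filtered = tokens_context_store(calls, "bob") # 45 records x 150 tokens
print(raw - filtered)  # → 33750 extra tokens from client-side filtering
```

The key point the sketch surfaces: the raw-API cost scales with the size of the whole org's call history, while the filtered cost scales only with the size of the answer.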
Scale that across a real workload and you start to see why so many production agents are slow, expensive, and unreliable for reasons that have nothing to do with the model, and why you're hitting your Claude Code limits so fast.
Michel is building Airbyte's next product and rebuilding their business around this problem: an agent engine (now in public beta) built on the thesis that agents fail because of data, not models. We explored what this looks like in practice and why he’s fighting for new methods of data storage and context provisioning.
The RAG death spiral: Michel walked through the sequence most teams go through. You start with chunk-embed-retrieve. Accuracy is off. So you annotate chunks with metadata ("this sales conversation was about pricing, but let me also tag that Conor was on the call"). Before long, you've rebuilt data centralization with extra steps. The context store approach skips the embedding layer entirely, using schema structure and fuzzy matching to resolve entities across Salesforce, Zendesk, and Gong. Less elegant on paper. More reliable in production.
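A minimal sketch of the schema-plus-fuzzy-matching idea, with no embeddings involved. This is my illustration, not Airbyte's implementation; the record shapes, field names, and people are invented:

```python
# Cross-system entity resolution via normalization + fuzzy matching.
# Illustrative only -- field names, records, and threshold are invented,
# not a real context-store schema.
from difflib import SequenceMatcher

def normalize(name):
    """Lowercase, drop commas, sort tokens so 'Last, First' == 'First Last'."""
    return " ".join(sorted(name.lower().replace(",", " ").split()))

def similarity(a, b):
    """String similarity in [0, 1] on normalized names."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def resolve(person, records, field, threshold=0.85):
    """Records whose name field fuzzily matches `person`.
    The schema tells us *which* field holds the name in each system."""
    return [r for r in records if similarity(person, r[field]) >= threshold]

# The same rep appears under slightly different names in each system.
salesforce = [{"OwnerName": "Alex Rivera"}, {"OwnerName": "Dana Wu"}]
zendesk    = [{"assignee": "alex rivera "}, {"assignee": "D. Wu"}]
gong       = [{"host": "Rivera, Alex"},    {"host": "Dana Wu"}]

who = "Alex Rivera"
for records, field in [(salesforce, "OwnerName"),
                       (zendesk, "assignee"),
                       (gong, "host")]:
    print(len(resolve(who, records, field)))  # 1 match in each system
```

The structure does most of the work here: knowing which field is the name in each schema turns "search everything" into a cheap per-field comparison, and the fuzzy match only has to absorb formatting drift.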
RAG isn't dead. But the chunk-embed-retrieve default that most teams start with is hitting a wall, and the fix looks a lot more like data engineering than most AI teams want to admit.
Context engineering is a real role: Michel thinks it'll eventually be automated, the same way prompt engineering became mechanical. But right now, someone has to design how information flows to agents: what freshness, what scope, what governance. It's closer to software engineering than data engineering. If someone on your team is already spending their weeks on this, they're a context engineer. They just probably don't have the title yet.
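To make "what freshness, what scope, what governance" concrete, here's a hypothetical sketch of what a context policy might pin down for one agent task. The fields and values are my invention, not any product's actual schema:

```python
# Hypothetical "context policy" for one agent task -- the dataclass fields
# and example values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ContextPolicy:
    source: str                  # which system of record feeds this context
    freshness_max_age_s: int     # how stale data may get before a re-sync
    scope: list = field(default_factory=list)   # fields the agent may see
    redact: list = field(default_factory=list)  # fields governance strips out

policy = ContextPolicy(
    source="gong",
    freshness_max_age_s=3600,           # sales calls re-synced hourly
    scope=["title", "rep", "summary"],  # keep the context window lean
    redact=["customer_email"],          # PII never reaches the model
)

def project(policy, record):
    """Cut a raw record down to the policy's scope, minus redacted fields."""
    allowed = [f for f in policy.scope if f not in policy.redact]
    return {k: record[k] for k in allowed if k in record}

raw = {"title": "Q3 renewal", "rep": "bob",
       "customer_email": "x@y.com", "summary": "Pricing discussion."}
print(project(policy, raw))
# → {'title': 'Q3 renewal', 'rep': 'bob', 'summary': 'Pricing discussion.'}
```

Whoever is writing and maintaining decisions like these per source and per agent is doing the context-engineering job, whatever their title says.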
Context is how our agents understand the world. The model is rarely the bottleneck today. The context architecture is.
Watch the full episode 👇 or listen on your app of choice
Connect with Michel: LinkedIn · Agent Blueprint (Substack)
Links mentioned: Airbyte · Agent Engine docs · Agent Connector SDK
Coming up: Richmond Alake, Director of AI Developer Experience at Oracle, on why memory engineering deserves its own discipline and how to architect agent memory systems that work in production. Plus Sudhir Hasbe, President and CPO of Neo4j, on why he thinks knowledge graphs are the missing infrastructure layer for enterprise AI agents.
I’ll be bringing all of this together into my next longform essay in the next couple of weeks.
Cheers,
Conor
In case you missed it: Two weeks ago I sat down with Yujian Tang, the guy who started r/AI_Agents (now over 300K members) and just filed paperwork to launch his own AI venture fund. We got into the mechanics of starting a fund from scratch, why pre-seed AI valuations have doubled in two years, and what two failed startups taught him that successful founders sometimes miss. Listen here if you haven't caught it yet.

