Why 95% of AI strategies are just expensive theater
Why execution beats strategy in AI, startups, and everything else (according to new MIT data)
MIT just published research that validates what I see daily: 95% of enterprise AI initiatives are failing to deliver measurable business impact.
Meanwhile, 19-year-old founders are building $20M ARR businesses in 12 months using the same AI tools.
The difference isn't the technology. It's execution.
Here's what MIT's GenAI Divide report reveals about the 5% who win:
They purchase specialized tools instead of building them internally (67% vs. 33% success rate). They empower line managers, not just central AI labs. They focus on back-office automation instead of flashy sales tools. Most importantly, they start small, ship fast, evaluate, and iterate based on real feedback.
The 95% who fail? They're building elaborate strategy theater.
47-slide decks full of buzzwords. Implementation outsourced to consultants who've never deployed production AI. Timelines that ignore basic technical realities. Internal builds that reinvent wheels badly.
Generic GPTs can excel for individual use cases because of their flexibility, but they stall in enterprise use unless adapted for the reality of delivering consistent value. AI is not a magical silver bullet; it takes customization, context, and orchestration to make it successful.
This pattern extends far beyond AI:
Political campaigns commission expensive polling studies instead of talking to voters. Startups raise money on TAM analysis instead of proving people want their product. Companies hire strategy consultants to avoid the messy work of actually testing ideas.
I should know; I've done that work.
What separates the winners across any domain:
They solve real problems - Not the problems they want to exist, but the ones that actually cost people time/money/sanity
They start small and prove it works - As Paul Graham says, "Do things that don't scale." Pilots with 10 users before platforms for 10,000 - and learn by doing.
They execute faster than they plan - Less PowerPoint, more prototypes, constant adjustment based on real feedback
They build reliable partnerships - Going it alone is hard, especially in complicated technical integrations. And AI is exactly that.
Two frameworks to start with:
The Theater Test: If your strategy looks more impressive in PowerPoint than in production, you're building theater. Great strategies are often boring documents focused on measurable outcomes and specific next steps. War is logistics.
The "5% Principle": In any hyped technology category, ~90-95% of participants fail while 5% capture disproportionate value. Your job is figuring out which group you're in before you waste time and money.
What's coming in this newsletter:
Strategic frameworks for cutting through hype in any industry
Real deployment stories from AI, politics, and business - why systems fail and occasionally succeed
Pattern recognition across domains (the same execution mistakes keep happening everywhere)
Practical insights for making better decisions under uncertainty, and for navigating the future
Want better strategic instincts? Hit reply or leave a comment and tell me about a decision you're wrestling with. I read every response, and the best questions will become newsletter deep-dives.
Talk soon,
Conor
P.S. - I went quiet here for 8 months because I was busy with Galileo and overthinking this newsletter strategy. The irony isn't lost on me.
I'm Conor Bronsdon, Head of Developer Awareness at Galileo.ai and host of the Chain of Thought Podcast. I help teams cut through complexity and execute what matters. You can find me on LinkedIn where I share insights with ~10,000 followers.