Wedge, Workflow, Scale: A Sanity Check for AI GTM Tools
A framework to cut through ERR, demos, and AI tools that just distract from the hard decisions we inevitably have to make
Obvious take: AI spend isn’t subject to the same scrutiny as everything else. Budgets are flat or down across the board. But for AI? There always seems to be room.
Result of the obvious take: A wave of experimentation – fast procurement, quick PoCs, light integration. And vendors are counting it as ARR.
But it’s probably more accurate to classify it as what Jamin Ball calls ERR (Experimental Run-Rate Revenue).
It's not recurring. It's just... convenient. And that convenience hides a deeper issue: AI tools are often bought to avoid hard GTM decisions, not to support them.
The Budget Line That Makes This All Possible
It’s never been easier to get budget for AI.
And never been harder to prove it’s worth it.
That gap? It's being funded by ERR. It’s the revenue vendors count as ARR, but buyers treat as temporary. Easy to spin up, just as easy to churn.
Most AI tools today are bought with ERR logic:
Short-term contracts (forget about multi-year)
Little-to-no security review because “it’s just a PoC”
No roadmap integration
And no conviction it’ll still be used in 6 months
One startup told me they were keeping budget flat or negative across every department except AI. Somehow, there was always more room for "innovation" because “we have to show progress.”
But ERR isn't innovation. It's sugar water. And that dissonance? It’s not just a vendor problem. It’s a signal about how companies are making decisions under pressure.
The Quiet Psychology Behind AI Spend
A lot of AI procurement right now isn’t even about scale; it’s about psychological avoidance:
Fear of falling behind: "We need to show leadership we’re incorporating AI, all our competitors are doing it and posting about it on LinkedIn."
Fear of internal fights: "Just buy something smart so we don’t have to argue about routing or segmentation."
Fear of showing gaps: "If we automate or slap an agent on top, maybe no one will notice how messy this is."
You can spot these teams by their stacks: overlapping vendors, zero usage governance, no central strategy. AI as window dressing.
Why This Fails (Eventually)
ERR makes adoption feel harmless. But when the real budget planning cycle hits, and you’re asked to justify the tool’s contribution to pipeline or product velocity, you realize: there was no plan.
Instead of building workflows or feedback loops, you rented automation and hoped no one asked too many questions.
We even went through this ourselves. We piloted an AI tool designed to streamline account research and outbound workflows. It looked slick. It helped with surfacing signals and generating email copy. But in the end? It was a distraction, and the uplift was largely flat. Working out of multiple tools killed throughput, and most of what we needed could be done faster natively in Salesforce, so we killed it.
Framework for What We Did (and Didn’t) Automate
We also passed on full-stack AI SDR tools. Not because AI can’t help with outreach. But because one-size-fits-all automation wasn’t targeted enough for how we wanted to shape our GTM motion (I wrote more about that here).
We’d rather build narrow, effective interventions – ones we can control, evaluate, and evolve – than chase the illusion of a completely self-driving sales org.
When we do look at AI tools, my framework boils down to three things:
Wedge: What's the core thing this tool does uniquely well? Is it 10x better at something I already care about? Or is it just checking an AI box?
Workflow: Who is using this? Where does it fit in our motion? Can we actually map the usage to real efficiency or revenue lift OKRs?
Scale: Is this sustainable, or does it break the second we move beyond the team that piloted it? Is the solution so custom we might as well have built it in-house?
The wedge has to be obvious. The workflow has to be credible. The scale has to be considered early.
Where the Wedge Breaks Down
Take AI-generated email copy as an example. The wedge is real: tools that can produce outbound sequences, personalized intros, or subject lines at scale. It feels like magic.
But in practice, it rarely fits into team workflows:
Reps still need to edit for tone, audience, and product nuance.
Managers can’t quality-control logic or CTA relevance at scale.
Deliverability tanks as content gets flagged by spam filters.
And when you try to scale it?
Everyone’s messaging starts to sound the same.
The veneer of personalization becomes easy to detect.
You’re left maintaining a system no one fully trusts.
The wedge was compelling, but workflow friction and scale fragility made it a net negative.
That’s where most teams stop. But it doesn’t have to be a dead end.
Because sometimes the wedge is right; you just haven’t done the work yet.
If you narrow the use case, e.g. email generation for <50-employee companies in 3 verticals where you have tested messaging and real PMF, suddenly the value becomes obvious. You’ve defined the audience, created the content framework, and made editing optional instead of mandatory.
It works not because the model is smarter, but because you did the hard work to make it useful.
That’s the real lesson: you can’t abstract away hypotheses and strategy. You have to design for them.
Top-Down vs Bottom-Up Workflows: Listen First, Then Lead
This is where most AI tools will fail: not because of the model, but because of the rollout.
As we’re seeing in the ERR world, new tools are constantly rolled out top-down: leadership mandates adoption, but the mapping to real work stays fuzzy. Reps are told to use it, but they don’t fully trust it. Managers are told it’ll improve productivity, but they can’t fully track usage.
But the answer isn’t to avoid top-down entirely. Sometimes you should push. If there’s strategic value and long-term upside, inertia is the enemy. Good operators know that.
The best AI adoption I’ve seen actually blends both:
Bottom-up signal: What are reps already hacking together? Are they pasting ChatGPT outputs into Salesforce notes? Are they using Notion as a quasi-agent layer? That’s a workflow. Formalize it.
Top-down clarity: Where are we falling behind in productivity or customer experience, and what bets are we willing to make? If the wedge is clear and the value is provable, push – but come armed with answers, not vibes.
Ask yourself:
What wedge are we solving for, and why now?
Where does it actually live in someone's day?
If this works, what happens next?
If you can't answer those, it's too early. And if no one on your team is already solving the problem manually, maybe it's not a problem worth solving yet; throwing AI at it just adds more words, more work, and more worries.
What I Ask Instead
You don’t have to reject AI tools. But you do have to be honest:
Is this a test or a pillar? If it’s a test, treat it like one. Define a hypothesis, measure real behavior, and decide.
Are we solving a real problem or dodging a hard one? Don’t let fear shape your stack. There’s a new tool popping up every week; you’re not behind if you’re doing the work, because chances are a shiny new startup will launch by the end of the quarter, or a foundation model provider will announce exactly what you’re looking for.
Will this still matter when the budget tightens? Buy like you’re building toward permanence.
Because eventually, retention tells the truth. And AI that was never integrated, never trained on your real workflows, and never earned internal trust? That’s not ARR.
That’s just ERR in a nicer font.