You did everything right.
You found the tool. You ran the pilot. You built the business case with real numbers. You walked into that meeting with slides, ROI projections, and a demo that actually impressed people.
And they said no.
Or maybe worse — they said yes. They funded the pilot. It worked. Reps loved it. Metrics moved. And then... nothing. Six months later, it's sitting there. A handful of people use it. Nobody talks about scaling it. You're already getting asked to evaluate the next shiny thing.
If this sounds familiar, the problem isn't your pitch. It's not the tool. It's not even leadership being resistant to change.
The problem is you're having the wrong conversation.
What Leadership Sees That You Don't
Here's something I've learned watching this pattern play out across organizations: Your CTO has a framework in their head that you've probably never seen. Your CEO might not articulate it, but they're thinking about it. It's called the AI Maturity Model, and it explains why your perfectly reasonable request keeps hitting a wall.
Gartner breaks it into five levels:
Level 1: Awareness — The organization knows AI exists and might be useful. Lots of conversations. No strategy. No real projects. Just... awareness.
Level 2: Active — Pilots are running. Different teams are experimenting. Some things work in isolation. Nothing connects. This is where most enablement teams live — running proof of concepts, celebrating small wins, wondering why nothing scales.
Level 3: Operational — AI is integrated into actual business processes. There's governance. There's strategy. Outcomes are measurable. This is where the real value starts — and the level where most organizations get stuck trying to arrive.
Level 4: Systematic — AI is reshaping how the business operates. Cross-functional integration. Advanced infrastructure. Competitive advantage territory.
Level 5: Transformational — AI is central to what the company is and does. Continuous optimization. The organization doesn't use AI — it runs on AI.
A recent survey found that only 40% of organizations report reaching Level 4 or 5. The rest are somewhere between Awareness and Active, running experiments that never become operations.
Here's why this matters for you: When you pitch an AI tool, leadership isn't just evaluating the tool. They're evaluating whether the organization is ready to absorb it. And if they sense you don't understand that gap, your pitch sounds naive — no matter how good your ROI math is.

The Jump Nobody Talks About
The hardest transition isn't from Level 4 to Level 5. It's from Level 2 to Level 3. From Active to Operational. From "we're experimenting" to "this is how we work."
That jump isn't a technology problem. It's a culture problem.
Think about what Level 3 actually requires:
Cross-functional collaboration (data, IT, operations, sales all aligned)
Process changes people didn't ask for
Governance that slows things down before speeding them up
Leadership willing to commit resources to something unproven at scale
Now think about what happens in most organizations when you propose any of that. Silos protect themselves. "Not invented here" kicks in. Legal gets nervous. IT has a roadmap that doesn't include your thing. Everyone agrees AI is important, and everyone has reasons why this particular initiative isn't the right one.
The pattern I keep seeing: The blocker isn't technology resistance. It's that organizations are trying to buy Level 4 capabilities when their culture is stuck at Level 2.
No tool solves that. Not even a really good one.

Reframing the Conversation
So what do you do?
You stop pitching tools and start diagnosing maturity.
The conversation most enablement leaders have sounds like this:
"We found this AI tool that does X. Here's the business case. Can we get budget?"
You're asking for resources. Leadership is thinking about organizational readiness.
Try this framing instead:
"I've been thinking about where we actually are on AI adoption. Based on what I'm seeing, we're somewhere between experimenting and operationalizing — we've run pilots, some worked, but nothing has scaled into how we actually work. Before we invest in more tools, I want to understand what's blocking us from getting there. Can we talk about that?"
You've just changed the conversation entirely.
You're not asking for money. You're offering diagnosis. You're not pushing a tool. You're naming a problem leadership is probably already worried about. You're positioning yourself as someone who sees the system, not just the shiny object.
That's a different conversation. And it tends to go differently.
Diagnosing Your Organization Honestly
Before you have that conversation, you need to know where you actually stand. Be honest with yourself.
You're probably at Level 1 if: AI comes up in meetings but nothing gets funded. There's no clear owner for AI initiatives. People are excited or skeptical, but nobody's actually building anything. "We should look into that" is the common refrain.
You're probably at Level 2 if: Multiple teams are running pilots independently. Some things worked but stayed small. There's no central AI strategy or governance. Data lives in different places owned by different teams. You've experienced "pilot purgatory" — success that doesn't scale.
You're approaching Level 3 if: At least one AI capability is embedded in a core business process. There's real governance around data and AI decisions. You can point to measurable business outcomes, not just activity metrics. Cross-functional collaboration is becoming normal, not exceptional.
Most enablement teams I talk to are honestly somewhere between Level 1 and Level 2. That's not a failure. That's a starting point. But pretending otherwise doesn't help anyone — especially you, when you're trying to build credibility with leadership.
Naming the Real Blockers
Once you know where you are, get specific about what's in the way. Vague "culture issues" won't cut it. You need observable blockers you can actually name.
Blocker: Fear of visible failure — When pilots are treated as pass/fail tests rather than learning investments, nobody wants to sponsor something that might not work. Careers are on the line.
Diagnostic question: When was the last time a failed initiative was discussed for what it taught us, rather than who was responsible?
Blocker: Data fragmentation — The data AI needs lives in systems owned by teams that don't coordinate. Even when pilots work, they can't scale because the foundation isn't connected.
Diagnostic question: If I asked where customer and rep data actually lives, would the answer require a whiteboard and thirty minutes?
Blocker: Ownership confusion — AI initiatives get funded but nobody owns the outcome. IT owns the technology. Enablement owns the use case. Operations owns the process. Everyone has a piece. Nobody has the result.
Diagnostic question: Who is accountable for AI adoption outcomes at this company? If the answer is "it depends" or names a committee, that's a blocker.
Blocker: Governance paralysis — Legal, compliance, and security concerns create so much friction that nothing moves at the speed the business needs.
Diagnostic question: How long does it take to get a new AI tool approved? If it's measured in quarters, that's a blocker.
The Conversation Leadership Actually Needs
Here's what I've learned: The enablement leaders who earn strategic credibility are the ones willing to tell leadership what they need to hear, not what they want to hear.
Leadership might prefer to believe the right tool will fix everything. Vendors certainly tell them that. It's an easier story.
But the honest conversation is: We can't buy our way to AI maturity. We have to build our way there. And that means addressing culture, governance, and data infrastructure before — or at least alongside — investing in more tools.
If you're the person willing to name that clearly, with evidence and frameworks instead of just opinions, you become someone leadership wants in the room.
And if they're not ready to hear it? You've planted a seed. When the next pilot fails to scale, they'll remember what you said. They'll come back.
The Framework for Your Next Conversation
Here's the practical play:
1. Assess honestly. Where does your organization actually sit? Level 1, 2, or approaching 3? Be specific.
2. Identify 2-3 specific blockers. Not "culture" — observable things you can point to. Use the diagnostic questions.
3. Lead with maturity, not tools. Open with "here's where we are on the curve" instead of "here's the tool I want."
4. Propose a path to Level 3. What would it take to move from experimenting to operating? What cultural shifts, governance structures, or data infrastructure investments are needed?
5. Position tools as enablers, not solutions. The tool supports the strategy. The tool isn't the strategy.
You're not pitching anymore. You're partnering on a problem leadership is already worried about.
That's a different position to be in.
So my question to you is this...
Where does your organization actually sit on this curve? And what's the blocker nobody wants to name out loud?
Hit reply and tell me. I read every one.
Until next time, my friends...
❤️, Enablement
P.S.
How to get the AI Maturity Facilitator's Guide free!
Unlock a 100% discount with my exclusive promo code! Simply share this post on LinkedIn and tag me to receive your code.
Grab it before it's gone: the promo code expires on 1/26/26 at 11:59 PM.