"Most AI adoption is performative and surface-level. Tool sprawl is simply the symptom."
That's not a consultant trying to sell you something. That's not an analyst hedging behind a paywall.
That's Jonathan Kvarfordt — VP of GTM Strategy at Momentum, former CEO of GTM AI Academy — in the 2026 Voice of the Market Report, published January 27, 2026.
An insider. Calling it what it is.
And I think he just gave the enablement profession the wake-up call it's been dodging.
Why does this quote hit different?
Because Kvarfordt isn't on the outside throwing stones. He's built his career at the intersection of AI and go-to-market execution. When someone at that level uses the word "performative," it's not skepticism.
It's a diagnosis.
And the data backs it up.
What does the data actually say about AI adoption?
The 2026 State of AI for B2B GTM report from Growth Unhinged found that 53% of GTM leaders are seeing little to no impact from AI.
Let that sit for a second. More than half the leaders investing in AI tools right now can't point to meaningful results.
Meanwhile, Training Industry's 2026 Watch List highlights the companies pulling ahead. The theme? "AI-driven coaching, immersive solutions, and analytics-based learning." Not more tools. Better application of existing ones.
And Rhett Livengood, Sales Enablement leader at Keyfactor, put it plainly in a recent Crafted Journey interview: "AI only works as well as the data and discipline behind it."
Without clean data, clear value propositions, and well-structured playbooks, automation becomes noise.
This isn't a technology failure. It's an adoption quality failure.
What's the difference between performative and genuine AI adoption?
I think about it as a ladder. Here's a quick framework — the AI Adoption Maturity Ladder — to help you figure out where your team actually sits:
Level 1: Tourist
You've signed up for tools. Maybe you've played with ChatGPT a few times. You talk about AI in meetings. But nothing has changed about how your team actually works day-to-day.
This is where that 53% lives.
Level 2: User
Individual contributors are using AI for isolated tasks — drafting emails, summarizing notes, creating outlines. It's helpful, but it's personal productivity. Not organizational capability.
Level 3: Operator
AI is woven into repeatable workflows. Your team has shared prompts, connected tools to internal data, and built processes that use AI as infrastructure — not a novelty. This is where things start compounding.
Level 4: Architect
AI informs strategy. Your enablement programs adapt based on AI-driven insights. Coaching is personalized. Content is created and refined using your company's actual performance data. The tool disappears — the capability remains.
Most teams think they're at Level 2 or 3. Most are at Level 1.
If you're not sure where you stand, I wrote about how to run an AI Readiness Audit that walks you through a full assessment.
How do I know if my AI adoption is performative?
Here are three questions you can ask Monday morning. Be honest with yourself.
1. Can you name the specific business outcome your AI tools have improved in the last 90 days?
Not "we're more efficient." Not "reps like it." I mean a number. Pipeline velocity. Win rate. Ramp time. If you can't connect your AI investment to a metric your leadership cares about, you're at Tourist level.
2. Is your AI connected to your internal data — or is it running on generic internet knowledge?
Here's what the Growth Unhinged report made clear: the top-performing GTM teams aren't using expensive, specialized AI platforms. They're using general-purpose LLMs like ChatGPT and Claude — but they're feeding them internal context. Their CRM data. Their call recordings. Their playbooks and win/loss analyses.
The difference isn't the tool. It's the data going into it.
If your team is using ChatGPT with zero internal context, you're getting generic outputs and wondering why AI "doesn't work." It works fine. You just haven't given it anything real to work with.
I wrote about this distinction in Claude Cowork Isn't Coming for Your Job — the skill isn't picking the right AI tool. It's knowing how to direct it with your own expertise and context.
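If it helps to see the difference rather than read about it, here's a minimal sketch of that contrast using the OpenAI Python SDK. The model name, file names, and prompt are illustrative assumptions, not a recommendation for any particular stack. The point is simply that the second call is grounded in your own documents and the first isn't.

```python
# Minimal sketch: same model, same question, with and without internal context.
# Model name and file names are illustrative assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

question = "Draft a discovery-call prep brief for Acme Corp."

# Performative use: the model only has generic internet knowledge to draw on.
generic = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)

# Operator-level use: the same model, now grounded in your playbook and call notes.
context = "\n\n".join(
    Path(p).read_text() for p in ["playbook.md", "acme_call_notes.md", "win_loss_q4.md"]
)
grounded = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer using only this internal context:\n" + context},
        {"role": "user", "content": question},
    ],
)

print(generic.choices[0].message.content)
print(grounded.choices[0].message.content)
```

The generic output reads like a blog post. The grounded one reads like it came from someone who knows the account. Same tool, different inputs.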
3. Could you turn off your AI tools tomorrow and would anyone notice?
This one stings. But if the answer is "probably not" — that's your signal. Genuine adoption creates dependency because it creates value. Performative adoption creates subscriptions.
So what should I actually do about it?
Stop chasing tools. Start building habits.
Here's the pattern I keep seeing from the teams that are actually getting results:
Pick one general-purpose LLM. ChatGPT or Claude. That's it. Don't start with a $50K/year specialized platform. Start with a tool that's flexible enough to grow with you.
Connect it to your internal data. Upload your playbooks, your call transcripts, your competitive intel. Create a shared project or workspace where your team's context lives inside the AI. This is the single highest-leverage move most teams skip.
Build one repeatable workflow. Not ten. One. Maybe it's pre-call research that pulls from your CRM data. Maybe it's coaching summaries after recorded calls. Maybe it's drafting enablement content from your product team's release notes. Pick the workflow that saves the most time and build it properly (a minimal sketch of what that can look like follows after this list).
Measure the outcome. Not adoption rates. Not seat utilization. The business metric that matters. Did ramp time decrease? Did win rates move? Did your content creation cycle get faster?
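For the "one repeatable workflow" step, here's a minimal sketch of pre-call research grounded in your own data. Everything specific in it is an assumption for illustration: the CSV columns, the file layout, the model name. The shape is what matters — a shared prompt template, your internal records, and one function the whole team can run.

```python
# Minimal sketch of one repeatable workflow: a call-prep brief built from a CRM
# export and the latest call transcript. File names, columns, and model name are
# assumptions; swap in whatever your team actually has.
import csv
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# A shared prompt template the whole team uses, versioned alongside your playbooks.
PROMPT_TEMPLATE = """You are prepping an account executive for a call.

Account record:
{record}

Most recent call transcript:
{transcript}

Return: 3 open risks, 2 talking points tied to our value props, 1 suggested next step."""


def pre_call_brief(account_name: str) -> str:
    """Build a call-prep brief from a CRM export row and the latest call transcript."""
    # Stand-in for a live CRM integration: a CSV export with an 'account' column.
    with open("crm_export.csv", newline="") as f:
        record = next(row for row in csv.DictReader(f) if row["account"] == account_name)

    transcript = Path(f"transcripts/{account_name}.txt").read_text()

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(record=record, transcript=transcript),
        }],
    )
    return response.choices[0].message.content


print(pre_call_brief("Acme Corp"))
```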
As I covered in The Hidden Cost of AI Adoption, there's a real risk that teams adopt AI at a surface level and actually lose critical thinking in the process. The goal isn't to use AI more. It's to use it well.

What's the real takeaway here?
The enablement profession doesn't have an AI adoption problem. We have an AI adoption quality problem.
Kvarfordt named it. The data confirmed it. And now you've got a Monday morning diagnostic to test it.
The winners in 2026 aren't going to be the teams with the most AI tools. They're going to be the teams with the best AI habits — the ones who connected simple, powerful tools to their actual work and measured what changed.
That's it. That's the whole thing.
Stop performing. Start practicing.
Until next time, my friends… ❤️, Enablement
FAQ
What does "performative AI adoption" mean in sales enablement?
Performative AI adoption means teams have purchased and signed up for AI tools, talk about AI in meetings, and may even have usage metrics — but haven't connected those tools to their actual workflows or internal data in ways that produce measurable business outcomes. Jonathan Kvarfordt, VP of GTM Strategy at Momentum, delivered that diagnosis in the 2026 Voice of the Market Report, identifying tool sprawl as the visible symptom of this surface-level adoption.
What percentage of GTM leaders are seeing real impact from AI tools?
According to the Growth Unhinged 2026 State of AI for B2B GTM report, 53% of GTM leaders report seeing little to no impact from their AI investments. That means fewer than half of leaders investing in AI can point to meaningful results — suggesting the majority of adoption remains surface-level.
Should I buy specialized AI tools for my enablement team or use general-purpose LLMs?
The data points toward starting with general-purpose LLMs like ChatGPT or Claude. The Growth Unhinged report found that top-performing GTM teams build effective AI workflows using these accessible tools connected to their internal company data — CRM records, call transcripts, playbooks — rather than investing in expensive specialized platforms. The differentiator isn't the tool itself but the quality of internal context fed into it.
How do I know if my team's AI adoption is genuine or just for show?
Run a three-question diagnostic: (1) Can you name a specific business metric your AI tools have improved in the last 90 days? (2) Is your AI connected to your internal company data, or running on generic internet knowledge? (3) If you turned off your AI tools tomorrow, would anyone on your team notice? If you can't answer these confidently, your adoption likely needs to move from performative to operational.
What is the AI Adoption Maturity Ladder?
The AI Adoption Maturity Ladder is a four-level framework for diagnosing where your team's AI usage actually sits: Tourist (signed up, nothing changed), User (individuals using AI for isolated tasks), Operator (AI woven into repeatable team workflows connected to internal data), and Architect (AI informs strategy and personalizes coaching, content, and programs). Most teams believe they're at Level 2 or 3, but are actually at Level 1.


