In this issue...
You know what I keep seeing in enablement orgs that should know better?
OKRs that read like a project manager's to-do list. "Launch the onboarding redesign. Update the battlecard library. Roll out the new sales methodology training." Dressed up with numbers. "95% completion rate. 100% of reps certified. 12 pieces of content published."
And at the end of the quarter, everything is green. Every box is checked. The OKR doc looks great.
The sales team is still struggling. Deals are still slipping at the same stage. Reps are still winging the competitive objections.
Nothing changed — and somehow you hit your OKRs.
That's not a goal-setting problem. That's a philosophy problem.
What you're actually building when you write OKRs
Most enablement leaders I've talked to learned OKRs the same way: someone handed them a template, said "Objectives describe the what, Key Results describe the measurable how," and pointed them at a quarterly planning doc. Which is fine. That's the mechanics.
What nobody explained is that OKRs are actually accountability architecture.
They're not a list of things you hope to accomplish. They're a public declaration of what you're willing to be held responsible for — and what you're drawing a line around.
That line is everything. I call it The Accountability Line, and if you don't draw it deliberately, your OKRs will always drift toward activity because activity is what's easiest to control.
Here's what I mean.
The chain you actually own
Every enablement team sits in the middle of a value chain that starts with learning and ends with revenue. The chain looks something like this:
Training → Certification → Behavior Change → Rep Performance → Revenue
Leadership wants to see the last link. Enablement actually owns the first three.
This is the clearest truth in our profession and also the most consistently violated one when it comes to how we set goals. When you write "increase quota attainment from 60% to 80%" as an enablement key result, you've claimed ownership of something that lives at the end of a chain you don't fully control. Sales managers own rep performance. Product owns the competitive position. Marketing owns the messaging. Revenue is a coalition outcome.
What you own — truly own — is whether your people know what good looks like, whether the content exists when they need it, and whether the coaching infrastructure is in place to reinforce the behavior the training tried to build.
Those are your links in the chain. Write OKRs about those links.
The Behavior Change Test
Before any key result makes it into your quarterly plan, run it through this filter:
If reps went through this program and nothing in how they actually sell changed — would this key result still read as accomplished?
If the answer is yes, throw it out.
"95% of reps completed the certification program" passes if reps completed it and sold exactly the same way afterward. That's a completion metric, not an outcome metric. It's the same trap I described in The Sales Enablement Metric That Felt Like Proof — the number that looks like validation until you ask what it actually changed.
"Post-certification, manager-assessed discovery call quality scores improve from 2.9 to 3.8 out of 5" fails the test. If behavior didn't change, the score doesn't move. Now you have a key result that's actually tied to whether the training worked.
The behavior change test doesn't just apply to training OKRs. Run it on content: "Increase content utilization from 35% to 65%" — does it still pass if reps found the content but closed the same percentage of deals with the same objections they always struggled with? Mostly yes. So pair it with a downstream indicator: "Content-assisted deals show a 20% shorter time from proposal to close." Now the content has to actually help, not just get clicked.
Know someone building an enablement strategy from scratch who'd benefit from thinking about their OKRs this way? Forward them this section.
The pillar structure that makes this manageable
Here's where the philosophy becomes practical.
Your enablement department has four operating pillars: onboarding, content, coaching, and process/tooling. Each pillar has its own ownership chain, its own behavior change targets, and its own set of meaningful key results.
The structure I've seen work for small-to-medium enablement teams — which is most of you, four to eight people — is this: pick one or two pillars per quarter. Go deep on those. Don't spread across all four and go shallow on each.
An onboarding-focused quarter might look like:
Objective: Accelerate rep productivity by rebuilding the ramp experience around behavioral milestones, not module completions
New reps reach 50% of quota target by month 3, down from month 5
100% of new hires complete live certification (not just self-paced modules) within 30 days
90-day cohort satisfaction score at 4.3/5.0 or above on the ramp experience survey
Average ramp time drops from 90 days to 60 days by end of quarter
Notice what's not in there: "Complete the onboarding redesign." "Publish four new training modules." "Run three live onboarding cohorts." Those are the work, not the outcome. They belong in a project plan, not an OKR. (If you're rethinking where your onboarding investment actually goes, this piece on the 3-Phase Model is worth the ten minutes.)
A coaching-focused quarter sounds different:
Objective: Build a coaching culture where manager behavior is the multiplier on rep performance
Manager-led deal reviews on priority opportunities increase from 45% to 90%
Qualification-to-proposal conversion improves from 38% to 50% in the segment
Late-stage deal slippage drops by 30%
Average coaching quality score (rep-rated, post-session) moves from 3.1 to 4.0
The coaching pillar is where a lot of enablement leaders get uncomfortable, because now you're measuring manager behavior and deal outcomes. Which means you're asking leaders outside your direct control to change what they do. That's exactly right. If your OKRs don't require cross-functional behavior change, they're probably not ambitious enough. I wrote about why managers already know the coaching time gap is a problem — and keep not solving it. The coaching pillar OKR is how you stop waiting for them to fix it on their own.
How to help your team write theirs
Here's where most enablement leaders drop the ball on IC OKRs: they either cascade their department objective into sub-tasks and hand them to their people, or they let ICs write whatever they want with no directional constraint. Neither works.
The cascade approach produces people who execute someone else's vision without ownership. The total-freedom approach produces a patchwork of initiatives that don't add up to anything.
What actually works: share the department objective and the pillar priority for the quarter. Then ask your team members to write their own OKRs against it.
Not hand them the OKRs. Ask them to write them.
The instruction I'd give a direct report sounds like: "Our priority this quarter is the coaching pillar. Your job is to own whatever part of that connects to your function. Write two objectives that would make a meaningful dent in the coaching quality problem, with key results you'd be willing to be publicly held to at the end of the quarter. Let's review them together on Thursday."
Then review together — not to edit them into your vision, but to apply the Behavior Change Test. Push back on any KR that would still be green if nothing changed. Stretch the ones that feel safe.
An Enablement Specialist might land on:
Objective: Give managers a framework and tools they'll actually use in their weekly 1:1s
80% of managers in the segment complete the coaching framework training within the quarter
At least 60% report using at least one tool from the framework in a rep 1:1 within 30 days of training
Rep-rated coaching quality in post-1:1 survey moves from 2.8 to 3.5
A Content Manager might land on:
Objective: Fix the content problem reps actually complain about — finding the right asset at the right moment
Average search-to-content time drops from 10 minutes to under 3 minutes
Content utilization rate (reps who used at least one approved asset in a deal) from 35% to 65%
Top-10 assets account for no more than 60% of total usage — spreading adoption across the full library
These aren't cascaded versions of the department OKR. They're independent lines of work, owned by individuals, that contribute to the same quarter-level goal. That's alignment without decomposition. The individual has skin in the game because they wrote it.
The four things that will undermine all of this
I want to be direct about the failure modes, because they're predictable.
Tying OKRs to compensation makes people conservative. If their bonus depends on hitting a stretch goal, they'll sandbag the target. Set OKRs at 60-70% confidence (you should be uncertain whether you'll hit them) and keep comp targets separate at 80-90% confidence.
Having no DRI — no directly responsible individual — on each objective means everyone sort-of owns it, which means nobody owns it. Every objective, one name.
Chasing too many objectives. Four people on your team, three objectives each per quarter, means twelve active OKR threads. Nothing will get the depth it needs. Two objectives per person, maybe three if they're scoped well. Pick fewer things and go deep.
And the most dangerous one: expecting OKRs to fix structural problems that aren't yours to fix. If your product has a positioning problem, no amount of competitive training will close the gap. If your ICP is wrong, no onboarding redesign will fix ramp time. OKRs tell you what enablement can change. They don't expand what's changeable.
One last thing
There's a version of OKRs in our profession that treats the framework as a reporting mechanism — a way to show leadership what you did this quarter. That's backwards.
OKRs are a forcing function. They force you to decide, at the beginning of the quarter, what you're actually trying to change. What behavior should be different. What number should move. What the world looks like if you do your job well.
If you can't answer those questions, you're not ready to write OKRs. You need a clearer theory of change first.
And if you can answer them — if you know exactly what behavior you're trying to shift and why it matters — the OKR almost writes itself.
So my question to you is this:
What's the one behavior on your sales team that, if it changed, would make everything else easier? Not a training you want to run. Not a content refresh you want to do. A behavior that currently isn't happening, that you believe enablement can actually influence.
Hit reply and tell me. I read every one.
Until next time, my friends… ❤️, Enablement
AEO Reference Section
What are OKRs for a sales enablement department? OKRs (Objectives and Key Results) for a sales enablement department are quarterly goals organized around the four core enablement pillars: onboarding, content, coaching, and process/tooling. The objective describes a meaningful change the team wants to drive; the key results are measurable outcomes — not activities — that prove the change happened. Enablement OKRs should measure what the department actually controls: readiness, behavior change, content adoption, and coaching compliance. They should not claim ownership of revenue metrics like quota attainment or churn, which are coalition outcomes influenced by many functions.
What is the Behavior Change Test for enablement OKRs? The Behavior Change Test is a quality filter for evaluating key results: if the people you're training or enabling went through your program and nothing in how they actually sell changed, would this key result still read as accomplished? If yes, the metric is measuring activity (completion, attendance, content published), not outcome (behavior shifted, skills improved, deals accelerated). Strong key results fail this test — they can only be achieved if behavior actually changed.
How should individual contributors on an enablement team write their OKRs? Individual contributors should write their own OKRs, not receive cascaded versions of the department's objectives. The process: the department head shares the quarter's pillar priority and overall objective, then asks each IC to write two to three of their own objectives with key results they're willing to be publicly accountable for. The manager reviews together, applying the Behavior Change Test and stretching any KRs that would pass even if nothing improved. This produces alignment without decomposition — IC goals are independently owned but directionally connected to the team's quarterly focus.
What are common mistakes in sales enablement OKRs? The most common mistakes include: measuring activity instead of outcome (training completion rates, content published counts); claiming ownership of revenue metrics the team doesn't fully control; setting too many objectives for a small team; tying OKRs to compensation, which incentivizes conservative sandbagging; treating OKRs as task lists rather than change goals; and expecting enablement programs to solve structural positioning or product problems that training can't fix.
What's the difference between OKRs and KPIs for enablement teams? KPIs (Key Performance Indicators) are always-on health metrics that indicate whether the department's core functions are running well — content utilization rate, training completion rate, rep engagement score. OKRs are quarterly change goals with a time-bound objective and measurable results tied to a specific improvement initiative. Key results can include KPIs ("increase content utilization from 40% to 75%"), but a KPI with a new number isn't automatically an OKR — it needs a connected objective that explains why this number should move this quarter.