Anthropic dropped something this week. Claude Cowork — a new feature in their desktop app that lets you point an AI at a folder on your computer and tell it to work.
Not chat. Work.
You say "take these competitive intel files and build me a battle card using our template." Then you walk away. Claude breaks the task into pieces, works through them in parallel, checks its own output, and asks you questions if it gets stuck. Twenty minutes later, there's a battle card in your folder.
That's different from asking ChatGPT to "help me write a battle card" and then going back and forth for an hour pasting content into a doc. Cowork acts. It creates files, reads files, edits files, organizes files. You brief it like you'd brief a junior team member, and it executes.
For enablement professionals, this should feel like a gut punch and an opportunity at the same time.
What This Means for You
I've spent the last 5+ years in enablement, and I've watched the profession wrestle with how to prove its value. We talk about strategic partnerships and behavior change, but the reality is that most enablement professionals spend a huge chunk of their time on execution work. Formatting slide decks. Pulling competitive intel from scattered docs. Organizing content libraries that no one asked us to organize but desperately need organizing.
That work just got automated. Not in a "someday AI will..." way. In a "here's the feature, it costs $100 a month, it works on your Mac right now" way.
So what's left?
This is the question I want you to sit with. Because the answer separates enablers who will thrive from those who will struggle.
The Three Skills Cowork Can't Do
I've been thinking about what actually makes enablement professionals valuable — not what we put on our LinkedIn profiles, but what actually moves the needle. And I keep coming back to three capabilities that AI can execute around but can't replace.
1. Taste
Taste is knowing what "good" looks like before you see it.
Cowork can assemble a battle card from your competitive intel. It'll format it correctly. It'll pull the key points. It might even organize the objection handling section in a logical flow.
What it can't tell you: Will this actually help a rep in a live deal?
That question requires taste. You need to know what reps struggle with in competitive situations. You need to understand which objections actually come up versus which ones are theoretical. You need to recognize when something looks polished but misses the point.
I see this constantly with training content. Slides that look professional. Scripts that sound reasonable. Learning modules that check every box. And none of it changes behavior because the person who built it didn't know what "good" actually looks like for this specific audience, this specific skill gap, this specific sales motion.
From my instructional design background, I can tell you — Malcolm Knowles figured this out decades ago. Adults don't learn by absorbing content. They learn by connecting new information to their existing experience. If you don't know that, you'll accept AI output that looks like training but functions like theater.
Taste is knowing the difference.
2. Judgment
Judgment is knowing which intervention for which problem — and when to override the AI.
Let's say you ask Cowork to analyze training completion data against quota attainment and build an impact presentation for your CSO. It'll do it. It'll pull correlations. It might even surface some insights.
But here's where it gets dangerous: Is the AI drawing the right causal conclusions, or is it conflating correlation with causation? Are these the metrics your leadership actually cares about, or the metrics that were easy to correlate?
The AI doesn't know your CSO. It doesn't know that she's skeptical of training metrics because the last enablement leader showed her completion rates for two years while the team missed quota. It doesn't know that the only number she trusts is time-to-first-deal.
That's judgment. Knowing when to trust the output and when to override it. Knowing what questions to ask the AI before you accept its answers.
I've written about this before — stop asking other enablement people what KPIs to use. Ask your leadership. The same applies here. Stop accepting AI outputs because they look professional. Ask whether they answer the question your stakeholders actually care about.
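To make the correlation trap concrete, here's a toy simulation. Everything in it is invented for illustration — the variable names, the numbers, the relationships — but it shows the pattern an AI-built impact deck can hide: a confounder (here, rep tenure) drives both training completion and quota attainment, so the two correlate strongly even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Hypothetical confounder: rep tenure (in years) drives both metrics.
tenure = rng.uniform(0, 5, n)

# Completion and attainment both rise with tenure, plus independent noise.
# By construction, neither one causes the other.
completion = 0.4 + 0.10 * tenure + rng.normal(0, 0.05, n)
attainment = 0.5 + 0.08 * tenure + rng.normal(0, 0.05, n)

r = np.corrcoef(completion, attainment)[0, 1]
print(f"raw correlation: {r:.2f}")  # strong positive, despite no causal link

# Controlling for tenure (partial correlation via residuals)
# makes the apparent relationship mostly vanish.
res_c = completion - np.poly1d(np.polyfit(tenure, completion, 1))(tenure)
res_a = attainment - np.poly1d(np.polyfit(tenure, attainment, 1))(tenure)
r_partial = np.corrcoef(res_c, res_a)[0, 1]
print(f"after controlling for tenure: {r_partial:.2f}")  # near zero
```

An AI will happily report that first number. Asking "what else could explain this?" is the part that's still on you.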
3. Communication Precision
Here's the uncomfortable one: Cowork is only as good as your brief.
If you tell it "create a training on discovery skills," you'll get something generic. If you tell it "create practice scenarios where reps handle the pricing objection from procurement stakeholders in our mid-market segment, using our MEDDIC framework, with specific focus on identifying economic buyers," you'll get something useful.
The difference isn't the AI. The difference is you.
This is where the years of experience show up — or don't. Do you have a mental model of what good discovery looks like? Can you articulate it clearly enough that another entity (human or AI) could execute against it? Do you have a content taxonomy that makes sense, or are you asking AI to organize chaos into prettier chaos?
I've worked with enablement teams who have frameworks for everything. They can brief a contractor, a new hire, or an AI with equal precision because they've done the hard work of defining what "good" means. And I've worked with teams who wing it — and their AI output shows it.
What This Actually Looks Like
Let me get specific about how this plays out in real enablement work.
Onboarding Curriculum Development
Before Cowork: Weeks of SME interviews, synthesizing notes, building slides, creating facilitator guides, writing assessments.
With Cowork: "Take these SME interview transcripts and create a Week 2 facilitator guide focused on discovery skills. Include practice scenarios."
The skill shift: Cowork can assemble the guide. But is this guide actually going to produce behavior change, or is it content theater? Which of the suggested practice scenarios will land with your reps, and which will fall flat? Did you brief the learning objectives clearly enough that Claude understood what "good discovery" looks like in your sales motion?
The enabler who understands adult learning theory — who knows that adults learn by doing, not by watching, who knows that practice needs to connect to real scenarios they'll face — that person iterates quickly and produces something useful. The person who's been winging it gets mediocre output and can't figure out why.
Content Audits
Before Cowork: Quarterly audits take 2-3 weeks. Manual tagging. Version control nightmares. Nobody finishes.
With Cowork: "Audit this content library. Organize by buyer journey stage. Flag anything older than 6 months. Identify gaps in early-funnel content."
The skill shift: Claude can organize files. But do you have a coherent content taxonomy? Can you articulate what "early-funnel" means for your buyers? Do you know what a gap actually looks like versus what's working fine?
The person with a framework gets a clean, usable library. The person without one gets folders renamed arbitrarily.
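What "having a framework" means in practice: the stage definitions and staleness rules exist before anyone — human or AI — touches the library. Here's a minimal sketch of that idea; the stage names, folder convention, and six-month threshold are all assumptions you'd replace with your own taxonomy.

```python
from datetime import datetime, timedelta
from pathlib import Path

# Hypothetical taxonomy: buyer-journey stages you'd define up front.
# Without definitions like these, any audit is guesswork.
STAGES = ["awareness", "consideration", "decision", "onboarding"]
STALE_AFTER = timedelta(days=180)  # the "older than 6 months" rule

def audit(library: Path) -> dict:
    """Group files by stage (inferred from folder names) and flag stale ones."""
    report = {stage: {"files": [], "stale": []} for stage in STAGES}
    now = datetime.now()
    for path in library.rglob("*"):
        if not path.is_file():
            continue
        # Assumption: content lives in folders named after a stage.
        stage = next((s for s in STAGES if s in path.parts), None)
        if stage is None:
            continue  # untagged content is itself a gap worth flagging
        report[stage]["files"].append(path.name)
        age = now - datetime.fromtimestamp(path.stat().st_mtime)
        if age > STALE_AFTER:
            report[stage]["stale"].append(path.name)
    return report
```

The point isn't that you'd write this script instead of briefing Cowork. It's that the brief only works if the definitions it relies on — stages, staleness, what counts as a gap — already exist somewhere other than your head.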
Coaching Prep
Before Cowork: Review call recordings. Take notes. Prepare coaching feedback. Repeat for every rep.
With Cowork: "Review these 5 call transcripts for this rep. Identify patterns in their discovery questioning. Create a coaching one-pager with 3 improvement areas and practice scenarios."
The skill shift: Claude can surface patterns. But is the pattern actually the root cause of underperformance? Are the practice scenarios going to land with this specific rep? Do you have the coaching skill to deliver this feedback effectively?
The tool does the analysis. You bring the judgment.
The Shield Is Gone
Some enablement professionals have been hiding behind execution. "I'm swamped" was the shield. It explained why the battle card took a week. Why the onboarding curriculum wasn't ready. Why the content audit never got finished.
That shield is gone now.
When execution gets 10x faster, what's left is whether you actually know what you're doing. Whether you have taste, judgment, and the communication precision to brief effectively.
This isn't about AI taking jobs. It's about AI revealing which parts of the job were actually valuable — and which parts we were doing because we didn't know better, or because they kept us busy enough to avoid harder questions.
The enablement professionals I respect most have always operated this way. They spend their time on strategy, on understanding the business, on developing frameworks that scale. The execution was never the point — it was the means to an end.
Cowork just made that explicit.
What To Do About It
If you're an individual contributor:
After your next piece of content, document why you made specific choices. What makes this battle card useful? What would make it fail? Build your taste by making it conscious.
Learn adult learning theory. Actually learn it. Knowles' five assumptions about how adults learn will make you better at briefing AI — and better at recognizing when AI output misses the point.
Build your frameworks before you need them. Create a content taxonomy. Define what your buyer journey stages mean. This is the scaffolding that makes AI output useful.
If you're a leader:
Start hiring for taste, not execution speed. The ability to create content fast matters less than the ability to recognize quality.
Train your team on adult learning principles. If they don't understand behavior change, their AI output will be content theater — polished, professional, and useless.
So my question to you is this...
If an AI could do 80% of your execution work tomorrow, what's the 20% that makes you valuable? If you can't answer that clearly, you've got work to do. And for once, that work isn't building another slide deck.
Hit reply and tell me — what's the skill you bring that AI can't replicate?
Until next time, my friends...
❤️, Enablement

