Anthropic dropped something this week. Claude Cowork — a new feature in their desktop app that lets you point an AI at a folder on your computer and tell it to work.

Not chat. Work.

You say "take these competitive intel files and build me a battle card using our template." Then you walk away. Claude breaks the task into pieces, works through them in parallel, checks its own output, and asks you questions if it gets stuck. Twenty minutes later, there's a battle card in your folder.

That's different from asking ChatGPT to "help me write a battle card" and then going back and forth for an hour pasting content into a doc. Cowork acts. It creates files, reads files, edits files, organizes files. You brief it like you'd brief a junior team member, and it executes.

But here's where it gets risky: when Claude synthesizes those intel files on its own, is it drawing the right conclusions from them, or is it conflating correlation with causation and baking that into your battle card?

For enablement professionals, this should feel like a gut punch and an opportunity at the same time.

What This Means for You

This content is free, but you must be subscribed to Love, Enablement to continue reading.