I officially start as Director of AI Enablement on January 1st. But I've been working on this since October. And I still have nothing tangible to point to.


Here's the honest truth: the role is at an MSP, it doesn't officially start until January 1st, 2026, and I've been in "getting ready" mode since October—strategic planning, refining decks, figuring out how to actually deliver value to SMB clients.

I spend hours every week with Claude, Gemini, ChatGPT. I know I'm doing real work. Building frameworks. Making better decisions. Getting sharper.

But when I step back and ask, "What do I have to show for it?" the answer is: nothing I can point to.

No published content. No client-facing tools. No reusable assets. Just a bunch of conversations that felt important in the moment but disappeared when I closed the tab.

This is the AI worker's paradox: constant activity, invisible output.


The Comparison Trap

I watch people on YouTube building entire systems with Claude Code. I read Substacks from people who've shipped 50 experiments. I see developers, strategists, creators—all producing tangible artifacts from their AI work.

And I think: "Why don't I have that?"

But here's what I'm realizing: They're not smarter or working harder. They're just capturing what they're doing.

They have a system that turns sessions into artifacts. I've just been having conversations.


Why I'm Still Figuring It Out

I'm in the runway phase before my role officially starts. I'm still finding my footing—what services to offer, how to structure engagements, which tools actually work for SMB clients versus which ones are just hype.

I've bounced between ChatGPT, Gemini, and Claude trying to figure out what works with me (not just "for" me—there's a difference). Right now it's Claude and Gemini 3. Not because of some technical benchmark, but because they fit how my brain works.

This experimentation phase is necessary. But it's also why I have nothing to show.

I've been optimizing for learning, not for capture.


The Forcing Function

So I'm changing the system. Starting today.

Every AI session has to produce one of these:

  1. A decision: Documented with reasoning (even if I change my mind later)
  2. A learning: Written up as a blog post (even if it's rough)
  3. A reusable template: Saved and versioned (even if it's V0.1)
  4. Nothing: That's allowed—but I have to acknowledge it

The rule: If it's a blog post, I publish it within 48 hours. No perfectionism. No "I'll polish it later."
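
If I wanted to make the capture mechanical, it could be as simple as a few lines of Python that drop a dated note with the four outcomes at the top. This is just a sketch, not a real tool; the folder name and fields are placeholders.

```python
# session_log.py -- a rough sketch, not actual tooling; paths and fields are placeholders.
# Creates a dated capture note so every AI session ends with one of the four outcomes.
from datetime import date
from pathlib import Path

OUTCOMES = ["decision", "learning", "template", "nothing"]

TEMPLATE = """# Session log — {day}
Outcome ({options}):
What I'll point to later:
If "nothing": why, acknowledged:
"""

def new_session_note(folder: str = "sessions") -> Path:
    """Create today's capture note if it doesn't exist yet, and return its path."""
    notes = Path(folder)
    notes.mkdir(exist_ok=True)
    note = notes / f"{date.today().isoformat()}.md"
    if not note.exists():
        note.write_text(TEMPLATE.format(day=date.today(), options=" / ".join(OUTCOMES)))
    return note

if __name__ == "__main__":
    print(new_session_note())
```

The point isn't the script. It's that the capture step happens before the session, not after, when the tab is already closed.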

This post is the first test. It's uncomfortable. It's exposing. It's admitting I don't have it all figured out yet.

But that's the point.