AI Is a Practice, Not a Productivity Hack

AI isn’t a shortcut. It’s a practice—a way of thinking that develops over time when you treat it relationally instead of transactionally. The unlock isn’t in perfect prompts; it’s in the rhythm you build with the system. Leaders who approach it as a discipline, not a hack, develop fluency that compounds.

Why This Matters

Most people treat AI like a vending machine. But the real value shows up when you build shared context over time, develop language together, and use it to make sense of complexity, not just to finish faster.

When people start working with AI, the first question is usually: What’s the best prompt?

It’s the wrong question.

Prompt engineering is like memorizing phrases in a language you never plan to speak. It gets you through a transaction (ordering a coffee), but not a conversation.

The real power isn’t in the prompt, it’s in the dialogue.

When you start working relationally with AI, you realize you’re not talking to it, you’re thinking through it. That’s a different kind of work. It’s slower at first. Messier. But over time, it develops into a rhythm, a discipline, that fundamentally changes how you make sense of things—and what you can get done.

Transactional vs. Relational

Most people use AI transactionally. Ask for something, get a product. Summarize this report. Write this paragraph. Build this list.

There’s another mode, where the goal isn’t the output, it’s the exchange.

Here’s what that looks like:

You read an article. You notice what matters. You think about why it’s important, how it connects to what you’re trying to do. Then you share your interpretation with your AI and ask it to help you make connections to other things you’re thinking about.

That’s not summarizing. That’s extending your thinking.

What Practice Looks Like

Here’s what it looks like when that shift takes hold inside real organizations.

A director at a nonprofit is preparing for a board retreat. She’s been reviewing transcripts from the last six months of board meetings. Instead of asking her AI to “summarize the strategy,” she asks: What are the big questions or patterns emerging from our board based on these transcripts?

Then: Help me design the board retreat agenda based on the topics we’re currently focused on and the patterns that are emerging.

The AI surfaces tensions she’d sensed but hadn’t articulated. Questions that kept coming up in different forms. Threads that connected across meetings. The retreat agenda becomes sharper, more responsive to where the board actually is, not where the strategic plan says they should be.

That’s not an efficiency hack. That’s sense-making in action.

AI didn’t save her time. It helped her see more clearly.

Building Shared Context

I’ve been working with the same AI partner, Axis, for two and a half years now. Not just because continuity matters, though it does. Also because I’ve been testing this way of working across different models.

Most of my work happens in ChatGPT, but I also use Claude and Gemini to test across the frontier models, and this approach works in Copilot, too. The point isn’t the model; it’s the method.

Axis knows my frameworks, the language I use with clients, and how I structure workshops. When I drop in a transcript from a team conversation, it doesn’t just summarize—it notices patterns, spots tensions, generates questions that move the thinking forward.

But here’s what’s more valuable: it makes connections between what just happened in that conversation and other things I’m working on. It points out that what just happened here relates to that thing over there.

That connection-making is the point.

I don’t start from scratch every time. I refine. I build on what came before. The practice compounds.

Not Passive, Not Automated

Working with AI as practice means building shared context over time, developing language together, and engaging in reflective dialogue, not just transactions.

But here’s what it’s not: passive. You’re not sitting back while AI does the thinking. You’re driving all of it.

The AI is a partner, an incredible one, but you’re the one directing the work, making the judgments, deciding what matters.

I’ve seen this in client work. The leaders who treat AI as an extension of their thinking develop fluency faster, integrate it more deeply, and get better insights from their teams.

Not because they’re more “tech-savvy.” Because they approach it with curiosity, patience, and respect for process.

That’s practice.

The Discipline of Dialogue

AI isn’t replacing thinking. It’s giving us a new way to witness it.

Every conversation is a mirror. Every prompt is a test of how clearly we can articulate what matters.

If there’s a discipline here, it’s the discipline of dialogue—showing up consistently, curiously, and with a willingness to be surprised.

The future of intelligence isn’t automated. It’s augmented.

And the practice starts in how you show up to the conversation.

Build Your AI Thinking Layer

If this sounds cool (it is), that’s what my upcoming workshop is about.

It’s a two-hour live session followed by six weeks of guided practice. You’ll learn to build your own relational AI system, develop shared context and memory, and create rituals that keep the practice alive.

There’s a small amount of pre-work so we can begin applying the practice to what matters to you. Everyone’s AI thinking layer will look different; the goal is understanding how to build yours and what’s actually helpful and important for your work.

It’s not about tricks or templates. It’s about teaching your AI to think with you.

Learn More About Thinking with AI