Leadership · February 1, 2026 · 8 min read

The AI Experiments Every Executive Should Be Running Right Now


I want to tell you about two CEOs I spoke with last month. Same industry, similar-sized companies, comparable resources.

The first one spent thirty minutes showing me a tool she had built. Nothing fancy — a dashboard that pulled her company's support tickets, categorized them by root cause using AI, and surfaced patterns that her team had been missing for months. She had built it herself in an evening, using Claude, after getting frustrated that the data existed but nobody was looking at it the right way.

She did not build it because she wanted to become a developer. She built it because she saw a problem, realized she could solve it without waiting for anyone, and decided to try. The insight it produced — that 40 percent of their support volume came from a single onboarding step — led to a product change that reduced ticket volume by a third within six weeks.
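For readers curious about the mechanics, a tool like hers is less exotic than it sounds. Here is a minimal sketch of the pattern-surfacing half, with hypothetical ticket data and category labels. In her version, an AI call assigned each ticket's root-cause label from the raw ticket text; the labels are hard-coded here so the sketch runs on its own.

```python
from collections import Counter

# Hypothetical tickets. In practice these would come from a support-system
# export, and the root_cause label would be assigned by an AI categorizer.
tickets = [
    {"id": 1, "root_cause": "onboarding: account verification"},
    {"id": 2, "root_cause": "billing: invoice format"},
    {"id": 3, "root_cause": "onboarding: account verification"},
    {"id": 4, "root_cause": "onboarding: account verification"},
    {"id": 5, "root_cause": "bug: export timeout"},
]

def surface_patterns(tickets, top_n=3):
    """Rank root causes by their share of total ticket volume."""
    counts = Counter(t["root_cause"] for t in tickets)
    total = len(tickets)
    return [(cause, count / total) for cause, count in counts.most_common(top_n)]

for cause, share in surface_patterns(tickets):
    print(f"{share:.0%}  {cause}")
```

The hard part was never the code; it was someone deciding the question was worth asking. The aggregation is a dozen lines, and the categorization step is a prompt.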

The second CEO had a polished AI strategy deck. Fifteen slides. Vendor evaluations. A roadmap. A timeline for a company-wide rollout in Q3. He asked thoughtful questions about governance and data security. He was being responsible.

He was also twelve months behind.

Not twelve months behind on a technology adoption timeline. Twelve months behind on developing the judgment to know what AI can actually do for his business. And that kind of gap does not close with a strategy deck.

This Wave Breaks the Pattern

Every executive reading this has navigated a technology shift before. The internet. Mobile. Cloud. Each one demanded adaptation — new channels, new infrastructure, new ways of reaching customers and managing operations.

But notice the pattern. In every one of those waves, the job of a non-technical leader was the same: understand what the technology can do, then direct others to build it. Your value was in the vision and the decision-making. The building was someone else's job.

This wave breaks that pattern.

For the first time, a COO can prototype the internal dashboard she has been requesting for six months. A CMO can build the content workflow he sketched on a whiteboard last quarter. A founder can turn the onboarding process she has been documenting in a Google Doc into a working tool — not in six weeks, but in an afternoon.

Not because these leaders have become engineers. Because the distance between "I want something that does this" and "here is something that does this" collapsed so dramatically that for a growing category of work, directing and doing are now the same activity.

The skill that matters is not coding. It is clarity — knowing what you want, communicating it precisely, and iterating until you get there. Every experienced executive already has this. They just have not applied it to building yet.

What "Getting Invested" Actually Looks Like

I need to be specific here, because "executives should use AI" has become background noise. Everyone says it. Almost nobody explains what it means in concrete, operational terms.

When I say get invested, I mean three things — and the order matters.

Pick a real problem, not a demo. Not "let me ask ChatGPT a question and see what happens." Find something you actually need solved. A weekly report that takes four hours to compile. A process your team has been begging to automate. A communication you have been putting off because it requires synthesizing information from six different sources. Use AI to make genuine progress on it. The learning comes from the struggle of applying the tool to a real constraint, not from a sandbox.

Do it yourself. Personally. Not through an assistant. Not through a pilot team. Sit with the tool. Feel the friction of your first bad prompt. Notice where the output is surprisingly good and where it misses completely. Develop your own sense of when to trust it and when to push back. This judgment cannot be delegated. It cannot be absorbed from a briefing. It can only be earned through direct experience — the same way you earned your judgment about people, markets, and operations.

Talk about what you find. Openly. This is the multiplier that most leaders skip. Teams take their cues from senior leadership. When an executive experiments with AI and shares what they learned — the wins and the failures — it gives the entire organization permission to do the same. When AI gets handed entirely to IT as a "managed rollout," a ceiling forms. The people closest to the problems never get the chance to discover that they could solve them.

The organizations moving fastest are not the ones with the best AI strategy decks. They are the ones where curious leaders are running small experiments, talking about the results, and building a culture where the question is not "are we allowed to try this?" but "what should we try next?"

The Advantage That Cannot Be Purchased

I spent twelve years in the Army, much of it in aviation — an environment where the quality of your decisions under pressure is the only thing that matters. One of the clearest lessons of that career: the organizations that win are rarely the ones with the most resources. They are the ones that develop better judgment faster.

Not better information. Better judgment. The ability to look at a situation, pattern-match against experience, and make a sound decision when the data is incomplete and the clock is running. That ability comes from one place: repetition. Doing the thing, reflecting on what happened, and doing it again slightly better.

The same dynamic is playing out right now with AI — and it should make every waiting executive uncomfortable.

The leaders who are experimenting are not just getting more efficient. They are developing intuition. They are learning which problems AI handles well and which ones require human judgment. They are building a mental model for how to frame a question, how to evaluate an output, how to iterate toward something genuinely useful. They are learning when to trust the tool and when to override it.

That intuition takes time to build. It compounds with every experiment. And here is the part that matters most: it cannot be acquired secondhand. You cannot read your way to it. You cannot hire your way to it. You cannot buy a platform that gives it to you. You have to earn it, the same way you earned every other form of professional judgment you possess.

The gap between those experimenting and those waiting is not a knowledge gap. It is an experience gap. And experience only closes one way — by doing the work.

The Real Cost of Waiting

In previous technology cycles, waiting was a defensible strategy. Let the early adopters work out the bugs. Let the vendors build mature solutions. Let the market settle, then adopt the winning platform. For the internet, for cloud computing, for mobile — this worked.

It will not work this time. And the reason is subtle but critical.

The advantage being built by early experimenters is not technological. It is cognitive. The CEO who built that support ticket dashboard did not just find an insight her team missed. She developed a new way of thinking about her business data. She learned that she could go from question to answer in an evening instead of a quarter. She started asking different questions — bigger ones, more frequent ones, ones she would never have bothered asking when each answer required a three-week analytics project.

That shift in thinking is the real advantage. And it produces a second-order effect that accelerates the gap: organizations where leaders think this way start building AI-native workflows. AI-assisted decision-making. AI-accelerated execution. Each one creates a structural advantage that makes the next one easier to build. The gap does not just persist. It widens.

Meanwhile, the organizations that are waiting are not standing still. They are falling behind at an accelerating rate, because the target they will eventually need to catch is moving faster than they are.

Your First Hour

Here is what I would ask you to do this week. Not this quarter. This week.

Block one hour. Pick one real problem in your business — something you have been meaning to address, something that has been sitting in the back of your mind. Open Claude or another AI tool and just start talking to it about the problem.

You will be frustrated at first. Your first prompts will produce vague, generic output. That is not the tool failing — that is the starting line. Push through it. Tell the tool what is wrong with its response. Give it context. Be specific about what good looks like. Show it examples.

After sixty minutes, you will have learned more about what AI can and cannot do for your work than any strategy presentation, analyst report, or conference keynote could teach you. Not because the information is hard to find. Because the understanding only comes from doing.

And here is what I have noticed about that first hour: it does not end at sixty minutes. Something clicks. You start seeing problems differently. You start asking "what if I just tried..." about things you had mentally filed under "someday." That shift — from someday to today — is the most important thing happening in business right now. And it is available to anyone willing to sit down and start.

If you want to compare notes on what you find — or explore what building AI fluency could look like across your organization — reach out through the contact page. That is a conversation I always look forward to. And if something here sparked a thought or a question, drop it in the comments below. I read every single one, and some of the best conversations I have had have started there.
