11 May 2026

AI Perspectives · Part III

Part III: Your operating model is defined by the loops you close

Most leaders are focussing on tools and platforms. The ones pulling ahead are closing loops.

Look around. AI strategy decks, platform pilots, agents joining meetings on someone's behalf. A lot of it is useful. None of it changes the thing leadership is actually for, which is judgement under uncertainty, with incomplete information, in real time. Here's the shape that does.

Figure 1. Judgement loop: five turns across User-led, Shared, and AI-led lanes, with a feedback arrow closing the loop.

Three properties: closed, human in the middle, designed turns.

Closed. The output of each turn feeds the next, and the last turn feeds back to the first.

Human in the middle, never at the end. Pipelines put a human at the finish line taste-testing whatever the machine produced. That's a workflow, not a loop. Loops improve.

Designed turns. Every handoff is intentional, and someone is responsible for each one: user, shared, or AI.

The conversation that hasn't happened yet

Boardrooms are two steps short of this. The current question is how do we add AI to our stuff — AI as initiative, scoped and funded like any other program. Useful work. Not the question that matters.

The one almost nobody is asking is how leadership itself changes when AI is in the room. Executives are watching their own transformation from the outside. The momentum is coming from below — contributors and delivery leaders building personal AI systems, AI quietly absorbing the routine work between meetings, the shape of capable people changing in plain sight.

Leadership teams are spectators to their own transformation.

The next question — what does it actually look like to lead this way? — is where this piece lives. Not the strategy slide. The Tuesday-morning work.

Start with one loop you already own

Every leader runs several. The weekly planning rhythm. The review cycle that turns last fortnight's signal into next fortnight's bets. The decision a board paper rolls toward. The sense-making conversation that decides whether something is worth your attention at all. Each one is held together by habit, calendar discipline, and the documents someone handed you. Some are running well. Some leak — you arrive at the decision moment with the wrong inputs, or the right inputs at the wrong altitude, or the inputs were never produced.

Pick the one that leaks the most. Treat it like a designed system, and ask:

  1. Who leads each turn — you, the team, an AI agent?
  2. Where are the gates — the moments where a human has to look and judge?
  3. What feeds back — what part of the output becomes input to the next cycle, so the loop sharpens instead of resetting?
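The three questions above can be sketched as a tiny data structure. This is a hypothetical model (the `Loop` and `Turn` names are illustrative, not any real framework) in which a loop "leaks" if it has no human gate or nothing feeds back:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    name: str
    owner: str          # "user", "shared", or "ai": who leads this turn
    gate: bool = False  # does a human have to look and judge here?

@dataclass
class Loop:
    turns: list[Turn]
    feedback: str  # which part of the output becomes next cycle's input

    def leaks(self) -> list[str]:
        """A loop leaks if no human gate exists, or nothing feeds back."""
        problems = []
        if not any(t.gate for t in self.turns):
            problems.append("no human gate: workflow, not a loop")
        if not self.feedback:
            problems.append("nothing feeds back: each cycle resets")
        return problems

review = Loop(
    turns=[
        Turn("set questions", owner="user", gate=True),
        Turn("pull data, draft", owner="ai"),
        Turn("annotate draft", owner="shared", gate=True),
        Turn("synthesise pre-read", owner="ai"),
        Turn("decide", owner="user", gate=True),
    ],
    feedback="this cycle's decision seeds next cycle's questions",
)
print(review.leaks())  # → []
```

The point of the sketch is the `leaks` check: a pipeline with a human only at the finish line, or one where each cycle starts fresh, fails it.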

Take a typical monthly review cycle. Here's the shape from Figure 1, compared with how the same cycle typically runs today:

| Typical human workflow | Agentic loop |
| --- | --- |
| You arrive at the review with whatever analysts assembled; the questions are implicit | Turn 1. You set what questions matter this cycle |
| Analysts pull source data into slides the night before | Turn 2. AI agent pulls source data on schedule and produces a structured first draft |
| You scan slides cold in the morning | Turn 3. Analysts annotate the draft — domain judgement layered on AI consistency |
| Half the meeting re-explains inputs; the decision gets what's left | Turn 4. AI synthesises a pre-read; anomalies surfaced |
| Decision made and filed; next cycle starts fresh | Turn 5. You decide; the call feeds next cycle's questions |
| Nothing happens until the day before the next meeting | Turn 6. Analyse inputs, outputs, and edits; fold the learning back into the agentic workflow before the next run |

Same cadence, different shape. The decision moment gets what it actually needs.

That's the work. Not adopting a tool. Not running a pilot. Design one loop you already own so it actually closes.

Beware taller silos

One loop is enough to start. It isn't enough to finish.

Despite best intentions, leadership often fractures into silos. The AI moment doesn't fix that — it amplifies it. Each leader now runs a powerful loop inside their own function. The CEO with their strategy synthesis. The CFO with their modelling stack. The CMO mining deep customer insights. The COO with the dashboard nobody else looks at. Each more capable than ever. Yet none of those loops talk to each other.

Part I named a related trap in product delivery: "Role-blending is a superpower for individuals and a trap at scale." For strategy and operations, the trap isn't role-blending — leaders aren't doing each other's jobs. It's silos getting taller: capable leaders driving teams that are going further and faster on their own. It's not intentional. It's a consequence of speed. If your all-teams review cadence is monthly — or even fortnightly — is that often enough, given the distance teams can now travel in that time? Role-blending and taller silos are both warning shots for a harder question coming down the road:

What does collaboration look like when AI hands every leader, team, and individual their own loops?

Marketing's loop will not look like Legal's. Engineering's will not look like Product's. The form follows the work — a campaign hypothesis is not a code change is not a contract review. There's no universal pattern to copy, and that's okay.

Frame a loop, not a tool

When designing judgement loops, a few principles are portable to any altitude:

  • Pick the problem, not the tool. Don't get drawn into the features of tool X or platform Y. Find the workflow that leaks — one where output goes unchecked, or learning doesn't carry forward. You know the one. It's the prompt you've run 25 times, and you're still 'fixing it'. Every. Single. Time.
  • Set intent before output. Before you let the AI generate anything, write down what you're trying to achieve and how you'll know when you've got it. That's where judgement lives.
  • Capture the difference between draft and real. At each human-supervised gate, you'll often tweak, adjust, or build on what the previous turn produced. That difference, between the AI-generated draft presented for review and your edited, approved version, is the learning. Codify it back into the spec or instructions for the loop itself, so the next run is smarter. Make it part of the design, not a manual step that happens only when you remember.
  • Route observations into systems of record automatically. If you're mentally noting things done, the loop is leaking. Build this step into the design.
  • Human in the middle, never at the end. Judgement is the human's job, distributed across the loop. If you're only reviewing final output at the finish line — and complaining loudly about it — nothing improves next time. Shaping intent and judging outputs along the way does.
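The "capture the difference" principle can be made mechanical. A minimal sketch using Python's standard `difflib`, where the function name, the feedback format, and the example text are all assumptions for illustration:

```python
import difflib

def capture_learning(ai_draft: str, approved: str, spec: list[str]) -> list[str]:
    """Diff the AI draft against the human-approved version and append
    the human's edits to the loop's running instructions, so the next
    run starts from a smarter spec."""
    diff = difflib.unified_diff(
        ai_draft.splitlines(), approved.splitlines(),
        fromfile="ai_draft", tofile="approved", lineterm="",
    )
    # Keep only added/removed lines, skipping the file headers.
    edits = [line for line in diff
             if line.startswith(("+", "-"))
             and not line.startswith(("+++", "---"))]
    if edits:
        spec.append("Learned from last run:")
        spec.extend(edits)
    return spec

spec = ["Summarise the monthly numbers in three bullets."]
draft = "Revenue grew.\nCosts grew."
approved = "Revenue grew 4% QoQ.\nCosts grew."
spec = capture_learning(draft, approved, spec)
# spec now carries the human's edit as an instruction for the next run
```

In practice the "spec" would be the loop's prompt or playbook in a system of record, not an in-memory list, but the shape is the same: the diff is the signal, and it flows back automatically.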

Most of the loops I'm building, and most of the ones I'm helping leadership teams stand up, grow out of those five principles. I'm publishing the patterns as they emerge over the coming months, with examples where confidentiality allows. Subscribe to Humble Opinions to get them as they land.

Five examples that show the shape

To make it concrete, some examples. Two are built for executive teams. Three are loops I designed for myself and use every day. All are the same shape.

| Loop | Job | Mode | Altitude | For whom |
| --- | --- | --- | --- | --- |
| Lunastak (AI strategy development app) | Turn messy thoughts into coherent strategy that's updated often | Mostly shared control, 13 turns | Strategic | C-leaders → teams |
| Monthly Business Review | Highlights, lowlights, and performance metrics; informs operational decisions | Shared control, 4 gates | Ops/execution | Leaders → teams |
| Writing (this article!) | Turn conviction into publishable thinking | Shared, 5 gates | Directional | Individual |
| Correspondence | Stay warm with the right people | Mostly agent, 3 gates | Relational | 1:1 |
| Transcription | Turn meetings into strategic input | Mostly agent, 2 gates | Detailed, analytical | Individual |

Each one started as a workflow that leaked. Designed as a loop, each turn becomes intentional. Each improves with use. Add them together, and they contribute to better leadership.

Not the platform you bought. The loops you closed.

Old principle, new economics

You might be reading this and thinking: this is just systems thinking. OODA loops. Toyota Kata. Learning organisations. The whole repertoire. You'd be right about the principle.

What's new isn't the principle. It's that the signal is finally producible. Pre-AI, the data needed to run a strategic-leadership judgement loop — what was decided and why, what happened next, what shifted in the business — couldn't be assembled easily. It lived in spreadsheets, system reports, and people's heads. Leadership teams cobbled together what they could and survived the bumps and gaps through experience. The principle was always available; the infrastructure was not. AI changes that. The discipline is feasible at exec scale for the first time.

Many are focussed on tools and platforms. The ones pulling ahead are closing loops. Tools come and go. The discipline doesn't.

Find one judgement loop you already own. Design it as a closed-loop system. Then the next one. Then the next.

Your operating model is the loops you close, not the tools you own.


You're reading AI Perspectives — essays by Jonny on how AI changes the shape of real work. Each piece climbs one altitude: individual craft, product strategy and design, operating model.

Part I — The Agentic Product Lifecycle is Full Stack and Concurrent

Part II — Where the Design Work Went

Part III — The Operating Model for Leading Artificial Organisations (coming soon).

Humble Opinions

We think out loud here. Subscribe and we'll email you when we publish something new.