18 Feb 2026
I used my own strategy tool to build the strategy for itself

I built an AI strategy tool. Then I used it to develop the strategy for… the tool itself. And honestly? It’s the most honest product demo I’ve ever made.
Let me back up.
Everyone’s shipping prototypes. Nobody’s shipping products.
My LinkedIn feed is full of “I built this in a weekend with AI.” Impressive demos. Slick videos. One-shot prototypes that do part of the job and look great in a screen recording.
Here’s what those posts don’t show: the prototype doesn’t handle edge cases. It breaks when real users touch it. There’s no auth, no tests, no data model that survives contact with reality. It’s a sketch, not a building.
I have nothing against sketches. Sketches are how you explore. But somewhere along the way, we started confusing them with finished work — and the platforms reward it. A 30-second demo video gets ten times the engagement of a post about what it actually took to build something real.
So here’s a post about what it actually took.
The problem I was trying to solve
For twenty years, I’ve helped executives develop strategy. Workshops, facilitation, boardroom sessions. And I kept seeing the same pattern.
Executives don’t lack strategic thinking. They’re mulling continuously — on the drive home, between meetings, at 3am when they should be sleeping. The ideas are there. The problem is they’re scattered. Voice memos that never get transcribed. Notes from one conversation that don’t connect to notes from another. Moments of clarity that evaporate before they’re captured.
Then quarterly planning arrives, and everyone faces a blank page. All that rich, ongoing thinking reduced to “let’s fill in the strategy template.”
I had a hunch that conversational AI could invert this. Instead of forcing structured input, accept messy thinking and let the machine do the structuring. So I did what I always do when something gets under my skin. I built it.
What 200 hours actually looks like
Lunastak started as a learning project. I wanted to understand — firsthand, in code, with real users — what it takes to build a genuine AI-native product. Not a wrapper around an API. Not a chat interface with a system prompt. A product with its own methodology and its own opinion about how strategy should work.
Eight weeks later:
- 200 hours of design and development
- 700+ commits across 10 major releases
- 55 automated tests spanning API contracts, integration, and smoke checks
- Systematic experimentation — feature gates, backtest APIs, measured hypotheses, real data informing decisions
- One person. Me.
AI was involved at every step. Claude helped me write code faster, explore architectural decisions, and iterate on prompts. Before AI, this project would have required a small team — a designer, a front-end developer, a back-end engineer at minimum. But AI didn’t just make me a faster engineer. It made it possible for one person to hold the entire product in their head — design, code, and strategic intent — without the handoff tax that slows every team down.
But here’s what AI didn’t do.
It didn’t figure out the eleven strategic dimensions that form Lunastak’s analytical framework. That came from years of studying Porter, Rumelt, Lafley & Martin — and more importantly, from applying those frameworks in real engagements and watching what actually matters when executives make decisions.
It didn’t design the extraction pipeline — the system that takes a messy conversation and pulls out structured strategic themes. That architecture went through three major iterations. I threw away entire approaches that didn’t work. The first version prescribed what to look for. The second let themes emerge naturally. The third guided conversations toward uncovered dimensions while keeping emergent extraction. Each pivot required product judgement, not better prompting.
And early on, I let AI generate a pipeline that was technically sound but philosophically wrong — it was solving a problem I hadn’t properly understood yet. It took two weeks and a failed experiment to see it. Not because the code was bad. Because the thinking was bad. AI had given me exactly what I asked for. I’d asked for the wrong thing.
AI accelerated the build. It didn’t replace the thinking.
The camera didn’t take the photo
Modern phone cameras are remarkable. Stunning images have been captured on iPhones — some earning their place in prestigious galleries alongside photos shot on equipment costing tens of thousands.
But what makes those photographs great isn’t the sensor technology. It’s the photographer’s ability to capture the moment. Composition. Framing. Just being there — in places a traditional camera rig couldn’t reach.
The tech makes it easier for anyone to take a technically correct photo. But technically correct isn’t creatively good. The photographer who understands light and story will take great photos regardless of what’s in their hand.
AI product development works the same way. The technology is a new medium. Great outcomes come from working in it — learning its grain, understanding where it genuinely adds value, and recognising where it confidently produces plausible rubbish. That intuition doesn’t come from tutorials. It comes from building, shipping, and watching real users interact with something you’ve made.
I dogfooded the whole thing
Lunastak is a strategy tool. I needed a strategy for Lunastak. The obvious move was to use it.
I fed in my positioning documents, my jobs-to-be-done analysis, voice memos from walks where I’d talked through the go-to-market. I had several conversations with Luna — the AI strategy coach at the heart of the product. The system extracted themes, mapped them across eleven strategic dimensions, and synthesised it all into a complete Decision Stack — a Vision for the world we’re trying to create, a Strategy for how to get there, Objectives with measurable targets, Opportunities describing the coherent actions to take, and Principles that make explicit the compromises we’re not willing to make.
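For readers who think in types, the Decision Stack shape described above can be sketched as a small data structure. The field names mirror the article; the classes themselves are a hypothetical illustration, not Lunastak’s schema.

```python
# Hypothetical sketch of the Decision Stack described in the article.
from dataclasses import dataclass, field

@dataclass
class Objective:
    name: str
    target: str  # a measurable target, e.g. a number and a deadline

@dataclass
class DecisionStack:
    vision: str                 # the world we're trying to create
    strategy: str               # how to get there
    objectives: list[Objective] = field(default_factory=list)
    opportunities: list[str] = field(default_factory=list)  # coherent actions
    principles: list[str] = field(default_factory=list)     # compromises we won't make

stack = DecisionStack(
    vision="Strategy built from ongoing conversation, not blank templates.",
    strategy="Accept messy thinking; let the machine do the structuring.",
)
stack.objectives.append(Objective("Adoption", "an illustrative, made-up target"))
```

The ordering matters: each layer constrains the one below it, which is what makes the stack useful as something collaborators can react to piece by piece.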
Not generated in one shot and forgotten. Built through ongoing conversation, refined as thinking evolves. And here’s what’s beautiful about this approach: the Decision Stack became an engagement tool. I shared it with collaborators — people whose thinking I trust — and it gave them something concrete to react to. Their reactions created new insights, which fed back into Luna, which sharpened the strategy further. A flywheel, not a deliverable.
It was a strange experience. I’d been living with this strategy in my head — scattered across documents, conversations, and half-formed convictions — for weeks. Seeing it synthesised into a structured output was like watching someone organise your messy desk. The same stuff was there. But now it made sense together. Connections between ideas were clearer than I’d managed to see on my own. And every conversation made the strategy clearer and stronger.
Then I did something that felt a little unhinged. I made that output — my actual strategy — the product demo.
New users land on a fully formed strategy project. No sign-up required. They see real documents, real conversations, and a real Decision Stack. Not a fake company with made-up data. My actual thinking about where this product is going, and why.
If the strategy is compelling, the tool just sold itself. If it’s not, I’ve got bigger problems than marketing.
What I actually learned
Building Lunastak wasn’t primarily about creating a product. It was a self-taught class in what it takes to innovate with AI. This isn’t my first rodeo; I’ve spent years working with teams applying AI/ML to real customer use cases. But things are different now.
AI doesn’t give you more brain. I wrote about this last year, and building Lunastak proved it in practice. The technology handles velocity. The human handles direction. The most dangerous moments building Lunastak were when AI produced something that looked right — plausible code, reasonable architecture — but was solving the wrong problem. It takes experience to spot that. AI won’t tell you.
The hard work hasn’t changed. Understanding the problem. Designing the right abstractions. Knowing what to build and — critically — what not to build. Lunastak has a long list of features I deliberately didn’t build. No real-time collaboration, no metrics dashboards, no integrations. Every “no” required judgement about what the product should be, not just what it could be. AI expands the space of what’s possible. The human job is constraining it to what’s valuable.
Craft matters more now, not less. When everyone can generate a prototype in a weekend, the differentiator isn’t speed to first demo. It’s everything that comes after. The data model that handles real usage. The UX that guides without constraining. The judgement to know whether it should exist at all — and the discipline to instrument, measure, and find out, rather than ship and hope.
Try it yourself
Lunastak is live. You can explore the full demo, which includes the actual strategy for the product, or start your own project without creating an account. Sign up for free to keep coming back, and you’ll get first access to new features.
No sales pitch. No paywall. Just a strategy tool that practises what it preaches.
If you’re a leader who’s tired of blank-page strategy templates, try a conversation instead. And if you’re curious what AI-native product development looks like when done with intention — have a look around. The code isn’t visible, but the craft is.
We think out loud here. Subscribe and we'll email you when we publish something new.