27 Apr 2026

Part II: Where the design work went

Jonny Schneider
Architectural cross-section of a concert hall: left half rendered as a technical blueprint with dimension annotations, right half as a photorealistic render showing the curved rib-shell structure and tiered wooden seating.

I've spent 4 months building Lunastak. 1,067 commits. 41 releases. I expected most of the design work to live in the interface — wireframes, interactions, getting the chat to feel right. That’s not where the design work was.

The rest went into something that doesn't have a name on most org charts: structure. Nearly 200,000 words of planning docs across 94 files (you might call them ‘specs’). Twelve data contracts added after breaking changes slipped through. A pipeline I rewrote once and will rewrite again.

AI didn't change where design starts. It changed which layers are cheap.

Jesse James Garrett named five planes of user experience in The Elements of User Experience: surface, skeleton, structure, scope, strategy. He first published the model as a diagram in 2000 and expanded it into the book in 2002. Twenty-five years on, it's still a good mental model for where design work happens. The specific jobs in each plane have evolved. The planes themselves haven't.

Surface used to be expensive. Not any more. A half-decent prompt in v0, Lovable, or Cursor will produce a competent interface in an hour. Shadcn gives you accessibility for free. The cost of a good functional surface layer is approaching zero. The tools are capable. Fewer skills are required. The effort to reach a presentable and usable interface has dropped.

That doesn't mean the surface layer doesn't matter. It means it stopped being the thing that's hard and expensive.

Every layer used to be expensive. That's the whole story. The work didn't disappear. It moved. Surface and skeleton are now cheap. Structure is where the hard and expensive work lives.

Across those 4 months of building Lunastak, the split looks like this:

Jesse James Garrett's five planes of user experience (surface, skeleton, structure, scope, strategy) overlaid with effort attribution from building Lunastak: surface 3%, skeleton 25%, structure 55%, scope 10%, strategy 5%.

Figure: Jesse James Garrett, The Elements of User Experience, overlaid with attribution of effort during creation of Lunastak. Numbers derived from code and document analysis.

Structure was always the layer that matters

This isn't news. Sullivan said form follows function in 1896. The Bauhaus built a pedagogy around it. Dieter Rams made it feel inevitable. The good examples in every design discipline — architecture, industrial design, software — start with structure and build outwards. The bad ones start with aesthetics and leave structure as an afterthought.

The same pattern shows up in design systems. The ones that work — Material, Carbon, Polaris — are outputs of structural design. They are explicit answers to constraints from brand, accessibility, and platform conventions. The visual language — colour palette, typography, layout — falls out of those upstream decisions. Structure decides; surface expresses. The ones that don't work start from visual language and go looking for a structure afterwards.

So when I say structure is where the work went, I'm not claiming anything new. I'm claiming the AI moment made something visible that was always true.

Vibe coding isn't the problem. Mistaking it for the whole job is.

Cursor, Lovable, and v0 are excellent at the plane that became cheap — surface, and to a lesser extent skeleton. People using them are designing. They're just designing in the plane where design is partially automated.

When it fails, it's not because the tools aren't good enough. It's because we mistook surface and skeleton for the whole discipline of design. The failure mode looks like this: every feature ships easily, and none of them compound. It feels fine at the start. By feature 10, the product feels incoherent. Every new addition makes integration harder and more fragile. Your teams call it tech debt. Without care and a plan, it becomes a thorn in your side — and a red line item on your balance sheet.

Good structural work is about decisions over time. Every prompt version, every schema revision, every pipeline rewrite exists because the last one taught you something. That's not a weakness of the tool — it's how structural work is constituted. Iteration creates the learning. The learning builds the structure. None of it can be done in one shot. Each decision only makes sense in the light of the ones before it.

What the structural work looks like

Most of it is turning unstructured input — conversations, documents, voice memos — into structured data. The resulting data asset provides the core building blocks for features and customer value. Without good data assets, we can’t create value in compelling, consistent, and scalable ways. It’s the difference between a scrappy, vibe-coded, throwaway prototype and a durable software product that can evolve continuously as we learn from real customers using it.

Here are three concrete examples of the structural design work in Lunastak.

The data schema

If the schema is wrong, every feature that renders from it is wrong too. Get it wrong and no amount of UI work will save you.

Lunastak takes raw strategic material — transcripts, documents, conversations — and produces a structured decision stack a leadership team can reason about. The data model represents strategic evidence as tagged units. Every piece of user content is extracted, tagged with a strategy dimension, and stored with provenance and confidence scores.

Those fragments feed further processes that synthesise insights across the dataset. The outputs help the user take their next turn:

  • Pointed questions around a gap (explore next);
  • A summary thesis for review (knowledgebase); or,
  • Drafting a Decision Stack that describes actionable strategy.

The interface is the surface, but every visible element traces back to a structural decision in the data schema. The UI is a thin render over that structure.
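To make the idea concrete, here is a minimal sketch of what a tagged evidence fragment and one of the simplest structural queries over it might look like. The field names (`dimension`, `provenance`, `confidence`) and the `byDimension` helper are illustrative assumptions, not Lunastak's actual schema or code.

```typescript
// Hypothetical shape for a unit of tagged strategic evidence.
// Field names are illustrative, not Lunastak's actual schema.
interface Fragment {
  id: string;
  content: string;          // the extracted unit of strategic evidence
  dimension: string;        // which strategy dimension it speaks to
  confidence: number;       // 0..1: how sure the tagger is
  provenance: {
    sourceId: string;       // document, transcript, or conversation
    extractedAt: string;    // ISO timestamp
  };
}

// Downstream synthesis starts from simple structural queries like this:
// group confident fragments by dimension, so gaps and clusters are visible.
function byDimension(
  fragments: Fragment[],
  minConfidence = 0.6
): Map<string, Fragment[]> {
  const groups = new Map<string, Fragment[]>();
  for (const f of fragments) {
    if (f.confidence < minConfidence) continue; // drop low-confidence tags
    const bucket = groups.get(f.dimension) ?? [];
    bucket.push(f);
    groups.set(f.dimension, bucket);
  }
  return groups;
}
```

Everything the UI renders, and everything the synthesis steps consume, traces back to decisions encoded in a shape like this: what a fragment is, what it carries, and what can be asked of it.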

When the schema was wrong, it was blocking. We couldn’t ship the product experience changes we wanted, because the data was the wrong shape, wrong format, and in the wrong place. A single refactor replaced the legacy data model with a unified one, touched 40+ query sites across the app, and shed ~1,300 lines of code that didn't come back. Zero reverts. It was safe to do that because the structure underneath had been designed deliberately enough to carry the change.

The pipeline orchestrator

The orchestrator is the difference between a product that can add a new feature next week and one where every new feature means untangling the last three. It’s a pure function at the front of the data pipeline: about 100 lines total, with the core decision logic in 50. It decides what happens when a user uploads a document or completes a conversation:

extract themes → create fragments → tag with dimensions → check a threshold is met → return user messaging → pass to the next process

Three-layer pipeline diagram: orchestrator routes incoming requests to processors (extraction, synthesis, generation), which write to a structured persistence layer.

Figure: The designed 'structure' behind Lunastak — the orchestrator, data processors, and persistence layer

Not clever code. Predictable code.

The point isn't the cleverness. The point is this: if a user does X, Y happens. If we want the product to do Y differently, we add a variant to the pipeline. We don't create a new code path on its own 'island'. Decoupling the processing of data from the feature that requires it may seem like a lot of work upfront. In fact, it’s a simpler solution that is far more durable. Without it, the code drifts into incoherence as features accumulate.

I learned this the hard way. Before the refactor, each route carried its own near-duplicate processing logic, so changing how Y happened meant making the same change in several places across the application.

After the refactor, processing logic lives in exactly one place. Changing how a subprocess works, or adding a new process, is a plan variant, not a new island.
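The 'pure function, plan variant' idea can be sketched in a few lines: the orchestrator maps an intake event to a plan of steps, with no I/O and no side effects. The names here (`IntakeEvent`, `Step`, `orchestrate`) are assumptions for illustration, not Lunastak's actual code.

```typescript
// Hypothetical sketch of a pure orchestrator. Names are illustrative.
type Step =
  | "extractThemes"
  | "createFragments"
  | "tagDimensions"
  | "checkThreshold"
  | "notifyUser";

interface IntakeEvent {
  kind: "documentUpload" | "conversationComplete";
}

interface Plan {
  steps: Step[];
  variant: string;
}

// Pure function: no I/O, no side effects. Same event in, same plan out.
// Doing Y differently means returning a different plan, not forking
// a new code path on its own island.
function orchestrate(event: IntakeEvent): Plan {
  const base: Step[] = [
    "extractThemes",
    "createFragments",
    "tagDimensions",
    "checkThreshold",
    "notifyUser",
  ];
  if (event.kind === "conversationComplete") {
    // A variant reuses the same steps; only the plan label differs.
    return { steps: base, variant: "conversation" };
  }
  return { steps: base, variant: "default" };
}
```

Because the function is pure, it is trivially testable: feed it an event, assert on the plan. The processors that execute the plan can change independently, which is what keeps new features from becoming new islands.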

The mistake that demanded structural design first

During dark-launch, early users hit a structural dead end after the first session. The job got done — but it wasn't clear what had happened, or how, or what to do next. A black box. A "so what?" moment.

I papered over it with a hotfix. But the fix revealed a structural problem: the underlying data model meant any UI improvement would be expensive and fragile. A superficial patch wasn't going to be enough.

I stopped and refactored the data model first — that's the refactor described above. A week of refactor work, not a day of patching. It didn't just fix the bug — it unlocked the next three weeks of surface work. That was the moment I stopped thinking of data engineering as infrastructure and started thinking of it as design. The surface couldn't improve until the structure underneath could carry it. I had to design the structure in order to design the surface. That's where this article came from.

Close

AI changed which layer you can afford to automate. Structure stayed expensive. Craft is here to stay. It just looks different now.

Still creation, not curation. There's more to it than taste and judgement.

If you're reading this as a designer or Product Manager — the fork is real. It's right now. Pick a project. Grab Claude Code as your coach. Go. The engineers and architects around you are friendlier than you think. Most will be delighted when someone from the "non-technical" side makes a real effort to understand the structure underneath. They'll help you get there.

That said — I don't think we all need to become engineers. In fact, I think that's a terrible idea. Find out why in Part III.




Humble Opinions
