25 Feb 2026

Most companies will fail at AI. Not because of the technology.

Jonny Schneider

Every executive I talk to is asking the same question: what are we doing about AI?

They’re not wrong to ask. They’re not behind. Most of them have already thrown resources at it — their smartest engineer exploring use cases, a sandbox project here, a vendor demo there. And they’ve hit a ceiling. Not a technical ceiling. A capability ceiling that’s much harder to name.

The problem isn’t that they lack strategy. These are sharp leaders with real commercial instinct. And it’s not that they lack engineering talent — they have capable teams building and shipping every day.

The problem is the gap between the two.

The gap nobody’s hired for

Here’s what the gap looks like in practice. An executive team identifies a high-value AI opportunity. They’re right — it’s a genuine problem worth solving, with real commercial upside. They hand it to an internal team and say: go.

Three things tend to happen next. Sometimes all three.

The initiative dies of a thousand small delays. Security review. Legal sign-off. Procurement. “Have you spoken to the data team?” Enterprise organisations are built to manage risk, and AI initiatives trigger every antibody in the system. Without someone who can navigate executive stakeholders and build credibility with technical teams simultaneously, the project bleeds out before it produces anything.

The investment stays part-time. “Have a crack at this when you get a chance” is not a strategy. The engineer with a passion for AI and the time to explore it is, ultimately, a mid-career developer being asked to deliver executive-level outcomes in their spare time. That’s not a fair bet for anyone — not for the engineer, and not for the business.

The team solves the wrong problem. This one gets all the attention, but it’s actually the least common failure mode. Most internal AI efforts never get far enough to fall into the build trap. They die at red tape or resourcing first. But when teams do get through, and they haven’t properly validated whether the problem is big enough, whether existing solutions already handle it, or whether the value case stacks up — the result is a technically sound prototype that nobody needed. Expensive, demoralising, and it poisons the well for the next initiative.

The common thread: taking a strategic bet from "we think this matters" to "we've proved it, here's how to build it" in weeks instead of quarters is a capability most organisations don't have.

Why AI makes this harder, not easier

This is the part that’s counter-intuitive.

AI dramatically expands the space of what’s buildable. Which means there are more strategic bets to evaluate, not fewer. More possibilities, more vendor pitches, more internal proposals.

The bottleneck has shifted from “can we build it?” to “should we build it — and how do we know before we’ve spent a quarter finding out?”

The organisations that move fastest won’t be the ones with the most agents acting as engineers. They’ll be the ones who can frame the right problem, prove the approach in weeks, and only then invest in building at scale.

This is a craft problem, not a technology problem. And it’s the same craft problem I wrote about recently in the context of building Lunastak — the difference between technically correct and genuinely good. The camera got better. The photographer’s eye matters more than ever.

What closing the gap takes

Until recently, closing this gap required a small team of unicorns.

First: a full-stack engineer with an eye for design, an appreciation for the end customer, and a willingness to work scrappy — prove ideas with working software instead of gold-plating for scale. They exist, but they're a rare breed.

Second: someone who gets technology deeply, understands how people consume it, and has the judgement and influence to frame the opportunity so a team can collaboratively storm on solutions — not dictate requirements from a slide deck. That was usually my role.

Third: a wildcard. Commercial insight, user research, specialist technology, business analysis — the specific expertise the problem demands. Because the first two roles are already unicorns, and no engagement is complete without domain depth.

Three experienced and capable consultants, each hard to find. That was the minimum viable team to move at pace, on things that matter.

Now it's different. On a live client engagement we're working on right now, AI has collapsed those three roles into one.

This is the job I’ve been doing for twenty years — facilitating executives through messy strategic problems, running user research, designing product solutions, and getting them built. The difference now is the toolchain.

What used to need a hard-to-hire team of weirdos, working in ways that break norms, is now possible to do with fewer people, and in one continuous cycle.

Executive facilitation one morning, domain knowledge codified into deterministic models that afternoon, a working software prototype in front of real users by end of week. Five major iterations of experience architecture and design based on user feedback in field tests. Front-end code built with components, polished, linted, and ready for production. And an API and data architecture that's ready for AI. In eighteen days.

None of that is fast because it’s sloppy. It’s fast because AI removes the coordination tax. No handoffs between the strategist, the designer, and the engineer. No waiting for someone else to build what we can see needs to exist. The thinking and the making happen in the same brain, at the same time.

But building isn’t the point

Here’s the uncomfortable truth that took me a minute to fully absorb: the fact that I can produce a customer-ready solution in eighteen days or launch a native-AI beta product in 200 hours is impressive today. Give it six months and it won’t be. The tools are getting better at a terrifying pace. The engineering capability is commoditising in real time — and there’s an ocean of AI agents doing heavy lifting way beyond technology teams.

What doesn’t commoditise is knowing what to build. Understanding which problem actually matters for the business. Seeing how a customer experience should work before anyone’s written a line of code.

Navigating the political reality of an enterprise well enough to keep an initiative alive long enough to prove itself.

The scarce capability isn’t delivery. It’s the judgement to go from "we think this matters" to "here’s the proof — now you know what to invest in." AI gives you more hands, not more brains.

What this looks like in practice

On that engagement, the outcome wasn’t just software. It was proving a solution to a business problem the executive team had been circling for months. Now they can see it, use it, measure it, and decide with conviction whether to invest in building at scale. Their engineering team picks it up from there with a clear blueprint and a validated direction.

That’s the pattern. Six weeks from kickoff to validated proof. Not a slide deck. Not a strategy document that’s irrelevant by the time you’ve paid the invoice. A working proof — product, process, system, experience — that demonstrates what’s possible and what it takes to get there.

Whether that proof addresses customer products, enterprise systems, or your own operating model transformation depends on what matters right now. The approach is the same: frame the opportunity, prove it fast, and build conviction around fewer, better bets — instead of throwing resources at five AI initiatives and hoping for the best.

The in-sourcing wave is coming

There's a larger shift underneath all this.

Enterprises are staring at eye-watering SaaS bills — dozens of vendors, overlapping functionality, middleware duct-taped between systems that were never designed to work together. They can't afford the consulting services to properly integrate what they've got. And they're starting to ask a dangerous question: what if we just built the things that matter most, ourselves?

AI makes that question rational for the first time. The economics of custom software have fundamentally changed. What used to require a team of twelve for six months can now be scoped, proved, and built with a fraction of the people and time.

I expect we'll see more enterprises in-sourcing their most important capabilities — strangler-fig patterning their way out of vendor dependency, one high-value use case at a time. Customer-facing tools, operational processes, internal systems. The underlying capability required is the same: someone who can take the highest-value bet from strategic intent to working proof — fast enough that the business learns before it commits.

Where this leaves leaders

The companies that win the AI revolution won't be the ones with the biggest budgets or the most agents doing engineering work. They'll be the ones that close the gap between strategic intent and working proof — fast enough to learn, adjust, and build conviction before the window moves.

That gap doesn't close with a consulting engagement that produces a report. It doesn't close with an AI vendor demo. It closes when someone with the judgement to frame the right problem and the skill to prove the approach sits alongside your leadership team and does the work with you. Not theorise. Not hand-wave. Create the outcome.

Strategy, design, and implementation as one continuous act. Same work. Different medium.

The technology is ready. The question is whether you can make better decisions, about things that matter most, amongst the noise and urgency surrounding what's just now possible.
