AI is the answer! What is the question?
Article by Ben Johnson

There is a pattern emerging across large enterprises and PE-backed businesses. The board asks, “What are we doing about AI?” It is understandable. It is also incomplete!
Whilst the headlines compete over news of failed AI implementations and employee-free unicorns, the uncomfortable truth is that most organisations will not fail at AI because the technology is inadequate. They will fail because they never properly defined the problem they were trying to solve.
The technology, largely, works. The constraint is, mostly, judgement.
And what differentiates the winners from the failures is the ability to identify where value is leaking, to quantify that leakage with evidence, and to apply the appropriate level of tooling with discipline. When that discipline is missing, one of three things happens.
- The initiative collapses in governance because the business case was never properly evidenced.
- Engineering builds something clever that never quite connects to operational reality.
- The business buys a tool and then looks for somewhere to use it.
None of those are technology failures. They are framing failures.
Start With Friction, Not Features
Before we talk about models or agents, we need to answer a simpler question: Where is friction costing us money? If you are not sure, look in predictable places:
- Cycle time variability
- Rework loops
- Manual approvals that no one trusts but everyone tolerates
- Error correction
- Revenue delay
- Exception handling
Process mining is one way to uncover this. It is powerful because it reconstructs how work actually flows across systems, removing opinion from the discussion. The word “actually” is key: workflows are often built on assumptions, and the gap only becomes visible during implementation or re-engineering.
But it is not the only method.
Operational telemetry, management information, variance analysis, customer complaints data, audit findings and structured interviews all surface friction when approached rigorously. The point is not the tool. The point is evidence.
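As a toy illustration of what this kind of evidence looks like in practice, the sketch below reconstructs per-case cycle times and rework loops from a flat event log. The log, activity names, and case IDs are all hypothetical, not client data:

```python
from datetime import datetime
from collections import defaultdict

# Hypothetical flat event log: (case_id, activity, timestamp) rows exported
# from the systems the work actually flows through. Names are illustrative.
events = [
    ("C1", "Received", "2024-01-02"),
    ("C1", "Assessed", "2024-01-05"),
    ("C1", "Assessed", "2024-01-12"),  # repeated activity = a rework loop
    ("C1", "Settled",  "2024-01-20"),
    ("C2", "Received", "2024-01-03"),
    ("C2", "Assessed", "2024-01-06"),
    ("C2", "Settled",  "2024-01-09"),
]

# Group events into cases, then derive cycle time and rework per case.
cases = defaultdict(list)
for case_id, activity, ts in events:
    cases[case_id].append((datetime.fromisoformat(ts), activity))

metrics = {}
for case_id, steps in sorted(cases.items()):
    steps.sort()  # order each case's events by timestamp
    cycle_days = (steps[-1][0] - steps[0][0]).days
    activities = [a for _, a in steps]
    rework = len(activities) - len(set(activities))  # count of repeated steps
    metrics[case_id] = (cycle_days, rework)

print(metrics)  # {'C1': (18, 1), 'C2': (6, 0)}
```

Real process-mining tools do far more than this, but the principle is the same: the numbers come from the event data, not from anyone's opinion of how the process works.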
And once you see friction clearly, you quantify it. Let us try an example:
- A claims process averaging 18 days end to end.
- Four manual handoffs.
- Twenty-two per cent rework.
- Annual administrative cost of £3.2 million.
At that moment, the question is not “Should we deploy AI?”. Rather, it is, “Why is £3.2 million leaking here?”
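A back-of-envelope calculation makes that leakage concrete. Every input below is an illustrative assumption, with volumes and unit costs chosen so the baseline matches the £3.2 million in the example above:

```python
# Back-of-envelope quantification of the claims example above. All inputs
# are illustrative assumptions, not client data.
claims_per_year = 40_000
handoffs_per_claim = 4        # the four manual handoffs
cost_per_handoff = 20.0       # blended admin cost per touch (£), assumed
rework_rate = 0.22            # 22% of claims loop back for rework

# Total annual admin cost, and the slice attributable to rework loops
# (a crude proxy: rework share of total effort).
baseline = claims_per_year * handoffs_per_claim * cost_per_handoff
rework_leakage = baseline * rework_rate

print(f"Annual admin cost: £{baseline:,.0f}")        # £3,200,000
print(f"Of which rework:   £{rework_leakage:,.0f}")  # £704,000
```

Even a crude model like this turns “AI feels relevant” into “£704,000 a year is being spent redoing work”, which is a question a board can actually evaluate.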
The Tooling Discipline
Only after friction is identified and quantified should tooling enter the conversation. And this is where many organisations lose discipline. If your instinct is to jump straight to AI, you are wrong! Yes, it feels progressive. It signals ambition. But the appropriate sequence is escalation, not enthusiasm. You need to ask:
- Can this be removed through process simplification?
- Can configuration of existing systems resolve it?
- Would deterministic automation solve it safely?
- Does this genuinely require adaptive decision-making?
- Is autonomy justified, or is augmentation sufficient?
AI is appropriate when the problem genuinely demands context-aware judgement at scale.
Agentic autonomy is appropriate when the problem involves complex cross-system coordination and bounded authority can be defined clearly.
Everything else should be solved at the lowest effective level of intelligence. That protects capital. It protects credibility. And it makes the cases where AI is deployed much stronger.
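The escalation ladder above can be made explicit as an ordered check where the first sufficient level wins. The level names and the shape of the assessment input are illustrative, not a prescribed framework:

```python
# Illustrative escalation ladder: try the lowest effective level first.
LADDER = [
    "process simplification",
    "configuration of existing systems",
    "deterministic automation",
    "AI augmentation",
    "agentic autonomy",
]

def lowest_effective_level(solves):
    """Return the first ladder level judged sufficient for the problem.

    `solves` maps level name -> bool, e.g. from a structured assessment.
    Returns None if no level is judged sufficient.
    """
    for level in LADDER:
        if solves.get(level, False):
            return level
    return None

# Example: deterministic automation is sufficient, so AI never enters.
assessment = {"deterministic automation": True, "agentic autonomy": True}
print(lowest_effective_level(assessment))  # deterministic automation
```

The point of writing it this way is that the ladder is evaluated in order: a higher level can only be selected if every cheaper level has already been judged insufficient.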
Where Most Organisations Fail
The common failure pattern is not technical. Leadership identifies an opportunity. It feels strategically obvious. It is handed to engineering.
Then the business case unravels because it was never validated with sufficient operational evidence. Or the engineers build something impressive that solves a slightly different problem than the one that actually mattered. The gap is not capability. It is translation.
There is no disciplined pathway from hypothesis to proof. No structured way of validating value before scaling technology. No clarity on authority and governance before autonomy is introduced.
That is why so many AI initiatives stall at prototype stage.
Not because the model is weak.
Because the question was weak.
Need help asking the right questions? BML has helped 20 organisations make sense of AI since 2017, delivering AI-enabled solutions that have provided over £30m in EBITDA improvements.
Explore our favourite client projects

NHS Trust procurement savings and AI tool
We built AI to expose duplicates, attribute billing accurately, and prevent fraud.

Global Healthcare provider. Strategic review & delivery
Exposing critical tech gaps, safeguarding integration and rescuing major ERP plans.

Global Healthcare PE exit & integration
Revealing hidden tech failures to enable a confident, growth‑ready acquisition.

Healthcare data services deal validation
Identifying the critical blockers to operational effectiveness and growth.


Turning Complexity Into A Clean Exit
Turning complex divestments into executable carve‑outs that protect value

Growth Through One Scalable Platform
From delayed integration to a single operating model—lower cost, stronger execution, faster growth

Creating A Sum Greater Than The Parts
Precision carve‑out and merger integration to create a £5bn listed vertical specialist

Integration Yields Immediate Value
Turning four independent verticals into one scalable, data-driven growth engine

Smart Due Diligence Accelerates Growth
Accelerating acquisition value through execution-ready integration design

Integration Creates Value
Delivering post-acquisition stability and synergy realisation for a FTSE 100

Data Enrichment And SKU Rationalisation
From SKU simplification to smarter customer experiences.
