Most process improvement programs begin with a conclusion. Measurement comes second, if at all. That sequencing is the primary reason redesigns fail to hold.
The most expensive mistake in operational improvement is not choosing the wrong solution. It is choosing a solution before understanding the problem.
Most process improvement programs begin with redesign. They map the current state, identify perceived inefficiencies and build a future state based on what the team believes is wrong. The measurement phase (the work of actually quantifying where time is lost, where rework originates and what the data confirms) happens later, if at all. That sequencing error is not a minor procedural flaw. It is the single most consistent driver of process improvement programs that produce results on paper and no lasting change in practice. It appears in manufacturing operations, financial services back offices, healthcare administration, logistics networks and regulatory functions. The industry changes. The failure pattern does not.
When organizations move directly to redesign, they are making decisions based on opinion. What the team believes is broken and what the data confirms is broken are rarely the same thing. The gap between those two assessments is where improvement programs lose their credibility and their budgets.
The pattern appears consistently. A senior leader identifies a process as slow or costly. A team is assembled. The process is mapped. Improvement opportunities are proposed based on what people in the room believe is happening. A future state is built. The redesign is implemented. Six months later, the metrics have not moved. Not because the redesign was poorly conceived, but because it addressed the wrong problem.
The redesign addressed what the team assumed. The data, had anyone collected it, would have pointed elsewhere.
Across our engagements, organizations consistently overestimate the quality of their operational data. When we ask teams whether they have data on processing time, rework rates and non-value-added activity, the answer is almost always yes. When we examine that data, it typically describes averages, estimates or system-generated timestamps that capture when a transaction was opened and closed, not what happened inside it.
In a process redesign engagement for a global food and beverage manufacturer, we measured the actual time each regulatory specification spent in each stage of the review process before the team proposed a single change. What we found contradicted every assumption the team had made about where the backlog originated. Between 70 and 90 percent of total processing time was non-value-added: time spent waiting for approvers who were not sequenced correctly, rework caused by incomplete submissions entering the queue and handoffs between functions that had never been formalized. None of that was visible in the existing data because no one had measured at that level of granularity before.
This is one documented instance of a pattern we observe across every type of complex, multi-function operation. The specific content of the process changes. The structure of the failure (redesign initiated before measurement has established root cause) does not.
Measurement is not a preliminary step that gets compressed to make room for redesign. It is the foundation on which a defensible redesign is built. Without it, every future state is a hypothesis: plausible, perhaps, but not confirmed.
The measurement discipline we apply before any redesign work begins covers three things. First, where time is actually spent in the process, not where people believe it is spent. Second, where rework enters and what triggers it, traced to primary source data rather than team recollection. Third, how handoffs between functions are sequenced relative to the dependencies they create. That analysis identifies what is non-value-added by definition (waiting, duplication, unnecessary approval steps) and what is non-value-added by design (steps that exist because of a historical workaround that was never removed).
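To make that concrete, here is a minimal sketch of the first two measurements, stage-level dwell time and rework entry, assuming a hypothetical event log with one row per stage visit. The transaction IDs, stage names and timestamps are invented; in practice these events come from workflow systems, ticketing tools or a manual time study. The third measurement, handoff sequencing, also requires a dependency map and is not shown here.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log: one row per visit of a transaction to a stage.
# Field names, stages and timestamps are invented for illustration.
EVENTS = [
    # (transaction_id, stage, entered, exited)
    ("SPEC-001", "intake",   "2024-03-01 09:00", "2024-03-01 09:30"),
    ("SPEC-001", "review",   "2024-03-04 14:00", "2024-03-04 15:00"),
    ("SPEC-001", "intake",   "2024-03-05 10:00", "2024-03-05 10:20"),  # loop-back
    ("SPEC-001", "review",   "2024-03-08 11:00", "2024-03-08 11:45"),
    ("SPEC-001", "approval", "2024-03-12 16:00", "2024-03-12 16:10"),
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

active = defaultdict(float)   # touch time per stage, in hours
waiting = defaultdict(float)  # queue time before each stage, in hours
rework = defaultdict(int)     # repeat visits per stage

by_txn = defaultdict(list)
for txn, stage, entered, exited in EVENTS:
    by_txn[txn].append((parse(entered), parse(exited), stage))

for steps in by_txn.values():
    steps.sort()                      # chronological order per transaction
    seen, prev_exit = set(), None
    for entered, exited, stage in steps:
        active[stage] += (exited - entered).total_seconds() / 3600
        if prev_exit is not None:     # gap since the previous step = waiting
            waiting[stage] += (entered - prev_exit).total_seconds() / 3600
        if stage in seen:             # a second visit marks rework entering here
            rework[stage] += 1
        seen.add(stage)
        prev_exit = exited

for stage in active:
    print(f"{stage:9s} active {active[stage]:5.1f}h   "
          f"waiting {waiting[stage]:6.1f}h   loop-backs: {rework[stage]}")
```

Even on this toy log, the pattern the article describes appears immediately: minutes of active work per stage against days of queue time, with the loop-back through intake visible as a repeat visit rather than buried in an average.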
Only after that picture is confirmed do we propose a future state. The redesign is then a response to what the data shows, not to what a workshop concluded.
This distinction matters for a practical reason. A redesign grounded in measurement can be defended when questioned. When a business unit leader asks why their approval step was removed, the answer is not that the team believed it was unnecessary. The answer is that the data showed the step added no value to the output and created a queue that delayed every submission behind it. That is a conversation that can be had. A redesign grounded in opinion cannot survive that conversation.
There is a specific type of measurement that rarely happens before redesign begins, and its absence accounts for most of the failure we observe: end-to-end cycle time measurement at the transaction level.
Most operational data is aggregate. It tells teams how many units were processed, how many were rejected and how long the average took. It does not tell them how a specific transaction moved through the process, which steps it touched, how long it waited at each one, what caused it to loop back or what sequence of decisions determined its final outcome. That transaction-level picture is what makes root cause visible. Without it, root cause is inferred. And inferred root cause produces redesigns that target the wrong place.
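A toy illustration of what aggregates hide, with invented stage names and durations: the two transactions below share the same 240-hour cycle time, so average-based reporting treats them as identical, while their traces point to entirely different root causes.

```python
# Two invented transactions with identical 240-hour cycle times.
# Aggregate reporting sees no difference; the traces disagree on root cause.
traces = {
    "TXN-A": [("intake", 0.5), ("queue", 1.0), ("review", 1.5),
              ("queue", 236.0), ("approval", 1.0)],           # stalls at approval
    "TXN-B": [("intake", 0.5), ("queue", 40.0), ("review", 2.0),
              ("rework loop", 80.0), ("review", 2.0),
              ("queue", 114.5), ("approval", 1.0)],           # loops back through review
}

for txn, steps in traces.items():
    total = sum(hours for _, hours in steps)
    print(f"{txn}: {total:.0f}h end to end")
    for step, hours in steps:
        print(f"   {step:12s} {hours:6.1f}h")
```

An average over these two transactions reports one number and hides both problems: TXN-A's approval queue and TXN-B's rework loop.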
Before any redesign conversation begins, answer this question: what percentage of processing time in your most critical workflow is accounted for by active work versus waiting?
If your team cannot answer that question with primary source data (not estimates, not system averages), you are not ready to redesign. You are ready to measure.
That question, answered rigorously, will tell you more about where your process is broken than any workshop, process map or consultant recommendation. It will also tell you whether the problem is structural (a process that was never engineered for its current volume) or operational (a well-designed process that is being run inconsistently). The interventions are different. Getting to that distinction requires measurement.
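The arithmetic behind that question is simple once the inputs exist. A sketch, assuming a time study has produced per-transaction touch time (active work) and end-to-end elapsed time; the figures are invented:

```python
# Invented figures from a hypothetical time study: per-transaction touch
# time (active work) and end-to-end elapsed time, both in hours.
transactions = {
    "SPEC-001": {"touch": 12.5, "elapsed": 96.0},
    "SPEC-002": {"touch": 8.0,  "elapsed": 120.0},
    "SPEC-003": {"touch": 20.0, "elapsed": 60.0},
}

touch = sum(t["touch"] for t in transactions.values())
elapsed = sum(t["elapsed"] for t in transactions.values())
print(f"active work: {100 * touch / elapsed:.0f}% of total processing time")
print(f"waiting:     {100 * (elapsed - touch) / elapsed:.0f}% of total processing time")
```

The hard part is never the division. It is producing touch time and elapsed time from primary sources rather than estimates.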
The regulatory engagement referenced above produced a redesign covering 621 specifications, grounded in that transaction-level measurement. The future state was built around what the data showed, not what the team assumed. The improvements held because the baseline was documented and the root causes were confirmed before a single process change was made.
That is the standard we apply across every engagement, in every type of operation. It is also the starting point we recommend for any organization planning a process improvement program whose results are expected to last past the first quarter.
If you are preparing for that conversation, the operational assessment is the right place to begin.
Thirty minutes with a senior partner. No deck, no pitch.