A well-run reliability-centred maintenance programme on a stable asset population is one of the most defensible operating disciplines in asset management. RCM is not the past. It is the bar that any predictive-maintenance programme has to clear to be worth the investment. The honest question on a specific asset class is not “do we want to do predictive maintenance”; it is “what does IBM Maximo Predict give us that the RCM programme is not already giving us, and is the gap worth the cost”.
This is how we work that question through with clients.
Where RCM is hard to beat
A mature RCM analysis on a defined asset class produces a maintenance strategy grounded in failure modes, consequences and detection methods. It draws on engineering experience, manufacturer data, operating context, and historical failure events. It produces a maintenance plan that the reliability engineer can defend, the planner can execute, and the auditor can review.
On stable populations with known failure modes, RCM is genuinely hard to beat. The plan it produces is conservative, transparent, and adapted to the operating context. A predictive model trained on the same population will often produce maintenance recommendations that look very similar to what RCM has already concluded, because both are reading the same physics.
If the RCM programme is mature and the failure population is stable, Predict on top of it is incremental, not transformational.
Where Predict pulls ahead
Three patterns where Predict does materially more than a well-run RCM programme.
Variable operating context. RCM produces a strategy for the asset under typical operating conditions. Where the operating context varies materially — load, ambient conditions, duty cycle — the strategy has to be conservative enough to cover the worst case, which means the assets running in benign conditions are over-maintained. A model that knows the operating mode and the live signal can recommend interval extension on the conservatively maintained assets, with an audit trail.
Large populations with subtle failure precursors. RCM identifies failure modes; it does not necessarily detect early-stage onset. Where a failure mode has a measurable precursor — a vibration signature, a temperature trend, a pressure correlation — and the population is large enough to learn from, a model can spot onset before the RCM-defined inspection would.
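To make the second pattern concrete, here is a minimal sketch of what precursor-based onset detection looks like in principle: a rolling-baseline z-score on a single sensor trend. The window, threshold, and signal are illustrative assumptions for the sketch, not how Maximo Predict actually models precursors.

```python
from statistics import mean, stdev

def onset_flag(readings, window=30, z_threshold=3.0):
    """Flag the first reading that deviates sharply from its rolling baseline.

    `readings` is a time-ordered list of sensor values (e.g. bearing
    vibration RMS). Window and threshold are illustrative, not tuned.
    Returns the index of suspected onset, or None.
    """
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (readings[i] - mu) / sigma > z_threshold:
            return i
    return None

# A flat signal with a slow upward drift starting at index 40: the flag
# fires shortly after the drift begins, while the absolute level is still
# far below where a fixed inspection threshold would catch it.
signal = [1.0 + 0.01 * (i % 5) for i in range(40)]
signal += [1.0 + 0.05 * (i + 1) for i in range(20)]
print(onset_flag(signal))
```

The point of the sketch is the shape of the advantage: a calendar-based RCM inspection sees the asset every N weeks, while a model watching the live signal sees the departure from baseline as it happens.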
Failure modes the original RCM analysis missed. An RCM analysis that is five years old can miss failure modes that have emerged since it was written: from changed operating conditions, design modifications, or environmental drift. A model trained on the recent population will surface anomalies that the original analysis did not anticipate. This is not a fault of RCM — it is a feature of any analysis that has aged.
In all three patterns, the value is incremental refinement of the maintenance plan, not a wholesale replacement of the discipline that produced it.
Where Predict makes things worse
Two patterns where deploying Predict on top of an RCM-managed asset class actively hurts.
Where the failure data is thin. Predict learns from failure history. If RCM is doing its job and unplanned failures are rare, the historical failure dataset is thin. A model trained on a thin dataset learns the wrong lessons, generates noisy recommendations, and erodes the reliability engineer’s confidence in both the model and the data. The right response on these asset classes is to stay with RCM.
Where the recommended action is already conservative. RCM on safety-critical asset classes often recommends conservative maintenance intervals on purpose, because the cost of failure is asymmetric. Predict, reasoning statistically, will argue for interval extension on those assets. The reliability engineer is right to overrule the model: it has not understood that the cost of being wrong is not symmetric. Deploying Predict on these asset classes wastes effort and creates organisational tension.
The honest answer in both cases is to leave RCM alone.
A practical decision framework
Per asset class, work through five questions.
- Is the RCM analysis recent and trusted? If yes, the bar for Predict is high. If the analysis is ageing or trust is wavering, the bar is lower.
- Is the operating context variable? Variable contexts favour Predict; stable contexts favour RCM.
- Is the failure history rich enough to learn from? Predict needs failure events to learn from. Asset classes that have been managed conservatively will have thin failure data and poor model performance.
- Is there a measurable failure precursor? Predict’s edge is in detecting onset. If there is no precursor signal, the model has nothing to learn against.
- Is the cost of failure symmetric? Where the cost of underestimating risk is much higher than the cost of overestimating it, RCM’s conservatism is a feature, not a bug. Predict is the wrong tool.
If three or more of the five answers tilt towards Predict, the asset class is a credible candidate. If most tilt towards RCM, leave it alone. Predict is not free; the readiness work, the data engineering and the operating discipline cost real money.
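The five questions reduce to a per-asset-class tally. This is a hypothetical screening sketch to show the shape of the decision, not product logic; the question names, the unweighted vote, and the cut-off are all assumptions of the sketch.

```python
# Each answer is True where it tilts towards Predict. Names and the
# majority cut-off are illustrative assumptions, not a product feature.
QUESTIONS = (
    "rcm_analysis_ageing_or_untrusted",
    "operating_context_variable",
    "failure_history_rich",
    "measurable_precursor_exists",
    "cost_of_failure_symmetric",
)

def screen(asset_class: str, answers: dict) -> str:
    """Tally the screening questions and return a one-line verdict."""
    votes = sum(bool(answers.get(q, False)) for q in QUESTIONS)
    if votes >= 3:
        return f"{asset_class}: credible Predict candidate ({votes}/5)"
    return f"{asset_class}: leave with RCM ({votes}/5)"

print(screen("feedwater pumps", {
    "rcm_analysis_ageing_or_untrusted": True,
    "operating_context_variable": True,
    "failure_history_rich": True,
    "measurable_precursor_exists": True,
    "cost_of_failure_symmetric": False,  # safety-critical: conservatism wins
}))
```

In practice the questions deserve weights rather than equal votes — thin failure history (question three) is close to a veto on its own — but even the unweighted tally makes the triage conversation concrete.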
Predict and RCM are not adversaries
The framing that gets the best result is “Predict augments RCM where the data and operating context support it, and RCM remains the system of record for maintenance strategy”. The reliability function still owns the strategy. Predict is one input.
This framing also resolves the cultural problem. Reliability engineers who have spent years on RCM are not threatened by a tool that supports their work; they are threatened by a tool that claims to replace them. The first is true; the second is not. Anyone selling Predict on the second framing is selling badly.
Closing position
A well-run RCM programme on a stable asset class is hard to beat. Predict is not a wholesale replacement; it is a targeted augmentation on asset classes where variable context, large populations, measurable precursors, and rich failure data align. Deploying it on the wrong asset class wastes effort and erodes confidence in both Predict and RCM. Choosing the right asset class is most of the work.
For the implementation pattern in detail, see IBM Maximo Predict: implementation, models and managed services. For the broader sequencing argument, see why predictive maintenance programmes fail and sequencing Monitor and Predict after Manage.