IBM Maximo Predict is the part of the MAS suite where buyers most often run ahead of their data. The supplier story is compelling. The technology is real. But Predict learns the lessons your failure history teaches it, and on a lot of estates that history is not yet good enough to teach a useful lesson. The point of this guide is to make that conversation honest before money moves.
What Predict actually does
Predict applies analytics to failure history and operational signal to estimate the probability that an asset will fail in a given window. The output is fed into Manage so it can drive PM intervals, inspection scope, or work prioritisation. It is not a magic answer; it is an additional, defensible input to a maintenance plan that the reliability function already owns.
Predict ships with a model lifecycle. It does not ship with the data discipline, the asset domain knowledge, or the operating model that make the lifecycle pay back.
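Predict's models are IBM's own, but the shape of the estimation is familiar: learn from the population's failure history, not from any single asset. A deliberately simplified sketch of that idea in plain Python, using hypothetical work-order data (asset IDs, ages and the 90-day window are all illustrative, not anything Predict exposes):

```python
from collections import defaultdict

# Hypothetical history for one asset class: each record is
# (asset_id, age_in_years_at_observation, failed_within_next_90_days).
history = [
    ("P-001", 1, False), ("P-001", 2, False), ("P-001", 3, True),
    ("P-002", 1, False), ("P-002", 2, True),
    ("P-003", 1, False), ("P-003", 2, False), ("P-003", 3, False),
    ("P-004", 2, True),  ("P-004", 3, True),
]

def failure_probability_by_age(records):
    """Empirical P(failure in window | asset age), learned from the
    population of similar assets rather than any individual one."""
    counts = defaultdict(lambda: [0, 0])  # age -> [failures, observations]
    for _, age, failed in records:
        counts[age][0] += int(failed)
        counts[age][1] += 1
    return {age: f / n for age, (f, n) in sorted(counts.items())}

probs = failure_probability_by_age(history)
for age, p in probs.items():
    print(f"age {age}y: P(fail in 90d) = {p:.2f}")
```

Even this toy version makes the dependency obvious: the estimate is only as good as the failure records behind it, and a population of ten observations teaches a weaker lesson than a population of ten thousand.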
When Predict is worth it
Four conditions usually have to be true at once.
Failure history in Manage is rich enough to learn from. Years of work orders, with usable failure codes and accurate completion data. Not perfect — usable. If technicians close work with vague descriptions and missing failure codes, you are not ready to invest in model development. You are ready to invest in coding discipline.
The asset population is large enough for a model to generalise. Predict learns from the population, not from the individual asset. A handful of high-value bespoke assets is rarely a Predict candidate. A large population of similar equipment with shared failure modes usually is.
The cost of unplanned failure is high enough to justify the investment. Safety-critical, regulatory, operationally expensive — there has to be a number on the page that the model can move.
There is a reliability function that will use the output. A model whose output nobody acts on is an expensive dashboard. The reliability operating model — who interprets, who challenges, who retrains — has to exist before the model does.
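The first of those conditions, usable failure coding, can be checked mechanically before anyone talks about models. A sketch of the kind of readiness check we mean, in plain Python against a hypothetical Manage work-order export (the field names are illustrative, not actual Maximo schema):

```python
# Hypothetical closed work orders exported from Manage.
work_orders = [
    {"wonum": "1001", "failurecode": "BRG-WEAR",  "description": "Replaced drive-end bearing"},
    {"wonum": "1002", "failurecode": None,        "description": "fixed"},
    {"wonum": "1003", "failurecode": "SEAL-LEAK", "description": "Mechanical seal leak, replaced seal"},
    {"wonum": "1004", "failurecode": None,        "description": "see notes"},
]

def coding_readiness(wos, min_desc_words=3):
    """Share of closed work carrying a failure code and a description
    long enough to be usable. Thresholds here are illustrative."""
    n = len(wos)
    coded = sum(1 for w in wos if w["failurecode"])
    described = sum(1 for w in wos if len(w["description"].split()) >= min_desc_words)
    return {"failure_code_rate": coded / n, "usable_description_rate": described / n}

rates = coding_readiness(work_orders)
print(rates)
```

If those rates come back low, the next investment is coding discipline, not model development.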
When any of those is missing, the project either stalls or produces output the operations team quietly stops trusting. Our broader position on this is in why predictive maintenance programmes fail.
When Predict is not worth it (yet)
Three patterns we recommend deferring on:
- The asset register and failure coding are still being cleaned up. Predict learns wrong lessons from bad data. Fix the data first, then come back.
- Predict is being scoped before Monitor. For most asset classes, Predict is materially better when it has live operational signal alongside historical failure data. Sequencing Monitor first on the same asset class usually produces better Predict outputs later. (See sequencing Monitor and Predict after Manage.)
- There is no maintenance decision waiting for the model output. A model needs a target. “Reduce unplanned failures” is a vibe, not a target. “Lengthen PM interval on this pump class without breaching the failure-rate threshold” is a target.
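The difference between a vibe and a target is that a target can be written down as a check. A sketch of the pump-class example as an explicit decision rule, with made-up numbers (the threshold is illustrative, not a recommendation):

```python
def can_lengthen_pm_interval(predicted_failure_rate, threshold_per_year=0.10):
    """Decision rule: extend the PM interval only if the model's predicted
    failure rate at the longer interval stays under the agreed threshold.
    Both numbers are hypothetical."""
    return predicted_failure_rate < threshold_per_year

# Model predicts 0.07 failures/year at the longer interval: extend.
print(can_lengthen_pm_interval(0.07))
# Model predicts 0.14 failures/year: do not extend.
print(can_lengthen_pm_interval(0.14))
```

If nobody can write the decision in roughly this form, the model has no target yet.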
Picking the right first asset class
The first Predict use case sets the tone for everything after it. The right candidate is rarely the executive sponsor’s favourite asset. It is the asset class where:
- Failure history in Manage is the cleanest in the estate
- The population is large enough for a model to generalise
- A reliability engineer is willing to own the output and challenge it
- A maintenance decision (PM interval, inspection scope, condition trigger) is waiting on a better view of risk
- A business case can be written in a single page with numbers in it
Pick boring before exciting. The first programme has to succeed.
Scoping a credible first programme
A credible Predict programme has six elements. If any are missing, the supplier is selling the easy part and hiding the hard part.
- Readiness assessment on Manage data quality, the candidate asset class and the reliability operating model
- Data engineering: failure-code cleanup, work history alignment, feature build from Monitor signal where available
- Model build and validation against historical failure events, with explicit acceptance criteria
- Manage integration so output drives work, not a separate dashboard
- Operating model for the reliability and planning teams to interpret, retrain and challenge the model
- Managed run-state for the model lifecycle, retraining and observability
Notice that two of the six elements (data engineering and operating model) are not Predict-the-product. They are the work that makes Predict-the-product earn its licence.
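The validation element deserves particular attention: "explicit acceptance criteria" means numbers agreed before the build, tested on hold-out history. A minimal sketch of what that backtest can look like, with hypothetical scores and thresholds:

```python
def evaluate(scores, actuals, alert_threshold=0.5):
    """Backtest model scores against known historical failures.
    scores: predicted failure probability per asset-window;
    actuals: whether that asset actually failed in the window."""
    alerts = [s >= alert_threshold for s in scores]
    tp = sum(a and f for a, f in zip(alerts, actuals))
    fp = sum(a and not f for a, f in zip(alerts, actuals))
    fn = sum((not a) and f for a, f in zip(alerts, actuals))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical hold-out windows the model never saw during training.
scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2]
failed = [True, True, False, True, False, False]

precision, recall = evaluate(scores, failed)
print(f"precision={precision:.2f} recall={recall:.2f}")

# Acceptance criteria agreed up front (illustrative values):
assert precision >= 0.6 and recall >= 0.5
```

The point is not the metric choice; it is that the pass/fail line exists in writing before the supplier starts building.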
Questions to ask a supplier
- “Walk me through how you would assess our failure-coding quality before scoping a Predict use case.”
- “Which asset class would you recommend we start on, given what you have seen of our estate, and why that one?”
- “Do you build proprietary models or do you use the Predict model lifecycle?” (A credible answer is the second one. There is no proprietary IP in this space that a buyer should pay for.)
- “How does the model output get into the maintenance plan? Who decides whether to act on it?”
- “Who retrains the model after go-live, on what cadence, and what do they do when it drifts?”
- “If the first asset class does not pay back, what do we have, and what would you recommend we do next?”
Closing position
IBM Maximo Predict is one of the highest-leverage components of the MAS suite when the data and the reliability operating model are ready, and one of the most expensive distractions when they are not. The decision is not “do we want predictive maintenance” — every operator does. The decision is “are we ready to do it on the asset class we have in mind”. This guide is the basis for that conversation.
For where Predict sits inside the wider suite, see the MAS suite overview. For the implementation pattern in detail, see IBM Maximo Predict: implementation, models and managed services.
Talk to the people who would actually deliver it
No pitch deck, no pressure. A direct conversation with one of our senior consultants.