
AI-led changes to the asset operating model: what Manage customers should plan for in the next 24 months

What the next 24 months of AI-led change actually mean for an asset-intensive operator running IBM Maximo Manage today: what is real, what is not, and what to plan for without abandoning the Manage discipline that pays the bills.

By Robert Carew
IBM Maximo Application Suite · AI · Asset operating model · Future readiness · Strategy

Boards are being told the asset operating model is about to change. Some of that is genuine, and some of it is not. The right response, for an operator running IBM Maximo Manage today, is neither to dismiss the change nor to throw the existing operating model overboard. It is to understand what is actually shifting, what is hype, and what to put on the twenty-four-month plan so the operator is ready when the conversation lands at the executive table.

This is what we are seeing on the ground. We deliver Manage and the wider MAS suite for clients today. We have implemented and operated Monitor, Predict, Health and Visual Inspection in production. The view here is from inside those engagements, not from a vendor deck.

What is actually shifting

Three changes are real and material.

Operational signal is finally meeting the asset record. The historian and the SCADA estate have been producing data for decades. The asset record has lived in Manage for nearly as long. The two have spoken to each other through brittle integrations and human reconciliation. The MAS suite — specifically Monitor underneath Predict and Health — is the first widely deployed pattern that closes the loop properly. It is not a future capability. It is in production today.

Vision is becoming a routine part of inspection regimes. Visual Inspection is no longer a research project. On safety-critical inspection workflows with high volume and well-defined defect classes, it is operationally viable today. The audit trail is the design problem, not the model.

The reliability conversation is shifting from “do we do predictive maintenance?” to “on which asset classes does it actually pay back?”. That is a maturity step. It produces better programmes than the previous framing — and it puts a higher bar on data quality, failure-coding discipline and the reliability operating model. Those are Manage problems.

Notice that all three changes pull more weight onto the Manage estate, not less. The asset record has to be in better shape, not worse. The failure history has to be more disciplined, not less. The integrations have to be more resilient, not less.

What is hype

For balance — three things that are widely promised and not, in our experience, true today.

General-purpose generative AI replacing the planner, the reliability engineer, or the inspector. None of the operators we work with have credibly displaced these roles, and the ones that have tried have unwound it. Generative AI is genuinely useful for accelerating drafting, summarising work history, and conversational interfaces into Manage. It is not, today, replacing the human judgement that asset-intensive operations depend on.

Pre-trained vertical models that work out of the box on your estate. The training data on your estate is what makes a model useful. A pre-trained model gets you to a starting point; it does not get you to a working operating model. Suppliers offering “no training required” are usually selling a generic baseline that will need real training work to be operationally trustworthy.

A wholesale replacement of RCM, work management, or the asset register. The MAS suite augments the existing operating discipline. The operators succeeding with it are the ones treating it as augmentation; the ones treating it as replacement are the ones whose programmes stall.

The honest line is: the MAS suite is genuinely transformational where it earns its licence, and it is being oversold in places where it is not yet ready.

What to put on the twenty-four-month plan

For a Manage-first operator, the twenty-four-month plan should look something like this.

Months 0–6: get the Manage foundation honest. Asset register completeness on the candidate asset classes. Failure-coding discipline. Hierarchy and criticality consistency. Integration resilience. None of this is glamorous. All of it is the prerequisite for everything that follows. We argue the case in sequencing Monitor and Predict after Manage.
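One way to make "failure-coding discipline" concrete is to measure it. The sketch below is illustrative only: it assumes a flat export of closed corrective work orders (in practice this would come from a Manage report or data export), and the field names `wonum`, `assetnum`, `failurecode` and `problemcode` are our assumptions for the example, not a prescribed schema. The point is that completeness per asset class is a number you can baseline in month one and track through the programme.

```python
import csv
from io import StringIO

# Hypothetical extract of closed corrective work orders. In a real
# engagement this would be a Manage export; the field names here are
# illustrative assumptions, not a required schema.
SAMPLE = """wonum,assetnum,failurecode,problemcode
1001,PUMP-01,PKG-PUMP,LEAK
1002,PUMP-01,,
1003,PUMP-02,PKG-PUMP,VIBRATION
1004,COMP-07,,
"""

def failure_coding_completeness(rows):
    """Share of work orders per asset that carry both a failure
    class and a problem code."""
    totals, coded = {}, {}
    for row in rows:
        asset = row["assetnum"]
        totals[asset] = totals.get(asset, 0) + 1
        # A work order only counts as coded if both fields are populated.
        if row["failurecode"] and row["problemcode"]:
            coded[asset] = coded.get(asset, 0) + 1
    return {asset: coded.get(asset, 0) / n for asset, n in totals.items()}

rows = list(csv.DictReader(StringIO(SAMPLE)))
for asset, share in sorted(failure_coding_completeness(rows).items()):
    print(f"{asset}: {share:.0%}")
```

A baseline like this, cut by candidate asset class, is what turns "get the foundation honest" from a slogan into a tracked metric — and it is the same number Predict will later depend on.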

Months 6–12: pick one MAS-suite component that pays back on a known decision. For most operators that is Health (against a capital plan), Monitor (against unplanned downtime on a specific asset class), or Visual Inspection (against a high-volume safety-critical workflow). One component, one asset class, one decision. The point is to build the operating muscle, not to build the platform.

Months 12–18: add the second component on top of the first. Monitor on the asset class where Health is now scoring. Predict on the asset class where Monitor is now producing live signal. Visual Inspection on the inspection regime that Health is identifying as criticality-driving. The sequencing produces compounding value, not parallel projects.

Months 18–24: industrialise the operating model. The reliability function, the planning function and the operations function start working from a common operating picture. The audit trail is mature enough to defend at regulator level. The capital plan is being built on evidence rather than narrative. The operator now has a defensible position on the AI-led conversation, because they are already in it.

This is not a heroic plan. It is a sequenced one. It also produces a defensible position when the executive team asks “what are we doing about AI in the asset operating model” — the honest answer is “this, this and this, on these asset classes, with this evidence, on the same platform we already trust”.

What not to do

Three failure patterns we see repeatedly.

Buying the entire MAS suite up front because the licensing is bundled. The licensing conversation should not drive the operating model conversation. Buy what you can credibly deploy in twenty-four months. Defer the rest.

Standing up a parallel team for the AI initiative. The team that will operate this long-term is the team that runs Manage today. A parallel team builds a parallel platform and a parallel operating model, and the two never converge. The MAS suite is operated by the same team that operates Manage, on the same platform.

Promising executive sponsors a step-change in twelve months. The step-change is real on the right timeline. It is not real on a twelve-month timeline. Programmes promised on a twelve-month step-change get cancelled in month nine when the step-change is not visible.

The honest framing for the executive conversation

When the AI conversation lands at the executive table — and it will — the framing that holds up is “we have stabilised the asset record, we have already deployed the suite components that pay back today, we have a sequenced plan for the next two years, and we have evidence rather than narrative on every claim”. That position survives a board challenge. The position “we have a transformational AI roadmap that will deliver in the next eighteen months” does not.

The MAS suite is the route by which Manage-first operators reach that defensible position. The sequencing is the work. The platform is the easy part.

Closing position

The next twenty-four months are real, not hype, but only on the right timeline and with the Manage discipline intact. Stabilise the foundation. Pick one component that pays back on a known decision. Sequence the rest behind it on the same managed platform. Industrialise the operating model. The operators who do this will be in the conversation when the executive question lands. The operators who chase the hype will be unwinding programmes when the question lands.

For the broader picture, see the MAS suite overview. For where to start in the Manage estate, see our services.