The AI and IoT conversation in asset management has moved past the slide-deck phase. There are real production deployments of MAS Monitor against historians, MAS Predict against rotating equipment, MAS Visual Inspection on inspection workflows, and increasingly AI assistance embedded in the day-to-day work of the technician and the planner. There are also a meaningful number of stalled projects where the AI or the IoT was the goal rather than the means, and the operating model never showed up.
This is a practitioner view of where AI and IoT in MAS earn their licence today, where they do not, and the integration strategy that gets value out of them without a marketing-led project. It draws on the work we are doing on customer estates today, not on a 2026 prediction piece.
The honest 2026 picture
Three things are now true that were not true two years ago.
The MAS components are mature enough to be production options for the right asset class. Monitor is the most-deployed MAS component on the engagements we run; Predict has crossed from pilot into production on rotating equipment with disciplined failure data; Health is increasingly the executive-level capital prioritisation tool. Each one earns its place when the data and operating model support it. Each one disappoints when they do not. Our deeper view on this is in the insight on sequencing Monitor and Predict after Manage.
The IoT and OT connectivity layer is the actual hard part. Wiring sensors, historians and SCADA into MAS without breaking the OT cybersecurity boundary is the work that takes the time on these projects, not the AI on top. The pattern we now reach for is set out in the IoT and OT connectivity page and the deeper piece on connecting an OT historian to MAS Monitor without breaking the OT team.
AI in the work loop is starting to land. Not as a separate AI product, but as in-product assistance — answering “what was the last issue with this asset”, suggesting parts and tasks based on history, summarising a long inspection report into a planner’s view. The technologies behind these capabilities are mature enough that they have moved from supplier slideware into capabilities customers actually use. We build a version of this into our own product set with MaxIron Assist.
The strategy that works
The strategy we run with customers in 2026 has the same shape every time:
1. Get Manage right first
Every MAS-component conversation starts with “is the asset register, the work history and the failure coding in good enough shape?” If they are not, the AI on top will surface garbage. We cover the readiness baseline in the Maximo health check guide.
This is not a delaying tactic. It is the condition for the rest of the work to produce decisions a planner can trust.
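The readiness question in step 1 can be made concrete with a small check over an export of closed work orders. This is an illustrative sketch only: the field names (`asset_id`, `failure_code`, `status`) are placeholders for the exercise, not the real Maximo Manage schema, so adapt them to your own object structures.

```python
# Illustrative readiness check against a hypothetical export of Manage work orders.
# Field names (asset_id, failure_code, status) are assumptions for the sketch,
# not the real Maximo schema.

def readiness_summary(work_orders):
    """Return the share of closed work orders with usable failure coding."""
    closed = [wo for wo in work_orders if wo.get("status") == "CLOSE"]
    if not closed:
        return {"closed": 0, "coded": 0, "coded_pct": 0.0}
    # A work order is usable for Predict/Monitor training only if it is tied
    # to an asset and carries a failure code.
    coded = [wo for wo in closed if wo.get("failure_code") and wo.get("asset_id")]
    return {
        "closed": len(closed),
        "coded": len(coded),
        "coded_pct": round(100 * len(coded) / len(closed), 1),
    }

sample = [
    {"asset_id": "CH-01", "status": "CLOSE", "failure_code": "BRG-WEAR"},
    {"asset_id": "CH-02", "status": "CLOSE", "failure_code": None},
    {"asset_id": None,    "status": "CLOSE", "failure_code": "SEAL-LEAK"},
    {"asset_id": "CH-03", "status": "INPRG", "failure_code": None},
]
print(readiness_summary(sample))  # {'closed': 3, 'coded': 1, 'coded_pct': 33.3}
```

A coded percentage this low on a real extract is the signal to stay in Manage remediation before any MAS component goes near the data.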
2. Pick a deliberate first asset class
The MAS component that pays back is the one delivered against a deliberately chosen asset class with a quantified business case. Not “let’s roll out Monitor across the estate”. Specifically: “let’s roll out Monitor against the chillers in the data centre, where unplanned downtime costs us X per hour and we have 18 months of historian data to train against.”
The same is true for Predict (a rotating-equipment population with ISO 14224-coded failure history), for Health (an asset class where the criticality conversation is a real capital one), for Visual Inspection (a defect class with a clear pass/fail definition and a labelled image set).
We have written more on this in the insight reading a Maximo Monitor anomaly without panicking the night shift.
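The quantified business case behind a deliberate asset-class choice is simple arithmetic, and writing it down is the point. Every number below is an assumption for the sketch; substitute your own downtime cost, baseline hours and reduction estimate.

```python
# Illustrative business case for Monitor on one asset class.
# All figures are assumptions for the sketch, not benchmarks.

downtime_cost_per_hour = 8_000    # assumed cost of unplanned chiller downtime
unplanned_hours_per_year = 120    # assumed baseline across the chiller population
expected_reduction = 0.25         # assumed share of downtime the deployment avoids
annual_run_cost = 90_000          # assumed licence plus operating cost

avoided_cost = downtime_cost_per_hour * unplanned_hours_per_year * expected_reduction
net_benefit = avoided_cost - annual_run_cost

print(f"avoided: {avoided_cost:,.0f}  net: {net_benefit:,.0f}")
# avoided: 240,000  net: 150,000
```

If the net figure is negative or marginal under honest assumptions, the asset class is the wrong first candidate, however interesting the technology is.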
3. Design the operating loop, then the technology
The MAS component is one part of a loop. The other parts are: who looks at the alert, what they do with it, how they capture the action back into Manage, what changes in the maintenance plan as a result, who reviews the cumulative outcome. If those questions are not answered, the dashboard exists but the work does not flow.
This is the part that the supplier slide does not cover. It is the part that determines whether the project produces a result.
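One way to force those loop questions to be answered is to write the loop down as data, so an unowned step is visible before go-live. The step names, owners and systems below are placeholders for the sketch, not product features.

```python
# Illustrative operating-loop definition: each step in the alert-to-plan loop
# has a named owner and a destination system. All names are placeholders.

LOOP = [
    {"step": "triage alert",            "owner": "reliability engineer", "system": "Monitor"},
    {"step": "raise work order",        "owner": "planner",              "system": "Manage"},
    {"step": "capture findings",        "owner": "technician",           "system": "Manage"},
    {"step": "adjust maintenance plan", "owner": "planner",              "system": "Manage"},
    {"step": "review outcomes",         "owner": "maintenance manager",  "system": "review board"},
]

def unowned_steps(loop):
    """Steps with no named owner -- the loop is not designed until this is empty."""
    return [s["step"] for s in loop if not s.get("owner")]

print(unowned_steps(LOOP))  # []
```

The design artefact matters more than the notation: a loop table on a wiki page does the same job, as long as every row has a name in the owner column.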
4. Connect at the boundary, not by extending the boundary
For IoT specifically, the integration strategy is at the OT-IT boundary, not by extending OT into IT or the other way around. The pattern is an explicit edge gateway or DMZ, with one-way data flow into MAS, with the OT team owning what crosses. Customers who try to integrate around this boundary instead of through it spend the next twelve months arguing about it.
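The “OT team owning what crosses” principle can be expressed at the gateway as an explicit allow-list: only approved historian tags are forwarded, and everything else stays inside OT by default. This is a minimal sketch under assumed tag names and record shapes; a real gateway would sit on the DMZ and enforce one-way transport as well.

```python
# Illustrative edge-gateway filter: the OT team owns an explicit allow-list of
# historian tags, and only those readings cross the boundary into the IT side.
# Tag names and the record shape are assumptions for the sketch.

OT_ALLOW_LIST = {"CH01.SupplyTemp", "CH01.ReturnTemp", "CH01.CompressorAmps"}

def readings_to_forward(readings, allow_list=OT_ALLOW_LIST):
    """Allow-list-only forwarding: anything not explicitly approved stays in OT."""
    return [r for r in readings if r["tag"] in allow_list]

batch = [
    {"tag": "CH01.SupplyTemp",     "value": 6.1},
    {"tag": "CH01.ValveSetpoint",  "value": 42.0},   # control data -- never crosses
    {"tag": "CH01.CompressorAmps", "value": 118.4},
]
print([r["tag"] for r in readings_to_forward(batch)])
# ['CH01.SupplyTemp', 'CH01.CompressorAmps']
```

The design choice here is deny-by-default: adding a tag to the flow is an OT change request, not an IT-side configuration edit.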
5. AI assistance lives in the work, not next to it
The AI capabilities that have landed best in 2026 are the ones that live inside the application surface the user is already on — the work order screen, the asset history, the inspection write-up. They suggest, summarise, or surface. They do not require the user to switch to a separate AI tool, learn a new interface, or change their workflow. The value is the speed in the existing workflow, not the novelty of a new one.
6. Measure outcomes, not deployments
The metric for an AI/IoT-on-Maximo programme is not “how many MAS components are deployed”. It is: did the failure rate go down on the asset class we targeted, did unplanned downtime reduce, did the cost per intervention drop, did the inspection cycle shorten without losing coverage. Those are outcomes. The MAS components are means to them.
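Those outcome questions reduce to before-and-after comparisons on the targeted asset class. The baseline and year-one figures below are made-up numbers for the sketch; the shape of the comparison is the point.

```python
# Illustrative outcome comparison for one targeted asset class.
# The baseline and after-deployment figures are invented for the sketch.

def pct_change(before, after):
    """Percentage change from baseline; negative means the metric went down."""
    return round(100 * (after - before) / before, 1)

baseline = {"failures": 14, "unplanned_hours": 120, "cost_per_intervention": 3_400}
year_one = {"failures": 9,  "unplanned_hours": 78,  "cost_per_intervention": 2_900}

outcomes = {k: pct_change(baseline[k], year_one[k]) for k in baseline}
print(outcomes)
# {'failures': -35.7, 'unplanned_hours': -35.0, 'cost_per_intervention': -14.7}
```

A table like this, per asset class, is what the budget conversation should be run on, rather than a count of deployed components.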
Where it does not pay off
Equally important: where AI and IoT do not earn their place yet, in our experience.
- Estates where the data quality work has not been done. Monitor against a poor asset register produces phantom anomalies; Predict against incoherent failure coding produces useless models; Health against missing criticality produces a confidence-inducing chart that is wrong.
- Asset classes where the data does not exist. Predict needs years of operating and failure history. New asset classes, or classes where the failure history is uncoded, are not Predict candidates regardless of business interest.
- Inspection workflows where the defect definition is subjective. Visual Inspection works on defects with clear pass/fail definitions and a labelled training set. It does not work on “this looks a bit off” judgements that a senior inspector makes by experience.
- Organisations where the operating model will not change. If the planner cannot act on a Monitor alert because the maintenance plan is fixed in a five-year contract, the alert is decoration. The technology will not make the operating model change.
These are not reasons to never do the work. They are the conditions to fix first.
What this means for procurement and budget conversations in 2026
The board-level question we are seeing this year: “we have invested in MAS, where is the AI and IoT value coming back?” The honest answer is that it comes back on the asset classes where the readiness work has been done and the operating model is in place. The work to put that in place is not glamorous, but it is the precondition for everything the supplier slide promises.
Customers who landed Monitor on a deliberate asset class in 2024 are seeing the value in 2026. Customers who launched a generic AI/IoT programme in 2025 are mostly still working through the readiness gap.
If the executive timeline says the value has to land this year, the answer is: pick one asset class, get it right, prove the operating loop, then expand. That is the strategy that produces a measurable outcome before the budget conversation in Q4.
Where to start
If the conversation has come up on your side, the natural starting point is the readiness assessment — the asset class candidates, the data quality, the operating model question. We run that as a fixed-scope engagement and the output is a defensible scope and business case for one MAS component on one asset class. From there, the rest of the conversation is about delivery, not about whether to do it.
If you would like to talk through where you are, get in touch. The conversation is usually most useful at the point where the executive interest exists but the scope is not yet locked in.