Connecting an OT historian to MAS Monitor without breaking the OT team

How to connect an OT historian to IBM Maximo Monitor cleanly: the OT/IT boundary as a first-class constraint, the patterns that work, and the patterns that quietly get rolled back.

IBM Maximo Application Suite · Maximo Monitor · IoT · OT cybersecurity · Architecture

The connectivity layer is where most IBM Maximo Monitor programmes succeed or quietly fail. The Monitor configuration is straightforward. The dashboards take days. The work that takes months is getting historian data out of the OT environment cleanly enough that the OT team is comfortable operating it long-term. Get the OT engagement wrong and the programme either gets rolled back at the first cybersecurity audit or ages into a fragile bridge that nobody wants to touch.

This is what we have learned from doing it on real estates. None of it is novel. All of it is unglamorous.

The OT team is not the obstacle

The framing that produces failed programmes is “the OT team is the obstacle to getting data into MAS”. The framing that produces durable programmes is “the OT team is the function we have to design with, because they own plant safety and they will own this connectivity long after the consultants have left”.

The OT team owns plant safety and operational resilience for a reason. They have seen what happens when those responsibilities are diluted. A pattern designed around them — where their data leaves the OT environment in a way they have not approved — will be unwound at the first audit, and they will be right to unwind it. The sequence that works is to engage the OT team first, design the boundary jointly, and only then start scoping Monitor on top.

Where this engagement is treated as a sign-off gate at the end of the design rather than a continuous conversation throughout it, the design either gets blocked or gets approved with reservations that are never resolved. Both outcomes corrode the programme.

The pattern that works: an explicit edge or DMZ component

The architectural pattern we see succeed in production is the same across sectors: an explicit edge or DMZ component sits between the OT environment and MAS, owned jointly by the OT team and the IT/MAS team, with clearly delineated responsibilities.

What this looks like in practice:

  • The OT team owns the historian and everything inside the OT environment. The IT/MAS team does not have direct read access to the historian.
  • The edge or DMZ component pulls aggregated, agreed historian data on an agreed cadence. It is the explicit handoff. The OT team is comfortable operating it; the IT team is comfortable depending on it.
  • The data that crosses the boundary is documented — which tags, which assets, which sample rate, which retention. Nothing crosses by accident.
  • The edge component buffers and forwards. If the link between edge and MAS breaks, data is not lost; it is queued. If the link from historian to edge breaks, the OT operations are not affected.
  • The edge component has its own observability, owned by the team that runs the rest of the MAS platform.
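The buffer-and-forward behaviour described above can be sketched as follows. This is a minimal illustration, not a MAS API: `EdgeForwarder`, `send_fn` and the reading shape are hypothetical names, and a production edge would use durable disk-backed storage rather than an in-memory queue.

```python
import collections


class EdgeForwarder:
    """Minimal store-and-forward sketch for the edge component.

    Readings pulled from the historian are queued locally. If the
    uplink to MAS is down, nothing is lost; readings stay queued
    until the next successful flush. A production edge would persist
    the queue to disk, not hold it in memory.
    """

    def __init__(self, send_fn, max_buffer=100_000):
        self.send_fn = send_fn  # uplink to MAS (hypothetical callable)
        self.buffer = collections.deque(maxlen=max_buffer)

    def ingest(self, reading):
        # Historian -> edge handoff always succeeds locally, so an
        # uplink outage never back-pressures the OT side.
        self.buffer.append(reading)

    def flush(self):
        """Forward queued readings; re-queue and stop on uplink failure."""
        sent = 0
        while self.buffer:
            reading = self.buffer.popleft()
            try:
                self.send_fn(reading)
                sent += 1
            except ConnectionError:
                # Uplink down: put the reading back and retry later.
                self.buffer.appendleft(reading)
                break
        return sent
```

The point of the sketch is the asymmetry: ingest from the historian side always succeeds, and only the MAS-facing flush is allowed to fail and retry.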

This pattern is not the cheapest possible architecture. It is the architecture that survives a cybersecurity audit, a SCADA upgrade, a change of OT lead, and a regulator visit. The cheaper architectures we have seen tend to have one or more of those events kill them inside two years.

Sample rate at the source is rarely the sample rate Monitor needs

A common rookie mistake is to mirror the historian’s native sample rate into MAS. The historian records at high frequency for the OT use cases: control loop diagnostics and post-incident forensic analysis. Monitor’s anomaly detection operates on a much longer timescale and on a much smaller, more meaningful set of tags.

Aggregating at the edge — minute-level or five-minute-level summaries, derived signals, calibrated baselines — produces a dataset that is materially cheaper to ingest and store, materially easier for Monitor to operate against, and materially less risky from an OT data exposure perspective.
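A minimal sketch of that edge aggregation, under the assumption that raw samples arrive as (timestamp, value) pairs; the output shape is illustrative, and a real edge would also carry sample counts and quality flags:

```python
from statistics import mean


def aggregate(samples, bucket_seconds=60):
    """Downsample raw historian samples into per-bucket summaries.

    samples: iterable of (epoch_seconds, value) pairs at the
    historian's native rate. Returns a dict keyed by bucket start
    time, each holding min/mean/max for that bucket. The summary
    fields are a hypothetical shape, not a Monitor schema.
    """
    buckets = {}
    for ts, value in samples:
        start = int(ts) - int(ts) % bucket_seconds
        buckets.setdefault(start, []).append(value)
    return {
        start: {"min": min(values), "mean": mean(values), "max": max(values)}
        for start, values in sorted(buckets.items())
    }
```

Two minutes of one-second samples collapse to two summary rows: the volume crossing the boundary drops by orders of magnitude before anything leaves the edge.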

This is data engineering, not analytics. It is also one of the easier wins in the early phase of a Monitor programme, because it benefits both the OT team (less data leaving the environment) and the MAS team (cheaper, more focused ingest).

Identifier mapping is where the dashboards become useful

A historian tag like PLT01_C03_PMP_DSC_PRES_PSI means something to the OT engineer who named it. It means nothing to the planner looking at the Monitor dashboard. The work of mapping operational tags to asset records in Manage is what turns the data into operational context.

Patterns that work:

  • A single, agreed asset identifier strategy spanning OT tags, historian paths and Manage records
  • A mapping layer (often part of the edge component or alongside it) that translates source identifiers to Manage identifiers
  • An exception path for tags that cannot be mapped, surfaced to the asset data team rather than dropped silently
  • Periodic reconciliation between source taxonomies and the Manage hierarchy, with explicit ownership
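The mapping layer and its exception path can be sketched as below. The tag and asset identifiers are illustrative, not a real naming standard, and a production layer would read the mapping from a governed source rather than a literal dict:

```python
def map_tags(tag_to_asset, readings):
    """Translate historian tag IDs to Manage asset IDs.

    tag_to_asset: dict such as
        {"PLT01_C03_PMP_DSC_PRES_PSI": "PUMP-3-CT3-P1"}
    (hypothetical identifiers for illustration).
    Returns (mapped_readings, unmapped_tags). Unmapped tags are
    returned for the asset data team's exception queue, never
    dropped silently.
    """
    mapped, unmapped = [], set()
    for reading in readings:
        asset = tag_to_asset.get(reading["tag"])
        if asset is None:
            unmapped.add(reading["tag"])
            continue
        mapped.append({**reading, "asset": asset})
    return mapped, sorted(unmapped)
```

The design choice worth noting is the second return value: a tag that cannot be mapped is surfaced as data-quality work, not discarded on the way to the dashboard.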

This is unglamorous work. It is also the difference between a Monitor dashboard that surfaces “anomaly on PLT01_C03_PMP_DSC_PRES_PSI” and one that surfaces “anomaly on Pump 3, Compressor Train 3, Plant 1, classified as critical”.

What gets rolled back

For completeness, here are the patterns we have seen rolled back, and what they have in common.

  • Direct read access from MAS into the historian. The OT team is right to unwind this at the first audit.
  • Connectivity designed around the OT team rather than with them. The OT team is right to unwind this whenever they next have the opportunity.
  • Bespoke per-site connectivity patterns. They do not scale, do not get patched, and become a liability at upgrade.
  • No buffering at the edge. The first network blip loses data and erodes confidence in the programme.
  • No observability on the edge. The first failure is invisible until the dashboard goes blank.
  • No agreed data exposure documentation. The first cybersecurity audit produces findings nobody wants to defend.

Each of these is avoidable. Each of these is what makes the difference between a Monitor programme that lasts and one that gets quietly switched off.

Closing position

Connecting an OT historian to MAS Monitor cleanly is mostly the OT engagement, partly the architectural pattern, and only a little the technology. Engage the OT team first. Design an explicit edge or DMZ component jointly with them. Aggregate at the edge. Map identifiers to assets. Document what crosses the boundary. Build observability into the connectivity, not just into MAS. None of this is novel. All of it is what separates programmes that compound value over years from programmes that get rolled back at audit.

For the implementation pattern in detail, see IBM MAS IoT and OT connectivity for Monitor, Predict and Health and our position on OT cybersecurity is now an asset management problem.