
Guide

MAS IoT architecture patterns: getting operational data into Monitor and Predict cleanly

Practical IoT architecture patterns for IBM Maximo Application Suite — SCADA, historians, edge gateways, OT/IT boundary — and how to design the connectivity layer underneath Monitor and Predict without breaking the OT team.

Published 22 April 2026

IBM Maximo Application Suite · Maximo Monitor · IoT · OT cybersecurity · Architecture

Most MAS-suite engagements that go wrong, go wrong in the connectivity layer. The Monitor configuration is straightforward. The Predict model lifecycle is mature. The capability that gets operational data out of the OT estate, across the OT/IT boundary, into MAS cleanly enough to be useful — that is where scope, budget and timeline are quietly destroyed. This guide is for the architect, IT lead or platform owner who has been asked to design that layer.

What “the connectivity layer” actually means

It is the deliberate design across SCADA, historians, edge gateways, the OT/IT boundary and MAS itself that turns plant data into a usable input for Monitor and Predict. It is rarely a single product. It is a set of decisions:

  • Which sources contribute data — SCADA tags, historian time series, edge sensor signals, line-of-business systems
  • Which assets in Manage those signals belong to, and how the identifier mapping works
  • What sample rate, retention and aggregation each signal needs, for the use case it is feeding
  • Where the OT/IT boundary sits, what crosses it, how, and who is allowed to see it
  • What happens when the connectivity link is broken, slow, or producing bad data
  • Who operates the layer, with what observability, on what platform

Most of those decisions have to be made before the first MAS Monitor connector is configured.

The OT/IT boundary, treated as a first-class constraint

The OT/IT boundary exists for a reason. Plant safety, operational resilience and cybersecurity are owned by the OT team, and they should be. Getting MAS Monitor working depends on engaging that team on its terms, not designing around it.

The patterns we see work in production:

An edge or DMZ component sits between the OT environment and MAS. The OT team owns its side of the boundary. The IT team owns the other side. The edge component is the explicit handoff. It can pre-process, buffer, anonymise where required, and present a stable interface to MAS regardless of what is happening in the plant.

Data ownership is agreed before tagging starts. Which tags leave the OT environment, on what cadence, and to where, is documented and signed off. There is no ambiguity about what MAS is allowed to consume.

The pattern is repeatable across sites. A multi-site operator builds the boundary architecture once and applies it. Bespoke patterns per site rarely scale and almost never get patched.

OT cybersecurity engagement is continuous, not a sign-off gate. The team that owns plant safety has to be in the room as the architecture evolves. Our position on this is set out in OT cybersecurity is now an asset management problem.

Source design: what to do with each kind of data

SCADA tags. Real-time signal, used for control. Generally not the right primary source for MAS Monitor — too high a sample rate, too much noise, too tight a coupling to the control function. Where SCADA tags are used, an aggregation tier sits in front of Monitor.

Historian time series. Usually the right primary source. The OPC-style historian already aggregates and stores at a sample rate that matches the asset-management use case, with a stable schema and an established access pattern.

Edge sensors. New sensors deployed for the asset-management use case rather than for control. Often the right answer where existing instrumentation does not cover the asset class. Edge components handle the connectivity.

Line-of-business systems. ERP, CMMS, work systems. Not strictly part of “IoT” but often part of the same data flow, particularly for Predict where work history matters as much as live signal.

Identifier mapping: the unglamorous work that decides whether anything works

Every operational signal has to be tied back to an asset record in Manage. If that mapping is incomplete or wrong, dashboards highlight phantom equipment or miss real signals. The asset hierarchy in Manage is the spine; the identifier mapping is what attaches operational signal to it.

Practical patterns:

  • A single asset identifier strategy spanning OT tags, historian paths, and Manage records, agreed and documented
  • A mapping layer (often part of the edge component) that translates source identifiers to Manage identifiers
  • An exception path for tags that cannot be mapped, surfaced to the asset data team rather than dropped
  • Periodic reconciliation between source taxonomies and Manage hierarchy, with explicit ownership

This is data engineering, not analytics. It is also what most MAS-suite programmes underestimate.

Sample rate, retention and aggregation

Each use case has its own appetite. A bearing temperature feeding an anomaly model wants a different sample rate from a pressure reading feeding a daily KPI. Designing this once, for the candidate asset class, is cheaper than a generic “send everything to MAS” pattern that nobody can tune later.

Rules of thumb:

  • The sample rate at the source is rarely the sample rate Monitor needs. Aggregate at the edge (a sketch follows this list).
  • Retention at the source is for the OT use case; retention in MAS is for the asset-management use case. They are usually different.
  • Aggregation rules are part of the use case, not a one-off platform decision.

Designing for failure

Operational signal gets interrupted: networks fail, instruments drift, OT teams patch SCADA. The architecture has to behave well when that happens.

Patterns that work:

  • The edge buffers on disconnect and forwards on reconnect, rather than losing data (sketched after this list)
  • MAS Monitor is configured to degrade gracefully on stale signal, so a stalled feed is flagged as stale rather than surfaced as an anomaly
  • The connectivity layer has its own observability, owned by the same operations team that runs the rest of the platform
  • A documented runbook exists for the common failure modes

Operating the layer

Once it is live, the connectivity layer has to be run. It is no different from any other production capability.

  • Owned by the team that runs the rest of MAS, on the same observability and alerting stack
  • Patched and upgraded on a known cadence, including the edge components
  • Monitored end-to-end, not just at the source or just at MAS (a freshness check is sketched after this list)
  • Reviewed periodically against the use cases it is feeding, because use cases change

We host and run this layer for clients today, on the same managed cloud as the rest of MAS. See managed Maximo and MAS hosting.

Closing position

The MAS IoT architecture is the part of the wider-suite engagement most likely to surprise the budget and the timeline. Treating it as a first-class capability — engaged with the OT team, designed for the OT/IT boundary, sized for the use case, operated like the production system it becomes — is the difference between a Monitor or Predict programme that lands and one that quietly stalls.

For where IoT and OT connectivity sit inside the wider suite, see the MAS suite overview. For the implementation pattern in detail, see IBM MAS IoT and OT connectivity for Monitor, Predict and Health.

Talk to the people who would actually deliver it

No pitch deck, no pressure. A direct conversation with one of our senior consultants.