Most “Maximo is slow” tickets do not have a single cause. They have a stack of causes, in different layers, that together push the busiest screens past a threshold users can tolerate. Tuning Maximo well is the discipline of finding those causes in the right order and fixing them at the cheapest layer first.
This guide is for the Maximo administrator or platform engineer responsible for response time on a production estate. It sets out the levers that actually move performance in our experience, in the order we pull them.
Measure before you tune
The first rule is the boring one. Tuning before measuring produces noise. We instrument three things before touching anything:
- End-to-end response time on the busiest screens, broken down by user role and time of day: work order list, work order detail, asset list, asset detail, inventory issue, mobile sync. We take a baseline over a representative week.
- Database query workload: top queries by total elapsed time, top queries by execution count, top queries by I/O. Whatever the platform's native tooling supports — DB2's MON_GET_PKG_CACHE_STMT, Oracle AWR, SQL Server Query Store. The shape of the workload matters more than any single slow query.
- Application-server JVM behaviour: heap usage, GC pause distribution, thread pool saturation, MIF queue depths, scheduled task overruns.
With those three sources of evidence, the actual cause becomes obvious in most cases. Without them, we are guessing.
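As a concrete starting point for the database part of that baseline, a minimal sketch on DB2 (the Oracle and SQL Server equivalents come from AWR and Query Store). It assumes a pyodbc connection through an ODBC DSN named maximo and a user with monitoring authority; both are placeholders to adjust.

```python
# Capture the top statements by total elapsed time from the DB2 package cache.
# The DSN name and the top-20 cut are assumptions; the query itself is standard
# MON_GET_PKG_CACHE_STMT usage.
import pyodbc

TOP_STATEMENTS_SQL = """
SELECT SUBSTR(STMT_TEXT, 1, 120) AS STMT,
       NUM_EXECUTIONS,
       TOTAL_ACT_TIME,
       ROWS_READ
FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) AS T
ORDER BY TOTAL_ACT_TIME DESC
FETCH FIRST 20 ROWS ONLY
"""

def capture_baseline(dsn: str = "DSN=maximo") -> None:
    """Print the current top statements so each weekly run can be diffed."""
    conn = pyodbc.connect(dsn)
    try:
        cursor = conn.cursor()
        for stmt, execs, elapsed_ms, rows_read in cursor.execute(TOP_STATEMENTS_SQL):
            print(f"{elapsed_ms:>12} ms  {execs:>10} execs  {rows_read:>12} rows  {stmt}")
    finally:
        conn.close()

if __name__ == "__main__":
    capture_baseline()
```

Run it at the same point in the week each time and diff the output; movement in the top twenty says more than any single capture.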
The order matters
Performance work, in the order we do it on engagements:
- Database, because it is usually the cause and almost always the cheapest fix.
- Application configuration (security, screens, queries, fetches).
- JVM and application server.
- MIF, integrations and asynchronous load.
- Cron tasks, escalations and reports.
- Infrastructure, only after everything above has been ruled out.
Skipping levels is the mistake we see the most. Adding RAM to a JVM does not fix a missing index.
Database — the actual levers
Indexes
The biggest single lever on most Maximo estates. Maximo ships with reasonable indexes; configuration and customisation tend to introduce queries that the shipped indexes do not cover.
What we look at:
- The top queries by total elapsed time. For each one: the execution plan, the missing-index recommendations the platform produces, the actual selectivity of the predicates.
- Custom domains, conditional UI, audit tables, integration staging tables — these are where missing indexes hide.
- Over-indexing: indexes that are not used by any query in 30 days. They cost on every insert, update and delete.
We add the indexes that matter, drop the ones that do not, and document why each one exists. An indexed Maximo is a faster Maximo, but more importantly, an explainable Maximo.
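The over-indexing check is the easiest of these to automate. A DB2-flavoured sketch, assuming the Maximo schema is MAXIMO and that SYSCAT.INDEXES.LASTUSED is being maintained (it is on current DB2 versions); unique and primary-key indexes are excluded because they exist for correctness, not speed.

```python
# List non-unique indexes in the MAXIMO schema that the optimiser has not
# touched in the last N days. An index that has never been used reports a
# LASTUSED of 0001-01-01, so it falls out of the same filter.
import pyodbc
from datetime import date, timedelta

UNUSED_INDEXES_SQL = """
SELECT INDNAME, TABNAME, LASTUSED
FROM SYSCAT.INDEXES
WHERE TABSCHEMA = 'MAXIMO'
  AND UNIQUERULE = 'D'
ORDER BY LASTUSED, TABNAME
"""

def unused_indexes(dsn: str = "DSN=maximo", days: int = 30):
    cutoff = date.today() - timedelta(days=days)
    conn = pyodbc.connect(dsn)
    try:
        rows = conn.cursor().execute(UNUSED_INDEXES_SQL).fetchall()
    finally:
        conn.close()
    return [(ind, tab, last) for ind, tab, last in rows if last < cutoff]

if __name__ == "__main__":
    for ind, tab, last in unused_indexes():
        print(f"{tab:<30} {ind:<30} last used {last}")
```

The output is a candidate list, not a drop list: confirm against a full business cycle (month end, shutdowns) before removing anything.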
Statistics and table maintenance
Stale optimiser statistics are the second biggest cause of plan flips on Maximo. We schedule statistics updates on the high-volume tables (workorder, asset, locations, matusetrans, inventory, persongroup), set a sensible auto-statistics policy on the rest, and rebuild fragmented indexes in a maintenance window. On DB2, REORGCHK plus RUNSTATS on a recurring schedule. On Oracle, DBMS_STATS.GATHER_TABLE_STATS with a meaningful sample. On SQL Server, the Ola Hallengren maintenance solution is the de facto answer.
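On DB2 the recurring piece can be as small as this, assuming the MAXIMO schema and access to SYSPROC.ADMIN_CMD; the table list mirrors the high-volume set above.

```python
# Refresh statistics on the high-volume Maximo tables via ADMIN_CMD.
# Schema name and table list are assumptions to match the estate; the rest of
# the schema is left to the auto-statistics policy.
import pyodbc

HIGH_VOLUME_TABLES = [
    "WORKORDER", "ASSET", "LOCATIONS", "MATUSETRANS", "INVENTORY", "PERSONGROUP",
]

def refresh_statistics(dsn: str = "DSN=maximo") -> None:
    conn = pyodbc.connect(dsn, autocommit=True)
    try:
        cursor = conn.cursor()
        for table in HIGH_VOLUME_TABLES:
            cmd = (f"RUNSTATS ON TABLE MAXIMO.{table} "
                   "WITH DISTRIBUTION AND DETAILED INDEXES ALL")
            cursor.execute("CALL SYSPROC.ADMIN_CMD(?)", cmd)
    finally:
        conn.close()

if __name__ == "__main__":
    refresh_statistics()
```

Schedule it off-peak; WITH DISTRIBUTION on a large workorder table is not free.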
Locking and blocking
A work order locked for sixty seconds blocks every planner trying to look at it. We watch for:
- Long-running update transactions, especially from automation scripts and cron tasks.
- Default isolation level mismatches between the application and reporting users.
- Cron tasks that update large numbers of records in a single transaction instead of batching.
The fix is rarely “more hardware”. It is usually a transaction redesign in the offending script.
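The redesign usually amounts to committing in batches. A sketch of the pattern in plain SQL via pyodbc; it deliberately does not go through Maximo's business objects, so treat it as an illustration of the commit cadence only. In a real fix the same loop lives in the automation script or MBO layer so that status logic still runs; the schema, statuses and one-year cutoff here are assumptions.

```python
# Illustrative batching pattern: touch a fixed number of rows per transaction
# instead of holding one transaction open across the whole update. Writing
# STATUS directly like this bypasses Maximo's business rules; copy the commit
# cadence, not the SQL.
import pyodbc

def close_stale_workorders(dsn: str = "DSN=maximo", batch_size: int = 500) -> None:
    conn = pyodbc.connect(dsn)  # autocommit off: we choose the commit points
    try:
        cursor = conn.cursor()
        cursor.execute(
            "SELECT WORKORDERID FROM MAXIMO.WORKORDER "
            "WHERE STATUS = 'COMP' AND STATUSDATE < CURRENT TIMESTAMP - 365 DAYS"
        )
        keys = [row[0] for row in cursor.fetchall()]
        for start in range(0, len(keys), batch_size):
            chunk = keys[start:start + batch_size]
            cursor.executemany(
                "UPDATE MAXIMO.WORKORDER SET STATUS = 'CLOSE' WHERE WORKORDERID = ?",
                [(key,) for key in chunk],
            )
            conn.commit()  # short transactions: row locks released every batch
    finally:
        conn.close()

if __name__ == "__main__":
    close_stale_workorders()
```

The shape to copy is the commit inside the loop: locks are held for one batch at a time, not for the whole run.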
Audit and history
Audit tables (a_workorder, a_asset etc.) and history tables grow unbounded by default. On a five-year-old estate, they are often larger than the live tables and slow every join. The fix is a defensible retention policy, the necessary regulator alignment, and an archive job. On regulated estates we treat this as a programme decision, not a DBA decision.
Application configuration
Security groups and conditional access
Maximo evaluates security on every record. Estates with hundreds of overlapping groups, each with conditional expressions on data attributes, pay a per-record cost on every list. We audit the security model, consolidate equivalent groups, simplify conditional UI, and flatten the inheritance tree where it is unnecessarily deep. The user experience improves and the platform breathes.
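Two queries make a useful starting point for that audit, assuming the standard GROUPUSER and APPLICATIONAUTH tables and that conditional grants carry a CONDITIONNUM (worth confirming on your version before relying on the numbers).

```python
# Who sits in the most groups, and which groups carry the most conditional
# grants. Schema and column names are assumptions to verify against the data
# dictionary on the estate in question.
import pyodbc

GROUPS_PER_USER_SQL = """
SELECT USERID, COUNT(*) AS GROUPS
FROM MAXIMO.GROUPUSER
GROUP BY USERID
ORDER BY GROUPS DESC
FETCH FIRST 20 ROWS ONLY
"""

CONDITIONAL_GRANTS_SQL = """
SELECT GROUPNAME, COUNT(*) AS CONDITIONAL_GRANTS
FROM MAXIMO.APPLICATIONAUTH
WHERE CONDITIONNUM IS NOT NULL
GROUP BY GROUPNAME
ORDER BY CONDITIONAL_GRANTS DESC
"""

def audit_security_model(dsn: str = "DSN=maximo") -> None:
    conn = pyodbc.connect(dsn)
    try:
        cursor = conn.cursor()
        print("Users in the most groups:")
        for userid, groups in cursor.execute(GROUPS_PER_USER_SQL).fetchall():
            print(f"  {userid:<20} {groups}")
        print("Groups with the most conditional grants:")
        for group, grants in cursor.execute(CONDITIONAL_GRANTS_SQL).fetchall():
            print(f"  {group:<20} {grants}")
    finally:
        conn.close()

if __name__ == "__main__":
    audit_security_model()
```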
Application designer
The work order screen on a long-tenured Maximo is a museum. Forty fields, six tabs, every fetched relationship triggering a query. We rebuild the busy screens around what users actually do today, not what someone added in 2019. The principle is role-based: planners see planner fields, technicians see technician fields. Mobile rollouts make this non-negotiable — see the Maximo Mobile rollout guide.
Where clauses and saved queries
Every list has a default where clause. Many also have saved queries that nobody owns. We profile the cost of each, drop the dead ones, and rewrite the expensive ones. A start centre query that costs four seconds and runs on every login can account for half the load on the database.
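A rough profiling sketch, assuming the saved queries live in the QUERY table with APP, CLAUSENAME, OWNER and CLAUSE columns (names vary slightly between versions) and a hand-maintained mapping from application to main table. Clauses that use bind variables such as :user will not run standalone and are simply skipped.

```python
# Time each saved query's where clause as a COUNT(*) against its main table.
# The app-to-table map, schema and column names are assumptions; the timings
# are indicative, not a substitute for the execution plan.
import time
import pyodbc

APP_TO_TABLE = {"WOTRACK": "WORKORDER", "ASSET": "ASSET"}  # extend as needed

def profile_saved_queries(dsn: str = "DSN=maximo") -> None:
    conn = pyodbc.connect(dsn, autocommit=True)
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT APP, CLAUSENAME, OWNER, CLAUSE FROM MAXIMO.QUERY")
        for app, name, owner, clause in cursor.fetchall():
            table = APP_TO_TABLE.get(app)
            if not table or not clause:
                continue
            started = time.monotonic()
            try:
                cursor.execute(f"SELECT COUNT(*) FROM MAXIMO.{table} WHERE {clause}")
                cursor.fetchone()
            except pyodbc.Error:
                continue  # clauses with bind variables will not run standalone
            elapsed = time.monotonic() - started
            print(f"{elapsed:6.2f}s  {app:<10} {name:<30} owner {owner}")
    finally:
        conn.close()

if __name__ == "__main__":
    profile_saved_queries()
```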
JVM and application server
Only after the above. The MAS UI server, the BIRT server, the Maximo application server (where still in play) — each has a heap size, a GC algorithm, a thread pool, and a MIF queue depth. We set those based on observed behaviour, not on a vendor default sheet.
The common mistakes we undo:
- Heaps sized at the host’s RAM ceiling, with no headroom for the OS. GC pauses become unpredictable.
- G1GC defaults left unchanged on a 32 GB heap. Tuning the pause-time goal and the region size makes a measurable difference.
- Thread pools sized for throughput on a system that is constrained on database response time. The thread pool fills, queues lengthen, and users see the queue, not the database.
On MAS / OpenShift the resource conversation moves to limits and requests on the workload pods, plus HPA and the cluster’s scheduling pressure. The principle is the same: observed behaviour first, tuning second.
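To put numbers on the pause distribution before and after a change, a small log-parsing sketch. It assumes unified GC logging (-Xlog:gc:file=gc.log on JDK 11 and later); the regex is tuned to the default G1 pause lines and may need adjusting for other collectors or log decorators.

```python
# Summarise GC pause times from a unified GC log: count, mean, p95, max.
# Matches lines such as
#   "[12.345s][info][gc] GC(7) Pause Young (Normal) (G1 Evacuation Pause) ... 3.456ms"
import re
import statistics
import sys

PAUSE_MS = re.compile(r"Pause.*?(\d+\.\d+)ms")

def summarise(path: str) -> None:
    pauses = []
    with open(path) as log:
        for line in log:
            match = PAUSE_MS.search(line)
            if match:
                pauses.append(float(match.group(1)))
    if not pauses:
        print("no pause lines matched; check the log format and flags")
        return
    pauses.sort()
    p95 = pauses[max(0, int(len(pauses) * 0.95) - 1)]
    print(f"count={len(pauses)} mean={statistics.mean(pauses):.1f}ms "
          f"p95={p95:.1f}ms max={pauses[-1]:.1f}ms")

if __name__ == "__main__":
    summarise(sys.argv[1] if len(sys.argv) > 1 else "gc.log")
```

The p95 and the max are the numbers users feel; the mean hides the pauses that generate the tickets.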
Integrations and MIF
Asynchronous and inbound integration load is one of the most common causes of “Maximo got slower last week” — because something changed at the other end of an integration, not in Maximo. We watch:
- Inbound queue depths, retry counts, dead-letter rates.
- The size of the largest JMS message processed in the last 24 hours.
- The MIF object cache hit rate.
- File-based integrations dropping large files into the staging directory at the wrong time of day.
The fix is rarely in Maximo. It is usually a conversation with the upstream system owner about batch windows, sizing and retry behaviour. See the Maximo integrations service for our broader pattern view.
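The file-based point above is the easiest to put a number on. A small sketch that flags large files landing in the staging directory during business hours; the path, size threshold and hours are assumptions to set per estate.

```python
# Flag large files dropped into the MIF staging directory during business hours.
# Directory path, size limit and business-hours window are placeholders.
import datetime
from pathlib import Path

STAGING_DIR = Path("/opt/ibm/maximo/integration/staging")  # adjust per estate
SIZE_LIMIT_MB = 50
BUSINESS_HOURS = range(7, 19)  # 07:00 to 18:59

def flag_risky_drops() -> None:
    for path in sorted(STAGING_DIR.glob("*")):
        if not path.is_file():
            continue
        stat = path.stat()
        size_mb = stat.st_size / (1024 * 1024)
        dropped = datetime.datetime.fromtimestamp(stat.st_mtime)
        if size_mb >= SIZE_LIMIT_MB and dropped.hour in BUSINESS_HOURS:
            print(f"{dropped:%Y-%m-%d %H:%M}  {size_mb:7.1f} MB  {path.name}")

if __name__ == "__main__":
    flag_risky_drops()
```

Queue depths, retry counts and the object cache hit rate come from the application's own monitoring; the value of a script like this is that it catches the upstream behaviour the dashboards do not show.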
Cron tasks and escalations
Cron tasks and escalations are the silent load. They run when no user is logged in, on the same database. We audit:
- Every cron task: schedule, average run time, max run time, failure rate, how many records it touches.
- Every escalation: schedule, where clause, action, business value of the action.
- Overlap: two cron tasks scheduled at the same time, both updating workorder, are a deadlock waiting to happen.
The fix is usually housekeeping. Disable the cron tasks the customer no longer needs (the PMHIST job that has been running since 2016 and that nobody reads). Rebalance the schedule. Add sensible where-clause filters so escalations do not scan whole tables.
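A starting point for the overlap check, assuming the standard CRONTASKINSTANCE table; identical schedule strings are only a rough signal, but they are where we start looking.

```python
# Group active cron task instances by their schedule string and flag collisions.
# CRONTASKINSTANCE and these columns are standard; run-time history tables vary
# by version, so average and max run times are left to the platform's own view.
from collections import defaultdict
import pyodbc

ACTIVE_INSTANCES_SQL = """
SELECT CRONTASKNAME, INSTANCENAME, SCHEDULE
FROM MAXIMO.CRONTASKINSTANCE
WHERE ACTIVE = 1
ORDER BY CRONTASKNAME, INSTANCENAME
"""

def audit_crontasks(dsn: str = "DSN=maximo") -> None:
    by_schedule = defaultdict(list)
    conn = pyodbc.connect(dsn)
    try:
        for name, instance, schedule in conn.cursor().execute(ACTIVE_INSTANCES_SQL):
            by_schedule[schedule].append(f"{name}.{instance}")
    finally:
        conn.close()
    for schedule, instances in sorted(by_schedule.items()):
        flag = "  <-- overlap" if len(instances) > 1 else ""
        print(f"{schedule:<25} {', '.join(instances)}{flag}")

if __name__ == "__main__":
    audit_crontasks()
```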
Reports
BIRT reports run on the application server. A single ad-hoc report against the work history can pin the report engine for thirty minutes and slow every user. We:
- Identify reports nobody opens — drop them.
- Identify reports that run on a schedule and dump a 200 MB output to a shared drive — kill the schedule and replace it with a query.
- Move the genuinely heavy operational reporting to a reporting database where it cannot interfere with transactional load.
For estates that have moved to MAS, the reporting story increasingly includes Cognos Analytics or an external BI tool against a replicated copy. That is a programme decision, but it removes the report-engine load from the operational estate entirely.
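For the "reports nobody opens" pass, a sketch that assumes report runs are logged to REPORTUSAGELOG with REPORTNAME, APPNAME and STARTDATE columns; verify the table and column names against your version's data dictionary before acting on the output.

```python
# Reports with no recorded runs in the last 90 days. The usage-log table and
# its columns are assumptions to confirm per version; the REPORT table itself
# is standard.
import pyodbc

UNUSED_REPORTS_SQL = """
SELECT R.REPORTNAME, R.APPNAME
FROM MAXIMO.REPORT R
WHERE NOT EXISTS (
    SELECT 1 FROM MAXIMO.REPORTUSAGELOG U
    WHERE U.REPORTNAME = R.REPORTNAME
      AND U.APPNAME = R.APPNAME
      AND U.STARTDATE > CURRENT TIMESTAMP - 90 DAYS
)
ORDER BY R.APPNAME, R.REPORTNAME
"""

def unused_reports(dsn: str = "DSN=maximo") -> None:
    conn = pyodbc.connect(dsn)
    try:
        for name, app in conn.cursor().execute(UNUSED_REPORTS_SQL):
            print(f"{app:<15} {name}")
    finally:
        conn.close()

if __name__ == "__main__":
    unused_reports()
```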
Infrastructure, last
If — and only if — the above has been done, infrastructure is the next conversation. CPU, memory, IOPS, network. On managed cloud, this is the easiest lever to pull, which is why it is so often pulled first. It is also the most expensive ongoing decision. We make it last.
What “good” looks like
A well-tuned Maximo or MAS estate, in our experience, hits these markers:
- Work-order detail loads in under a second at peak.
- Inventory issue completes the round-trip in under three seconds.
- Mobile sync for a typical technician completes in under thirty seconds for a standard field-day workload.
- Cron tasks finish inside their window, every time, with telemetry to prove it.
- Database CPU at peak hour sits under 60% sustained.
- The on-call team gets paged once a quarter on performance, not once a week.
If a customer’s numbers look very different to those, the audit work in the Maximo health check guide is the right starting point. The performance-tuning programme builds on top of it.
When to bring an outside team in
The internal team usually knows where the pain is. They sometimes do not have the time, or the database depth, to fix it without disrupting the business. We come in for fixed pieces of work — index review, JVM tune, cron audit, security model simplification — with a measurable before-and-after on agreed screens. No deck-driven workstreams. If you would like that conversation, talk to us.
Talk to the people who would actually deliver it
No pitch deck, no pressure. A direct conversation with one of our senior consultants.