
Why Most Asset Criticality Assessments Fail

Asset criticality scores often sit unused. Build a framework that changes maintenance behavior, not just registers.

Asset Management Best Practices · Asset Criticality · Maintenance Strategy · Risk Management

Asset criticality assessment is standard practice in asset-intensive organizations. The outputs are familiar: spreadsheets of numerical scores, heat maps color-coded by risk level, registers with ratings from 1 to 5. What is far less common is evidence that these assessments actually changed anything.

Criticality scores get calculated, documented, and filed. Maintenance strategies remain unchanged. Resource allocation follows historical patterns. The work that was urgent last year is still urgent this year, regardless of what the criticality register says.

This is not a failure of methodology. It is a failure of implementation. Criticality assessments fail when they are treated as documentation exercises rather than decision frameworks.

What Asset Criticality Actually Measures

Asset criticality is a composite of two dimensions: consequence of failure and likelihood of failure.

Consequence captures what happens when the asset fails: production loss, safety risk, environmental impact, reputational damage, regulatory breach. This dimension is relatively stable. A critical pump does not become less critical because it has not failed recently.

Likelihood captures the probability of failure within a defined timeframe, based on age, condition, operating environment, and maintenance history. This changes as assets degrade and as maintenance interventions extend service life.

The product of these dimensions produces a criticality score that should inform maintenance strategy, spares holdings, inspection frequency, and capital replacement timing.

Should. But often does not.
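The consequence-times-likelihood calculation above can be sketched in a few lines. The 1-to-5 scales and the band cut-offs here are illustrative assumptions, not a standard; each organization sets its own.

```python
def criticality_score(consequence: int, likelihood: int) -> int:
    """Composite criticality: consequence (1-5) times likelihood (1-5)."""
    if not (1 <= consequence <= 5 and 1 <= likelihood <= 5):
        raise ValueError("scores must be on a 1-5 scale")
    return consequence * likelihood

def criticality_band(score: int) -> str:
    """Map a 1-25 score onto bands; the cut-offs are illustrative."""
    if score >= 15:
        return "Critical"
    if score >= 9:
        return "High"
    if score >= 4:
        return "Medium"
    return "Low"

# A high-consequence pump with moderate failure likelihood:
print(criticality_band(criticality_score(consequence=5, likelihood=3)))  # Critical
```

The point is not the arithmetic; it is that the same inputs always yield the same band, which is what the next section shows most frameworks fail to guarantee.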

Why Criticality Assessments Do Not Stick

Asset criticality frameworks fail for specific, fixable reasons.

Criteria Are Too Abstract

Frameworks that define levels with phrases like “moderate impact” or “significant disruption” produce inconsistent assessments. Two engineers evaluating identical pumps will assign different scores because the criteria are open to interpretation. If definitions do not specify measurable thresholds (hours of downtime, volume of production loss, cost of remediation), the assessment becomes subjective.

Stakeholder Input Is Missing

Engineering teams often complete criticality assessments in isolation, without input from operations, finance, or compliance. The result is a technically defensible assessment that does not reflect business priorities. A production manager knows which assets constrain throughput. A compliance officer knows which failures trigger regulatory reporting. An engineer working from P&IDs alone does not.

Scores Are Not Linked to Actions

If a high criticality rating does not automatically trigger a defined response (revised PM schedule, condition monitoring deployment, spares provisioning), then the score is decorative. The assessment should directly determine maintenance treatment, not just document risk. Organizations that invest in solution design and process optimization before building their criticality model tend to avoid this disconnect.

The Framework Never Updates

Asset criticality changes as operating context changes. A backup pump becomes critical when the primary fails. A secondary production line becomes critical when demand exceeds primary capacity. Assessments conducted once and left static become historical artefacts, not decision tools.

Building a Criticality Framework That Works

Effective asset criticality assessments require tight integration with maintenance strategy and resource planning.

Define consequence thresholds numerically. Replace qualitative scales with measurable business impacts:

  • Safety: Lost-time injury, fatality, dangerous occurrence requiring HSE notification
  • Production: Hours of downtime, tonnes of throughput loss, revenue impact
  • Environment: Volume of reportable release, remediation cost, regulatory penalty
  • Compliance: Breach of operating permit, suspension of certification, legal liability

These thresholds should be agreed with operational leadership, not defined by the maintenance team in isolation.
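Once thresholds are numeric, a consequence score stops being a matter of opinion. A minimal sketch for the production dimension; the hour cut-offs are placeholder assumptions that operational leadership would set, not standard values.

```python
# Illustrative production-consequence thresholds: (downtime hours, score).
# The cut-off values are assumptions to be agreed with operational
# leadership, not an industry standard.
PRODUCTION_DOWNTIME_THRESHOLDS = [
    (72.0, 5),   # more than 72 h of downtime -> consequence 5
    (24.0, 4),
    (8.0, 3),
    (2.0, 2),
]

def production_consequence(downtime_hours: float) -> int:
    """Score production consequence from estimated downtime hours."""
    for threshold, score in PRODUCTION_DOWNTIME_THRESHOLDS:
        if downtime_hours > threshold:
            return score
    return 1

print(production_consequence(30.0))  # 4: between 24 and 72 hours
```

The same pattern applies to the safety, environment, and compliance dimensions, each with its own measurable unit.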

Incorporate operational knowledge. Engineering drawings show design intent. Operators know actual usage patterns, workarounds, and hidden dependencies. Involve operations supervisors in the assessment, especially for identifying single points of failure and bypass arrangements not documented in technical records.

Map criticality to maintenance treatment. Establish clear rules linking criticality bands to strategies:

  • Critical: Condition-based monitoring, preventive replacement before failure, redundant spares on site
  • High importance: Time-based maintenance with reduced intervals, predictive diagnostics, spares held regionally
  • Medium importance: Standard PM routines, corrective maintenance acceptable, spares procured on failure
  • Low importance: Run-to-failure, no planned maintenance, competitive procurement on failure

These are not suggestions. They are the operating model. If an asset is rated critical but still maintained reactively, either the rating is wrong or the strategy needs changing.
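One way to make those rules the operating model rather than a slide is to encode them as data the work-management process reads. A sketch, with illustrative names; the treatments mirror the bands above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Treatment:
    maintenance: str
    spares: str

# Encodes the band-to-treatment rules above; wording is illustrative.
TREATMENT_BY_BAND = {
    "Critical": Treatment("condition-based monitoring, preventive replacement",
                          "redundant spares on site"),
    "High":     Treatment("time-based PM at reduced intervals, predictive diagnostics",
                          "spares held regionally"),
    "Medium":   Treatment("standard PM routines, corrective maintenance acceptable",
                          "spares procured on failure"),
    "Low":      Treatment("run-to-failure, no planned maintenance",
                          "competitive procurement on failure"),
}

def treatment_for(band: str) -> Treatment:
    """A missing band is a data error, not a judgement call."""
    return TREATMENT_BY_BAND[band]
```

If an asset is rated Critical but its actual PM plan matches the Low treatment, the mismatch is detectable automatically, which is exactly the audit the closing section calls for.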

Embed in work management. Criticality ratings should populate the CMMS and influence work prioritisation. When a high-criticality asset generates a corrective work order, it should be flagged for expedited scheduling. Platforms like IBM Maximo support criticality-driven prioritisation natively, but only if the data model and business rules are configured to use it.
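The expedite rule can be expressed as a simple function. This is a generic sketch of criticality-driven prioritisation, not Maximo's actual data model or API; the priority scale (1 = most urgent) and the override logic are assumptions.

```python
def work_order_priority(asset_criticality: str, requester_urgency: int) -> int:
    """Priority for a corrective work order; 1 = most urgent.

    Corrective work on a critical asset is expedited regardless of what
    the requester entered. The scale and rules here are illustrative.
    """
    if asset_criticality == "Critical":
        return 1
    if asset_criticality == "High":
        return min(2, requester_urgency)
    return requester_urgency

# Requester logged it as routine (4), but the asset is critical:
print(work_order_priority("Critical", requester_urgency=4))  # 1
```

The design choice worth noting: criticality caps the priority number rather than merely influencing it, so requester urgency can never bury work on a critical asset.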

Handling the Grey Areas

Not every asset fits cleanly into a criticality band. Judgement is required, but it should be structured.

Redundancy changes the calculation. A pump with an installed standby is less critical than an identical pump with no backup, even if consequence of failure is identical. The assessment should account for redundancy but be explicit about switchover time and standby reliability.
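The redundancy adjustment has a simple probabilistic form: the function is lost only if the duty unit fails and the standby fails to take over. A sketch under the stated assumptions (independence of the two events; a known fail-to-start probability):

```python
from typing import Optional

def effective_failure_probability(p_duty_fails: float,
                                  p_standby_fails_on_demand: Optional[float]) -> float:
    """Probability of losing the function over the assessment period.

    With no standby, that is just the duty unit's own failure probability.
    With a standby, the function is lost only if the duty fails AND the
    standby fails to take over. Assumes the two events are independent.
    """
    if p_standby_fails_on_demand is None:
        return p_duty_fails
    return p_duty_fails * p_standby_fails_on_demand

# Identical pumps, 10% failure probability; standby fails to start 5% of the time:
print(effective_failure_probability(0.10, 0.05))  # 0.005, versus 0.10 with no standby
```

This is why the text insists on being explicit about standby reliability: a standby that fails to start half the time only halves the effective likelihood, not eliminates it.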

Intermittent criticality is real. Some assets are critical only during specific operating modes or seasonal peaks. A heating system is critical in winter, not in summer. These assets need dual treatment: reduced maintenance during low-criticality periods, intensified monitoring during high-risk windows.

Deterioration affects likelihood. As assets age, likelihood of failure increases, which should increase criticality ratings and trigger strategy changes. Build a review cycle into the framework so criticality refreshes based on condition data, not just calendar schedules.
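A condition-driven review cycle can be as simple as a trigger comparing the likelihood implied by condition data against the score on record. A minimal sketch; the function names and inputs are illustrative.

```python
def needs_criticality_review(recorded_likelihood: int,
                             condition_likelihood: int) -> bool:
    """Flag an asset for reassessment when condition data implies a
    higher likelihood of failure than the register currently records."""
    return condition_likelihood > recorded_likelihood

# Vibration trending suggests likelihood 4, but the register still says 2:
print(needs_criticality_review(recorded_likelihood=2, condition_likelihood=4))  # True
```

Run on every condition-monitoring update, this makes the refresh event-driven rather than calendar-driven, which is the point of the paragraph above.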

Linking Criticality to Maintenance Strategy

Asset criticality assessment is a means, not an end. The output should be a stratified maintenance strategy that allocates effort proportional to risk.

For critical assets:

  • Shorter PM intervals or transition to condition-based monitoring
  • Predictive analytics where failure modes allow it (vibration analysis for rotating equipment, thermography for electrical systems, oil analysis for hydraulics)
  • Root cause analysis on every failure, not just recurring problems
  • Dedicated spares inventory with defined reorder triggers

For low-criticality assets:

  • Elimination of unnecessary PMs that cost more than replacement
  • Consolidation of spares inventory to avoid holding slow-moving parts
  • Acceptance of corrective maintenance as a valid strategy

Both matter. Over-maintaining low-criticality assets wastes resources that should protect critical ones.

Measuring Whether It Worked

A successful asset criticality assessment changes observable behavior. Within six months:

  • PM schedules are restructured to align with criticality bands
  • Condition monitoring is deployed on high-consequence, high-likelihood assets
  • Spares provisioning policies are differentiated by criticality
  • Work order prioritisation reflects criticality ratings, not just requester urgency

If none of that happens, the assessment was a documentation exercise: useful for audits, perhaps, but not for managing risk. The goal is not a complete criticality register. The goal is a maintenance regime that protects critical assets, optimizes resources, and tolerates acceptable risk.
