The EU AI Act Changes How Industrial AI Must Be Validated

The EU AI Act imposes new requirements on AI systems in industrial settings. Asset managers need to act now.

Industry News · AI · Regulation · Risk Management · Predictive Maintenance

The EU AI Act, which entered into force in August 2024 and begins phased enforcement in 2025-2027, introduces binding obligations on organizations deploying AI systems in industrial environments. For asset-intensive sectors (manufacturing, energy, transport, utilities), the regulation directly affects how predictive maintenance, anomaly detection, and condition monitoring systems must be validated, documented, and governed.

Most organizations running AI-powered asset management tools are not yet compliant. The regulation is not a set of recommendations. It is enforceable law with penalty structures reaching €35 million or 7% of global turnover for serious violations.

What the AI Act Actually Requires

The AI Act categorizes AI systems by risk level. Industrial safety and critical infrastructure applications fall into the “high-risk” category, triggering mandatory conformity assessments before deployment and continuous monitoring obligations during operation.

High-risk classification applies to AI systems used for:

  • Safety components in critical infrastructure (energy, transport, water supply)
  • Management and operation of infrastructure where failure could threaten health, safety, or fundamental rights
  • Determining access to or prioritization of essential services

For asset management, this captures:

  • Predictive maintenance algorithms that determine when safety-critical equipment gets serviced
  • Condition monitoring systems that inform shutdown decisions for production or transport assets
  • AI-driven work prioritization in environments where deferred maintenance creates safety risk

The Conformity Assessment Burden

Before a high-risk AI system can be deployed, it must undergo conformity assessment: a structured process proving the system meets technical requirements around data quality, documentation, human oversight, accuracy, robustness, and cybersecurity.

Conformity assessment involves:

Risk management system. Documented processes for identifying, analyzing, and mitigating risks throughout the AI system lifecycle. For predictive maintenance, this means formal analysis of what happens when the model generates false negatives (missed failures) or false positives (unnecessary interventions).

Data governance. Training datasets must be documented, with evidence of relevance, representativeness, and absence of bias. Organizations using enterprise asset management platforms to train predictive models need auditable records showing which failure data was used, how it was cleaned, and what gaps exist.
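The Act does not prescribe a format for these records. As an illustrative sketch (every field name and value here is hypothetical), a provenance record for a training set might be captured as a structured, serializable object so it can live in an audit trail rather than in someone's notebook:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TrainingDatasetRecord:
    """Auditable provenance record for one predictive-model training set."""
    dataset_id: str
    source_system: str                      # where the failure data came from
    date_range: tuple                       # (start, end) of history used
    asset_classes: list                     # equipment types represented
    cleaning_steps: list = field(default_factory=list)
    known_gaps: list = field(default_factory=list)

record = TrainingDatasetRecord(
    dataset_id="pump-failures-v3",
    source_system="EAM export, June 2024",
    date_range=("2018-01-01", "2024-05-31"),
    asset_classes=["centrifugal pump"],
    cleaning_steps=[
        "removed duplicate work orders",
        "dropped records missing failure codes",
    ],
    known_gaps=["no failure history for assets commissioned after 2023"],
)

# Serialize for the audit trail alongside the trained model artifact
audit_entry = json.dumps(asdict(record), indent=2)
```

The point is not the specific schema but that relevance, representativeness, and gaps are recorded in a machine-readable form that can be produced on request.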

Technical documentation. Detailed specification of the AI system’s intended purpose, design choices, data sources, training methodology, performance metrics, and known limitations. This is not marketing collateral. It is regulatory evidence.

Human oversight. High-risk systems must include mechanisms for human intervention, the ability to override AI decisions, and alerts when the system operates outside validated parameters.

Accuracy and robustness. The system must achieve documented performance levels and maintain them over time. Organizations must define what “acceptable accuracy” means for their use case and demonstrate ongoing compliance.

Many asset management AI deployments were implemented without this level of rigor. Vendors provided black-box models. Users deployed them based on proof-of-concept results. No one documented assumptions, validated training data quality, or established performance thresholds with statistical confidence intervals.

That approach is no longer viable within EU jurisdiction.

Practical Implications for Asset Managers

The AI Act changes procurement, deployment, and operational governance for industrial AI systems.

Vendor Accountability Shifts

AI system providers bear primary compliance responsibility, but deployers (the organizations using the AI) also have obligations. If a vendor-supplied predictive maintenance tool is classified as high-risk, the vendor must complete conformity assessment. But the deployer must ensure the system is used according to its documented intended purpose and must monitor performance.

Procurement contracts should now explicitly allocate AI Act compliance responsibilities. Which party provides conformity documentation? Who monitors ongoing performance? What happens if the system drifts outside validated accuracy thresholds?

Vendors unwilling to provide compliance documentation are signaling that their systems either do not meet the technical requirements or have not been assessed. Organizations that deploy such systems within the EU assume regulatory risk.

Transparency Requirements Become Enforceable

The AI Act mandates that users of high-risk systems be informed they are interacting with AI and understand how outputs are generated. For maintenance planners using AI-driven work prioritization, this means visibility into why specific work orders were flagged as urgent.

Black-box systems that provide recommendations without explainability do not comply. Maintenance teams need access to decision logic: not necessarily the underlying code, but enough transparency to validate that outputs are reasonable given inputs.

Documentation Becomes Continuous

Conformity assessment is not a one-time event. High-risk AI systems require ongoing monitoring, with documented evidence that performance remains within validated parameters.

For predictive maintenance, this means tracking:

  • Model accuracy over time (precision, recall, false positive/negative rates)
  • Drift in input data distributions (are assets operating in conditions outside training data?)
  • User override frequency (how often do planners ignore AI recommendations, and why?)
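The regulation does not mandate specific statistics, but two of the metrics above can be sketched with a few lines of code (all names, baselines, and sensor values here are hypothetical): a standardized-distance check of live inputs against the training baseline, and a planner override rate.

```python
import statistics

def drift_zscore(live_values, train_mean, train_std):
    """Standardized distance between the live input mean and the training
    baseline. Large values suggest assets are operating outside the
    conditions the model was validated for."""
    live_mean = statistics.mean(live_values)
    return abs(live_mean - train_mean) / train_std

def override_rate(recommendations):
    """Fraction of AI work-order recommendations overridden by planners.
    Each recommendation carries a boolean 'overridden' flag."""
    overridden = sum(1 for r in recommendations if r["overridden"])
    return overridden / len(recommendations)

# Hypothetical example: bearing temperatures vs. the documented baseline
train_mean, train_std = 62.0, 4.0          # from validation documentation
live_window = [70.5, 71.2, 69.8, 72.0, 70.9]
z = drift_zscore(live_window, train_mean, train_std)  # > 2 here: investigate

recs = [{"overridden": False}] * 18 + [{"overridden": True}] * 2
rate = override_rate(recs)                 # 0.10: 2 of 20 overridden
```

In practice these checks would run on a schedule, with both the values and any investigations logged as compliance evidence.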

Organizations already running managed cloud hosting and application support for their asset management platforms may find it easier to implement these monitoring requirements, but only if governance processes were designed with auditability in mind.

What Changes in Practice

Asset-intensive organizations subject to the AI Act should take immediate action.

Classify existing AI systems. Determine which deployed AI tools meet the “high-risk” threshold. Not all analytics qualify. Descriptive reporting does not. Predictive systems influencing safety or service continuity decisions do.

Audit vendor compliance. Request conformity documentation from AI system providers. If they cannot provide it, either plan for their replacement or accept that deployment may violate the regulation once enforcement begins.

Establish performance baselines. Define what acceptable accuracy means for each high-risk AI system. “The model works well” is not a measurable standard. “95% precision and 90% recall on failure prediction, validated quarterly with production data” is.
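A measurable baseline like the one above can be enforced as an automated gate. This is a minimal sketch, not a prescribed method; the counts and thresholds are hypothetical and would come from the system's own validation documentation:

```python
def precision(tp, fp):
    """Of the failures the model predicted, how many were real."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of the real failures, how many the model caught."""
    return tp / (tp + fn)

def passes_baseline(tp, fp, fn, min_precision=0.95, min_recall=0.90):
    """Gate continued operation on the documented accuracy thresholds."""
    return precision(tp, fp) >= min_precision and recall(tp, fn) >= min_recall

# Quarterly validation against production failure data (counts hypothetical):
# 57 true positives, 3 false alarms, 5 missed failures
ok = passes_baseline(tp=57, fp=3, fn=5)    # precision 0.95, recall ~0.92
```

A failed gate would trigger the investigation and escalation procedures the documentation describes, rather than silently continuing to generate recommendations.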

Document intended use. High-risk systems must be used only for their documented intended purpose. If a predictive model was validated for centrifugal pumps in clean service, using it for slurry pumps without revalidation is non-compliant.

Build human oversight mechanisms. Ensure that AI-driven decisions can be reviewed, overridden, and escalated. Maintenance planners should not be forced to accept AI recommendations without the ability to intervene based on operational knowledge.
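An oversight mechanism like this can be as simple as a gate that records every accept, override, or escalate decision. The sketch below is one possible shape (class and field names are hypothetical, not drawn from any product); the essential properties are that overrides are always possible and always justified on the record:

```python
import datetime

class OversightGate:
    """Minimal human-in-the-loop gate: every AI recommendation is reviewed,
    and the decision is written to an auditable log."""

    def __init__(self):
        self.audit_log = []

    def review(self, work_order, ai_priority, decision, planner, reason=None):
        if decision == "override" and not reason:
            raise ValueError("overrides must be justified for the audit trail")
        entry = {
            "timestamp": datetime.datetime.now(
                datetime.timezone.utc).isoformat(),
            "work_order": work_order,
            "ai_priority": ai_priority,
            "decision": decision,        # "accept" | "override" | "escalate"
            "planner": planner,
            "reason": reason,
        }
        self.audit_log.append(entry)
        return entry

gate = OversightGate()
gate.review("WO-1042", "urgent", "override", "j.smith",
            reason="pump inspected this week; vibration within limits")
```

The override frequency fed into ongoing monitoring can then be derived directly from this log, rather than reconstructed after the fact.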

The Enforcement Timeline

The AI Act follows a phased implementation schedule:

  • February 2025: Prohibitions on unacceptable AI practices (e.g., manipulative systems, social scoring)
  • August 2026: Obligations for high-risk AI systems deployed in critical infrastructure
  • August 2027: Full enforcement across all provisions

Many organizations are treating August 2026 as the compliance deadline. That is a misunderstanding. Conformity assessment for high-risk systems must be completed before deployment. Systems already operating when the regulation takes effect need retrofitted compliance documentation.

Enforcement responsibility falls to national authorities in each EU member state. Penalties are substantial, but more immediate is the operational risk: regulators can require suspension of non-compliant AI systems pending conformity assessment. For an organization relying on AI-driven predictive maintenance, forced suspension means operational disruption.

What Separates Prepared Organizations

Organizations ahead of the compliance curve share common characteristics.

They treated AI deployment as a governance problem, not just a technology problem. Before deploying predictive models, they established data quality processes, performance monitoring frameworks, and escalation procedures. The AI Act formalized requirements they had already met.

They demanded transparency from vendors. Black-box AI was rejected in favor of systems that could explain their outputs and provide conformity documentation.

They documented everything. Training data provenance, model validation results, accuracy thresholds, override procedures, performance drift investigations: all captured in auditable records before regulators required it.

The Strategic Question

The AI Act imposes compliance costs. Conformity assessment is not trivial. Ongoing monitoring adds operational overhead. Some organizations will respond by abandoning industrial AI rather than meeting the requirements.

That is a strategic error. The regulation does not prevent AI use in asset management. It requires that AI be deployed responsibly, with evidence that it works as claimed and governance to ensure it continues working.

Organizations that see this as an opportunity (to clean up poorly governed AI deployments, demand accountability from vendors, and build sustainable analytics programs) will extract more value from AI while carrying less risk.

Those that see it purely as a compliance burden will either deploy non-compliant systems and hope for lenient enforcement, or retreat to reactive maintenance strategies while competitors gain efficiency through validated, well-governed AI.

The regulation is forcing a choice. Make it deliberately.
