
Reference · standards

Mapping winkComposer to ISO 13374

ISO 13374 is the reference architecture for condition monitoring. A practical crosswalk for integrators: what Composer implements, what it configures, and what remains the responsibility of the surrounding system.

ISO 13374 is widely cited in industrial monitoring procurement, often without a precise mapping behind the citation. This page provides that mapping for winkComposer — block by block, with the boundaries Composer respects and the gaps it leaves open named in plain language.

What ISO 13374 Is

ISO 13374 is published in four parts. The umbrella title varies between editions — Parts 1, 2, and 3 use “Condition monitoring and diagnostics of machines”; Part 4 (2015) broadened to “Condition monitoring and diagnostics of machine systems” — a renaming consistent with ISO/TC 108/SC 5’s broader vocabulary shift around the same period. All four parts share the subtitle Data processing, communication and presentation.

  • ISO 13374-1:2003 — Part 1: General guidelines (first edition, 2003-03-15)
  • ISO 13374-2:2007 — Part 2: Data processing (corrected version 2008-01-15)
  • ISO 13374-3:2012 — Part 3: Communication
  • ISO 13374-4:2015 — Part 4: Presentation

The standard was prepared by ISO/TC 108 (Mechanical vibration, shock and condition monitoring), Subcommittee SC 5 (Condition monitoring and diagnostics of machines).

Part 1 establishes the six-block reference flow. Parts 2 through 4 detail the data processing, communication, and presentation requirements that compliant specifications must meet — Part 2 §3.7 names the MIMOSA Open Systems Architecture for Enterprise Application Integration (OSA-EAI™) as one such compliant specification.

There is no third-party ISO 13374 certification scheme. Specifications can be compliant with the standard’s requirements (self-declared conformance), but no audit body issues an “ISO 13374 certified” mark. Alignment with ISO 13374 in this page means the system’s pipeline maps onto the six-block reference, not that it has passed a compliance test.

A system that implements the reference architecture is not the same as a system that claims compliance. Composer is in the first category.


How to Read This Mapping in a Compliance Context

This page supports standards-aware architecture review. It helps system integrators identify which ISO 13374 functional blocks Composer can implement directly, which blocks require configuration, and which blocks remain the responsibility of upstream or downstream systems. It is not a certificate, conformance test report, or substitute for project-specific validation.

What Composer does not replace — these belong outside Composer by design:

  • Sensors, ADCs, PLCs, or calibration systems
  • Operator workflow, CMMS, EAM, or SCADA alarm-management systems
  • Project-specific verification and validation

What Composer does not currently replace — candidates for future work as the framework evolves:

  • Spectral preprocessing for vibration-heavy deployments (a future fft or welch node would close this gap)
  • A validated fault library (the standardised tables in ISO 17359 are a reasonable starting point; a packaged library is a candidate for future work)
  • Statistical or physics-based RUL models — trend already provides the smallest possible RUL primitive (rate-of-change with a confidence score, suitable for linear-extrapolation threshold-crossing estimates); richer RUL primitives (Weibull fits, survival analysis, physics-based degradation) are candidates for future work

The first list reflects a boundary set by design; the second reflects today's scope.


The Six Blocks

ISO 13374-1:2003 §2.2.1 defines the six processing blocks as follows. The first three are usually technology-specific in configuration and signal interpretation — signal processing and data analysis targeted to a particular monitoring technology. The final three combine monitoring technologies to assess current health, predict future failures, and recommend action.

| # | Block | Role (per ISO 13374-1:2003 §2.2.1) |
|---|-------|------------------------------------|
| 1 | Data Acquisition (DA) | Converts transducer output into a digital parameter representing the physical quantity, plus context such as timestamp, calibration state, data quality, data collector identity, and sensor configuration. |
| 2 | Data Manipulation (DM) | Runs signal analysis, computes meaningful descriptors, and derives virtual sensor readings from the raw measurements. |
| 3 | State Detection (SD) | Establishes and maintains baseline profiles, screens new data for abnormalities, and assigns each observation to an abnormality zone (for example, alert or alarm). |
| 4 | Health Assessment (HA) | Diagnoses faults and rates current health of the equipment or process, drawing on all state information. |
| 5 | Prognostic Assessment (PA) | Predicts future health states, failure modes, and remaining useful life based on current health and projected usage loads. |
| 6 | Advisory Generation (AG) | Generates actionable maintenance or operational recommendations to extend the useful life of the equipment or process. |

Blocks consume the output of the block before them. A condition monitoring system covers some prefix of this sequence; a complete system covers all six.


Where ISO 13374 Sits in the PHM Standards Landscape

ISO 13374 is one part of a multi-standard ecosystem for condition monitoring and prognostics and health management (PHM). Vogl, Weiss, and Donmez of NIST surveyed the family in 2014 — the relevant siblings, with current editions:

| Standard | Role | Current edition |
|----------|------|-----------------|
| ISO 17359 | Top-level general guidelines for setting up a condition monitoring programme; includes fault-symptom correlation tables across machine types | 2018 (third edition; cancels 2011) |
| ISO 13374 (parts 1-4) | Open architecture for data processing, communication, and presentation — the focus of this page | 2003 to 2015 |
| ISO 13379-1 | Data interpretation and diagnostics techniques; formalises failure mode symptoms analysis (FMSA) | 2025 (second edition; cancels 2012) |
| ISO 13381-1 | Prognostics — formalises ETTF (Estimated Time to Failure), confidence levels, four prognostic phases | 2015 (cancels 2004) |
| ISO 18435 (parts 1-3) | Integration with manufacturing automation applications — AIME and ADME structures | 2009 to 2015 |
| MIMOSA OSA-EAI / OSA-CBM | Compliant implementation specifications, free for download | active |

Composer maps to ISO 13374 specifically; this page does not duplicate the mapping for sister standards. Two adjacent standards are worth naming because they sharpen gaps acknowledged in the mapping that follows:

  • ISO 17359:2018 contains standardised tables correlating possible faults with symptoms across common machine types — the standardised analogue of the application-specific fault library mentioned in Block 4 (Health Assessment).
  • ISO 13381-1:2015 formalises prognostic vocabulary (ETTF, four phases: pre-processing, existing failure mode, future failure mode, post-action) — the standardised vocabulary for the RUL estimation Composer leaves to specialised libraries in Block 5 (Prognostic Assessment).

Mapping Summary

| Block | Coverage | Example implementations |
|-------|----------|-------------------------|
| 1 — DA | From the digitized-signal boundary onward | source-manager (OPC-UA, MQTT, custom adapters) |
| 2 — DM | Time-domain: strong. Frequency-domain filtering: covered. Spectral analysis: gap | sanitize, kernel, butterworthFilter, esStats, twStats, momentsDigest, kalman1d |
| 3 — SD | Strong — through configurable detection primitives | stateChangeDetector, dwellTimeTracker, pageHinkley, threshold, persistenceCheck |
| 4 — HA | Fusion engine present; fault library is application-specific | appraise |
| 5 — PA | Partial — trend-based. RUL estimation: gap | trend, lag |
| 6 — AG | Trigger generation only; presentation is downstream | emitIf, persistIf, MQTT and QuestDB egress |

The remaining sections name each boundary and gap.


Block 1 — Data Acquisition

The DA block converts the transducer output to a digital parameter with related information — time, calibration, data quality, data collector identity, sensor configuration (ISO 13374-1:2003 §2.2.1 a).

Composer ingests post-digitization signals via its source-manager. Sensor-level acquisition (PLC, RTU, OPC-UA server) sits upstream of Composer. The standard’s pre-digitization layer — transducer-to-digital conversion, calibration, sensor configuration capture — is satisfied by upstream automation infrastructure, not by Composer.

What Composer covers from this block:

  • Time-stamping at ingestion, with the source timestamp preserved when the upstream system provides it
  • Field-level input validation via sanitize — range gates, type checks, invalid-value marking
  • Multi-source merge for assets with split telemetry streams

What Composer does not cover:

  • Sensor signal conditioning — anti-aliasing filters, ADC range scaling, calibration curves
  • Sensor-side timestamping when the OPC-UA server’s source timestamp is unavailable

These responsibilities belong in the OT layer.

For Composer’s mapping to remain valid, the upstream layer should provide a minimum data contract:

  • A stable asset identifier, with an asset class where flows differ by equipment type
  • A timestamp in known units (Composer expects milliseconds since epoch; incorrect units silently corrupt dwell time, slopes, and stored records)
  • Engineering units for each measured field
  • Calibration status or calibration provenance, where available
  • A data quality flag or invalid-value convention
  • Source identity — the PLC, gateway, OPC-UA node, MQTT topic, or collector that produced the reading

The asset identifier and asset class together drive Composer’s per-asset isolation and its egress topic structure (edgeDeviceId/assetId/assetClass/insightType), which can carry an ISA-95 enterprise/site/area/cell hierarchy in the edgeDeviceId segment for Unified Namespace deployments.
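As a sketch of that topic structure (all segment values here are hypothetical examples, not a fixed schema):

```python
def egress_topic(edge_device_id: str, asset_id: str,
                 asset_class: str, insight_type: str) -> str:
    """Build the four-segment egress topic described above.

    For Unified Namespace deployments, edge_device_id can itself carry
    an ISA-95 enterprise/site/area/cell path.
    """
    return f"{edge_device_id}/{asset_id}/{asset_class}/{insight_type}"


topic = egress_topic("acme/site1/area2/cell3", "pump-07", "pump", "alert")
# -> "acme/site1/area2/cell3/pump-07/pump/alert"
```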

sanitize placed at the start of the pipeline catches many bad values when the upstream contract is incomplete; per-field invalid-value propagation prevents downstream nodes from silently corrupting derived signals. These are safety nets, not substitutes for the contract above.
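A minimal sketch of such a contract check, written outside Composer with hypothetical field names and ranges (Composer's sanitize node is configured in the flow, not called like this). The timestamp gate illustrates why the units clause matters: a seconds-since-epoch value is roughly 1000x too small to pass as milliseconds.

```python
import time


def check_reading(msg: dict, ranges: dict) -> list:
    """Return a list of contract violations for one incoming reading.

    Field names ("timestamp", "assetId") and the checks themselves are
    illustrative assumptions, not Composer's validation logic.
    """
    problems = []
    now_ms = time.time() * 1000.0
    ten_years_ms = 10 * 365 * 86400 * 1000.0
    ts = msg.get("timestamp")
    # Plausibility window for ms-since-epoch; a seconds-epoch value
    # falls far below the lower bound and is flagged.
    if not isinstance(ts, (int, float)) or not (
        now_ms - ten_years_ms < ts < now_ms + 86400_000
    ):
        problems.append("timestamp not plausible as ms since epoch")
    if "assetId" not in msg:
        problems.append("missing assetId")
    for field, (lo, hi) in ranges.items():
        v = msg.get(field)
        if not isinstance(v, (int, float)) or not (lo <= v <= hi):
            problems.append(f"{field} missing or out of range")
    return problems
```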


Block 2 — Data Manipulation

The DM block runs signal analysis, computes descriptors, and derives virtual sensor readings from raw measurements (ISO 13374-1:2003 §2.2.1 b). In practice this covers signal filtering, feature extraction, and frequency-domain analysis — the standard names bearing vibration monitoring, infrared thermographic monitoring, acoustical monitoring, and motor current monitoring among the technology-specific applications (ISO 13374-1:2003 §2.2.1).

Time-domain coverage is strong; spectral analysis is the gap:

| DM capability | Composer node | Notes |
|---------------|---------------|-------|
| Streaming statistical features (mean, variance, moments) | esStats, twStats, momentsDigest | Exponentially weighted or time-windowed |
| Kernel-based smoothing (FIR) | kernel | Built-in presets plus custom weights |
| State estimation | kalman1d | 1D Kalman with optional control input |
| Frequency-domain filtering (IIR) | butterworthFilter | Lowpass, highpass, bandpass; cutoff in Hz |
| Outlier rejection | spikeGuard, sanitize, median3 | Threshold-based and median-based |
| Per-field invalid-value propagation | (built into every node) | Invalid inputs cascade so downstream consumers stay honest |
| Spectral feature extraction (FFT, PSD) | not implemented | Vibration condition monitoring typically requires this |
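The streaming-statistics row can be made concrete with a self-contained exponentially weighted estimator. This is a sketch in the spirit of esStats, not Composer's implementation:

```python
class EwStats:
    """Exponentially weighted mean and variance for a streaming signal."""

    def __init__(self, alpha: float = 0.05):
        self.alpha = alpha   # weight given to the newest sample
        self.mean = None
        self.var = 0.0

    def update(self, x: float):
        if self.mean is None:
            self.mean = x    # first sample seeds the estimate
        else:
            a = self.alpha
            d = x - self.mean
            self.mean += a * d                              # EW mean
            self.var = (1.0 - a) * (self.var + a * d * d)   # EW variance
        return self.mean, self.var
```

With alpha = 0.05 the estimator forgets with an effective window of roughly 1/alpha = 20 samples; smaller alpha means a steadier baseline and slower reaction.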

For applications dominated by time-domain features — process variables, electrical signals at the asset level, weighing, thermal monitoring — Composer’s coverage is strong. For vibration-based bearing or rotor analysis, an FFT or PSD pre-processor is required upstream, or the spectral features are computed externally and fed into Composer as derived inputs.

A future fft or welch node would close this gap.
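Until such a node exists, an upstream pre-processor can reduce raw waveform windows to band features and feed Composer the scalars. A minimal numpy sketch, with hypothetical band edges and output field names, and no attempt at absolute amplitude calibration:

```python
import numpy as np


def band_features(signal: np.ndarray, fs: float, bands) -> dict:
    """Relative amplitude per frequency band from a one-sided FFT.

    A stand-in for the upstream spectral pre-processor; the field
    naming convention here is an assumption for illustration.
    """
    n = len(signal)
    window = np.hanning(n)   # taper to limit spectral leakage
    power = np.abs(np.fft.rfft(signal * window)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = {}
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        out[f"band_{lo:g}_{hi:g}Hz"] = float(np.sqrt(power[mask].sum()))
    return out
```

A 50 Hz tone sampled at 1 kHz lands almost entirely in a 40-60 Hz band; the resulting scalars are exactly the kind of derived inputs Composer consumes in this block.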


Block 3 — State Detection

The SD block establishes and maintains baseline profiles, screens new data for abnormalities, and assigns each observation to an abnormality zone such as alert or alarm (ISO 13374-1:2003 §2.2.1 c).

Composer covers this block through several primitives:

  • threshold — fixed and dynamic threshold tests against rolling baselines
  • pageHinkley — classical change-point detection (Page, 1954; Hinkley, 1971), suitable for streaming drift
  • stateChangeDetector — edge detection on derived flags
  • dwellTimeTracker — operating-mode classification by dwell time
  • persistenceCheck — confirms a condition holds for N of M samples before firing; the standard debounce against single-sample false positives
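The Page-Hinkley test behind the pageHinkley node is compact enough to sketch in full. Parameter defaults here are illustrative, not Composer's configuration:

```python
class PageHinkley:
    """Classical Page-Hinkley drift detector (Page 1954; Hinkley 1971).

    Accumulate each sample's deviation from the running mean minus a
    tolerance delta, and fire when the accumulation rises more than
    lambda_ above its historical minimum.
    """

    def __init__(self, delta: float = 0.005, lambda_: float = 50.0):
        self.delta = delta        # tolerated drift per sample
        self.lambda_ = lambda_    # detection threshold
        self.n = 0
        self.mean = 0.0
        self.cum = 0.0            # cumulative deviation m_t
        self.cum_min = 0.0        # running minimum of m_t

    def update(self, x: float) -> bool:
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta
        self.cum_min = min(self.cum_min, self.cum)
        return (self.cum - self.cum_min) > self.lambda_
```

Larger delta tolerates more drift before suspicion; larger lambda_ demands more accumulated evidence before firing, trading detection delay for fewer false alarms.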

The publishing of confirmed events — the egress of an SD result onto MQTT or into QuestDB — is handled by emitIf and persistIf and properly belongs to Block 6 (Advisory Generation), not to State Detection itself.

The standard’s emphasis on baseline comparison maps onto Composer’s combination of streaming statistics — which establish the baseline — and detection nodes — which flag departures from it.


Block 4 — Health Assessment

The HA block diagnoses faults and rates current health of the equipment or process, drawing on all available state information (ISO 13374-1:2003 §2.2.1 d). Traditional condition monitoring systems implement this with a fault library — named modes such as imbalance, misalignment, bearing wear, each with a signature pattern. ISO 17359:2018 contains standardised fault-symptom correlation tables across common machine types, and ISO 13379-1:2025 formalises the diagnostic procedure (failure mode symptoms analysis, FMSA).

Composer’s appraise node provides the fusion mechanism — evidence-driven SNN accumulation that produces interpretable health scores. The fault library and the FMSA evidence patterns are application-specific: the system designer defines which evidence patterns map to which fault modes, with what weights and what decay constants. Composer does not ship with a built-in fault library; the standardised tables in ISO 17359 are a reasonable starting point for vibration-based applications.

For applications where the fault modes are well understood — bearing wear, pump cavitation, valve sticking — the engine plus a small evidence specification is often sufficient. For applications without an established fault model, a domain study precedes the deployment.

Composer runs the fault-evidence model. Checking that the model is right for your asset is the deploying organisation’s job. Test it against historical or experimental data before you rely on it.

Calibration is the bridge between an engine and a working assessment. With the right parameters a detection node catches genuine failures and stays quiet during normal operation. The approach is the same regardless of signal type: pick a window of known-healthy operation, measure each signal’s mean and standard deviation during that window, then set every detection, trend, and assessment parameter relative to that baseline.
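That recipe can be written down directly. A sketch with hypothetical parameter names, not Composer configuration keys:

```python
import statistics


def calibrate(healthy_window: list, k_alert: float = 3.0,
              k_alarm: float = 5.0) -> dict:
    """Derive baseline-relative thresholds from a known-healthy window.

    The k multipliers and output key names are illustrative choices.
    """
    mu = statistics.fmean(healthy_window)
    sigma = statistics.stdev(healthy_window)
    return {
        "baselineMean": mu,
        "baselineStd": sigma,
        "alertThreshold": mu + k_alert * sigma,   # e.g. mean + 3 sigma
        "alarmThreshold": mu + k_alarm * sigma,   # e.g. mean + 5 sigma
    }
```

The point of the procedure is that every downstream parameter is expressed relative to the measured baseline, so the same flow recalibrates cleanly when it moves to a different asset.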

Several Composer nodes carry a built-in warm-up phase that supports this work — appraise learns its firing threshold during warmup and publishes a calibrating flag while it does so; pageHinkley exposes minWarmUpSamples; trend reports a learning state before it commits to stable, rising, or falling; lag reports invalid output during its warmup window. Downstream consumers can branch on these signals to suppress alerts until calibration is complete.

Dynamic options extend this further. A node parameter can be a function of the incoming message rather than a fixed value — a threshold that varies by asset class, a window size that scales with the operating regime, a setpoint that follows a control variable. The same flow then runs unchanged across heterogeneous fleets, with the calibration travelling with the data instead of being hard-coded into the pipeline.
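The mechanism is easy to picture: a parameter that is either a fixed value or a function evaluated against each incoming message. A sketch with a hypothetical per-asset-class threshold (the field name "assetClass" and the limits are assumptions):

```python
def resolve_option(option, msg: dict):
    """Resolve a node parameter that may be fixed or message-dependent."""
    return option(msg) if callable(option) else option


# A threshold that varies by asset class, with a fleet-wide fallback.
per_class_limit = lambda msg: {"pump": 80.0, "fan": 60.0}.get(
    msg.get("assetClass"), 70.0
)
```

The same flow then serves pumps and fans without duplication; the calibration travels in the message rather than in the pipeline definition.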

Three rules of thumb for evidence sources:

  • Quiet during normal operation. A source that drips signal even when the asset is healthy will accumulate into a false alarm.
  • Stronger when the fault is worse, not just different.
  • Driven by sustained behaviour. A single bad sample is not evidence.

A property of this approach: every assessment is decomposable. The reader of an appraise output can see which evidence contributed to which decision and by how much. This matters for AI governance frameworks that require decision traceability — the companion Mapping winkComposer to NIST AI RMF page covers that crosswalk.
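A toy accumulator makes both ideas concrete: per-source exponential decay with a configurable half-life, and a score that decomposes into named contributions. This illustrates the principle only; it is not the appraise node's implementation:

```python
class EvidenceAccumulator:
    """Toy evidence-fusion score with per-source half-life decay."""

    def __init__(self, half_lives: dict):
        self.half_lives = half_lives                 # seconds, per source
        self.levels = {s: 0.0 for s in half_lives}
        self.last_t = None

    def update(self, t: float, evidence: dict) -> dict:
        if self.last_t is not None:
            dt = t - self.last_t
            for s, hl in self.half_lives.items():
                self.levels[s] *= 0.5 ** (dt / hl)   # exponential decay
        self.last_t = t
        for s, w in evidence.items():
            self.levels[s] += w                      # new evidence
        return dict(self.levels)   # decomposable per-source view

    def score(self) -> float:
        return sum(self.levels.values())
```

Stale evidence fades rather than persisting forever, and any score can be read back as a sum of named, timestamped contributions.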


Block 5 — Prognostic Assessment

The PA block determines future health states and failure modes based on the current health assessment, projected usage loads on the equipment, and remaining useful life predictions (ISO 13374-1:2003 §2.2.1 e). The standard’s framing is broader than RUL alone — it includes failure-mode prediction conditioned on future operating loads.

The companion standard ISO 13381-1:2015 formalises the prognostics vocabulary: Estimated Time to Failure (ETTF), confidence levels, root cause, and four prognostic phases — pre-processing, existing failure mode prognosis, future failure mode prognosis, post-action prognosis. Composer’s coverage maps to the pre-processing phase plus light existing-failure-mode trending; the future-failure-mode and post-action phases sit outside Composer.

Composer’s coverage here is partial:

  • trend provides rate-of-change estimation with a confidence score derived from sample count, persistence, and noise. This supports linear-extrapolation-based threshold-crossing time estimates downstream.
  • lag provides historical comparison — now against N samples ago.

Composer does not provide:

  • Statistical RUL estimation — Weibull fits, accelerated life models
  • Survival analysis primitives
  • Physics-based degradation models
  • Sequence-based forecasting — LSTM, transformer, ARIMA

For applications that require RUL estimation, two integration patterns work:

  1. Stay narrow. Cede this block to specialised libraries. Composer feeds features in, consumes RUL estimates back, and uses them as inputs to detection or advisory nodes.
  2. Linear extrapolation in-pipeline. For degradation processes that are approximately linear in their late phase — battery capacity fade, filter pressure drop, brake-pad wear — trend plus a threshold-crossing computation is often sufficient.

Many practical deployments can start with the second pattern. RUL estimation is hard at the streaming edge, and the failure mode of overconfident RUL is worse than no RUL at all.
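The second pattern reduces to one arithmetic step once trend has supplied a slope. A sketch, with units and names chosen for illustration:

```python
def threshold_crossing_eta(current: float, slope_per_s: float, limit: float):
    """Estimated seconds until a linearly trending signal crosses limit.

    Feed trend's rate-of-change and the alarm limit in, get a crude
    time-to-threshold out. Returns None when the trend does not point
    toward the limit, since extrapolating away from it is meaningless.
    """
    if slope_per_s == 0:
        return None
    eta = (limit - current) / slope_per_s
    return eta if eta > 0 else None
```

The estimate is only as good as the linearity assumption; trend's confidence score is the natural gate on whether to publish it at all.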


Block 6 — Advisory Generation

The AG block generates actionable maintenance or operational recommendations aimed at extending the useful life of the equipment or process (ISO 13374-1:2003 §2.2.1 f). The standard’s framing is broader than alerting — both maintenance and operational change recommendations fall within scope.

Composer emits advisory triggers — events, alerts, and persistent records via MQTT and QuestDB. Composer is the trigger and routing layer. Presentation — the dashboard widget, the SCADA alarm panel, the mobile push notification, the MCP-mediated LLM that explains assessments in natural language — is downstream of Composer and is implemented by the consumer of Composer’s emissions.

For simple alerting cases, Composer’s emit and persist path generates the advisory trigger and the supporting assessment record. The final advisory experience — acknowledgement, escalation, work-order creation, operator guidance, audit workflow — belongs to the consuming system. For applications where the advisory requires a recommendation engine — which corrective action, with what priority, escalating to whom — that recommendation engine consumes Composer’s emissions.

Communication of those emissions is configurable rather than conformant. Composer ships emitter and persistence adapters (MQTT, QuestDB), but ISO 13374-3-style interoperability depends on the chosen payload schema, topic structure, field naming, units, timestamps, and the contract with the downstream consumer. Existence of an MQTT egress is not, by itself, communication-layer conformance.
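What such a contract might pin down, shown as a hypothetical payload (every field name and unit below is an assumption made for illustration, not Composer's schema):

```python
import json

# One emitted advisory record: schema, field naming, units, and
# timestamp convention are exactly the things the downstream contract
# must fix before Part 3-style interoperability can be claimed.
advisory = {
    "timestamp": 1735689600000,                  # ms since epoch
    "edgeDeviceId": "acme/site1/area2/cell3",    # ISA-95-style path
    "assetId": "pump-07",
    "assetClass": "centrifugal-pump",
    "insightType": "alert",
    "value": 82.4,
    "unit": "degC",
    "source": "appraise",
}
payload = json.dumps(advisory, sort_keys=True)
```

Reviewing a payload like this against the consuming system's expectations is the concrete form the communication-layer conformance work takes.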


Worked Example — A Bearing Condition Monitoring Flow

A typical bearing condition monitoring deployment touches all six blocks:

| Block | What happens in the flow |
|-------|--------------------------|
| 1 — DA | Vibration RMS values arrive pre-computed; FFT and RMS are performed in the upstream signal processor |
| 2 — DM | kernel smoothing, esStats for streaming mean and variance, momentsDigest for distributional shape |
| 3 — SD | threshold against rolling mean ± kσ, pageHinkley for early drift detection, persistenceCheck for debounce |
| 4 — HA | appraise fuses evidence from multiple detectors into a health score with explicit decay |
| 5 — PA | Trend slope from trend extrapolated to a threshold-crossing estimate downstream of the pipeline |
| 6 — AG | emitIf raises an alert, QuestDB persists the assessment record, MQTT pushes to the dashboard |

This is one deployment-ready pattern. It is not the only pattern Composer supports for condition monitoring; it demonstrates how a Composer-centred deployment can participate in all six blocks, with upstream signal processing for spectral features and downstream presentation or recommendation handling where the deployment requires them.


What Adopters Can Inspect

A standards-aware architecture review benefits from naming the surfaces a reviewer can examine. For a Composer deployment, those surfaces are:

  • The flow definition — every node in the pipeline, its parameters, and the order of operations
  • Input fields and their units, and the timestamp source and unit
  • Invalid-value handling and any sanitize ranges
  • Threshold and baseline definitions, including dynamic options where used
  • For appraise — the evidence sources, weights, deviation types, and half-lives
  • Emitted MQTT payloads and persisted QuestDB records
  • The runbook describing who consumes those emissions and what action follows

Each of these is a configuration artefact, not a black box.


What This Mapping Is Not

  • Not a certification claim. ISO 13374 has no third-party certification scheme. The standard does support self-declared compliance with its requirements (Part 2 §3.7 names compliant specifications such as MIMOSA OSA-EAI), but the mapping presented here is an architectural statement, not a self-declared compliance claim.
  • Not exhaustive. Composer has more nodes than appear in the table above. accumulate, categorize, extremaRank, vectorDistance, ratio, diff, and others find use across blocks 2–6 depending on the application.
  • Not unchanging. A future fft node would close the spectral analysis gap in block 2. A future RUL primitive may close the prognostic gap in block 5. This page is updated when those land.

References

The ISO 13374 series (umbrella title varies; Parts 1-3 use machines, Part 4 uses machine systems; all share the subtitle Data processing, communication and presentation):

  • ISO 13374-1:2003 — Part 1: General guidelines
  • ISO 13374-2:2007 — Part 2: Data processing (corrected version 2008-01-15)
  • ISO 13374-3:2012 — Part 3: Communication
  • ISO 13374-4:2015 — Part 4: Presentation

Sister standards in the PHM landscape cited on this page:

  • ISO 17359:2018 — Condition monitoring and diagnostics of machines — General guidelines. Third edition; cancels and replaces ISO 17359:2011. Contains fault-symptom correlation tables. iso.org/standard/71194.html 
  • ISO 13379-1:2025 — Condition monitoring and diagnostics of machine systems — Data interpretation and diagnostics techniques — Part 1: General guidelines. Second edition (October 2025); cancels and replaces ISO 13379-1:2012. iso.org/standard/88027.html 
  • ISO 13381-1:2015 — Condition monitoring and diagnostics of machines — Prognostics — Part 1: General guidelines. Cancels and replaces ISO 13381-1:2004. Defines ETTF and the four prognostic phases. iso.org/standard/51436.html 
  • ISO 18435-1:2009 — Industrial automation systems and integration — Diagnostics, capability assessment and maintenance applications integration — Part 1: Overview and general requirements. iso.org/standard/38692.html 
  • ISO 18435-2:2012 — Part 2: Descriptions and definitions of application domain matrix elements. iso.org/standard/55419.html 

Survey paper and implementation specifications:

  • Vogl, G. W., Weiss, B. A., and Donmez, M. A. (2014) — NIST survey of the condition monitoring and PHM standards family discussed in the landscape section above.
  • MIMOSA OSA-EAI and OSA-CBM — compliant implementation specifications, free for download.

Classical references for change-detection primitives cited in Block 3:

  • Page, E. S. (1954). Continuous Inspection Schemes. Biometrika 41(1/2): 100–115.
  • Hinkley, D. V. (1971). Inference about the change-point from cumulative sum tests. Biometrika 58(3): 509–523.