
Trajectory-Aware Adaptive Compression

The deadband recipe is a good first cut, but it has a blind spot: every decision is local. It does not know which way the signal is heading, how confident the model is, or whether a sample sits at an inflection. The cost shows up at step changes — the deadband widens just after the event, and the reconstruction trails behind. The fix is to give the flow a model.

This recipe wires eight nodes into a trajectory-aware subsampling flow. A median3 prefilter absorbs single-sample spikes; a kalman1d predictor runs alongside esStats and a trend node; a controller resets the predictor when the chi-squared gate fires; the winnow node fuses the slope, the noise estimate, and the gate into a single per-sample decision; passIf forwards the kept samples; and a second kalman1d lightly scrubs measurement noise from the kept values before they hit storage.

```js
flow('adaptive-compression')
  .median3('med', 'value', { median3: 'smoothed' })
  .kalman1d('kf', 'value',
    { filtered: 'filtered', innovation: 'innovation', innovationGate: 'gate' },
    { sensorVariance: 0.005, processVariance: 0.004, chi2Threshold: 6.63 })
  .esStats('stats', 'smoothed', { stdev: 'stdev', mean: 'mean' }, { halfLife: 50 })
  .trend('slope', 'smoothed',
    { trend: 'trendDir', confidence: 'trendConf', rocMean: 'roc' },
    { rocStatsHalfLife: 20, rocThreshold: 0.005, warmupSamples: 15 })
  .controller('ctrl', [
    {
      when: (msg) => Number.isFinite(msg.gate) && msg.gate > 6.63,
      triggers: [{ control: 'reset', targets: ['kf'] }],
    },
  ])
  .winnow('compress', 'smoothed',
    { significant: 'store', deviation: 'dev', predicted: 'pred' },
    { K: 1.5, tightenBase: 100, maxGap: 50, chi2Threshold: 6.63 })
  .passIf('gate', (msg, counter) => msg.store === true)
  .kalman1d('kfSmooth', 'value', { filtered: 'storedValue' },
    { sensorVariance: 0.005, processVariance: 0.5, chi2Threshold: 1000 })
  .run()
```

Drag the slider through the four regions and watch the reconstruction follow the original signal across the quiet baseline, the slow ramp, the sharp step, and the high-amplitude vibration that follows.


What You’re Seeing

The cyan line is the original signal. The orange line is the reconstruction — a linear interpolation between the kept samples (the amber dots). The chart starts with a quiet sinusoid, climbs through a slow ramp, jumps at a step change, and continues into a noisier vibration regime.
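The reconstruction is plain linear interpolation between consecutive kept samples. A minimal sketch of that step (the function name and data shapes here are illustrative, not part of the library's API):

```javascript
// Rebuild a full-rate series from kept (index, value) pairs by
// linearly interpolating between consecutive kept samples.
// `kept` must be sorted by index and include both endpoints.
function reconstruct(kept, length) {
  const out = new Array(length).fill(NaN);
  for (let k = 0; k < kept.length - 1; k++) {
    const a = kept[k];
    const b = kept[k + 1];
    for (let i = a.index; i <= b.index; i++) {
      const t = (i - a.index) / (b.index - a.index);
      out[i] = a.value + t * (b.value - a.value);
    }
  }
  return out;
}
```

Anything between two kept points is a straight line, which is why a missed inflection shows up directly as reconstruction error.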

Watch what the eight nodes do together. During the quiet region, the running deadband is narrow and the flow keeps only the inflections of the sinusoid. As the ramp begins, trend fires direction and winnow tightens the deadband around the sloped trajectory. At the step, the Kalman innovation gate trips; the controller resets the predictor; winnow forces a kept sample so the reconstruction snaps to the new level instead of trailing through it. After the step, the running stdev grows and the deadband adapts upward — the kept-sample density stays roughly constant, so compression holds steady even though the signal is louder.

The KPI strip under the chart shows three numbers: the compression ratio, the largest single-sample reconstruction deviation, and the RMS reconstruction error in the same units as the signal.
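Those three KPIs are cheap to compute offline when tuning. A sketch (names are illustrative; the playground's own computation may differ in detail):

```javascript
// Compute the three KPIs shown under the chart from the original
// series, its reconstruction (same length, same units), and the
// number of samples that were kept.
function kpis(original, reconstructed, keptCount) {
  let maxDev = 0;
  let sumSq = 0;
  for (let i = 0; i < original.length; i++) {
    const e = original[i] - reconstructed[i];
    maxDev = Math.max(maxDev, Math.abs(e));
    sumSq += e * e;
  }
  return {
    compressionRatio: original.length / keptCount,   // e.g. 20:1 -> 20
    maxDeviation: maxDev,                            // worst single sample
    rmsError: Math.sqrt(sumSq / original.length),    // signal units
  };
}
```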


Where This Pattern Fits

| Domain | What you’re keeping |
| --- | --- |
| Vibration monitoring | Excursions and inflections in 20 kHz accelerometer streams |
| Current transducers | Step changes and ripple in motor drive feedback |
| Acoustic emission | Bursts above the running noise in ultrasonic AE sensors |
| Fleet telemetry | Speed, RPM, and torque excursions across vehicles |
| Distributed motors | Bearing and stator currents across drives |

How It Works

The eight nodes split the work into single-purpose pieces:

  • median3 absorbs single-sample spikes.
  • The first kalman1d predicts the signal’s trajectory from a constant-velocity model and fires its innovation gate (a chi-squared test) when reality disagrees with the prediction; that gate is what catches step changes the deadband alone would miss.
  • esStats maintains the running mean and stdev that set the local noise scale.
  • trend classifies the signal’s direction and its rate of change.
  • controller listens for the chi-squared gate and resets kf so the predictor does not stay anchored to the old level after a step.
  • winnow is the decision node: it fuses the slope, the noise estimate, the trend direction, and the innovation gate into a single per-sample store flag.
  • passIf drops every sample whose store flag is false.
  • The second kalman1d reads from the raw value (not the median3 output) and lightly scrubs measurement noise from the kept samples before storage.
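The half-life parameterization on esStats is worth unpacking: after halfLife samples, an observation’s weight has decayed to one half. A sketch of an exponentially weighted mean/stdev in that style; the node’s exact internals are an assumption here:

```javascript
// Exponentially weighted running mean and stdev, parameterized by
// half-life: the per-sample decay alpha is chosen so a sample's
// weight halves after `halfLife` updates. Illustrative only.
class EsStats {
  constructor(halfLife) {
    this.alpha = 1 - Math.pow(0.5, 1 / halfLife); // per-sample decay
    this.mean = 0;
    this.variance = 0;
    this.n = 0;
  }
  update(x) {
    if (this.n++ === 0) { this.mean = x; return; }
    const d = x - this.mean;
    this.mean += this.alpha * d;
    // Incremental exponentially weighted variance (West-style update).
    this.variance = (1 - this.alpha) * (this.variance + this.alpha * d * d);
  }
  get stdev() { return Math.sqrt(this.variance); }
}
```

With halfLife: 50, a step in the noise level takes on the order of fifty samples to be fully reflected in stdev, which is why the deadband "adapts upward" after the step rather than jumping.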

The single sensitivity parameter is K on winnow. K = 1.5 means “store when the deviation exceeds 1.5 times the current noise floor.” Because the comparison is multiplicative on the running stdev, the threshold scales with whatever the local noise happens to be — there is no absolute tolerance to pick per signal. K itself is dataset-specific: the value here is tuned on the NASA IMS bearing data, and other datasets should sweep K on their own representative signals before deploying.

The first kalman1d reads from the raw value, not from the median-smoothed stream. The Kalman filter’s sensorVariance parameter is the right place to model measurement noise — pre-filtering with median3 would double-filter and leave the chi-squared innovation gate (6.63, a 1% false-alarm test) under-calibrated. esStats and trend still read from smoothed so their running statistics stay noise-protected.
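The innovation-gate mechanics are easiest to see in code. This sketch uses a random-walk model for brevity (the kalman1d node described above uses a constant-velocity model), but the gate statistic is the same: the squared innovation over its variance, compared against a chi-squared threshold with one degree of freedom, where 6.63 corresponds to a 1% false-alarm rate:

```javascript
// Simplified scalar Kalman filter with a chi-squared innovation
// gate. Random-walk process model for brevity; illustrative only.
class Kalman1D {
  constructor({ sensorVariance, processVariance, chi2Threshold }) {
    this.R = sensorVariance;    // measurement noise variance
    this.Q = processVariance;   // process noise variance
    this.chi2Threshold = chi2Threshold;
    this.x = 0;                 // state estimate
    this.P = 1e6;               // state variance (uninformative prior)
  }
  step(z) {
    const Ppred = this.P + this.Q;      // predict
    const innovation = z - this.x;      // measurement residual
    const S = Ppred + this.R;           // innovation variance
    const gate = (innovation * innovation) / S; // ~ chi2(1) if model holds
    const K = Ppred / S;                // Kalman gain
    this.x += K * innovation;           // update
    this.P = (1 - K) * Ppred;
    return { filtered: this.x, innovation, gate, fired: gate > this.chi2Threshold };
  }
  reset() { this.P = 1e6; }             // forget confidence in the old level
}
```

Pre-smoothing the input would shrink the innovations without shrinking S, so the gate would under-fire; that is the calibration argument for feeding kf the raw value.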

winnow’s maxGap = 50 guarantees at most 50 samples between kept points — a minimum sampling-rate fallback that rarely fires on vibrating data but prevents the reconstruction from flat-lining through a long quiet stretch.
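The three paragraphs above reduce to one per-sample predicate. An illustrative sketch of a winnow-style decision; the real node also weighs the trend direction, which is omitted here:

```javascript
// Per-sample keep/drop decision in the spirit of winnow: store when
// the deviation from the predicted value exceeds K times the running
// stdev, when the innovation gate fires, or when maxGap samples have
// passed since the last kept point. Illustrative, not the node itself.
function makeWinnow({ K, maxGap, chi2Threshold }) {
  let sinceKept = Infinity; // forces the first sample to be kept
  return function decide({ value, predicted, stdev, gate }) {
    sinceKept++;
    const deviation = Math.abs(value - predicted);
    const store =
      deviation > K * stdev ||     // outside the adaptive deadband
      gate > chi2Threshold ||      // model surprised: likely a step
      sinceKept >= maxGap;         // minimum-rate fallback
    if (store) sinceKept = 0;
    return store;
  };
}
```

Because the deadband is K * stdev, the same K yields roughly constant kept-sample density across quiet and loud regimes; only the maxGap term is an absolute count.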

Like every recipe, these parameter values are a starting point. Adapt K, the half-lives, the chi-squared threshold, and the Kalman variances to your own signal characteristics and storage budget before deploying.


References

  • Bristol, E.H. (1990). Swinging Door Trending: Adaptive Trend Recording? ISA National Conference Proceedings, pp. 749–756.
  • Kalman, R.E. (1960). A New Approach to Linear Filtering and Prediction Problems. Journal of Basic Engineering, 82(1), 35–45. doi:10.1115/1.3662552 
  • Welford, B.P. (1962). Note on a method for calculating corrected sums of squares and products. Technometrics, 4(3), 419–420. doi:10.1080/00401706.1962.10490022 

Next Steps