
Pipeline Throughput

The ~100K msg/sec on a Raspberry Pi and 1M+ on a modern server quoted on the home page come from this exact pipeline — 8 nodes, 900 temperature readings, end-to-end. The same flow now runs in your browser. The number below is yours, measured on your hardware.

Loading benchmark...

What’s Being Measured

900 temperature readings flow through an 8-node pipeline: noise filtering (sanitize, median3), dual exponential smoothing (fast and slow), signal differencing, Page-Hinkley cumulative sum test, persistence confirmation, and a controller that resets the smoothers when a change is detected.
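The per-message stages can be sketched as plain functions. This is a minimal illustration, not the real node implementations; the parameter names (`alpha`, `delta`, `lambda`) and signatures are assumptions for the sketch.

```javascript
// median3: median of the last three readings (noise filtering stage).
// Hypothetical signature; the real node manages its own sliding window.
function median3([a, b, c]) {
  return Math.max(Math.min(a, b), Math.min(Math.max(a, b), c));
}

// Exponential smoothing: a larger alpha tracks the signal faster.
// The pipeline runs two of these (fast and slow) and differences them.
function ema(prev, x, alpha) {
  return prev === null ? x : alpha * x + (1 - alpha) * prev;
}

// Page-Hinkley cumulative sum test over the fast/slow difference:
// accumulate deviations, track the running minimum, and flag a change
// when the gap exceeds the threshold lambda. Parameters are illustrative.
function makePageHinkley(delta, lambda) {
  let sum = 0;
  let minSum = 0;
  return (diff) => {
    sum += diff - delta;
    minSum = Math.min(minSum, sum);
    return sum - minSum > lambda; // true when a change is detected
  };
}
```

In the real pipeline, a detection feeds the persistence-confirmation node, and the controller resets both smoothers so they re-converge on the new level.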

Each iteration creates fresh pipeline state, processes all 900 messages, and discards the output. The benchmark repeats until at least one second of wall-clock time has elapsed, then reports throughput.

The measurement is end-to-end: partition creation, message cloning, state updates, trigger dispatch, and result publishing are all included. No shortcuts.


Methodology

  • Timer: performance.now() — sub-millisecond, monotonic
  • Warm-up: one full pass before measurement (JIT priming)
  • Duration: repeats until ≥1 second elapsed
  • Partitions: single asset (one isolated state)
  • State: fresh partition per iteration
  • Caveats: browser results are typically 30–60% of native Node.js throughput due to JIT differences and sandbox overhead
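The methodology above amounts to a short harness. A minimal sketch, assuming a hypothetical `createPipeline` factory and `process` method in place of the real winkComposer API:

```javascript
// Reports throughput in messages per second for a full-dataset pass.
// createPipeline and p.process(m) are illustrative stand-ins.
function benchmark(createPipeline, messages) {
  // Warm-up: one untimed full pass to prime the JIT.
  let p = createPipeline();
  for (const m of messages) p.process(m);

  // Measure: repeat full passes until at least one second has elapsed,
  // using the monotonic sub-millisecond performance.now() timer.
  let processed = 0;
  const start = performance.now();
  let elapsed = 0;
  while (elapsed < 1000) {
    p = createPipeline(); // fresh partition state each iteration
    for (const m of messages) p.process(m);
    processed += messages.length;
    elapsed = performance.now() - start;
  }
  return processed / (elapsed / 1000);
}
```

Discarding the pipeline each iteration keeps state construction inside the measured loop, so the reported number reflects cold-partition cost as well as steady-state processing.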

The reference hardware numbers use the same pipeline and dataset in Node.js v22. To reproduce natively:

cd composer && node benchmark/compare.js 10 500

winkComposer is transitioning to open source. The benchmark script will be available in the public repository once the transition is complete.

