Use case · WiFi
Diagnosing WiFi AP health
The WiFi dashboard shows one number for the whole site. It does not say which access point is the problem — and walking the floor with a signal meter takes hours.
A winkComposer flow turns a real four-day client telemetry export from a 5-client office network into one health verdict per AP, plus two kinds of signal worth acting on. Every number on the page is a direct read of a Composer node — the verdict, the drivers behind it, the roaming cost, the drop events. No AP-side telemetry is needed. Drag the slider and switch between access points to see how each one is doing.
What You’re Seeing
Below the verdict sit the notice cards: what the flow is telling you. Watch notices flag one driver highlighting on a single AP even though the overall verdict holds. Fix notices are sustained, site-wide observations worth acting on right now.
Per-AP verdict card. Healthy / Monitor / Degraded / Critical, plus a key-signals table ranked by intensity and persistence. Each row is one driver watching a signal. The small amber pip on an AP tab means “verdict still Healthy, but one driver is highlighting hard — worth a closer look.”
Three stacked charts. Median RSSI · AP Health · drop events, all sharing one x-axis of active office hours. Overnight gaps are compressed out; per-AP silences inside a session are shown as grey bands labelled Idle at the bottom. The initial learning window is labelled Calibrating.
Flow matrix + roaming table. Where clients actually go when they roam, and what the roam costs in dB. The matrix shows AP-3 is the hub; the table shows clients leave AP-3 at roughly −47 dBm — excellent signal — and arrive at AP-1 at around −57 dBm. Roughly 10 dB is lost per roam, and the policy is pushing clients off a strong AP onto a weaker one.
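The roaming cost in the table is plain arithmetic on two RSSI readings. A minimal sketch, with a hypothetical function name, assuming departure and arrival RSSI are both in dBm:

```python
def roam_cost_db(depart_rssi_dbm: float, arrive_rssi_dbm: float) -> float:
    """Cost of a roam in dB. Positive means the client landed on weaker signal."""
    return depart_rssi_dbm - arrive_rssi_dbm

# The AP-3 -> AP-1 roams in this sample: leave at about -47 dBm,
# arrive at about -57 dBm, so roughly 10 dB is given up per roam.
print(roam_cost_db(-47.0, -57.0))  # 10.0
```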
One quirk worth naming: in this four-day sample, no AP’s overall verdict ever leaves Healthy. That is the honest answer — the network is fine at the aggregate level. But one driver on AP-2 saturates at end-of-run because its per-client baseline becomes very tight — even a normal-quality reading registers as a significant deviation. The framework distinguishes this from a genuine AP failure: the verdict holds, and the driver’s intensity combined with the Watch card raises the flag.
How It Works
The flow is layered — per-client locally, per-AP aggregated. The pattern is documented in Composition Patterns.
The per-client flow cleans each RSSI sample, tracks a smoothed baseline, and ranks each signal dip by how large an amplitude would be needed to erase it — separating structural drops from routine jitter. A parallel step records the previous-tick RSSI per client, so each roam reports its own dB cost directly from flow output.
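The dip ranking is the sublevel-set persistence idea from the Edelsbrunner reference: each local minimum is scored by the amplitude needed to erase it. A minimal sketch on a plain Python list of RSSI samples; the function name and output shape are illustrative, not Composer's actual node:

```python
def dip_persistence(values):
    """Rank dips in a 1D signal by the amplitude needed to erase them
    (sublevel-set persistence of local minima). Returns
    (index_of_minimum, persistence) pairs, deepest-first; the global
    minimum gets infinite persistence."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    parent = [-1] * n          # -1 = sample not yet activated
    birth = {}                 # component root -> index of its minimum

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = []
    for i in order:            # sweep samples from lowest value upward
        parent[i] = i
        birth[i] = i
        for j in (i - 1, i + 1):
            if 0 <= j < n and parent[j] != -1:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # The component born at the higher minimum dies at this merge.
                keep, die = (ri, rj) if values[birth[ri]] <= values[birth[rj]] else (rj, ri)
                pairs.append((birth[die], values[i] - values[birth[die]]))
                parent[die] = keep
    root = find(order[0])
    pairs.append((birth[root], float("inf")))
    return sorted((p for p in pairs if p[1] > 0), key=lambda p: p[1], reverse=True)

# Three dips; the -60 dBm dip needs 8 dB of fill to disappear, the -55 one only 4.
print(dip_persistence([-50, -60, -52, -70, -51, -55, -50]))
# [(3, inf), (1, 8), (5, 4)]
```

Routine jitter produces dips with small persistence; a structural drop needs a large amplitude to erase, so it ranks near the top regardless of where the baseline sits.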
An aggregation step groups all client records by access point and timestamp. For each group it takes the median RSSI, the median SNR, the mean retry rate, the client count, and the count of per-client drop events directed at that AP. That turns per-client streams into per-AP tick streams — without a single AP-side query.
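The aggregation is a group-by over (AP, timestamp). A minimal sketch assuming a hypothetical record shape for the client export; the field names are assumptions, not the controller's schema:

```python
from collections import defaultdict
from statistics import median, mean

def aggregate_per_ap(records):
    """Group per-client samples by (ap, timestamp) and emit one per-AP tick:
    median RSSI, median SNR, mean retry rate, client count, drop-event count."""
    groups = defaultdict(list)
    for r in records:
        groups[(r["ap"], r["ts"])].append(r)
    ticks = []
    for (ap, ts), rs in sorted(groups.items()):
        ticks.append({
            "ap": ap, "ts": ts,
            "median_rssi": median(r["rssi"] for r in rs),
            "median_snr": median(r["snr"] for r in rs),
            "mean_retry": mean(r["retry"] for r in rs),
            "clients": len(rs),
            "drops": sum(1 for r in rs if r["drop"]),
        })
    return ticks
```

Medians are deliberate for RSSI and SNR: one client walking out of the office should not drag the whole AP's tick down.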
The per-AP flow runs six independent drivers in parallel — one each for signal strength, structural drops, interference, occupancy, retry, and event concentration — and fuses them into a verdict with the appraise node. Evidence for each driver builds while its signal is outside its normal range and fades as it returns; the verdict waits for combined evidence across multiple drivers before it escalates. That is why a single saturated driver does not flip the verdict — and why the Watch card exists to flag that case explicitly.
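The build-and-fade behaviour can be sketched as a per-driver evidence accumulator plus a fusion rule. Everything here is illustrative, including the gains and thresholds; it is not the appraise node's actual logic:

```python
def step_evidence(evidence, outside_band, gain=0.2, decay=0.5):
    """One tick of a single driver's evidence, clamped to [0, 1]:
    builds while the signal is outside its normal band, fades otherwise."""
    e = evidence + gain if outside_band else evidence * decay
    return min(max(e, 0.0), 1.0)

def verdict(driver_evidence):
    """Fuse six drivers' evidence into one verdict. A single saturated
    driver is not enough to escalate; combined evidence is."""
    total = sum(driver_evidence)
    strong = sum(1 for e in driver_evidence if e > 0.8)
    if total > 3.0 or strong >= 3:
        return "Critical"
    if total > 2.0 or strong >= 2:
        return "Degraded"
    if total > 1.0:
        return "Monitor"
    return "Healthy"

# One fully saturated driver, five quiet ones: the verdict holds.
print(verdict([1.0, 0.0, 0.0, 0.0, 0.0, 0.0]))  # Healthy
# Two drivers building together is a different story.
print(verdict([0.9, 0.9, 0.5, 0.0, 0.0, 0.0]))  # Degraded
```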
The flow tracks change against an adaptive baseline, not absolute quality. A chronically poor but stable AP — one that has always sat at the edge of usable signal — does not register as a deviation, because the baseline learns its behaviour. Catching coverage holes that have always been holes needs a complementary fixed-threshold check sitting alongside this flow.
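The complementary check mentioned above is a one-liner against an absolute floor. The -75 dBm value is an illustrative assumption, not a number from the flow:

```python
RSSI_FLOOR_DBM = -75.0  # illustrative floor for "usable signal"

def coverage_hole(median_rssi_dbm: float) -> bool:
    """Flag chronically weak coverage that an adaptive baseline
    would learn as normal and never report as a deviation."""
    return median_rssi_dbm < RSSI_FLOOR_DBM
```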
No AP-side telemetry is needed. The universal client export — timestamp, client MAC, RSSI, retry rate, and AP name, plus SNR or the signal-plus-noise pair needed to derive it — is present in every major controller: UniFi, Omada, OpenWRT, and Cisco Catalyst Center. Catalyst Center reports a Client Health Coverage score based on fixed RSSI bins. The amplitude-ranked per-client drop-event stream this flow produces — with a per-client adaptive baseline — is not on its documented API surface. The same flow runs on a Raspberry-Pi-class board.
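When a controller exports signal and noise separately rather than SNR, the derivation is a subtraction of two dBm readings. A minimal sketch, with an assumed function name:

```python
def snr_db(signal_dbm: float, noise_dbm: float) -> float:
    """SNR in dB from a signal reading and a noise-floor reading, both in dBm."""
    return signal_dbm - noise_dbm

print(snr_db(-55.0, -92.0))  # 37.0
```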
References
- Data: anonymised Cisco Catalyst Center client-health export from Velocis Systems, used with permission. Five clients, six access points, approximately four days at 5-minute polling. PII removed: MAC addresses, usernames, IP addresses, and location identifiers replaced with anonymised labels.
- Edelsbrunner, H., Letscher, D. & Zomorodian, A. (2002). Topological Persistence and Simplification. Discrete & Computational Geometry, 28, 511–533. doi:10.1007/s00454-002-2885-2 — the basis for ranking signal dips by the amplitude needed to erase them.
Next Steps
- Detecting Bearing Failure — the same appraise-driven assessment on NASA vibration data
- Composition Patterns — the layered-flow pattern used here, in a more general setting