
Reference · standards

Mapping winkComposer to NIST AI RMF

The NIST AI Risk Management Framework defines four functions and seven characteristics of trustworthy AI. This page maps each to the winkComposer surfaces that contribute — and names plainly which parts sit outside Composer's scope.

The NIST AI Risk Management Framework is widely referenced in AI governance procurement, often without a clear mapping behind the citation. This page provides that mapping for winkComposer — function by function, characteristic by characteristic, with in-scope contributions and out-of-scope boundaries stated explicitly. AI RMF 1.0 is the basis for this page.

In practice — Composer is relevant when you need a transparent, streaming evidence layer around deployed AI or algorithmic systems. It helps monitor behaviour, detect drift, confirm sustained anomalies, emit events, and preserve evidence. It does not replace model validation, governance, security, privacy, fairness, or incident-response processes — those remain organisational responsibilities.

Shared Responsibility Boundary

Composer provides the streaming measurement, evidence, and eventing layer. The deploying organisation remains responsible for AI system purpose, risk categorisation, metric selection, human review, user feedback, incident response, model validation, security controls, privacy controls, and fairness decisions. This page maps Composer’s contribution; it does not shift those organisational responsibilities onto the tooling.

What the AI RMF Is

The Artificial Intelligence Risk Management Framework (AI RMF 1.0) is publication NIST AI 100-1, released by the National Institute of Standards and Technology, U.S. Department of Commerce, in January 2023, with DOI 10.6028/NIST.AI.100-1.

Two facts about the framework matter for any mapping exercise:

  1. It is voluntary. The AI RMF is a guidance framework, not a regulation. There is no NIST certification scheme for AI RMF compliance. Alignment with AI RMF is an architectural statement, not an audit result.
  2. It is a living document. NIST plans formal review with community input no later than 2028. Companion documents (the AI RMF Playbook and the Generative AI Profile, NIST AI 600-1) extend the core framework.


The Framework’s Two Axes

The AI RMF organises trustworthy AI along two complementary axes.

Axis 1 — Seven characteristics of trustworthy AI (Section 3):

| # | Characteristic |
|-----|----------------|
| 3.1 | Valid and Reliable |
| 3.2 | Safe |
| 3.3 | Secure and Resilient |
| 3.4 | Accountable and Transparent |
| 3.5 | Explainable and Interpretable |
| 3.6 | Privacy-Enhanced |
| 3.7 | Fair – with Harmful Bias Managed |

The framework notes that Valid & Reliable is the base condition for the others, and Accountable & Transparent applies across all of them.

Axis 2 — Four functions of the AI RMF Core (Section 5):

| Function | Role (paraphrased from NIST AI 100-1 §5) |
|----------|------------------------------------------|
| GOVERN | Cross-cutting culture, accountability, policies, oversight; informs and infuses the other three functions |
| MAP | Establishes context, categorises the system, identifies risks, characterises impacts |
| MEASURE | Selects and applies metrics; evaluates trustworthiness; tracks risk over time; assesses measurement effectiveness |
| MANAGE | Prioritises risks, allocates resources, plans response and recovery, handles residual risk, drives continual improvement |

A complete AI risk management programme covers all four functions across the AI lifecycle. Composer is infrastructure, not a programme — it addresses some functions and characteristics, and is out of scope for others.


Mapping Summary — by Function

| Function | Composer’s coverage | Why |
|----------|---------------------|-----|
| GOVERN | Partial — infrastructure-side only | Composer flows are deterministic, versionable, and reviewable. The actual governance work — policies, accountability structures, training, oversight — is org-process, not infrastructure-feature. Composer makes governance possible but does not implement it. |
| MAP | Out of scope | Context establishment, stakeholder identification, impact assessment, risk identification — these are pre-deployment activities that belong in MLOps governance tooling (model registries, model cards, feature stores). Composer is downstream of MAP. |
| MEASURE | Strong fit | The MEASURE function explicitly covers continuous monitoring, drift tracking, and real-time evaluation of system behaviour in production. Several MEASURE subcategories map directly to Composer node families. |
| MANAGE | Partial — trigger and escalation layer | Composer emits triggers and persists evidence. The treatment, prioritisation, and resource-allocation decisions sit in org-process. Composer is the “what changed” layer; MANAGE is the “what to do about it” layer. |

The remaining sections work through each function and the seven characteristics with named subcategory references.


Function-by-Function Detail

GOVERN — Partial Contribution

The GOVERN function is fundamentally about people, policies, and accountability — not about technology. NIST AI 100-1 §5.1 frames GOVERN as a cross-cutting function infused throughout AI risk management that enables the other three.

Composer contributes to GOVERN only where infrastructure choices enable governance work:

  • Flow definitions as a reviewable source of truth. Every Composer flow is expressed as pipeline code — nodes, parameters, evidence sources, thresholds, emitters, and persistence choices live in one place. This supports GOVERN 1.5 (ongoing monitoring of the risk management process) by making behaviour reviewable.
  • Deterministic, reproducible streaming. Composer’s hot path is allocation-free and the algorithms it runs are deterministic. Given the same flow definition and the same input stream, output is reproducible to within numerical precision. This supports GOVERN 4.3 (organisational practices to enable AI testing and identification of incidents) by keeping behaviour repeatable across environments (a reproducibility check of this kind is sketched after this list).
  • Configurable persistence for an evidence trail. persistIf plus QuestDB can preserve raw inputs, derived signals, appraise scores, state, severity, timestamps, and emitted events. The audit value depends on what the deployment chooses to persist — Composer provides the mechanism, not an automatic audit trail.
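
For illustration, a reproducibility check of the kind GOVERN 4.3 calls for can be as simple as hashing the full output of two runs over the same flow definition and input stream. This is a minimal sketch, assuming a hypothetical `run_flow` callable that executes a flow against one record and returns JSON-serialisable output; it is not a Composer API.

```python
import hashlib
import json

def output_digest(run_flow, flow_definition: str, input_stream: list) -> str:
    """Hash the full output sequence of one flow run for comparison.

    `run_flow` is a hypothetical stand-in for executing a Composer flow
    definition against a single input record.
    """
    outputs = [run_flow(flow_definition, record) for record in input_stream]
    canonical = json.dumps(outputs, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def check_reproducible(run_flow, flow_definition: str, input_stream: list) -> bool:
    """Two runs over the same definition and stream should hash identically."""
    return (output_digest(run_flow, flow_definition, input_stream)
            == output_digest(run_flow, flow_definition, input_stream))
```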

What Composer does not contribute to GOVERN:

  • Roles, responsibilities, and lines of communication (GOVERN 2.1)
  • Training (GOVERN 2.2)
  • Workforce diversity (GOVERN 3)
  • Decommissioning policies (GOVERN 1.7)

These belong to the deploying organisation.

MAP — Out of Scope

The MAP function establishes the context that frames AI risk — intended use, stakeholders, impacts, system categorisation, risk identification (NIST AI 100-1 §5.2).

Composer is downstream of every MAP activity. By the time a Composer flow exists, the deploying organisation has already established intended purpose, identified stakeholders, characterised impacts, and categorised the system. Composer does not implement context-establishment activities.

MAP is a prerequisite to a useful Composer deployment, not a function Composer fulfils. A deployed flow can, however, preserve MAP decisions as implementation artefacts. Composer’s semantics layer captures the facts — column types, units, physical ranges, operational limits, asset classes — as a single source of truth shared across dashboards, queries, and storage. The flow language captures the decisions — thresholds, evidence sources, severity labels, downstream event routes. Together they make context-of-use inspectable, even though Composer did not author it.
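
As an illustration of the facts the semantics layer captures, the sketch below models one column’s semantics as a plain Python record. The `ColumnSemantics` type and its field names are assumptions for illustration, not Composer’s actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ColumnSemantics:
    """One column's facts, shared across dashboards, queries, and storage."""
    name: str                                # column identifier
    dtype: str                               # storage type, e.g. "double"
    unit: str                                # engineering unit, e.g. "degC"
    physical_range: tuple[float, float]      # hard plausibility bounds
    operational_limits: tuple[float, float]  # limits that frame thresholds
    asset_class: str                         # grouping key for per-asset flows

# Hypothetical example entry for a pump bearing temperature column.
BEARING_TEMP = ColumnSemantics(
    name="bearing_temp",
    dtype="double",
    unit="degC",
    physical_range=(-40.0, 200.0),
    operational_limits=(10.0, 85.0),
    asset_class="pump",
)
```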

MEASURE — Strong Fit

The MEASURE function is where Composer’s contribution is structural. NIST AI 100-1 §5.3 frames MEASURE as the application of quantitative, qualitative, and mixed-method tools to analyse, benchmark, and monitor AI risks and impacts — testing systems before deployment and continuously while in operation.

Several MEASURE subcategories map onto Composer node families:

| Subcategory | Composer contribution |
|-------------|-----------------------|
| MEASURE 2.4 — production monitoring of AI system functionality and behaviour | pageHinkley for change-point detection (see the sketch after this table); esStats/twStats for distributional baselines; processIndex for capability tracking (Cpk/Ppk) of deployed-model output streams; threshold and emitIf for departure flags; stateChangeDetector for mode transitions |
| MEASURE 2.6 — safety metrics covering reliability, robustness, real-time monitoring, and response times to failures | Composer’s deterministic streaming pipeline; persistenceCheck for debounce against single-sample false positives; low-latency edge processing |
| MEASURE 3.1 — tracking of existing, unanticipated, and emergent AI risks based on actual performance in deployed contexts | QuestDB persistence of all evaluations; momentsDigest for distributional shape; extremaRank for amplitude-ranked deviations |
| MEASURE 2.9 — explanation, validation, and documentation of AI model output within its deployment context | appraise’s evidence decomposition exposes which evidence contributed how much to a decision |
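
The node name pageHinkley points at the Page-Hinkley change-point test. Below is a minimal textbook sketch of the one-sided (upward mean shift) form of that test in Python; the parameter values are illustrative, and this is not Composer’s implementation.

```python
class PageHinkley:
    """One-sided Page-Hinkley test for upward shifts in a stream's mean.

    delta: tolerated fluctuation around the running mean.
    lam:   alarm threshold on the cumulative deviation statistic.
    """
    def __init__(self, delta: float = 0.005, lam: float = 50.0):
        self.delta = delta
        self.lam = lam
        self.n = 0
        self.mean = 0.0
        self.cum = 0.0      # cumulative deviation statistic m_t
        self.cum_min = 0.0  # running minimum of m_t

    def update(self, x: float) -> bool:
        """Feed one sample; return True when a mean shift is detected."""
        self.n += 1
        self.mean += (x - self.mean) / self.n        # streaming mean
        self.cum += x - self.mean - self.delta
        self.cum_min = min(self.cum_min, self.cum)
        return (self.cum - self.cum_min) > self.lam
```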

What Composer does not contribute to MEASURE:

  • The metrics themselves (MEASURE 1.1 — selection of approaches and metrics is a domain decision)
  • TEVV processes (MEASURE 2.1 — test sets, evaluation tools, formal documentation belong in MLOps tooling)
  • Feedback processes for end users (MEASURE 3.3)
  • Independent review processes (MEASURE 1.3)

Composer is the engine; the metric definitions and the review processes wrap around it.

MANAGE — Partial — Trigger and Escalation Layer

NIST AI 100-1 §5.4 frames MANAGE as the allocation of risk resources to mapped and measured risks on a regular cadence set by GOVERN, with risk treatment plans for response, recovery, and communication around incidents or events.

Composer contributes the trigger and escalation layer:

| Subcategory | Composer’s contribution |
|-------------|-------------------------|
| MANAGE 1.3 — high-priority risk responses are developed, planned, and documented | Composer’s emitIf raises events with explicit severity; MQTT/QuestDB egress routes events to consumers (alarm panels, ticket systems, on-call routers); an example event payload follows this table |
| MANAGE 2.4 — mechanisms to supersede, disengage, or deactivate AI systems that show performance inconsistent with intended use | Composer detects performance regressions in real time; the decision to deactivate is upstream and out of scope |
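
To make the escalation row concrete, the sketch below publishes a hypothetical event payload over MQTT using the paho-mqtt client library. The topic scheme and every field name are assumptions; Composer’s actual event contract may differ.

```python
import json
import time
import paho.mqtt.publish as publish  # pip install paho-mqtt

# Hypothetical event shape: what a downstream consumer might receive.
event = {
    "flow": "model-deployment-health",
    "assetId": "pump-07",
    "severity": "critical",           # severity attached by emitIf
    "conviction": 0.91,               # combined appraise score
    "evidence": {                     # per-source decomposition
        "input_drift": 0.55,
        "output_drift": 0.30,
        "persistence": 0.06,
    },
    "timestamp": int(time.time() * 1000),
}

publish.single(
    topic="composer/events/pump-07/critical",  # hypothetical topic scheme
    payload=json.dumps(event),
    qos=1,
    hostname="broker.example.internal",
)
```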

What Composer does not contribute to MANAGE:

  • Risk prioritisation policies (MANAGE 1.2)
  • Resource allocation (MANAGE 2.1)
  • Documentation of residual risk (MANAGE 1.4)
  • Third-party risk management (MANAGE 3)

These are organisational decisions that consume Composer’s emissions, not features Composer provides.


Mapping by Trustworthy AI Characteristic

The seven characteristics from NIST AI 100-1 §3 cut across the four functions. The mapping below names what Composer contributes to each characteristic and what falls outside its scope.

| # | Characteristic | What Composer contributes | What is out of Composer’s scope |
|-----|----------------|---------------------------|---------------------------------|
| 3.1 | Valid and Reliable | Deterministic streaming behaviour; reproducible outputs from the same spec and input | Validity of the upstream data sources or upstream models |
| 3.2 | Safe | Fail-fast input validation via sanitize; per-field invalid-value cascade preventing silent corruption | Safety of the deployment context; controls in the OT layer |
| 3.3 | Secure and Resilient | Edge-local processing can reduce unnecessary data movement; resilience contribution is runtime behaviour — sanitize at the boundary, per-field invalid-value cascade, guarded predicates and dynamic options, graceful recovery from bad inputs | Identity, authentication, access control, key management, encryption, and network segmentation are supplied by the surrounding deployment infrastructure |
| 3.4 | Accountable and Transparent | Flow definitions are a reviewable source of truth for behaviour; configurable persistence supports an evidence trail | Org-process accountability — RACI, sign-off, governance committees |
| 3.5 | Explainable and Interpretable | appraise is a small two-layer spiking neural network — one receptor neuron per evidence source (layer 1), one decision neuron (layer 2). It is natively decomposable: alongside the combined conviction score and state label, it publishes per-source charge (how much each evidence source contributed) and per-source rate (how fast each is accumulating); the configuration that produced those numbers — signed weights, half-lives, deviation type per source, and the monitor / degraded / critical thresholds — is visible in the flow definition (an illustrative sketch follows this table). This explains Composer’s assessment, not an upstream ML model’s internal reasoning | Internals of an upstream ML model — those require SHAP, LIME, or similar techniques applied to that model |
| 3.6 | Privacy-Enhanced | Composer’s edge-deployable footprint enables edge-local processing as a deployment choice; the privacy property is realised by the deployment, not by Composer itself | Differential privacy primitives; k-anonymity; consent management |
| 3.7 | Fair – with Harmful Bias Managed | Out of scope for typical industrial deployments. For AI applications where group-level monitoring is relevant, Composer can run an independent flow per assetId (or any other group key) — that supports observation, not fairness assessment | Bias detection algorithms; fairness metrics; the policy decisions themselves. Composer ships no fairness primitives |
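
Row 3.5 describes how appraise decomposes its assessment. The sketch below is a loose Python reconstruction of those mechanics, assuming a leaky-integrator form for each receptor and made-up threshold values; it omits genuine spiking dynamics and is not appraise’s code.

```python
import math

class Receptor:
    """Layer 1: one leaky integrator per evidence source (assumed form)."""
    def __init__(self, weight: float, half_life_s: float):
        self.weight = weight                       # signed contribution weight
        self.decay = math.log(2.0) / half_life_s   # from the configured half-life
        self.charge = 0.0                          # accumulated, decaying evidence
        self.rate = 0.0                            # most recent accumulation step
        self._last_t = None

    def update(self, deviation: float, t: float) -> None:
        if self._last_t is not None:               # decay since the last sample
            self.charge *= math.exp(-self.decay * (t - self._last_t))
        self.charge += deviation
        self.rate = deviation                      # how fast this source accumulates
        self._last_t = t

def decide(receptors: list[Receptor]) -> tuple[float, str]:
    """Layer 2: weighted sum of per-source charge -> conviction score and state."""
    conviction = sum(r.weight * r.charge for r in receptors)
    # Threshold values are illustrative; in Composer they live in the flow definition.
    for label, threshold in (("critical", 0.9), ("degraded", 0.6), ("monitor", 0.3)):
        if conviction >= threshold:
            return conviction, label
    return conviction, "nominal"
```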

Worked Example — Monitoring a Deployed ML Model

A common AI RMF deployment pattern: a deployed ML model produces predictions; Composer monitors its inputs and outputs in real time and surfaces a deployment-health assessment.

| Function / Subcategory | What happens in the flow |
|------------------------|--------------------------|
| MEASURE 2.4 — input distribution | esStats over input features establishes a streaming baseline; vectorDistance against the training-time baseline produces an input-drift score |
| MEASURE 2.4 — output distribution | pageHinkley on the prediction stream detects mean shifts; processIndex tracks capability against expected output bounds |
| MEASURE 2.6 — real-time safety | persistenceCheck debounces single-sample anomalies; threshold raises alarms when drift sustains |
| MEASURE 2.9 — explainability | appraise fuses input-drift, output-drift, and persistence signals into a deployment-health score with visible evidence weights |
| MEASURE 3.1 — tracking over time | QuestDB persists every assessment; the time series is queryable for trend review |
| MANAGE 1.3 — escalation | emitIf raises an MQTT event when the deployment-health score crosses a threshold |

This pattern assumes the deployment supplies the artefacts that any drift-monitoring layer needs: training-time or accepted-production baselines, feature schemas, expected output bounds, timestamp discipline, and escalation thresholds. Composer monitors against those artefacts; it does not infer their governance validity.
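
The sketch below shows the monitoring arithmetic this pattern relies on, in Python. `EwStats`, `drift_score`, and `PersistenceCheck` are names I have introduced for illustration; they approximate what the table attributes to esStats, vectorDistance, and persistenceCheck, and are not Composer’s implementations.

```python
from collections import deque

class EwStats:
    """Exponentially weighted mean and variance over a feature stream."""
    def __init__(self, alpha: float = 0.01):
        self.alpha = alpha
        self.mean = 0.0
        self.var = 0.0
        self._seen = False

    def update(self, x: float) -> None:
        if not self._seen:
            self.mean, self._seen = x, True
            return
        d = x - self.mean
        self.mean += self.alpha * d
        self.var = (1.0 - self.alpha) * (self.var + self.alpha * d * d)

def drift_score(live: EwStats, train_mean: float, train_std: float) -> float:
    """Distance of the live mean from the training baseline, in baseline sigmas."""
    return abs(live.mean - train_mean) / max(train_std, 1e-9)

class PersistenceCheck:
    """Alarm only when k of the last n scores exceed the threshold (debounce)."""
    def __init__(self, k: int = 5, n: int = 8, threshold: float = 3.0):
        self.k, self.threshold = k, threshold
        self._window = deque(maxlen=n)

    def update(self, score: float) -> bool:
        self._window.append(score > self.threshold)
        return sum(self._window) >= self.k

# Wiring: feed each feature value, score against the training baseline,
# and escalate only when the drift sustains.
live, debounce = EwStats(), PersistenceCheck()
for x in (0.1, 0.2, 5.0, 5.1, 5.2, 5.3, 5.0, 5.2):
    live.update(x)
    sustained = debounce.update(drift_score(live, train_mean=0.15, train_std=0.1))
```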


Evidence an Adopter Can Request

A standards-aware review benefits from naming the artefacts that can be inspected. For a Composer deployment those are:

  • The flow definition used for monitoring — every node in the pipeline, with parameters
  • Input and output schema, and the timestamp policy
  • Evidence fields persisted to QuestDB or downstream storage (a query sketch follows this list)
  • An example emitted event payload, with topic structure and field naming
  • The appraise evidence decomposition — source weights, deviation types, half-lives, charge and rate fields, conviction score, and state thresholds
  • The semantics layer — column types, units, physical ranges, operational limits
  • The operational runbook describing who consumes emitted events and what action follows
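
As one concrete form the QuestDB request can take, the sketch below pulls hourly conviction averages back out over QuestDB’s HTTP /exec endpoint for trend review. The table and column names (`deployment_health`, `conviction`, `assetId`) are assumptions, not a schema Composer defines.

```python
import requests

# Hypothetical trend-review query over persisted assessments.
QUERY = """
SELECT timestamp, avg(conviction) AS mean_conviction
FROM deployment_health
WHERE assetId = 'pump-07'
SAMPLE BY 1h
"""

resp = requests.get(
    "http://questdb.example.internal:9000/exec",  # QuestDB REST query endpoint
    params={"query": QUERY},
    timeout=10,
)
resp.raise_for_status()
result = resp.json()
for row in result["dataset"]:   # rows follow the order of result["columns"]
    print(row)
```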

What This Mapping Is Not

  • Not a certification claim. AI RMF is voluntary. There is no NIST certification scheme for AI RMF compliance. This mapping is self-assessed by the Composer project; it does not imply review, approval, certification, or endorsement by NIST.
  • Not exhaustive. The mapped subcategories above are illustrative, not a complete enumeration. A full Composer-to-AI-RMF crosswalk would include every subcategory in Tables 1–4 of NIST AI 100-1.
  • Not unchanging. AI RMF is a living document; the framework is scheduled for review with community input no later than 2028. This page is updated when the framework is revised.
  • Not a substitute for AI governance work. Composer makes some governance possible; it does not replace policies, accountability structures, training, or oversight.
  • Not coverage of NIST AI 600-1 (Generative AI Profile). Composer’s MCP layer surfaces structured pipeline state to LLMs for explanation but does not validate LLM outputs against the GenAI Profile’s risks — confabulation, information integrity, prompt injection. LLM output validation is a deploying-organisation responsibility today, and a candidate node for future work.


References

  • NIST AI 100-1, Artificial Intelligence Risk Management Framework (AI RMF 1.0), National Institute of Standards and Technology, U.S. Department of Commerce, January 2023. DOI: 10.6028/NIST.AI.100-1
  • NIST AI Risk Management Framework programme page: nist.gov/itl/ai-risk-management-framework
  • NIST AI Resource Center (AIRC): airc.nist.gov
  • NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, National Institute of Standards and Technology, July 2024. Companion document for generative AI use cases.