
Governance & policy

The rules and control requirements around AI use. For many teams, policy readiness now affects launch timing as much as engineering readiness.

Mapped players: 3. As of 2026-05-08.

EU AI Act

Binding regulation with risk-tier enforcement rolling out through 2025-26. Prohibited practices enforcement began February 2025. High-risk system requirements are the current implementation pressure.

Cold read

The EU AI Act is binding regulation enacted by the European Union that classifies AI systems by risk tier and sets compliance requirements. Prohibited practices enforcement began February 2025; high-risk system requirements are rolling out through 2026.

Position read

The main practical challenge for most teams right now is classification — determining which risk tier their system falls into. The Act's definitions are broad and legal teams are still developing standard interpretations. Implementation guidance from the EU AI Office has been slower than expected, leaving compliance teams to make judgment calls in the meantime.
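The classification step described above is, in effect, a triage procedure. A minimal sketch of that first pass is below; the tier names follow the Act's structure, but the keyword lists are illustrative assumptions for this example, not legal guidance, and any real classification needs legal review.

```python
# First-pass triage sketch for EU AI Act risk-tier classification.
# Tier names mirror the Act's structure (prohibited, high-risk,
# limited-risk, minimal-risk). The keyword sets are illustrative
# placeholders, not an exhaustive or authoritative mapping.

PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"hiring", "credit scoring", "medical device", "law enforcement"}
TRANSPARENCY_USES = {"chatbot", "deepfake generation"}

def classify(use_case: str) -> str:
    """Return a first-pass risk tier for a described use case."""
    uc = use_case.lower()
    if any(k in uc for k in PROHIBITED_USES):
        return "prohibited"
    if any(k in uc for k in HIGH_RISK_USES):
        return "high-risk"
    if any(k in uc for k in TRANSPARENCY_USES):
        return "limited-risk (transparency obligations)"
    return "minimal-risk"
```

The point of sketching it this way is that the hard part is not the branching logic; it is deciding which keywords belong in which set, which is exactly the interpretive gap the EU AI Office guidance has yet to close.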

NIST

Publishes the AI Risk Management Framework (AI RMF 1.0). Primary voluntary governance reference for US-based teams deploying in regulated domains.

Cold read

NIST is the US National Institute of Standards and Technology, a federal agency. It published the AI Risk Management Framework (AI RMF 1.0), the primary voluntary governance reference for US enterprise AI deployment.

Position read

Though voluntary, the AI RMF functions as a de facto baseline: many procurement and legal teams use NIST RMF language as the standard for AI control requirements, even without a regulatory mandate to do so.

UK AI Safety Institute

Runs pre-deployment evaluations on frontier models. Published Inspect (open-source eval framework). Influence on international safety norms exceeds its formal regulatory authority.

Cold read

The UK AI Safety Institute is a UK government body that runs pre-deployment evaluations on frontier AI models. It published Inspect, an open-source evaluation framework used by labs and regulators internationally.
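The pattern Inspect formalizes is a simple one: a dataset of samples, a solver that produces model output, and a scorer that grades it. A minimal self-contained sketch of that pattern follows; all names here are illustrative, not Inspect's actual API.

```python
# Minimal sketch of the pre-deployment evaluation pattern that
# frameworks like Inspect formalize. Names are illustrative only.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Sample:
    input: str    # prompt given to the model under evaluation
    target: str   # reference answer used for scoring

def run_eval(
    samples: List[Sample],
    solver: Callable[[str], str],
    scorer: Callable[[str, str], bool],
) -> float:
    """Run the solver on every sample and return the fraction scored correct."""
    scores = [scorer(solver(s.input), s.target) for s in samples]
    return sum(scores) / len(scores)

# Usage with a stub solver standing in for a model call:
samples = [Sample("2+2?", "4"), Sample("Capital of France?", "Paris")]
stub_solver = lambda prompt: {"2+2?": "4", "Capital of France?": "Lyon"}[prompt]
accuracy = run_eval(samples, stub_solver, lambda out, tgt: out == tgt)
# accuracy == 0.5
```

Publishing working tooling of this shape, rather than only written protocols, is plausibly why other labs and regulators adopted the Institute's evaluation approach directly.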

Position read

Punches above its regulatory weight because it published practical evaluation tooling (Inspect) that other labs and regulators have adopted directly. Currently at risk from UK fiscal pressure, which could reduce its evaluation capacity. Several allied governments reference its protocols, so a capability reduction would have wider signal impact than its formal authority suggests.