The regulatory landscape for artificial intelligence has moved from theoretical frameworks to enforceable requirements. With the EU AI Act entering its phased enforcement period, NIST's AI Risk Management Framework gaining adoption as a de facto standard, and ISO 42001 establishing the first international certification for AI management systems, organizations deploying AI must now navigate multiple overlapping compliance obligations. This guide provides a practical approach to understanding and implementing these requirements.
The Regulatory Landscape: Three Frameworks, One Goal
Despite their different origins and structures, the EU AI Act, NIST AI RMF, and ISO 42001 share a common objective: ensuring that AI systems are developed and deployed in ways that are safe, transparent, accountable, and aligned with human values. Understanding their differences is essential for building a compliance program that satisfies all three without tripling your effort.
The EU AI Act is legislation — it carries legal force within the European Union and applies to any organization that places AI systems on the EU market or whose AI systems' outputs are used within the EU, regardless of where the organization is headquartered. It takes a risk-based approach, categorizing AI systems into four tiers: unacceptable risk (banned), high risk (subject to strict requirements), limited risk (transparency obligations), and minimal risk (voluntary codes of practice).
The NIST AI Risk Management Framework (AI RMF 1.0) is a voluntary framework published by the U.S. National Institute of Standards and Technology. It provides a structured approach to managing AI risks organized around four core functions: Govern, Map, Measure, and Manage. While not legally binding, it is increasingly referenced in U.S. federal procurement requirements and is expected to influence future U.S. AI legislation.
ISO/IEC 42001:2023 specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system (AIMS). It follows the familiar ISO management system structure (shared with ISO 27001 for information security), which makes it accessible to organizations already operating within the ISO framework. Certification is available through accredited bodies.
Risk Classification: The Foundation of Compliance
Every compliance program begins with understanding which requirements apply to your specific AI systems, and that determination starts with risk classification. The EU AI Act provides the most prescriptive classification system, and it is worth using as your baseline even if you are not directly subject to EU jurisdiction.
High-risk AI systems under the EU AI Act include those used in biometric identification, critical infrastructure management, educational assessment, employment decisions, credit scoring, law enforcement, and immigration processing. If your AI system falls into one of these categories, you face the full weight of the Act's requirements: conformity assessments, technical documentation, quality management systems, post-market monitoring, and incident reporting.
For systems outside the high-risk categories, the EU AI Act still imposes transparency obligations. Any AI system that interacts directly with natural persons must disclose that the user is interacting with an AI, unless that is obvious from the context. Systems that generate synthetic content (deepfakes, generated text, audio, or video) must ensure that the output is marked as artificially generated in a machine-readable format. General-purpose AI models (including large language models) face additional transparency and copyright compliance requirements.
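The Act does not prescribe a single marking technology — watermarking, embedded metadata, and provenance standards such as C2PA are all candidate approaches. As a minimal sketch of metadata-based marking, the snippet below wraps generated text with a provenance record; the `mark_as_ai_generated` helper and its field names are illustrative assumptions, not a mandated schema.

```python
import json
from datetime import datetime, timezone

def mark_as_ai_generated(content: str, model_id: str) -> dict:
    """Wrap generated text with a machine-readable provenance marker.

    Illustrative only: the EU AI Act requires machine-readable marking
    but does not prescribe this schema. Production systems would use a
    recognized provenance standard or watermarking instead.
    """
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,
            "generator": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

payload = mark_as_ai_generated("Draft product description ...", "text-gen-v3")
print(json.dumps(payload, indent=2))
```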
The NIST AI RMF does not prescribe risk categories but provides a methodology for conducting your own risk assessment through its Map function. This flexibility makes it more broadly applicable but requires more judgment in implementation. We recommend conducting the NIST mapping exercise for every AI system, then cross-referencing the results against EU AI Act categories to determine your regulatory obligations.
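As a sketch of that cross-referencing step, the snippet below pairs attributes gathered during a NIST-style mapping exercise with an indicative EU AI Act tier lookup. The category set is an illustrative subset of the Act's Annex III high-risk domains, and the tiering logic is a simplification — actual classification requires legal review.

```python
from dataclasses import dataclass

# Illustrative subset of EU AI Act Annex III high-risk categories.
HIGH_RISK_DOMAINS = {
    "biometric_identification", "critical_infrastructure",
    "educational_assessment", "employment_decisions",
    "credit_scoring", "law_enforcement", "immigration",
}

@dataclass
class AISystem:
    name: str
    domain: str                  # application domain from the NIST Map exercise
    interacts_with_humans: bool
    generates_synthetic_content: bool

def eu_ai_act_tier(system: AISystem) -> str:
    """Assign an indicative EU AI Act tier; legal review still required."""
    if system.domain in HIGH_RISK_DOMAINS:
        return "high_risk"
    if system.interacts_with_humans or system.generates_synthetic_content:
        return "limited_risk"    # transparency obligations apply
    return "minimal_risk"

print(eu_ai_act_tier(AISystem("resume-screener", "employment_decisions", True, False)))
# -> high_risk
```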
Documentation Requirements: What You Need to Produce
Documentation is the most tangible compliance deliverable, and the requirements across all three frameworks are substantial. For high-risk AI systems under the EU AI Act, you must maintain technical documentation that covers:

- a general description of the AI system, including its intended purpose and the persons responsible for its development;
- detailed information about the data used for training, validation, and testing, including data collection methodology, data preparation processes, and any assumptions or biases identified;
- the design specifications of the system, including its architecture, computational requirements, and the rationale for key design decisions;
- a description of the monitoring, functioning, and control of the AI system by human beings;
- information about the performance of the system, including the metrics used and their results on defined populations.
ISO 42001 adds management system documentation requirements: an AI policy, risk assessment and treatment methodology, statement of applicability, and records of management review. If you already maintain ISO 27001 certification, approximately 40% of the documentation framework carries over — the management system structure is deliberately aligned.
The NIST AI RMF emphasizes documentation of risk management decisions and their rationale. The Govern function specifically calls for documenting organizational policies, roles and responsibilities, and risk tolerance thresholds. The Measure function requires documented metrics and measurement methodologies that are transparent and reproducible.
Audit Trail Implementation: From Theory to Practice
An audit trail for AI systems must capture more than traditional application logs. You need to record the complete lineage of decisions made by the AI system, the data and model versions used to make those decisions, and the human oversight actions taken in response. This is both a regulatory requirement and a practical necessity for incident investigation.
At the technical level, implement audit logging at three tiers:

- Inference logging: for every prediction or decision made by a production model, record the input (or a content-addressable hash if the input contains sensitive data), the model version, the output, the confidence score, and any post-processing applied (sketched below).
- Pipeline logging: record every data transformation, model training run, evaluation result, and deployment event with timestamps, actor identities, and configuration snapshots.
- Governance logging: record human review decisions, risk assessment updates, policy changes, and incident reports.
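A minimal sketch of an inference-tier log record, assuming a generic model-serving setup; the field names and the `hash_input` helper are illustrative, not a mandated schema.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def hash_input(raw: str) -> str:
    """Content-addressable hash so sensitive inputs need not be stored verbatim."""
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

def log_inference(raw_input: str, model_version: str,
                  output: str, confidence: float,
                  post_processing: list[str]) -> dict:
    """Build one inference-tier audit record (illustrative schema)."""
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_hash": hash_input(raw_input),
        "model_version": model_version,
        "output": output,
        "confidence": confidence,
        "post_processing": post_processing,
    }
    # In production this record would be written to append-only storage.
    print(json.dumps(record))
    return record

log_inference("applicant profile ...", "credit-model-2.4.1",
              "approve", 0.87, ["threshold_calibration"])
```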
Storage and retention requirements vary by jurisdiction and risk level. The EU AI Act requires that logs for high-risk systems be retained for a period appropriate to the intended purpose of the system, and for at least six months unless other applicable law requires longer. In practice, we recommend a minimum retention period of seven years for high-risk systems, aligned with common financial audit retention requirements. Logs must be stored in a tamper-evident format: append-only databases, cryptographic chaining, and write-once storage are all acceptable approaches.
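Cryptographic chaining is straightforward to sketch: each record embeds the hash of its predecessor, so any retroactive edit breaks every later link. The snippet below is an illustrative minimum under that assumption, not a substitute for a hardened append-only store.

```python
import hashlib
import json

def chain_records(records: list[dict]) -> list[dict]:
    """Link records so each one embeds the hash of the previous entry."""
    prev_hash = "0" * 64  # genesis value for the first record
    chained = []
    for rec in records:
        entry = {**rec, "prev_hash": prev_hash}
        prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        chained.append({**entry, "entry_hash": prev_hash})
    return chained

def verify_chain(chained: list[dict]) -> bool:
    """Recompute hashes; any tampering invalidates every later entry."""
    prev_hash = "0" * 64
    for entry in chained:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body.get("prev_hash") != prev_hash:
            return False
        prev_hash = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if prev_hash != entry["entry_hash"]:
            return False
    return True

log = chain_records([{"event": "deploy", "model": "v2"},
                     {"event": "review", "outcome": "approved"}])
assert verify_chain(log)
```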
Cross-Framework Compliance Strategies
The most efficient approach to multi-framework compliance is to build a single, comprehensive AI governance program and map its outputs to each framework's specific requirements. We call this the "single spine" approach: one set of policies, one risk assessment methodology, one documentation repository, with framework-specific compliance reports generated from the shared foundation.
Start with ISO 42001 as your structural backbone. Its management system approach provides the organizational framework (policies, roles, processes, continuous improvement) that the EU AI Act and NIST AI RMF both require but neither specifies in structural detail. Then layer the EU AI Act's prescriptive technical requirements on top for any systems classified as high-risk. Finally, use the NIST AI RMF's Map and Measure functions to fill any gaps in your risk assessment and metrics program.
Practically, this means maintaining a centralized AI system inventory with risk classifications under both the EU AI Act taxonomy and your internal NIST-aligned risk framework. Each system in the inventory links to its technical documentation, audit trail configuration, assigned risk owner, and compliance status against each applicable framework. Regular reviews (quarterly for high-risk systems, semi-annually for others) verify that documentation is current, audit trails are functioning, and any changes to the system have been assessed for their impact on compliance status.
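A sketch of one inventory entry, assuming a simple in-code registry; real programs typically keep this in a GRC tool or database, and every field name here is an illustrative assumption.

```python
from dataclasses import dataclass, field

@dataclass
class InventoryEntry:
    """One AI system in the centralized compliance inventory (illustrative)."""
    system_name: str
    eu_ai_act_tier: str             # e.g. "high_risk"
    nist_risk_rating: str           # internal NIST-aligned rating
    risk_owner: str
    documentation_uri: str
    audit_trail_config: str
    compliance_status: dict = field(default_factory=dict)
    review_cadence_months: int = 6  # 3 for high-risk systems

inventory = [
    InventoryEntry(
        system_name="resume-screener",
        eu_ai_act_tier="high_risk",
        nist_risk_rating="high",
        risk_owner="hr-ml-lead@example.com",
        documentation_uri="docs/resume-screener/tech-file.md",
        audit_trail_config="configs/audit/resume-screener.yaml",
        compliance_status={"eu_ai_act": "gap_analysis",
                           "iso_42001": "certified",
                           "nist_ai_rmf": "mapped"},
        review_cadence_months=3,
    ),
]
```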
Automation is essential for sustaining compliance at scale. Manual documentation and audit trail management break down as the number of deployed AI systems grows. Invest in tooling that automatically generates technical documentation from your ML pipeline metadata, maintains audit trails as a byproduct of normal operations rather than as an additional burden, and produces compliance reports mapped to each framework's requirements. The upfront investment in automation pays for itself within the first audit cycle.
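As an illustration of the report-generation step, the sketch below rolls a toy inventory into per-framework status counts; a real implementation would pull this metadata from your ML pipeline and render full framework-mapped reports. The data and statuses are invented for the example.

```python
from collections import Counter

# Illustrative inventory rows: {system: {framework: status}}.
INVENTORY_STATUS = {
    "resume-screener": {"eu_ai_act": "gap_analysis", "iso_42001": "certified"},
    "support-chatbot": {"eu_ai_act": "compliant", "nist_ai_rmf": "mapped"},
}

def compliance_report(framework: str) -> Counter:
    """Count systems by compliance status for one framework."""
    return Counter(statuses.get(framework, "unassessed")
                   for statuses in INVENTORY_STATUS.values())

for fw in ("eu_ai_act", "nist_ai_rmf", "iso_42001"):
    print(fw, dict(compliance_report(fw)))
```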
Preparing for Enforcement
The EU AI Act's enforcement timeline is staggered: prohibitions on unacceptable-risk AI systems took effect first, followed by requirements for general-purpose AI models, with the full high-risk system requirements entering enforcement through 2026 and into 2027. National authorities are establishing AI regulatory sandboxes and building enforcement capacity. The window for compliance preparation is narrowing.
Organizations should prioritize three actions. First, complete a comprehensive inventory of all AI systems and classify them under the EU AI Act's risk taxonomy. Second, conduct a gap analysis between your current documentation, monitoring, and governance practices and the requirements of all applicable frameworks. Third, begin implementing the technical and organizational measures needed to close those gaps, starting with the highest-risk systems.
Automate Your AI Compliance
Our governance module generates audit trails, compliance documentation, and framework-mapped reports for EU AI Act, NIST RMF, and ISO 42001.
Learn More