Governance & General Controls

Building an AI Management System aligned with ISO 42001.

Governance is often the least glamorous but most critical part of AI security. Without a structured management system, technical controls are just ad-hoc patches.

Based on the OWASP AI Exchange's General Controls, this section outlines how to build an AI governance framework that satisfies modern standards such as ISO/IEC 42001.

1. The AI Lifecycle Perspective

Security cannot be a gate at the end of development. OWASP defines controls across the entire lifecycle:

A. Design & Development

  • Risk Assessment: Conduct an AI-specific risk assessment (analogous to a DPIA) before a single line of code is written. Define the intended-use cases and the foreseeable misuse cases.
  • Supply Chain Security: Vet foundation models and datasets. Just as you scan open-source libraries (SCA), you must scan model weights (e.g., for pickle bombs) and datasets (for poisoning).
  • Privacy by Design: Determine if personal data is actually needed for training or fine-tuning. If not, minimize it.
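The supply-chain point above is concrete enough to sketch in code. A pickle-serialized model file can execute arbitrary code on load, so one common check is to walk the pickle opcode stream and flag anything that can import or call objects. This is a minimal illustration using the standard library's `pickletools`; the opcode denylist is an assumption, not an exhaustive detector, and real scanners (or safer formats like safetensors) should be preferred in production.

```python
import io
import pickletools

# Opcodes that can trigger imports or code execution when a pickle is loaded.
# Any of these in an untrusted model file is a red flag.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(data: bytes) -> list[str]:
    """Return potentially dangerous opcodes found in a pickle stream."""
    findings = []
    for opcode, arg, _pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in SUSPICIOUS_OPCODES:
            findings.append(f"{opcode.name}: {arg!r}")
    return findings

# Example: a classic malicious pickle that calls os.system on load.
malicious = b"cos\nsystem\n(S'echo pwned'\ntR."
print(scan_pickle(malicious))
```

Running the scan on a benign pickle (e.g. a plain list) returns no findings, while the malicious example above surfaces the `GLOBAL` import and `REDUCE` call without ever unpickling the payload.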

B. Training & Fine-Tuning

  • Data Governance: Ensure training data is clean, legally obtained (copyright/consent), and free of poisoning attacks.
  • Model Hardening: Use techniques like Adversarial Training to make the model robust against known attacks.

C. Operation & Monitoring

  • Continuous Monitoring: AI systems drift. Monitoring is not just for uptime; it's for behavioral drift (is the model becoming more toxic?) and attack detection.
  • Incident Response: Standard IR playbooks fail for AI. You need specific runbooks for "Prompt Injection," "Model Theft," and "Hallucination Storms."
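The drift-monitoring idea above can be reduced to a simple pattern: track a behavioral metric over a rolling window and alert when it departs from its baseline. The sketch below uses a toxicity-flag rate with an illustrative window size and alert threshold; a production system would track several metrics and use a proper statistical test.

```python
from collections import deque

class DriftMonitor:
    """Flags behavioral drift when a rolling metric departs from baseline.

    The metric here is a toxicity rate (fraction of flagged responses);
    the window size and 2x alert factor are illustrative choices.
    """

    def __init__(self, baseline_rate: float, window: int = 500, factor: float = 2.0):
        self.baseline = baseline_rate
        self.factor = factor
        self.events = deque(maxlen=window)

    def record(self, is_toxic: bool) -> bool:
        """Record one model response; return True if drift is detected."""
        self.events.append(is_toxic)
        if len(self.events) < self.events.maxlen:
            return False  # not enough data to judge yet
        rate = sum(self.events) / len(self.events)
        return rate > self.factor * self.baseline

monitor = DriftMonitor(baseline_rate=0.01, window=200)
```

Each incoming response is scored by a classifier and fed to `monitor.record(...)`; a `True` return is the trigger that hands off to the AI-specific incident-response runbooks described above.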

2. ISO/IEC 42001 & Emerging Standards Alignment

The ISO/IEC 42001 standard is the "ISO 27001 for AI." There is strong potential that the EU will accept ISO 42001 as an effective risk management practice for AI Act compliance.

Other standards are also evolving to cover AI security:

  • IRAP (Australia): Incorporating AI security controls for government systems.
  • HITRUST (Healthcare): Building specific AI controls for health data.
  • NIST AI RMF: The de facto standard for US-based risk management.

Strategic Advantage: Organizations that adopt these frameworks early (ahead of hard regulation) gain a competitive advantage. It is easier to map existing robust controls to new laws than to build compliance from scratch under a deadline.

OWASP controls map directly to these frameworks:

| OWASP Control Domain | ISO 42001 Clause | CISO Action |
| --- | --- | --- |
| AI Program Management | Context of the Organization (Clause 4) | Define the scope: which AI systems are "high risk"? |
| Risk Management | Planning (Clause 6) | Maintain a dynamic "AI Risk Register" that updates as models evolve. |
| Data Security | Operation (Clause 8) | Implement strict RBAC for training data and RAG knowledge bases. |
| Supplier Management | Support (Clause 7) | Mandate AI security addendums in contracts with LLM providers (OpenAI, Anthropic). |
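The "dynamic AI Risk Register" row lends itself to a concrete sketch. The schema below is a hypothetical minimal structure, not a standard: each entry ties a risk to a system and owner, scores it, and is re-ranked whenever models change so the register stays current.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical minimal schema for a living AI Risk Register (ISO 42001 Clause 6).
@dataclass
class AIRisk:
    system: str            # which AI system the risk attaches to
    description: str
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (negligible) .. 5 (severe)
    owner: str
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    AIRisk("support-chatbot", "Prompt injection exfiltrates CRM data", 4, 4, "CISO",
           ["input filtering", "output DLP scan"]),
    AIRisk("resume-screener", "Bias against protected groups", 3, 5, "HR Lead"),
]

# Re-rank on every model change so the highest-scoring risks surface first.
register.sort(key=lambda r: r.score, reverse=True)
print([(r.system, r.score) for r in register])
```

Keeping this as structured data rather than a spreadsheet makes it easy to wire review-date checks and re-scoring into the same CI pipeline that deploys model changes.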

3. Key Governance Artifacts

To demonstrate compliance (whether for EU AI Act or SOC 2), you need these three documents:

  1. AI Use Policy: An Acceptable Use Policy (AUP) defining allowed tools (e.g., "Enterprise Copilot" vs. "Public ChatGPT") and data types.
  2. Model Cards (System Cards): Documentation for every deployed model, detailing its limitations, training data, and known biases.
  3. Algorithmic Impact Assessment: For high-risk use cases, a formal document analyzing potential harm to users (bias, safety).
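Of the three artifacts, the model card is the most mechanical to produce. The sketch below shows one plausible shape as machine-readable JSON; the field names and example values are assumptions for illustration, not a standardized schema.

```python
import json

# Illustrative model card; field names and values are assumptions, not a standard.
model_card = {
    "model_name": "support-assistant-v2",
    "base_model": "(foundation model name)",
    "intended_use": "Answer customer support questions from the product KB",
    "out_of_scope": ["legal advice", "medical advice"],
    "training_data": {"sources": ["internal KB"], "contains_pii": False},
    "known_limitations": ["may hallucinate product version numbers"],
    "evaluations": {"toxicity_rate": 0.002, "groundedness": 0.91},
    "last_reviewed": "2025-01-15",
}

print(json.dumps(model_card, indent=2))
```

Storing cards as JSON alongside the model artifact means an auditor, a deployment gate, or the AI Inventory can all consume the same document.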

CISO Takeaway

Governance is not paperwork; it is visibility. You cannot secure "Shadow AI."

Your first step is to establish an AI Inventory—a living database of every model, agent, and AI-enabled SaaS tool in use across the organization. Only then can you apply General Controls.
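At its core, the inventory exercise is a set difference: tools discovered in the environment (via CASB logs, SSO records, or expense reports) minus tools that have been formally approved. All names and fields below are illustrative.

```python
# Minimal sketch of an AI Inventory: approved entries vs. tools discovered
# in the wild (e.g. from CASB, SSO, or expense data). Names are illustrative.
approved = {
    "enterprise-copilot": {"type": "saas", "owner": "IT", "risk_tier": "medium"},
    "support-chatbot":    {"type": "internal-llm", "owner": "CX", "risk_tier": "high"},
}

discovered = {"enterprise-copilot", "support-chatbot",
              "public-chatgpt", "pdf-summarizer-ai"}

# Anything discovered but not approved is Shadow AI: triage these first.
shadow_ai = sorted(discovered - approved.keys())
print(shadow_ai)
```

The real work is keeping `discovered` fresh through automated feeds; once that loop exists, the General Controls above have a defined population of systems to apply to.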


Continue to the next section: Threats Through Use