As AI systems move rapidly from experimentation into production, enterprises are facing a new class of security and governance challenges โ challenges that existing security frameworks were not designed to address.
Today, OpenGuardrails is announcing the release of the AI Runtime Security Management System (AI-RSMS) โ an open, community-driven standard draft focused specifically on securing AI systems during runtime.
AI-RSMS is designed as a practical, auditable framework for organizations operating AI applications, agents, and large language models (LLMs) in real-world enterprise environments.
We are publishing this standard as an open draft and inviting security, IT, risk, and compliance leaders worldwide to help shape its evolution and adoption.
Why AI Runtime Security Needs a Standard
Most enterprise security frameworks were created for systems that behave deterministically. AI systems do not.
Modern AI introduces runtime behaviors that are:
- <strong>Dynamic and context-dependent</strong>
- <strong>Driven by natural language and semantics</strong>
- <strong>Capable of tool invocation and autonomous actions</strong>
- <strong>Difficult to fully assess before deployment</strong>
As a result, many real risks emerge only while the AI system is running, including:
- Prompt injection and policy bypass
- Semantic privilege escalation
- Unauthorized tool or function calls
- Contextual data leakage
- Untraceable or non-auditable AI decisions
While organizations increasingly deploy "AI guardrails," there is no shared language or standard that defines:
- What runtime AI security controls should exist
- How those controls should be enforced
- How effectiveness should be evaluated
- How AI runtime behavior should be audited
AI-RSMS exists to address that gap.
What Is AI-RSMS?
AI-RSMS (AI Runtime Security Management System) is a management-system-oriented standard that defines how organizations should govern, enforce, and audit security controls during the runtime operation of AI systems.
Rather than prescribing specific models or tools, AI-RSMS focuses on:
- <strong>Runtime control points</strong> across the AI inference lifecycle
- <strong>Policy-driven enforcement</strong>, not static content filtering
- <strong>Clear accountability and auditability</strong>
- <strong>Compatibility with existing enterprise frameworks</strong>
The structure of AI-RSMS is intentionally aligned with management system standards such as:
- ISO/IEC 27001
- ISO/IEC 27701
- Emerging regulatory frameworks, including the EU AI Act
This alignment allows AI runtime security to be discussed, reviewed, and audited using language that enterprise security leaders already understand.
What AI-RSMS Covers (at a High Level)
The standard defines requirements and controls across key runtime stages, including:
- Prompt and input handling
- Context and memory management
- Model inference enforcement
- Tool and function invocation
- Output validation and delivery
- Evidence generation and audit readiness
Importantly, AI-RSMS makes a clear distinction between:
- <strong>Detection</strong> (identifying risk)
- <strong>Decision-making</strong> (applying policy)
- <strong>Enforcement</strong> (blocking, modifying, or routing behavior)
- <strong>Auditability</strong> (recording why actions were taken)
This separation reflects how real enterprise security programs operate.
A Community Standard, Not a Vendor Specification
AI-RSMS is not a proprietary framework and not tied to any single implementation.
It is published as an open, community-driven standard draft, initiated by OpenGuardrails but intended to evolve through:
- Enterprise security practitioners
- IT and platform leaders
- AI governance and risk teams
- Auditors, compliance professionals, and regulators
- Researchers and system architects
The full specification is authored in Markdown and openly available on GitHub:
https://github.com/openguardrails/openguardrails/tree/main/standards
We believe this openness is essential. AI systems are still evolving rapidly, and AI runtime security cannot be standardized behind closed doors.
Why We Are Calling for Global Participation
AI deployment patterns differ across industries, regions, and regulatory environments. No single organization has a complete view of AI runtime risk.
By opening AI-RSMS to global participation, we aim to:
- Incorporate real enterprise use cases
- Reflect diverse regulatory expectations
- Improve clarity, precision, and applicability
- Ensure the standard remains practical, not theoretical
Whether you are operating AI systems today or preparing to do so, your input matters.
How You Can Get Involved
We invite organizations and individuals to participate in the AI-RSMS community by:
- Reviewing the draft specification
- Providing feedback or proposing changes
- Contributing use cases or control refinements
- Participating in discussions on implementation and auditability
- Piloting the standard within your organization
AI-RSMS is intentionally published as a draft, because meaningful standards are built through iteration.
Looking Ahead
AI is becoming a core component of enterprise systems.
Security leaders are increasingly accountable not just for infrastructure, but for AI behavior itself.
AI-RSMS is an early step toward a shared foundation for managing that responsibility.
We believe AI runtime security will, over time, become as foundational as information security management systems are today.
We invite you to help shape that future.
๐ Download and review the AI-RSMS community standard draft: Download PDF
๐ Join the discussion and contribute on GitHub: View on GitHub
Together, we can move AI runtime security from ad-hoc controls to a shared, auditable, and trusted standard.