Build AI for Production
Secure by Default

The only production-ready open-source AI guardrails platform for enterprise AI applications.

OpenGuardrails is an open-source runtime AI security layer that protects the entire inference pipeline — prompts, models, agents, and outputs — with policy-driven controls. It defends against prompt injection, jailbreaks, PII & data leaks, and unsafe content, and is designed for real production workloads.

Fastest Way to Evaluate

  • •Start with the free SaaS platform for instant evaluation
  • •Same codebase as the open-source release
  • •No vendor lock-in, no hidden logic

Go Open Source When Ready

  • •Download and deploy from GitHub
  • •Pull models from Hugging Face
  • •Fully self-hosted, production-ready

Built on Solid Research

  • •Technical report & evaluation at arXiv:2510.19169
  • •This is not a black box SaaS. This is an open, verifiable system.

Try instantly on our free SaaS, then deploy the same open-source code in your own production environment.

See how OpenGuardrails protects your AI applications in real-time

119+
Languages Supported
SOTA
Performance
274.6ms
P95 Latency
Apache2.0
Open Source

Why Enterprises Choose OpenGuardrails

The open-source standard for securing enterprise AI inference at runtime

Enterprise-grade, Production-ready

Built for real production environments — not demos.

  • ✓Designed for high-throughput, low-latency inference paths
  • ✓Supports policy-based enforcement, auditability, and observability
  • ✓Proven in enterprise PoC and production deployments

"We've already crossed the gap from lab to production."

Open-source by Default, No Lock-in

Fits naturally into modern open-source AI stacks.

  • ✓Works with open-source models, AI agents, and workflows
  • ✓Seamlessly integrates with tools like n8n, Dify, gateways, and internal platforms
  • ✓Fully self-hostable, extensible, and transparent

"If your AI stack is open-source, your guardrails should be too."

Policy-driven, Customizable by Design

Your policies. Your risks. Your guardrails.

  • ✓Built-in scanners for security, compliance, and data protection
  • ✓Custom scanners & custom model training for enterprise-specific rules
  • ✓Unified policy layer across prompts, agents, tools, and outputs

"This is not content moderation — this is enterprise control."

OpenGuardrails is the open-source standard for securing enterprise AI inference at runtime.

Whether you're an enterprise AI developer, architect, or security lead — this is the only reasonable security layer choice.

State-of-the-Art Technology

Leading the industry in AI safety and guardrails performance

Industry-Leading Performance

OpenGuardrails Performance Results

For full details, check out the OpenGuardrails Tech Report

87.1%
English Prompt F1 Score
vs. 84.3% (Qwen3Guard-8B)
88.5%
English Response F1 Score
vs. 80.5% (Qwen3Guard-8B)
97.3%
Multilingual Prompt (RTP-LX)
vs. 85.0% (Qwen3Guard-4B)
97.2%
Multilingual Response
vs. 78.1% (Qwen3Guard-4B)

Key Innovations

Configurable Policy Adaptation

Dynamic per-request policy configuration with continuous probabilistic sensitivity thresholds (τ ∈ [0,1]). Unlike static "strict/loose" modes, OpenGuardrails enables real-time sensitivity tuning for safety governance.

Unified LLM Architecture

Single 14B→3.3B (GPTQ quantized) model handling both content-safety and model-manipulation detection. Achieves superior semantic understanding compared to hybrid BERT-style architectures while maintaining production-level efficiency.

Multilingual Excellence

Robust performance across 119 languages and dialects, with SOTA results on English, Chinese, and multilingual benchmarks. Includes OpenGuardrailsMixZh 97k dataset contribution under Apache 2.0 license.

Production-Ready Platform

First fully open-source guardrail system with both large-scale safety LLM and deployable platform. RESTful APIs, Docker deployment, and modular components for seamless private/on-premise integration.

Enterprise-Ready Features

Everything you need to secure AI applications across any cloud or deployment

Multi-Cloud Support

Protect AI models across AWS, Azure, GCP, and on-premise deployments. Works with OpenAI, Anthropic, open-source models, and custom LLMs - wherever they run.

Developer-First API

RESTful API with SDKs for Python, Node.js, Java, and Go. Get started in minutes with comprehensive docs and code examples.

Prompt Injection Defense

Advanced protection against jailbreaks, prompt injection, code-interpreter abuse, and malicious code generation attempts.

Content Safety Detection

Detect harmful, hateful, illegal, or sexually explicit content across 12 risk categories with configurable sensitivity thresholds.

Data Leakage Prevention

Identify and redact sensitive personal and organizational information using NER pipelines and regex-based detection.

Real-Time Performance

P95 latency of 274.6ms ensures your applications stay fast. High concurrency support for production workloads.

Latest from the blog

How teams are shipping safer AI

Release notes, field insights, and security research from the OpenGuardrails team and partners.

View all posts

Nov 5, 2025

Unified Guardrails for Real-World AI: Configurable, Scalable, and Open Source

Help Net Security spotlighted how OpenGuardrails unifies prompt-attack defense, moderation, and sensitive-data protection in one configurable, scalable, open-source guardrail stack ready for production workloads.

Thomas Wang, CEO, OpenGuardrails7 min read
Read article