Product updates, security research, and field lessons from the OpenGuardrails team.
February 21, 2026
New Large-Scale OpenClaw Malware Campaign Spreading on ClawHub
OpenGuardrails has identified a new, rapidly spreading malware campaign targeting the OpenClaw ecosystem through the ClawHub skill community. We are naming this threat Clawhub.Trojan.LiuComment.
OpenGuardrails Team · Security Research · 8 min read
The First Principle of Cybersecurity Has Not Changed — But the Cost Law Has Been Rewritten by AI
When Claude Code Security was announced on February 20, 2026, global cybersecurity stocks dropped almost immediately. The market understood: the first principle has not changed, but the cost law governing the entire industry just did.
OpenGuardrails Team · Security Research · 14 min read
OG-OpenClawGuard v1: Protecting AI Agents from Hidden Prompt Injection in Long Content
AI agents now read emails, browse web pages, and process documents autonomously. But what happens when attackers hide malicious instructions inside that long-form content? Today, we release OG-OpenClawGuard v1 — an open-source Guard Agent plugin for OpenClaw that detects and blocks indirect prompt injection attacks in real time.
OpenGuardrails Team · Product Announcements · 5 min read
Introducing OG Personal: Guardrails for Your Personal AI Assistant
Personal AI assistants now execute shell commands, browse the web, and send messages — but most users have zero visibility into what their agents can actually do. Today, we're introducing OG Personal, the first guard agent designed specifically for personal AI assistants.
OpenGuardrails Team · Product Announcements · 6 min read
Your LLM Is Your Company's Second Brain — But Do You Know What It's Leaking?
Large Language Models have become the second brain of modern enterprises. But in real enterprise environments, one uncomfortable question keeps surfacing: do we actually know how much sensitive data is being sent to external LLMs — unintentionally?
OpenGuardrails Announces the AI-RSMS Community Standard Draft
A global call to shape AI Runtime Security together. OpenGuardrails announces the AI Runtime Security Management System (AI-RSMS) — an open, community-driven standard draft focused on securing AI systems during runtime.
OpenGuardrails Team · AI Runtime Security Initiative · 8 min read
OpenGuardrails 4.5.0: Direct Model Access for Fast Private Deployment POCs
OpenGuardrails 4.5.0 introduces Direct Model Access, a privacy-first feature that lets enterprises quickly stand up private-deployment POCs by pointing at our SaaS models without logging any data. Deploy locally, access models remotely, keep everything private.
Unified Guardrails for Real-World AI: Configurable, Scalable, and Open Source
Help Net Security spotlighted how OpenGuardrails unifies prompt-attack defense, moderation, and sensitive-data protection in one configurable, scalable, open-source guardrail stack ready for production workloads.