# GuardionAI

## Docs

- [Product Updates](https://docs.guardion.ai/changelog.md): Learn about the latest GuardionAI product updates.
- [Custom](https://docs.guardion.ai/custom.md): Define your own custom safety policies for AI content evaluation
- [Overview](https://docs.guardion.ai/detectors.md): Learn about Guardion's AI runtime guardrails for safe and reliable AI systems.
- [Detokenize](https://docs.guardion.ai/detokenize.md): Restore original data from vaulted tokens using the Detokenize endpoint.
- [Detokenize (reveal) API](https://docs.guardion.ai/detokenize-api.md): Inbound process to restore original sensitive data from vaulted tokens (e.g., [CONTACT_HASH]). Requires valid authorization to reveal raw PII.
- [Security Gateway](https://docs.guardion.ai/gateway.md): A specialized gateway to enforce runtime guardrails, PII redaction, and policies on any LLM and MCP call
- [Grounding](https://docs.guardion.ai/grounding.md): Detect hallucinations and ungrounded claims in RAG and agentic workflows
- [Guard API](https://docs.guardion.ai/guard-api.md): The primary entry point for real-time evaluation. Scans messages against your policies and returns a breakdown of violations and a redacted correction choice.
- [Guardion-1-8B](https://docs.guardion.ai/guardion-1-8b.md): Multilingual AI safety judge for grounding, hallucination detection, and custom policy evaluation
- [Prompt Security](https://docs.guardion.ai/injection.md): Prompt injection and jailbreak detection powered by ModernGuard models
- [CrewAI](https://docs.guardion.ai/integrations/crewai.md): Learn how to integrate GuardionAI with CrewAI for real-time AI guardrails and policy enforcement in your agent workflows.
- [Guardion SDK](https://docs.guardion.ai/integrations/guardion-sdk.md): Protect your LLM applications from prompt injection and misuse with Guardion's AI Firewall SDK.
- [LangChain](https://docs.guardion.ai/integrations/langchain.md): Learn how to integrate GuardionAI with LangChain for real-time AI guardrails and policy enforcement in your LLM applications.
- [LangGraph](https://docs.guardion.ai/integrations/langgraph.md): Learn how to integrate GuardionAI with LangGraph for real-time AI guardrails and policy enforcement in your agent workflows.
- [LiteLLM](https://docs.guardion.ai/integrations/lite-llm.md): Learn how to integrate GuardionAI with LiteLLM for real-time AI guardrails and policy enforcement in your LLM routing workflows.
- [OpenAI Agents SDK](https://docs.guardion.ai/integrations/openai-agents-sdk.md): Learn how to integrate GuardionAI with the OpenAI Agents SDK for real-time AI guardrails and policy enforcement.
- [Introduction](https://docs.guardion.ai/introduction.md)
- [Logs API](https://docs.guardion.ai/logs-api.md): Query historical evaluation data, filtered by application, session, or time range for auditing and debugging.
- [Overview](https://docs.guardion.ai/models.md): Learn about Guardion's AI safety models powering runtime guardrails.
- [Content Moderation](https://docs.guardion.ai/moderation.md): Classify and filter unsafe or policy-violating content
- [Moderation v0](https://docs.guardion.ai/moderation-model.md): Model card for Content Moderation
- [ModernGuard v1](https://docs.guardion.ai/modern-guard.md): Multilingual and ultra-fast prompt attack detector for AI agent security
- [OpenAI Compatible API](https://docs.guardion.ai/openai-compatible.md)
- [Data Protection (PII)](https://docs.guardion.ai/pii.md): Identifies, classifies, and manages the exposure of Personally Identifiable Information (PII) in LLM inputs and outputs.
- [PII v0](https://docs.guardion.ai/pii-model.md): Model card for PII detection (Data Protection)
- [Applications](https://docs.guardion.ai/platform/applications.md): Organize logs and guardrail policies by application; assign reusable policies to apps; evaluate via API with an application ID.
- [Feedbacks](https://docs.guardion.ai/platform/feedbacks.md): Improve your AI guardrails through our feedback loop system.
- [Logs & Investigation](https://docs.guardion.ai/platform/investigation.md): Monitor, analyze, and respond to AI system activity with comprehensive logging and feedback tools.
- [Policies](https://docs.guardion.ai/platform/policies.md): Define guardrails once and reuse them across multiple applications. Configure detectors, targets, sensitivities, and safe responses.
- [Account Security](https://docs.guardion.ai/platform/security.md): Strengthen your Guardion account with multi-factor authentication.
- [Track usage & limits](https://docs.guardion.ai/platform/usage.md): Track API requests and token consumption trends.
- [Policy API](https://docs.guardion.ai/policy-api.md): Defines a new policy with specific detectors (PII, Injection, etc.) and enforcement thresholds.
- [Quickstart](https://docs.guardion.ai/quickstart.md): Run your first Guardion runtime control in under 2 minutes.
- [Support](https://docs.guardion.ai/support.md)

## OpenAPI Specs

- [openapi](https://docs.guardion.ai/openapi.json)