AI Runtime Guardrails
Guardion provides multiple guardrails to protect your AI systems from various threats and ensure safe, reliable outputs.Available Guardrails
Prompt Security
Detect and block prompt injections, jailbreaks, and context hijacking attempts. Backed by ModernGuard models.
Data Protection
Identify and control exposure of personally identifiable information across inputs and outputs.
Content Moderation
Classify and filter unsafe or policy-violating content across multiple safety categories.
Grounding
Detect hallucinations and verify that AI responses are grounded in provided context, documents, or tool results.
Custom
Define your own safety criteria and policies for domain-specific AI content evaluation.
Guardrail Configuration
Each guardrail can be configured with custom thresholds and policies through the Guardion dashboard or API.How do guardrails work?
How do guardrails work?
Guardion’s guardrails use advanced machine learning models to analyze inputs and outputs, identifying patterns that match known attack vectors, unsafe content, or ungrounded claims.
Can I customize detection thresholds?
Can I customize detection thresholds?
Yes, each guardrail allows for custom confidence thresholds to balance security with usability for your specific use case.
How do I implement guardrails?
How do I implement guardrails?
Guardrails can be implemented via our API or SDK. Check our Quickstart guide for implementation details.