Detectors for AI runtime control

Guardion provides multiple detectors to protect your AI systems from various threats and ensure safe, reliable outputs.

Available Detectors

The detectors below are coming soon to the Guardion platform.

Content Moderation

Filter harmful, inappropriate, or unsafe content from both inputs and outputs.
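Conceptually, a content moderator scores or matches text against unsafe categories and blocks or flags matches. A minimal pattern-matching sketch of the idea (illustrative only; Guardion's production detectors are not simple blocklists, and the patterns here are placeholders):

```python
import re

# Placeholder blocklist; a real moderator uses trained classifiers
# with per-category scores, not a handful of regexes.
BLOCKED_PATTERNS = [
    re.compile(r"\bhow to make a weapon\b", re.IGNORECASE),
    re.compile(r"\bsteal credit card numbers\b", re.IGNORECASE),
]

def moderate(text: str) -> bool:
    """Return True if the text passes moderation, False if blocked."""
    return not any(p.search(text) for p in BLOCKED_PATTERNS)
```

The same check can run on both the user prompt (input guarding) and the model response (output guarding).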

Hallucination Detection

Identify uncertainty in AI responses using representation engineering techniques.

PII Detection

Automatically detect and redact personally identifiable information in prompts and responses.
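To illustrate the detect-and-redact flow, here is a minimal sketch using two regex recognizers. This is not Guardion's implementation; production PII detection combines many recognizers (NER models, checksums, context words) across more entity types:

```python
import re

# Two simple recognizers for illustration; labels and patterns are assumptions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Redacting before the prompt reaches the model, and again on the response, keeps PII out of both logs and outputs.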

Output Handling

Safely manage code execution and other potentially risky outputs.
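One common output-handling pattern is to statically inspect model-generated code before running it. A minimal sketch of that idea, assuming a small call allowlist (the allowlist and policy are illustrative, not Guardion's; a real sandbox would also restrict resources and isolate the process):

```python
import ast

# Illustrative allowlist of callable names considered safe to execute.
ALLOWED_CALLS = {"print", "len", "range", "sum"}

def is_safe_python(code: str) -> bool:
    """Reject code that fails to parse, imports modules, or calls
    names outside the allowlist."""
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id not in ALLOWED_CALLS:
                return False
    return True
```

Code that fails the check can be returned as text for the user to review instead of being executed.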

More Guardrails

Additional safety features coming to the Guardion platform.

Detector Configuration

Each detector can be configured with custom thresholds and policies through the Guardion dashboard or API.
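A configuration update typically sends a detector policy as JSON to the API. A minimal sketch of building such a payload; the field names (`detector`, `threshold`, `action`) are assumptions, so check the Guardion API reference for the actual schema:

```python
import json

def build_detector_config(detector: str, threshold: float, action: str) -> str:
    """Serialize a detector policy for submission to the configuration API.

    Field names are hypothetical placeholders, not the documented schema.
    """
    if not 0.0 <= threshold <= 1.0:
        raise ValueError("threshold must be in [0, 1]")
    return json.dumps(
        {"detector": detector, "threshold": threshold, "action": action}
    )

# Example: flag PII above a 0.8 confidence score and redact it.
config = build_detector_config("pii", 0.8, "redact")
```

The resulting JSON body would then be submitted through the dashboard or an authenticated API request.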
Need help configuring your detectors? Contact our Support Team.