The Grounding detector identifies hallucinations and ungrounded claims in AI-generated responses. It verifies that assistant outputs are faithfully supported by the provided context, retrieved documents, or tool call results — critical for RAG pipelines and agentic workflows.

What it detects

  • Responses that contradict or are unsupported by retrieved context (groundedness)

Available models (versions)

  • Guardion-1-8B — multilingual grounding and hallucination detection
See the Guardion-1-8B model card for architecture details and benchmarks.

Detection categories

| Category | Status | Description |
| --- | --- | --- |
| GROUNDEDNESS | Available | Assistant's response includes claims or facts that are not supported by, or that contradict, the provided context. |
| CONTEXT_RELEVANCE | Coming soon | Retrieved context is not pertinent to answering the user's question or addressing their needs. |
| ANSWER_RELEVANCE | Coming soon | Assistant's response fails to address or properly respond to the user's input. |
| FUNCTION_CALL | Coming soon | Assistant's response contains function calls with syntax or semantic errors given the user query and available tools. |
| AGENT_CHAIN | Coming soon | Agent traces contain dangerous action chains or combinations, looping behaviors, or irrelevant actions that deviate from the intended task. |

Dashboard output

In the Guardion dashboard, the Grounding guardrail displays a binary SAFE / UNSAFE verdict for each evaluated response, along with the confidence score. This makes it easy to monitor grounding quality at a glance and drill into individual flagged responses for investigation.

Using the Grounding detector

// 1) Create or update a Grounding policy
await fetch("https://api.guardion.ai/v1/policies", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY"
  },
  body: JSON.stringify({
    id: "grounding-check",
    definition: "Detect hallucinations and ungrounded claims",
    threshold: 0.9, // L1 (Confident). Use 0.8 for L2, 0.7 for L3, 0.6 for L4
    detector: {
      model: "grounding",
      target: "assistant",
    }
  })
});

// 2) Evaluate using that policy
const response = await fetch("https://api.guardion.ai/v1/guard", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY"
  },
  body: JSON.stringify({
    messages: [
      { role: "context", content: "Our return policy allows returns within 14 days of purchase." }, // or "system" role
      { role: "user", content: "What is the return policy?" },
      { role: "assistant", content: "The return policy allows returns within 30 days." }
    ],
    policy: "grounding-check"
  })
});
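The exact response schema is not shown on this page, so the field names below (`flagged`, `score`) are assumptions; consult the API reference for the actual shape. A minimal sketch of turning a guard result into the dashboard-style SAFE / UNSAFE verdict might look like:

```javascript
// Hypothetical result shape: `flagged` (boolean) and `score` (number in [0, 1]).
// If `flagged` is absent, fall back to comparing the score to the policy threshold.
function summarizeVerdict(result, threshold = 0.9) {
  const score = result.score ?? 0;
  const flagged = result.flagged ?? score >= threshold;
  return {
    verdict: flagged ? "UNSAFE" : "SAFE",
    score,
  };
}

// Example with a mock result — the 30-day claim contradicts the 14-day context:
const mock = { flagged: true, score: 0.94 };
console.log(summarizeVerdict(mock)); // verdict is "UNSAFE"
```

A helper like this keeps the threshold comparison in one place if you evaluate several policies against the same response.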

Threshold levels

  • L1 (0.9): Confident
  • L2 (0.8): Very Likely
  • L3 (0.7): Likely
  • L4 (0.6): Less Likely
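The level names and values above come from this page; the helper function itself is illustrative, for keeping policy definitions consistent across your codebase:

```javascript
// Threshold presets from the list above (L1 is the strictest).
const THRESHOLDS = {
  L1: 0.9, // Confident
  L2: 0.8, // Very Likely
  L3: 0.7, // Likely
  L4: 0.6, // Less Likely
};

// Resolve a policy threshold by level name; defaults to L1 (Confident).
function thresholdFor(level = "L1") {
  const value = THRESHOLDS[level];
  if (value === undefined) throw new Error(`Unknown threshold level: ${level}`);
  return value;
}

// thresholdFor("L3") → 0.7
```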

Notes

  • Best suited for RAG pipelines where factual accuracy against retrieved documents is critical.
  • Combine with Injection and Moderation detectors for comprehensive runtime safety.
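One way to combine detectors is to register one policy per detector and send each to the policies endpoint shown earlier. The sketch below builds the request bodies; the `injection` and `moderation` model names, policy ids, and the `target: "user"` choice for input-side detectors are assumptions — check each detector's documentation:

```javascript
// Illustrative: one policy definition per detector. Only "grounding" is
// confirmed on this page; the other model names are assumed placeholders.
const detectors = ["grounding", "injection", "moderation"];

const policyBodies = detectors.map((model) => ({
  id: `${model}-check`,
  definition: `Runtime check using the ${model} detector`,
  threshold: 0.9, // L1 (Confident)
  detector: {
    model,
    // Grounding evaluates assistant output; input-side detectors
    // would typically target the user message (assumption).
    target: model === "grounding" ? "assistant" : "user",
  },
}));

// Each body can then be POSTed to https://api.guardion.ai/v1/policies
// with the same headers as in the example above.
```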