ModernGuard: Advanced Prompt Attack Detection

Developed by industry experts with experience building enterprise-grade AI guardrails at Apple (Siri), Nubank, and other leading companies, ModernGuard is a specialized, modern transformer-encoder model designed to detect and prevent prompt attacks in real time. This enterprise-grade solution offers multilingual support and ultra-fast inference to protect GenAI systems across various domains.


Model Card

Modern Transformer-Encoder Architecture

  • Built on ModernBERT, a high-efficiency encoder
  • Features Rotary Positional Embeddings, Flash Attention, and memory optimizations
  • Supports 8K token context with low latency

⚡ Ultra-Fast Inference

  • Optimized for real-time streaming and in-line LLM applications
  • Achieves sub-50ms latency in production environments

Multilingual and Domain-Aware

  • Trained on data in 8+ languages
  • Covers banking, fintech, ecommerce, healthcare, and other verticals

🔐 Threat Intelligence Training + Continuous Updates

  • Pretrained on 1 trillion tokens
  • Fine-tuned on millions of simulated and real-world prompt attacks
    • Proprietary red teaming data generated by AI attackers + red team partners
    • AI threat databases & state-of-the-art prompt attack vectors
    • Diverse synthetic data generation for safe examples
    • Continuous updates with emerging threat patterns

Benchmark Results

This benchmark collects public and private threats from our red-teaming partners, together with an updated threat database drawn from the NVIDIA Garak and PromptFoo libraries. Our comprehensive evaluation demonstrates ModernGuard’s superior performance across diverse attack vectors.

The benchmark methodology includes:

  • Evaluation against 40+ attack classes
  • Cross-validation across multiple domains and languages
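As a reminder for reading the scores below, an F1-score is the harmonic mean of precision and recall. A minimal sketch in JavaScript (the `f1Score` helper is ours, for illustration only):

```javascript
// Standard F1-score from raw counts: the harmonic mean of precision and recall.
// tp = true positives, fp = false positives, fn = false negatives.
function f1Score(tp, fp, fn) {
  const precision = tp / (tp + fp);
  const recall = tp / (tp + fn);
  if (precision + recall === 0) return 0;
  return (2 * precision * recall) / (precision + recall);
}

// e.g. 90 attacks caught, 10 false alarms, 10 attacks missed:
console.log(f1Score(90, 10, 10)); // 0.9
```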

Overall F1-Scores

| Model | Overall F1-Score |
| --- | --- |
| guardion/Modern-Guard-v1 | 0.9718 |
| Lakera Guard | 0.8600 |
| protectai/deberta-v3-base-prompt-injection-v2 | 0.6008 |
| deepset/deberta-v3-base-injection | 0.5725 |
| meta-llama/Prompt-Guard-86M | 0.4555 |
| jackhhao/jailbreak-classifier | 0.5000 |

Did we miss another prompt injection detector model or solution? Please let us know, and we can add it to the evaluation as well.

Threat Category Coverage

| Threat Category | guardion/Modern-Guard-1 | meta-llama/Prompt-Guard-86M | protectai/deberta-v3-base-prompt-injection-v2 | deepset/deberta-v3-base-injection | jackhhao/jailbreak-classifier | lakera-guard |
| --- | --- | --- | --- | --- | --- | --- |
| Encoding | 0.972667 | 0.567333 | 0.530222 | 0.889556 | 0.000000 | 0.677778 |
| Prompt Injection | 0.968602 | 0.308043 | 0.755299 | 0.899980 | 0.142857 | 0.878889 |
| Jailbreaking | 0.981274 | 0.621297 | 0.360996 | 0.764824 | 0.000000 | 0.738333 |
| Exfiltration & Leakage | 0.999667 | 0.284000 | 0.587730 | 0.981667 | 0.000000 | 0.850000 |
| Evasion & Obfuscation | 0.994659 | 0.583764 | 0.453216 | 0.794332 | 0.000000 | 0.728889 |
| Code and Command Injection | 0.990200 | 0.474000 | 0.455200 | 0.796400 | 0.000000 | 0.808000 |
| Hard Negatives | 0.958000 | 0.754000 | 0.756000 | 0.014000 | 1.000000 | 0.840000 |
| Regular Content | 0.968000 | 0.379000 | 0.786000 | 0.222000 | 1.000000 | 0.940000 |

Benchmarks span 40+ attack classes including obfuscation (e.g. ANSI, ASCII), jailbreaks (e.g. DAN, Goodside), injections (e.g. SQL, shell), and real-world attacks observed in LLM deployments.

A comprehensive research paper detailing ModernGuard’s architecture, training methodology, and benchmark results will be published soon.


How to Use ModernGuard

Combine the ModernGuard detector with a guardrail policy so you can control and fine-tune detection thresholds for the specific risk level you want to manage.

💡 Example integration using a default policy

```javascript
const messages = [
  { role: "user", content: "Your user input here" }
];

// Evaluate the user message against the default guardrail policy
const response = await fetch("https://api.guardion.ai/v1/guard", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY"
  },
  body: JSON.stringify({ message: messages })
});

const result = await response.json();

if (result.flagged) {
  console.log("Threat detected:", result.reason);
} else {
  console.log("Prompt is safe to use");
}
```