FilterPrompt — AI Firewall for LLM Applications
FilterPrompt is a drop-in AI firewall proxy for LLM apps. It inspects every prompt and response across OpenAI, Anthropic, Google Gemini, Azure OpenAI, and any OpenAI-compatible endpoint to block prompt injection, redact PII, stop jailbreaks, and enforce per-tenant policy in real time.
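To show the drop-in pattern, here is a minimal sketch assuming the proxy exposes an OpenAI-compatible endpoint; the base URL, tenant header, and key placeholder are hypothetical, not documented FilterPrompt configuration:

```python
# Minimal drop-in sketch: point an existing OpenAI SDK client at the proxy.
# The proxy URL and X-FilterPrompt-Tenant header are hypothetical placeholders;
# substitute the values from your own FilterPrompt deployment.
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",  # upstream/provider key; handling is deployment-specific
    base_url="https://filterprompt.internal/v1",        # hypothetical proxy endpoint
    default_headers={"X-FilterPrompt-Tenant": "acme-prod"},  # hypothetical tenant tag
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize our refund policy."}],
)
print(resp.choices[0].message.content)
```

No other application code changes: the client speaks the same API as before, and the proxy applies policy before forwarding the call upstream.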
What FilterPrompt does
FilterPrompt sits between your application and any LLM provider. Every request is scored by layered detectors — pattern rules, semantic models, structural validators, PII regex, and ML toxicity classifiers — before it reaches the model. Responses are filtered the same way on the return path.
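To make the layered-scoring idea concrete, here is a rough Python sketch; the detector functions, scores, and threshold are illustrative simplifications, not FilterPrompt's actual detectors:

```python
import re
from dataclasses import dataclass

@dataclass
class Verdict:
    score: float   # 0.0 = clean, 1.0 = certain threat
    detector: str

def pattern_rules(prompt: str) -> Verdict:
    # Pattern layer: flags a classic direct-injection phrase.
    hit = re.search(r"ignore (all )?previous instructions", prompt, re.I)
    return Verdict(1.0 if hit else 0.0, "pattern_rules")

def pii_regex(prompt: str) -> Verdict:
    # PII layer: flags US SSN-shaped strings.
    hit = re.search(r"\b\d{3}-\d{2}-\d{4}\b", prompt)
    return Verdict(0.9 if hit else 0.0, "pii_regex")

# Semantic models, structural validators, and ML classifiers would slot in here.
DETECTORS = [pattern_rules, pii_regex]
BLOCK_THRESHOLD = 0.8  # illustrative cutoff

def score_request(prompt: str) -> tuple[bool, list[Verdict]]:
    # Run every layer and block if any verdict crosses the threshold.
    verdicts = [d(prompt) for d in DETECTORS]
    blocked = any(v.score >= BLOCK_THRESHOLD for v in verdicts)
    return blocked, verdicts
```

Each layer returns an independent verdict, so a request can be blocked, redacted, or logged per detector rather than by a single opaque score.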
Core capabilities
- Prompt injection detection — direct, indirect, and tool-call payloads
- PII / DLP redaction on both prompts and responses (emails, SSNs, cards, secrets, custom regex); see the redaction sketch after this list
- Per-tenant rate limits, quotas, and model allowlists
- Verdict logs with full audit trail and replay
- Provider-agnostic: OpenAI, Anthropic, Gemini, Azure OpenAI, Bedrock, OpenRouter
- Sub-100ms median firewall latency
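As a concrete illustration of the PII / DLP capability above, the sketch below shows regex-plus-placeholder redaction of the kind applied to prompts and responses; the patterns and placeholder tokens are examples, not FilterPrompt's shipped rules:

```python
import re

# Illustrative redaction patterns only; a real deployment would rely on
# FilterPrompt's configured detectors plus any custom regex you add.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
]

def redact(text: str) -> str:
    # Replace each match with its placeholder so downstream logs and
    # model calls never see the raw value.
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Reach me at jo@example.com, SSN 123-45-6789."))
# -> "Reach me at [EMAIL], SSN [SSN]."
```

Because redaction runs on both the request and response paths, the same placeholders appear in verdict logs, keeping the audit trail free of raw PII.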
Why teams choose FilterPrompt
Most LLM products ship without input/output controls. The first prompt injection or PII leak becomes an incident. FilterPrompt gives security teams the same proxy + audit primitives they already use for HTTP, but tuned for LLM threats and the OWASP LLM Top 10.
