Prompt security
Jailbreaks and direct / indirect prompt injection on the input side of the model.
Content moderation
Toxicity, keyword, moderation, and language controls on prompts and responses.
Data protection & masking
PII, health data, secrets, and custom entities — detect and mask.
Application security
CORS, injection protection, IP allow / deny, and code safety for generated output.
Document analyzer
The same PII and jailbreak detectors applied to uploaded files and attachments.
URL analyzer
Fetch and inspect the content behind URLs in prompts before the model sees it.
Agent & MCP security
Tool permissions, guardrails, selection safety, and budget limits for agent workflows.
How they compose
Each capability exposes signals that policies consume. A single policy can combine multiple detectors — for example: “If the request contains PII and the route goes to an external provider, mask and log.” See Policies & Enforcement for the decision model.
Where they enforce
Detectors are surface-agnostic — the same PII or jailbreak classifier can run on the Gateway, Browser, API, or Endpoint surface. Picking the right surface is a separate decision: see Enforcement surfaces.
What to read next
Policies & Enforcement
Turn these signals into Allow / Log / Mask / Block decisions.
Content analyzers
Document and URL analyzers — inspect files and linked pages before they reach the model.
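As a concrete illustration of the composition model described under “How they compose,” here is a minimal sketch of a policy that combines two detector signals into a decision. All names here (`Request`, `detect_pii`, `evaluate`, the route values) are hypothetical and exist only to show the pattern — they are not this product's API, and a real deployment would use the platform's detectors rather than a toy pattern match.

```python
from dataclasses import dataclass

@dataclass
class Request:
    text: str
    route: str  # hypothetical routing label, e.g. "internal" or "external"

def detect_pii(text: str) -> bool:
    # Toy stand-in for a PII detector signal; a real system would
    # run a classifier or entity recognizer, not a substring check.
    return "ssn:" in text.lower()

def evaluate(req: Request) -> list[str]:
    # Policy: "If the request contains PII and the route goes to an
    # external provider, mask and log." Otherwise allow.
    if detect_pii(req.text) and req.route == "external":
        return ["mask", "log"]
    return ["allow"]

print(evaluate(Request("ssn: 123-45-6789", "external")))  # ['mask', 'log']
print(evaluate(Request("hello", "external")))             # ['allow']
```

The point is only the shape: detectors emit boolean or scored signals, and a policy is a predicate over those signals plus routing context that maps to an action list (Allow / Log / Mask / Block).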