What TrustGate is

TrustGate is the runtime layer of NeuralTrust. It sits between your applications and the LLMs or agents they call, inspecting every request and response and enforcing security, privacy, and governance policies in real time. Think of it as a WAF for AI: the same pattern as a web application firewall, adapted to the shape of prompts, generations, documents, and tool calls.

Why a runtime layer

Generative AI breaks assumptions that traditional app security relies on:
  • Inputs are open-ended natural language, not structured payloads.
  • Outputs are generated, not returned from a known system of record.
  • Agents autonomously chain calls across tools, APIs, and documents.
  • Sensitive data flows both ways: prompts can leak PII, responses can leak system prompts, model state, or extracted content.
Static controls (auth, WAF rules, network policy) cannot see these flows. TrustGate is purpose-built to inspect and govern them.

What it protects

Prompts

Detect jailbreaks, prompt injection, policy violations, and sensitive data before the model ever sees the request.

Responses

Moderate and mask generated content, block disallowed topics, and stop exfiltration of system prompts or confidential context.

Documents

Analyze uploaded files for PII, malicious instructions, and hidden content — including OCR for scanned material.

Tool calls

Govern agent and MCP tool usage: which tools can run, with which arguments, and under which conditions.

Where it sits

TrustGate is deployed inline between your application and the LLM provider (or agent runtime). Every call goes through it, and every decision — allow, log, mask, or block — is made at the edge of your AI workload.
[ Your app / agent ] ──▶ [ TrustGate Runtime ] ──▶ [ LLM / AI Gateway ]
                                └─▶ policies, detections, logs, alerts
Tool calls and MCP servers are not downstream of TrustGate — the agent talks to them directly. TrustGate governs them by inspecting the tools[] and tool_calls fields inside the LLM payload that does flow through it.
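To make this concrete, here is a minimal sketch of how a gateway-style policy check could vet tool usage by walking the tool_calls field of an OpenAI-format response payload. The allowlist, tool names, and payload shape are illustrative assumptions, not TrustGate's actual API or policy format.

```python
# Hypothetical sketch: governing tool usage by inspecting the tool_calls
# field of an OpenAI-format LLM payload as it flows through a gateway.
# ALLOWED_TOOLS and the tool names below are made-up examples.

ALLOWED_TOOLS = {"search_docs", "get_weather"}  # hypothetical allowlist policy

def vet_tool_calls(payload: dict) -> list[str]:
    """Return the names of tool calls that violate the allowlist."""
    violations = []
    for choice in payload.get("choices", []):
        for call in choice.get("message", {}).get("tool_calls", []):
            name = call.get("function", {}).get("name")
            if name not in ALLOWED_TOOLS:
                violations.append(name)
    return violations

response = {
    "choices": [{
        "message": {
            "tool_calls": [
                {"function": {"name": "get_weather", "arguments": "{}"}},
                {"function": {"name": "delete_records", "arguments": "{}"}},
            ]
        }
    }]
}

print(vet_tool_calls(response))  # → ['delete_records']
```

A real deployment would act on the violation (block, mask, or log the call) rather than just report it, and could apply the same inspection to the tools[] array on the request side.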

Apply it to your stack

How you wire TrustGate in depends on what you’re running. If you control the LLM call (SDKs, LangChain, LangGraph, custom backends), a single base-URL swap is enough. If you’re on a managed agent platform (Bedrock Agents, Copilot Studio, Agentforce), you wrap the invocation edge instead. For AI IDEs and chat apps, enforcement happens on the managed device. See the Integration guides for step-by-step wiring for the most common stacks.
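The base-URL swap can be sketched as follows: the application's request-building code stays unchanged, and only the base of the URL is repointed at the runtime layer. Both URLs below are placeholders for illustration, not real TrustGate endpoints.

```python
# Minimal sketch of the "base-URL swap" pattern. The application builds the
# same provider-shaped request either way; only the base URL differs.
# trustgate.example.com is a hypothetical gateway host, not a real endpoint.
from urllib.parse import urljoin

PROVIDER_BASE = "https://api.openai.com/v1/"        # direct to the LLM provider
GATEWAY_BASE = "https://trustgate.example.com/v1/"  # hypothetical inline gateway

def chat_completions_url(base: str) -> str:
    # Same path in both cases; swapping the base routes traffic through
    # the runtime layer without touching the rest of the call.
    return urljoin(base, "chat/completions")

print(chat_completions_url(PROVIDER_BASE))  # → https://api.openai.com/v1/chat/completions
print(chat_completions_url(GATEWAY_BASE))   # → https://trustgate.example.com/v1/chat/completions
```

With most LLM SDKs this amounts to setting a single client option (commonly named something like base_url) at construction time.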

What you can do from here

Architecture

See how requests flow through the gateway, plugins, and policy engine.

Enforcement surfaces

Compare Gateway, Browser, API, and Endpoint — and when to layer them.

Integration guides

Apply Runtime to SDKs, LangChain, Bedrock Agents, Copilot Studio, Agentforce, Cursor, and more.

AI Gateway

The NeuralTrust AI Gateway — the form a Gateway integration takes when its routes point directly at LLMs (embedded AI Gateway mode).

Policies & Enforcement

Author rules that allow, log, mask, or block traffic.

Security capabilities

Review the detectors and controls TrustGate ships with.