## What this covers
LangChain applications and LangGraph workflows that call LLM providers through `ChatOpenAI`, `ChatAnthropic`, `ChatGoogleGenerativeAI`, `AzureChatOpenAI`, or `ChatBedrock`. TrustGate inspects each LLM hop independently, not the LangChain orchestration itself, so policies apply to every model call the chain or graph makes.
- Surface: Gateway
- Who is this for: Python (`langchain`, `langgraph`) and TypeScript (`@langchain/core`, `@langchain/langgraph`) stacks.
## Architecture
Each `llm.invoke(...)` call becomes a discrete request in the Gateway, with its own route match and detector decisions.
## Prerequisites
- A Gateway integration with Routes for every provider your chain touches (OpenAI, Anthropic, etc.).
- A Gateway API key with permission to call those routes.
## Wire it up
Swap `base_url` / `baseURL` and `api_key` / `apiKey` on each model client. No other changes to prompts, tools, or graph structure are needed.
### LangChain (Python)
### LangChain (TypeScript)

The TypeScript clients take the same two overrides (`baseURL` and `apiKey`); see each model's constructor options for where they go.
### LangGraph
LangGraph nodes use the same LangChain model clients. Configure them once and pass them into your graph as usual.

## Tools and agents

For tool-calling agents (`create_react_agent`, `AgentExecutor`, LangGraph prebuilt agents), the LLM hop is protected by the Gateway. To also protect tool execution (for example, a tool that calls an internal HTTP API), wrap that tool's endpoint with a separate Gateway route or an API integration.
## Verify
- Run a chain or graph that makes at least one LLM call.
- Open Runtime → Explorer.
- You should see one entry per LLM hop. Chains with multiple steps produce multiple entries.
- To correlate the hops from one run, send a conversation identifier (for example a `conversation_id` header) on each model client and filter on it in the Explorer.
## Policies to turn on first
- Prompt security — jailbreak and prompt injection on user input and retrieved documents (critical for RAG chains).
- Data protection & masking — PII and secrets on both sides.
- Tool-guard (when using agents) — validate tool arguments before they reach the LLM’s action plan.
- Context security / RAG poisoning — on chains that retrieve untrusted content.
## Limitations
- Per-hop inspection: detectors run on each LLM call independently. Multi-turn context is inferred from the messages you pass in; use a conversation header to correlate.
- Streaming: streamed responses are inspected once the stream completes; `Mask` and `Block` apply to the final consolidated payload.
- Tool execution: LangChain tool calls that hit external APIs are not automatically covered. Wrap those endpoints with their own Gateway or API integration if they handle sensitive data.
- Custom runnables: anything that bypasses the standard chat models (direct `httpx` calls, custom providers) will not be inspected unless it also uses the Gateway base URL.