- What this covers — which surfaces apply and why
- Architecture — where TrustGate sits in the call path
- Prerequisites — what you need to create in the platform first
- Wire it up — minimal code or configuration change
- Verify — confirm traffic is flowing
- Policies to turn on first — recommended starting detectors
- Limitations — known caveats
Homegrown & frameworks
You control the LLM call. Integration is usually a base-URL and API-key swap to a Gateway.
LLM SDKs
Route direct calls from the OpenAI, Anthropic, Google, Azure OpenAI, and Bedrock SDKs through a Gateway.
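A minimal sketch of the base-URL and API-key swap, assuming a hypothetical Gateway address (`trustgate.example.com`) — the payload an OpenAI-compatible SDK sends is unchanged; only the URL and credential differ:

```python
# Sketch of the base-URL/API-key swap. The Gateway URL and keys below are
# hypothetical placeholders, not real TrustGate endpoints.

DIRECT_BASE = "https://api.openai.com/v1"
GATEWAY_BASE = "https://trustgate.example.com/v1"  # assumed Gateway address

def chat_request(base_url: str, api_key: str, model: str, messages: list) -> dict:
    """Build the HTTP request an OpenAI-compatible SDK would send."""
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {"model": model, "messages": messages},
    }

msgs = [{"role": "user", "content": "hi"}]
direct = chat_request(DIRECT_BASE, "sk-direct-key", "gpt-4o", msgs)
routed = chat_request(GATEWAY_BASE, "tg-gateway-key", "gpt-4o", msgs)

# The request body is identical; only the URL and credential change.
assert direct["json"] == routed["json"]
```

In the OpenAI Python SDK, for example, these map to the `base_url` and `api_key` constructor arguments; other SDKs expose equivalent endpoint and credential settings.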
LangChain & LangGraph
Point ChatOpenAI / ChatAnthropic clients at a Gateway and reuse your existing chains and graphs.
Managed agent platforms
The vendor hosts orchestration. Integration depends on whether you control the model call and how the agent is exposed to consumers.
AWS Bedrock Agents
Wrap InvokeAgent calls with pre- and post-inspection via the Actions API.
Developer & productivity tools
You don’t control the LLM call — the tool is a SaaS IDE or chat app that ships its own networking. Enforcement happens at the user’s browser or device via the Browser and Endpoint surfaces.
Cursor, Copilot & AI IDEs
Protect Cursor, GitHub Copilot, Windsurf, and similar AI IDEs with Endpoint (native traffic) and Browser (web UIs).
Don’t see your stack?
Any HTTP-based LLM or agent traffic can be wrapped with Runtime. If your platform isn’t covered here, pick the surface that matches your integration point:
- You control the LLM call → Gateway
- You expose an agent through your own API → API
- The agent is a web app your users open in a browser → Browser
- The agent is a native desktop app, IDE plugin, or CLI on a managed device → Endpoint
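Whichever surface fits, the enforcement pattern is the same wrap described above for InvokeAgent: inspect the request before it reaches the model and the response before it reaches the consumer. A minimal sketch, with a toy detector standing in for a real TrustGate policy check (the `inspect` function here is illustrative, not a platform API):

```python
# Pre-/post-inspection wrap around any LLM or agent call.
# `inspect` is a stand-in for a real policy check; a production version
# would call the platform's detection API instead of matching strings.

def inspect(text: str) -> bool:
    """Placeholder policy check: block text containing a toy marker."""
    blocked_markers = ["BEGIN SECRET"]  # illustrative detector only
    return not any(marker in text for marker in blocked_markers)

def guarded_call(prompt: str, model_call) -> str:
    """Run model_call(prompt) with inspection on both sides."""
    if not inspect(prompt):
        raise PermissionError("input blocked by policy")
    response = model_call(prompt)
    if not inspect(response):
        raise PermissionError("output blocked by policy")
    return response

# Usage with a stubbed model:
echo = lambda p: f"echo: {p}"
print(guarded_call("hello", echo))  # prints "echo: hello"
```

The same shape applies whether `model_call` is a direct SDK invocation (Gateway), a call to your agent's API (API), or traffic intercepted at the browser or device.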