What this covers
AI-assisted IDEs and coding assistants where the LLM provider and orchestration are hosted by the vendor — you can’t intercept the call server-side. Enforcement happens on the developer’s managed device. This includes:
- Cursor, Windsurf, GitHub Copilot, Sourcegraph Cody, JetBrains AI Assistant, and similar native IDEs or IDE plugins.
- Web-based AI IDEs and chat apps used for coding (Cursor web, Copilot chat in the browser, claude.ai, chatgpt.com, gemini.google.com) — only the public, approved chat apps, not internal tools.
- Surfaces: Endpoint for native IDE traffic, Browser for web UIs.
Architecture
Prerequisites
- An Endpoint integration — provides the PAC file URL, the client certificate+key, and the TrustGate CA certificate used for dynamic TLS.
- A Browser integration — provides the MDM-distributable extension package for Chromium-based browsers.
- An MDM tool (Jamf, Intune, Kandji, Workspace ONE, or similar) with enrollment on developer machines.
- Chromium-based browsers only for the Browser surface (Chrome, Edge, Brave, Arc, Opera). Firefox and Safari are not supported.
Wire it up
Endpoint — native IDE traffic
- Create an Endpoint integration in the console. Note the PAC URL, and download the per-org client certificate + key and the TrustGate CA certificate.
- Using your MDM, push to managed developer machines:
- The TrustGate CA certificate into the system trust store (so dynamic TLS certificates are trusted by native apps).
- The client certificate + key into the system keychain (used for mTLS to identify your org).
- A proxy configuration that uses the PAC URL so the OS-level HTTPS traffic for the targeted IDEs is routed through the TrustGate Endpoint MITM.
- In TrustGate, configure the Endpoint policies to match the LLM endpoints those IDEs call (for example, `api.openai.com`, `api.anthropic.com`, `api.githubcopilot.com`, `api.cursor.sh`). Anything outside the scope defined on the integration is not intercepted.
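The PAC-based routing in the steps above can be sketched as follows. This is a hedged illustration, not the PAC file TrustGate actually serves: `proxy.trustgate.example:8443` is a placeholder, and the real proxy address and host list come from the PAC URL on your Endpoint integration.

```javascript
// Hosts to intercept; everything else goes DIRECT.
var LLM_HOSTS = [
  "api.openai.com",
  "api.anthropic.com",
  "api.githubcopilot.com",
  "api.cursor.sh"
];

// True if host is the domain itself or any subdomain of it.
// Written in old-style JS because some PAC evaluators predate ES6.
function matchesDomain(host, domain) {
  var suffix = "." + domain;
  return host === domain ||
    (host.length > suffix.length &&
     host.lastIndexOf(suffix) === host.length - suffix.length);
}

function FindProxyForURL(url, host) {
  for (var i = 0; i < LLM_HOSTS.length; i++) {
    if (matchesDomain(host, LLM_HOSTS[i])) {
      // Placeholder address; the real one comes from the Endpoint integration.
      return "PROXY proxy.trustgate.example:8443";
    }
  }
  return "DIRECT";
}
```

Because the PAC returns `DIRECT` for anything outside the listed hosts, non-LLM traffic never touches the proxy, which matches the scoping behavior described above.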
Browser — web AI apps
- Create a Browser integration. Download the MDM extension package.
- Push the extension to Chromium browsers via MDM force-install policies (for Chrome and Edge, use the `ExtensionInstallForcelist` enterprise policy).
- Configure which AI chat apps the extension activates on (for example, `chatgpt.com`, `claude.ai`, `gemini.google.com`, `cursor.com`, `github.com/copilot`).
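As a sketch of the force-install step, a managed Chrome policy on Linux can be a JSON file dropped into `/etc/opt/chrome/policies/managed/`; Windows and macOS use the equivalent registry or profile mechanism. The 32-character extension ID below is a placeholder — use the ID from the Browser integration’s MDM package, and the vendor-supplied update URL if the extension is not hosted on the Chrome Web Store.

```json
{
  "ExtensionInstallForcelist": [
    "abcdefghijklmnopabcdefghijklmnop;https://clients2.google.com/service/update2/crx"
  ]
}
```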
Verify
- On a test managed machine, open Cursor or use GitHub Copilot in an IDE. Send a prompt.
- Open Runtime → Explorer and filter by the Endpoint integration. The prompt and response should appear.
- Separately, open `chatgpt.com` in Chrome on the same machine and send a prompt. It should appear filtered by the Browser integration.
Policies to turn on first
- Source-code IP leakage — pattern and semantic detectors for proprietary identifiers, internal project names, and secret tokens.
- Data protection & masking — PII and credentials embedded in code or pasted prompts.
- Prompt security — jailbreak attempts originating from pasted content (indirect prompt injection).
- Keyword block — list of project codenames or customer identifiers that must not leave the device.
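Illustrative only: the kind of matching a keyword-block policy performs. The codenames below are hypothetical, and real blocklists are configured in the TrustGate console, not in code.

```javascript
// Hypothetical blocklist of project codenames / customer identifiers.
const BLOCKED_KEYWORDS = ["project-nimbus", "ACME-CUST-4412"];

// Returns the blocked keywords found in a prompt, matched
// case-insensitively so "Project-Nimbus" is still caught.
function findBlockedKeywords(prompt) {
  const lower = prompt.toLowerCase();
  return BLOCKED_KEYWORDS.filter((kw) => lower.includes(kw.toLowerCase()));
}
```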
Limitations
- Streaming on Endpoint: many AI IDE providers use streaming responses. `Mask` and `Block` actions apply to the final consolidated payload; the detector decision is enforced once the stream completes.
- Certificate pinning: some native vendors pin their API certificates and will reject the TrustGate dynamic TLS certificate. Consult the vendor’s enterprise documentation; a few IDEs offer an MDM-managed CA override.
- Browser scope: only Chromium-based browsers are supported. Firefox and Safari are not covered.
- Unmanaged devices: BYOD laptops without MDM are out of scope for both surfaces. Restrict access from unmanaged devices via SSO / conditional access upstream.
- Internal chat apps: these surfaces protect your developers when they use the approved public LLM tools. Homegrown internal chat apps should be covered with the Gateway or API surfaces, not Browser or Endpoint.
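The streaming limitation above can be sketched as follows. This is a hedged illustration of the described behavior, not TrustGate’s enforcement code: chunks are buffered and the detector runs once over the consolidated payload, since chunk-by-chunk scanning would miss matches that span chunk boundaries.

```javascript
// Buffer streamed response chunks, then run the detector once over
// the consolidated payload and decide the action at stream end.
function enforceOnCompletedStream(chunks, detector) {
  const payload = chunks.join("");   // consolidate the stream
  return detector(payload)           // single decision once complete
    ? { action: "Block", payload: null }
    : { action: "Allow", payload };
}

// Example detector: flags a secret token even when it is split
// across two chunks ("...sk-" + "live-...").
const hasSecret = (text) => text.includes("sk-live-");
```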