- Discovers every AI resource in your environment — agents, models, datasets, IDEs, browser extensions, MCP servers, agent CLIs, and managed endpoints
- Assesses the security posture of each resource — tools, instructions, guardrails, authentication, access controls
- Monitors usage telemetry — conversations, token consumption, latency, errors, tool invocations
- Alerts on misconfigurations, missing guardrails, and policy violations
- Re-syncs on a configurable schedule to reflect changes in your environment
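The bullets above trace one sync cycle. A minimal sketch of the assess step — the stage names come from this page, but the assessment rule and data shapes are purely illustrative assumptions, not the product's actual logic:

```python
# Hypothetical sketch of the lifecycle described above. Stage names come from
# this page; the assessment rule and data shapes are illustrative.
STAGES = ["connect", "discover", "assess", "monitor", "alert"]

def assess(resources: dict) -> list[dict]:
    """Flag resources with no guardrails configured (illustrative rule)."""
    return [
        {"resource": name, "issue": "missing guardrails"}
        for name, cfg in resources.items()
        if not cfg.get("guardrails")
    ]

findings = assess({
    "support-agent": {"guardrails": []},        # flagged
    "qa-bot": {"guardrails": ["pii-filter"]},   # passes
})
```

In practice each discovered resource would carry far more posture fields (tools, instructions, authentication), but the shape — resources in, findings out — is the same.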
Concepts
How it works
The five-stage lifecycle — connect, discover, assess, monitor, alert — and how data flows from your environment.
Inventory
The unified catalog of every AI surface — agents, models, IDEs, browsers, extensions, MCP servers, configs, and endpoint hosts.
Risk & findings
How posture is scored, the catalog of finding types, and the triage workflow for resolving them.
Connect your first environment
Pick the platform you want to discover first. Each guide covers the exact permissions required, step-by-step setup, and what you gain (and lose) at each permission level.
Azure
Azure AI Foundry (v2), AI Hub, Azure OpenAI Classic (v1), and legacy ML Workspaces under one integration.
GCP Vertex AI
Vertex AI Reasoning Engines, models, datasets, and Model Armor guardrails.
Mistral AI
Mistral agents, models, document libraries, and native moderation policies.
M365 Copilot
Copilot Studio bots and Microsoft 365 Copilot agents via Dataverse and Microsoft Graph.
GitHub
Agent configs, MCP server definitions, and agent source code across your repositories.
Endpoint Discovery (MDM)
Deploy a read-only discovery script via Microsoft Intune or Kandji to inventory AI on managed endpoints.
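To make "read-only discovery" concrete, here is a minimal sketch of what such an inventory pass can look like. The artifact paths checked are illustrative assumptions — the actual script shipped for Intune/Kandji is product-specific:

```python
# Hypothetical read-only discovery sketch: inventory common AI tool artifacts
# under a home directory. The paths checked are illustrative assumptions, not
# the actual deployed script.
from pathlib import Path
import tempfile

AI_ARTIFACTS = [".cursor", ".continue", ".config/github-copilot", ".codeium"]

def scan_home(home: Path) -> list[str]:
    """Return the artifact paths that exist under `home`. Only reads metadata;
    never opens or modifies files."""
    return [p for p in AI_ARTIFACTS if (home / p).is_dir()]

# Demo against a throwaway directory so the sketch is self-contained.
with tempfile.TemporaryDirectory() as tmp:
    home = Path(tmp)
    (home / ".cursor").mkdir()
    found = scan_home(home)
```

The read-only property is the design point: the script only stats directories, so deploying it via MDM cannot alter endpoint state.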
Reference
Data handling
What is collected, what is never collected, where it’s stored, retention, and how to revoke access.
Pair with Runtime
Use TrustLens findings as the input to TrustGate Runtime enforcement policies.
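One way to picture the pairing: each posture finding maps to an enforcement rule. The finding types and policy shape below are hypothetical — TrustGate Runtime's real policy format is documented separately:

```python
# Hypothetical sketch of feeding posture findings into runtime enforcement.
# Finding types and the policy shape are illustrative, not the real format.
def policy_from_finding(finding: dict) -> dict:
    """Map a posture finding to a hypothetical runtime enforcement rule."""
    actions = {
        "missing_guardrails": "require_guardrail",
        "open_access": "deny_unauthenticated",
    }
    return {
        "target": finding["resource"],
        # Unrecognized finding types fall back to alerting without blocking.
        "action": actions.get(finding["type"], "alert_only"),
    }

rule = policy_from_finding(
    {"resource": "support-agent", "type": "missing_guardrails"}
)
```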