The lifecycle
Connect
You register an Integration once per environment using the credentials each provider expects:
- Cloud platforms (Azure, GCP) — service principal or service account with read-only RBAC roles
- SaaS providers (Mistral) — read-only API key
- Microsoft 365 — Azure AD service principal + Dataverse Application User
- Source code (GitHub) — GitHub App installation with `contents:read`
- Managed endpoints (Intune, Kandji) — signed Device Discovery script with a per-integration write-only token
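The common thread across these credentials is that every requested scope is read-only. A minimal sketch of that invariant, with assumed scope strings (the registration shape is illustrative, not the actual TrustLens API):

```python
# Illustrative only: per-provider credential scopes from the list above,
# plus a small check that each scope name signals read-only access.
READ_ONLY_MARKERS = ("read", "readonly", "viewer")

def is_read_only(scopes):
    """True if every scope/role name looks read-only."""
    return all(any(m in s.lower() for m in READ_ONLY_MARKERS) for s in scopes)

integrations = {
    "azure":   ["Reader"],                          # read-only RBAC role
    "github":  ["contents:read", "metadata:read"],  # GitHub App permissions
    "mistral": ["api.read"],                        # hypothetical scope name
}
all_read_only = all(is_read_only(s) for s in integrations.values())
```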
Discover
The connector enumerates every AI-related resource the credential can see:
- Agents, models, datasets, vector stores, document libraries
- Tools, instructions, knowledge bases bound to each agent
- Guardrail policies (RAI filters, Model Armor templates, Mistral moderation policies)
- Source files implementing agents and MCP servers
- Installed AI software, browser extensions, and MCP configs on managed devices
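One way to picture what the connector emits for each discovered resource is a typed record; the class and field names below are assumptions, not the actual schema:

```python
from dataclasses import dataclass, field

# Sketch of a normalized inventory record (names are illustrative).
@dataclass
class InventoryResource:
    kind: str            # "Agent", "Model", "MCPServer", "EndpointHost", ...
    resource_id: str
    provider: str        # "azure", "gcp", "mistral", "github", "mdm"
    attributes: dict = field(default_factory=dict)

agent = InventoryResource(
    kind="Agent",
    resource_id="asst_0001",
    provider="azure",
    attributes={"tools": ["code_interpreter"], "guardrails": ["rai_default"]},
)
```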
Each resource is normalized into a typed inventory entry (Agent, Model, MCPServer, EndpointHost, etc.).

Assess
Each discovered resource is scored against the security controls relevant to its type:
- Authentication — is the resource reachable without auth? Anonymous? Restricted to a group?
- Guardrails — are content filters or moderation policies attached?
- Tool exposure — what tools can the agent invoke? Are any high-risk (script execution, identity management)?
- Instructions — does the agent have a system prompt that constrains behavior?
- Data sources — what knowledge bases or files can it read?
- Configuration drift — has any of the above changed since the last sync?
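The checks above can be sketched as a per-resource scoring pass; the control names mirror the list, while the field names and pass/fail logic are illustrative assumptions:

```python
# Hedged sketch of control scoring; a real implementation would weight
# and type-scope these checks per resource kind.
CHECKS = {
    "authentication": lambda r: r.get("auth") not in (None, "anonymous"),
    "guardrails":     lambda r: bool(r.get("guardrails")),
    "instructions":   lambda r: bool(r.get("system_prompt")),
}
HIGH_RISK_TOOLS = {"script_execution", "identity_management"}

def assess(resource):
    """Return the list of controls this resource fails."""
    failures = [name for name, check in CHECKS.items() if not check(resource)]
    if HIGH_RISK_TOOLS & set(resource.get("tools", [])):
        failures.append("tool_exposure")
    return failures  # empty list == all relevant controls pass
```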
Monitor
Where the upstream platform exposes telemetry, the connector pulls usage signals to track adoption and flag anomalies. Telemetry is metadata only; no prompt or response content ever leaves your environment.
| Source | Signals |
|---|---|
| Application Insights (Azure v2) | Runs, tokens, latency, errors, tool-call breakdown |
| Azure Monitor (Azure AI Hub) | AgentRuns, AgentTokens, AgentThreads, AgentToolCalls |
| Cloud Monitoring + Cloud Trace (GCP) | Request count, latency, CPU/memory, tool-call spans |
| Mistral Conversations API | Per-agent runs, conversations, tool-call counts |
| Dataverse transcripts (M365) | Per-bot conversation count |
| Endpoint Discovery script | Inventory delta per device per run |
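The metadata-only guarantee can be pictured as a reduction step inside the connector; the field names here are assumptions based on the signal table above:

```python
# Sketch: telemetry events are reduced to counters and identifiers
# before anything leaves the connector.
CONTENT_FIELDS = {"prompt", "response", "messages"}

def to_metadata(raw_event):
    """Keep numeric/identifier signals, drop any conversational content."""
    return {k: v for k, v in raw_event.items() if k not in CONTENT_FIELDS}

event = {"agent_id": "asst_0001", "runs": 4, "tokens": 1912,
         "latency_ms": 830, "prompt": "confidential text"}
safe = to_metadata(event)   # counters survive, content does not
```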
Alert
Findings are surfaced three ways:
- Dashboards — Posture Risk Trend, Risk Distribution, Attack Surface by Type
- Insights panel — actionable summaries (e.g. “6 high-risk resources — Investigate”)
- Notifications — high-severity findings can be forwarded to the SIEMs configured under Audit & Compliance (Splunk, Elastic, IBM QRadar, Microsoft Sentinel, Datadog)
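A rough sketch of the notification path, filtering to high severity before forwarding; the event schema is an assumption, not any SIEM's actual ingest format:

```python
import json

# Illustrative forwarding payload for a high-severity finding.
def to_siem_event(finding):
    if finding["severity"] != "high":
        return None  # only high-severity findings are forwarded
    return json.dumps({
        "source": "trustlens",
        "resource": finding["resource_id"],
        "control": finding["control"],
        "severity": finding["severity"],
    })

evt = to_siem_event({"resource_id": "asst_0001",
                     "control": "guardrails", "severity": "high"})
```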
Data flow
- Inbound from your environment: structured metadata only (resource names, configs, telemetry counters).
- Outbound to your environment: nothing. There is no return channel that mutates your resources.
- Outbound from the platform: SIEM forwarding for high-severity findings, if configured.
Sync cadence
| Integration | Default cadence | Tunable |
|---|---|---|
| Azure | Every 6 hours | Yes |
| GCP Vertex AI | Every 6 hours | Yes |
| Mistral AI | Every 6 hours | Yes |
| M365 Copilot | Every 12 hours | Yes |
| GitHub | On-demand + every 24 hours | Incremental: skipped when HEAD SHA unchanged |
| Endpoint Discovery (MDM) | Driven by MDM script schedule (typical: daily) | Yes — change the MDM script frequency |
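The incremental GitHub sync in the table amounts to a SHA comparison; the state store and function below are hypothetical illustrations of that rule:

```python
# Sketch: skip a GitHub sync when the branch HEAD SHA has not moved
# since the previous run.
def should_sync(repo, head_sha, last_seen):
    """True when the repo changed since the last recorded sync."""
    return last_seen.get(repo) != head_sha

last_seen = {"org/agents": "abc123"}
should_sync("org/agents", "abc123", last_seen)  # unchanged HEAD: skip
should_sync("org/agents", "def456", last_seen)  # new commits: sync
```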
Read-only by construction
Every integration in TrustLens is built around the principle that no credential should be able to change anything in your environment:
- Cloud roles are scoped to `*.viewer` / `*.read` equivalents
- Mistral and Microsoft Graph API permissions are read-only application permissions
- The GitHub App requests `contents:read` and `metadata:read` — nothing else
- The Endpoint Discovery script enumerates the local filesystem and exits; the per-integration token can only write to that integration’s inventory
- All credentials are encrypted at rest and never echoed back through the API