TrustLens supports all four Azure AI agent infrastructure models under a single integration:
  • Azure AI Foundry Agents (v2) — agents built with the Azure AI Foundry Agents SDK, hosted in AI Services accounts (kind=AIServices)
  • AI Hub projects — enterprise AI Foundry setups created before late 2024, backed by Microsoft.MachineLearningServices/workspaces with kind=Project
  • Azure OpenAI Classic Assistants (v1) — assistants built with the Azure OpenAI Assistants API (kind=OpenAI)
  • Legacy ML Workspaces — pre-AI-Foundry Azure ML Studio workspaces (kind=Default) that can host Promptflow-based agents
A single integration discovers all four types automatically from your subscription.
How to identify your infrastructure model: In the Azure Portal, navigate to your AI Foundry project. If the URL shows ai.azure.com and the resource is under a Microsoft.CognitiveServices account, you are on the AI Services model (v2). If your project lives under a Microsoft.MachineLearningServices Hub, you are on the AI Hub model. You can also check from the CLI:

az resource list --resource-type Microsoft.MachineLearningServices/workspaces --query "[].{name:name, kind:kind}"
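The mapping from resource kind to infrastructure model can be sketched as a small helper. This is illustrative only: the kind values are the ones listed above, and `classify_kind` is not part of any Azure SDK.

```python
# Map an Azure resource "kind" to the infrastructure model it corresponds
# to, per the four models listed above. Illustrative helper only.
KIND_TO_MODEL = {
    "AIServices": "Azure AI Foundry Agents (v2)",
    "OpenAI": "Azure OpenAI Classic Assistants (v1)",
    "Project": "AI Hub project",
    "Default": "Legacy ML Workspace",
}

def classify_kind(kind: str) -> str:
    """Return the infrastructure model for a resource kind, or 'unknown'."""
    return KIND_TO_MODEL.get(kind, "unknown")
```

Feeding it the `kind` column from the `az resource list` output above shows which model each workspace maps to.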

What TrustLens discovers

Azure AI Foundry agents (v2)

| Data | Source |
| --- | --- |
| Agent name, description, status | Azure AI Foundry Agents API |
| Model, instructions, tools | Azure AI Foundry Agents API |
| Knowledge bases (vector stores) | Azure AI Foundry Agents API |
| Content filters / RAI policies | Cognitive Services Management API |
| Guardrails configuration | Derived from RAI policies |
| Usage metrics (runs, tokens, latency, errors, tool calls) | Application Insights via Log Analytics (requires telemetry setup — see below) |

AI Hub projects (ML Workspace-backed)

| Data | Source |
| --- | --- |
| Agent name, description, status | Azure AI Foundry Agents API (via ML Workspace endpoint) |
| Model, instructions, tools | Azure AI Foundry Agents API |
| Knowledge bases (vector stores) | Azure AI Foundry Agents API |
| Model deployments | Azure AI Projects API |
RAI policies and content filters are not available for AI Hub projects. These fields will show as unavailable for hub-backed agents.

Azure OpenAI Classic assistants (v1)

| Data | Source |
| --- | --- |
| Assistant name, description, status | Azure OpenAI Assistants API |
| Model, instructions, tools | Azure OpenAI Assistants API |
| Vector stores (knowledge bases) | Azure OpenAI Assistants API |
| Usage metrics (runs, conversations, tool call counts) | Assistants API — Threads & Run Steps (no Log Analytics required) |
Content filters and guardrails are not exposed via the Azure OpenAI Assistants (v1) API. These fields will show as unavailable for Classic assistants.
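As a sketch of how run steps roll up into the usage metrics above: each run step of type `tool_calls` carries a list of tool calls under `step_details`, and counting them by type yields the tool call breakdown. The dict shape below mirrors the Assistants API run step object; the sample data is hypothetical.

```python
from collections import Counter

def tally_tool_calls(run_steps):
    """Count tool calls by type across a list of run-step-like dicts."""
    counts = Counter()
    for step in run_steps:
        if step.get("type") == "tool_calls":
            for call in step.get("step_details", {}).get("tool_calls", []):
                counts[call.get("type", "unknown")] += 1
    return dict(counts)

# Hypothetical sample shaped like Assistants API run steps
steps = [
    {"type": "message_creation"},
    {"type": "tool_calls", "step_details": {"tool_calls": [
        {"type": "code_interpreter"}, {"type": "file_search"},
    ]}},
    {"type": "tool_calls", "step_details": {"tool_calls": [
        {"type": "code_interpreter"},
    ]}},
]
```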

Legacy ML Workspaces

| Data | Source |
| --- | --- |
| Agent name, description, status | Azure AI Foundry Agents API (via ML Workspace endpoint) |
| Model deployments | Azure AI Projects API |
Legacy ML Workspaces may not expose the full Agents API surface. TrustLens will discover what is available and skip unsupported endpoints gracefully.
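The "skip unsupported endpoints gracefully" behavior amounts to: attempt each endpoint, keep what succeeds, and mark the rest unavailable. A minimal sketch, where the `fetchers` callables are hypothetical placeholders for per-endpoint API calls:

```python
def discover(fetchers):
    """Call each endpoint fetcher; collect results, skipping failures.

    fetchers: dict mapping field name -> zero-arg callable that either
    returns data or raises (e.g. on a 404 from an unsupported endpoint).
    """
    result = {}
    for field, fetch in fetchers.items():
        try:
            result[field] = fetch()
        except Exception:
            result[field] = None  # endpoint unsupported on this workspace
    return result
```

A fetcher that raises simply yields `None` for its field instead of aborting the whole discovery pass.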

Required permissions

The required roles depend on which infrastructure model your agents use. If you are unsure, assign all applicable roles — unused roles do not cause errors.

Azure AI Foundry (v2) — AI Services model

Assign Azure AI User to the service principal at the subscription level:
| Role | Scope | Purpose |
| --- | --- | --- |
| Azure AI User | Subscription | Discover Cognitive Services accounts and projects (management plane); list agents, models, datasets, and vector stores (data plane); read RAI policies and content filters |
Why this role: Azure AI User includes both Microsoft.CognitiveServices/*/read (management plane — enumerate accounts, read RAI policies, read diagnostic settings) and Microsoft.CognitiveServices/* (data plane — access agents, models, datasets). Assigning it at subscription level means it applies to all AI Services resources in the subscription automatically.

Granular setup — two roles (least-privilege alternative)

If your security policy requires strictly minimal permissions, you can use two more targeted roles instead:
| Role | Scope | Purpose |
| --- | --- | --- |
| Reader | Subscription | Enumerate Cognitive Services accounts, read account metadata, read RAI policies, read diagnostic settings |
| Cognitive Services OpenAI User | Each AI Services / OpenAI resource | Access the data plane to list agents, models, assistants, vector stores |
With the granular approach, you must assign Cognitive Services OpenAI User on each AI Services resource individually. Using Azure AI User at subscription level is simpler and equally secure for read-only access.

AI Hub projects — ML Workspace-backed model

If your agents were created via an AI Foundry Hub (enterprise setup before late 2024), they live under Microsoft.MachineLearningServices/workspaces resources. The roles required are different from the AI Services model:
| Role | Scope | Purpose |
| --- | --- | --- |
| Reader | Subscription | Enumerate ML Workspace resources |
| Azure AI Developer | Each AI Hub / Project workspace | Access the Agents API data plane on Hub-backed projects |
Azure AI Developer grants access to Microsoft.MachineLearningServices/workspaces/*/read and the data plane actions needed to list agents. Assign it at the Hub or Project workspace scope, or at the subscription level if you have multiple hubs.
Azure AI Developer does not include Microsoft.CognitiveServices/*/read. If you have a mix of AI Services (v2) and AI Hub projects, you need both Azure AI User (for AI Services) and Azure AI Developer (for AI Hub). Reader at subscription level covers resource enumeration for both.

Legacy ML Workspaces — pre-AI-Foundry Azure ML Studio

For older workspaces created before AI Foundry existed (Azure ML Studio / kind=Default):
| Role | Scope | Purpose |
| --- | --- | --- |
| Reader | Subscription | Enumerate ML Workspace resources |
| Azure Machine Learning Data Scientist | Each ML Workspace | Access the Agents API data plane on legacy workspaces |
If you have a mix of infrastructure models, or simply want a zero-friction setup that covers everything without tracking which roles apply to which resources, assign all roles at subscription level. Azure RBAC cascades automatically to all child resources.

Core roles (always required)

| Role | Scope | Covers |
| --- | --- | --- |
| Reader | Subscription | Control-plane enumeration of all resource types: Cognitive Services accounts, ML Workspaces, App Insights components, diagnostic settings, resource groups |
| Azure AI User | Subscription | Data plane for AI Services (v2) and Azure OpenAI Classic (v1): list agents, models, vector stores, RAI policies |

Conditional roles (add only what applies to you)

| Role | Scope | Required for |
| --- | --- | --- |
| Azure AI Developer | Subscription | Data plane for AI Hub / ML Workspace-backed projects (pre-late-2024 Foundry Hubs) |
| Azure Machine Learning Data Scientist | Subscription | Data plane for legacy ML Workspaces (kind=Default, pre-AI-Foundry Azure ML Studio) |
| Log Analytics Reader | Subscription | v2 usage telemetry via App Insights / Log Analytics |
| Monitoring Reader | Subscription | Hub-backed agent metrics via Azure Monitor (AgentRuns, AgentTokens, etc.) |
Reader and Azure AI User are always required. Neither can replace the other: Reader handles broad ARM control-plane enumeration (ML Workspaces, App Insights, etc.) while Azure AI User adds the Cognitive Services data plane. Azure AI User does not include Microsoft.MachineLearningServices or Microsoft.Insights read permissions.
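The core-vs-conditional split can be expressed as a lookup. The role names are the built-in Azure roles from the tables above; the function itself is illustrative, not part of any tool.

```python
CORE_ROLES = {"Reader", "Azure AI User"}  # always required

# Conditional roles, keyed by the feature that needs them
CONDITIONAL_ROLES = {
    "ai_hub": {"Azure AI Developer"},
    "legacy_ml": {"Azure Machine Learning Data Scientist"},
    "v2_telemetry": {"Log Analytics Reader"},
    "hub_metrics": {"Monitoring Reader"},
}

def required_roles(features):
    """Union of the core roles and the conditional roles for each feature."""
    roles = set(CORE_ROLES)
    for feature in features:
        roles |= CONDITIONAL_ROLES.get(feature, set())
    return roles
```

For a mixed estate (Hub projects plus v2 telemetry, say), the union is exactly the role list the one-command setup below assigns.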

One-command setup

SUBSCRIPTION_ID=$(az account show --query id -o tsv)
APP_ID="<your-app-id>"
SCOPE="/subscriptions/$SUBSCRIPTION_ID"

# Always required
az role assignment create --assignee $APP_ID --role "Reader"         --scope $SCOPE
az role assignment create --assignee $APP_ID --role "Azure AI User"  --scope $SCOPE

# Add if you have AI Hub / ML Workspace-backed projects
az role assignment create --assignee $APP_ID --role "Azure AI Developer" --scope $SCOPE

# Add if you have legacy ML Workspaces (kind=Default)
az role assignment create --assignee $APP_ID --role "Azure Machine Learning Data Scientist" --scope $SCOPE

# Add for usage telemetry (v2 App Insights + Hub metrics)
az role assignment create --assignee $APP_ID --role "Log Analytics Reader" --scope $SCOPE
az role assignment create --assignee $APP_ID --role "Monitoring Reader"    --scope $SCOPE

Optional — telemetry (usage metrics)

TrustLens collects two distinct types of telemetry depending on your infrastructure model:
  • AI Foundry v2 (AI Services): Usage metrics, token counts, latency, error rates, and tool call breakdowns are collected from Application Insights via a linked Log Analytics workspace.
  • AI Hub (ML Workspace-backed): Agent-level metrics (AgentRuns, AgentTokens, AgentThreads, AgentToolCalls, AgentMessages) are collected from Azure Monitor Metrics on the Hub resource.
| Role | Scope | Purpose |
| --- | --- | --- |
| Log Analytics Reader | Log Analytics Workspace (or Subscription) | Query Application Insights traces (AppDependencies table) for v2 usage metrics, token counts, latency, and tool call breakdowns |
| Monitoring Reader | AI Foundry Hub resource (or Subscription) | Query Azure Monitor Metrics (AgentRuns, AgentTokens, AgentThreads, etc.) for Hub-backed agent telemetry |
Both roles can be assigned at subscription level to cascade automatically to all workspaces and resources in the subscription, eliminating the need for per-resource assignments.
What you lose without these: usage statistics and tool call metrics will be unavailable for the corresponding infrastructure type. v1 Classic usage is collected directly via the Assistants API and does not require either role.
What you gain: per-agent usage metrics, conversation-level telemetry signals (used for aggregation, not enumeration), per-tool call breakdown by type (code interpreter, file search, custom functions), token consumption, latency, and Hub-level run and thread counts.
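For orientation, a usage query against the AppDependencies table might look like the sketch below. This is not the product's actual query: the `gen_ai.*` property keys follow the OpenTelemetry GenAI semantic conventions mentioned elsewhere in this page and should be treated as an assumption, while `TimeGenerated`, `DurationMs`, and `Properties` are standard AppDependencies columns.

```python
# Illustrative KQL for v2 usage telemetry in the linked Log Analytics
# workspace. Treat the exact gen_ai.* property keys as an assumption.
USAGE_QUERY = """
AppDependencies
| where Properties has "gen_ai"
| extend agent = tostring(Properties["gen_ai.agent.id"])
| summarize runs = count(), avg_ms = avg(DurationMs) by agent
"""

def scoped_query(hours: int) -> str:
    """Prepend a time filter to the base query (hypothetical helper)."""
    return USAGE_QUERY.replace(
        "AppDependencies",
        f"AppDependencies | where TimeGenerated > ago({hours}h)", 1)
```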

Prerequisites

Before configuring the integration, ensure you have:
  • An Azure subscription containing AI Services, Azure OpenAI, ML Workspace, or AI Hub resources
  • Permission to create app registrations and assign RBAC roles (User Access Administrator or Owner on the subscription)
  • For v2 telemetry: an Application Insights resource connected to your AI Foundry project, linked to a Log Analytics workspace (see Enabling telemetry)

Step-by-step setup

Step 1: Create a service principal

az ad sp create-for-rbac --name "neuraltrust-trustlens" --skip-assignment
Save the output — you will need appId (client ID), password (client secret), and tenant (tenant ID).
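The JSON that `az ad sp create-for-rbac` prints carries those fields under the keys `appId`, `password`, and `tenant`. A small sketch of pulling them out (the sample values are placeholders, not real credentials):

```python
import json

# Placeholder output shaped like `az ad sp create-for-rbac` JSON
raw = '''{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "neuraltrust-trustlens",
  "password": "<client-secret-placeholder>",
  "tenant": "11111111-1111-1111-1111-111111111111"
}'''

sp = json.loads(raw)
credentials = {
    "client_id": sp["appId"],        # -> "Client ID" field in TrustLens
    "client_secret": sp["password"], # -> "Client Secret" field
    "tenant_id": sp["tenant"],       # -> "Tenant ID" field
}
```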
Step 2: Assign the Azure AI User role

SUBSCRIPTION_ID=$(az account show --query id -o tsv)
APP_ID="<your-app-id>"

az role assignment create \
  --assignee $APP_ID \
  --role "Azure AI User" \
  --scope /subscriptions/$SUBSCRIPTION_ID
Step 3: (Optional) Assign telemetry roles

Skip this step if you do not need usage metrics. Two roles cover different telemetry sources:
  • Log Analytics Reader — required for v2 (AI Services) usage metrics via Application Insights
  • Monitoring Reader — required for Hub-backed agent metrics via Azure Monitor
Assigning at subscription level cascades to all workspaces and Hub resources automatically:
SUBSCRIPTION_ID=$(az account show --query id -o tsv)

az role assignment create \
  --assignee $APP_ID \
  --role "Log Analytics Reader" \
  --scope /subscriptions/$SUBSCRIPTION_ID

az role assignment create \
  --assignee $APP_ID \
  --role "Monitoring Reader" \
  --scope /subscriptions/$SUBSCRIPTION_ID
Step 4: Configure the integration in TrustLens

Provide the following credentials when creating the Azure integration:
| Field | Where to find it |
| --- | --- |
| Tenant ID | Azure Portal → Microsoft Entra ID → Overview → Directory (tenant) ID |
| Client ID | Azure Portal → App registrations → your app → Application (client) ID |
| Client Secret | Copied in Step 1 |
| Subscription ID | Azure Portal → Subscriptions → your subscription → Subscription ID |

Enabling telemetry

TrustLens retrieves v2 usage metrics from Application Insights via a linked Log Analytics workspace. Azure AI Foundry automatically writes server-side OpenTelemetry traces (gen_ai.* semantic conventions) to the AppDependencies table when a Foundry project is connected to Application Insights. TrustLens queries this table to produce per-agent usage, token counts, latency, and tool call breakdowns.
Classic (v1) assistants do not require this setup. Usage for v1 assistants is collected directly from the Assistants API (Threads and Run Steps) and is always available once the core RBAC role is in place.
Step 1: Connect Application Insights to your AI Foundry project

  1. Open Azure AI Foundry and navigate to your project
  2. Go to Settings → Tracing
  3. Select or create an Application Insights resource
  4. Save — Azure will begin writing traces automatically; no SDK instrumentation is required
Step 2: Grant Log Analytics Reader on the linked workspace

Application Insights stores trace data in a linked Log Analytics workspace. Find the workspace and grant access:
WORKSPACE_ID=$(az monitor log-analytics workspace show \
  --resource-group <resource-group> \
  --workspace-name <workspace-name> \
  --query id -o tsv)

az role assignment create \
  --assignee $APP_ID \
  --role "Log Analytics Reader" \
  --scope $WORKSPACE_ID
Traces may take a few minutes to appear after the first agent invocations.

Telemetry source summary

| Data type | Infrastructure | Source | Requires |
| --- | --- | --- | --- |
| Usage metrics (runs, tokens, latency, errors) | AI Foundry v2 (AI Services) | Application Insights — AppDependencies | App Insights connected to Foundry project + Log Analytics Reader |
| Tool call breakdown (code interpreter, file search, functions) | AI Foundry v2 (AI Services) | Application Insights — AppDependencies | Same as above |
| Agent metrics (runs, tokens, threads, messages, tool calls) | AI Hub (ML Workspace-backed) | Azure Monitor Metrics | Monitoring Reader on Hub resource (or subscription) |
| Usage metrics (runs, conversations, tool call counts) | Azure OpenAI Classic (v1) | Assistants API — Threads & Run Steps | Core RBAC role only — no Log Analytics or Monitor roles needed |

Feature availability by permission level

Azure AI Foundry (v2) — AI Services model

| Feature | Azure AI User only | + Log Analytics Reader |
| --- | --- | --- |
| Agent discovery | Yes | Yes |
| Model and dataset discovery | Yes | Yes |
| Security posture assessment | Yes | Yes |
| Tools, instructions, knowledge bases | Yes | Yes |
| Content filters / RAI policies | Yes | Yes |
| Usage metrics (runs, tokens, latency, errors) | No | Yes |
| Tool call breakdown (code interpreter, file search, functions) | No | Yes |
| Conversation-level telemetry (aggregation signals) | No | Yes |

AI Hub — ML Workspace-backed model

| Feature | Reader + Azure AI Developer | + Monitoring Reader |
| --- | --- | --- |
| Agent discovery | Yes | Yes |
| Model deployments | Yes | Yes |
| Tools, instructions, knowledge bases | Yes | Yes |
| Content filters / RAI policies | No — not available for Hub-backed agents | No — not available for Hub-backed agents |
| Agent run and thread counts | No | Yes |
| Token consumption (input / output) | No | Yes |
| Tool call counts | No | Yes |
| Message counts | No | Yes |

Azure OpenAI Classic (v1)

| Feature | Azure AI User only |
| --- | --- |
| Assistant discovery | Yes |
| Tools, instructions, vector stores | Yes |
| Usage metrics (runs, conversations, tool call counts) | Yes |
| Content filters / RAI policies | No — Azure API limitation |

Known limitations

| Limitation | Details |
| --- | --- |
| No conversation enumeration for v2 | The Azure AI Foundry API does not expose a conversations.list() endpoint. App Insights provides conversation-level telemetry signals (conversation ID, message count, latency per conversation) used for aggregation — not a list of conversations you can browse. |
| Tool call telemetry requires App Insights (v2) | Tool call breakdown (code interpreter, file search, function calls) requires Application Insights connected to the Foundry project. Foundry automatically emits gen_ai.tool.name spans — no client instrumentation needed. |
| Content filters unavailable for v1 | The Azure OpenAI Assistants (v1) API does not expose content filter or RAI policy configuration. These fields show as unavailable for Classic assistants. |
| App Insights trace delay | Traces may take a few minutes to appear in Log Analytics after initial agent invocations. |
| Conversation ID mismatch (v2) | Application Insights conversation_id values (UUID format) do not match Foundry API conversation IDs (conv_xxx format). TrustLens uses telemetry-level aggregation. |
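The two conversation ID formats in that last limitation can be told apart mechanically. The regex below encodes the UUID format and the conv_ prefix described above; the helper is illustrative only.

```python
import re

# Canonical 8-4-4-4-12 hex UUID shape used by App Insights conversation_id
UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$", re.I)

def id_source(conversation_id: str) -> str:
    """Guess whether an ID came from App Insights telemetry or the Foundry API."""
    if conversation_id.startswith("conv_"):
        return "foundry_api"
    if UUID_RE.match(conversation_id):
        return "app_insights"
    return "unknown"
```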

Security considerations

  • The service principal has read-only access. TrustLens cannot modify, delete, or create any Azure resources.
  • Azure AI User at subscription level includes listkeys/action (the ability to read API keys for Cognitive Services accounts). If your policy prohibits this, use the granular setup (Reader + Cognitive Services OpenAI User), which does not include key listing.
  • Client secrets should be rotated regularly. Update the integration in TrustLens when you rotate the secret.
  • All credentials are encrypted at rest.

Troubleshooting

  • Verify the service principal has Azure AI User at the subscription level (not at a resource group or resource level only).
  • Confirm your AI agents are deployed in AI Services accounts with kind=AIServices or kind=OpenAI. If they are in a different resource type, contact support.
  • Check that the Subscription ID entered in the integration matches the subscription where your agents are deployed.
  • Verify that Application Insights is connected to your AI Foundry project (Foundry Portal → Settings → Tracing).
  • Verify the service principal has Log Analytics Reader on the Log Analytics workspace linked to Application Insights (found under App Insights → Settings → Linked workspace).
  • Traces appear within minutes of the first agent invocation. If no invocations have occurred, usage metrics will correctly show zero.
  • Hub agent metrics are collected from Azure Monitor Metrics, not Application Insights.
  • Verify the service principal has Monitoring Reader on the AI Foundry Hub resource (or at subscription level).
  • If no agent runs have occurred, metrics will correctly show zero.
  • Classic usage is collected via the Assistants API (Threads and Run Steps) — no Log Analytics configuration is needed.
  • Verify the service principal has data plane access (Azure AI User or Cognitive Services OpenAI User) on the Azure OpenAI resource.
  • If no threads or runs exist for an assistant, usage will correctly show zero.
Content filter data is only available for Azure AI Foundry (v2) agents. Classic (v1) assistants do not expose this via API — this is an Azure limitation, not a configuration issue.
  • The service principal may not have the role assigned at the correct scope. Verify the role assignment is at subscription level, not resource group or resource level only.
  • Role assignments can take a few minutes to propagate after being created.
Classic assistants require the service principal to have data plane access to the Azure OpenAI resource. With Azure AI User at subscription level this is covered automatically. With the granular setup, verify that Cognitive Services OpenAI User is assigned on the specific OpenAI resource.
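One way to sanity-check scope from `az role assignment list --assignee $APP_ID` output: a subscription-level scope string is exactly `/subscriptions/<id>` with no resource-group or resource suffix. The record shapes below are simplified placeholders.

```python
def is_subscription_scope(scope: str) -> bool:
    """True if a role-assignment scope string is subscription-level."""
    parts = [p for p in scope.split("/") if p]
    return len(parts) == 2 and parts[0] == "subscriptions"

# Simplified records shaped like `az role assignment list` output
assignments = [
    {"roleDefinitionName": "Azure AI User",
     "scope": "/subscriptions/0000-sub"},
    {"roleDefinitionName": "Reader",
     "scope": "/subscriptions/0000-sub/resourceGroups/rg1"},
]
sub_level = [a["roleDefinitionName"] for a in assignments
             if is_subscription_scope(a["scope"])]
```

Here only "Azure AI User" is subscription-level; the Reader assignment scoped to a resource group would need to be re-created at the subscription.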