Skip to main content
Tool Selection validates that the tool calls an LLM generates are correct and safe before they’re allowed to execute. It acts as a safety net against LLM hallucinations: if the model invents a tool that doesn’t exist or produces arguments that don’t match the declared schema, Tool Selection detects and blocks the call before the agent runs it. The check runs after the model emits a response — it parses the declared tools (from the request) and the tool calls (from the response) according to the provider’s format (OpenAI, Anthropic, etc.), then applies a layered validation.

The three validation layers

Tool Selection verifies a tool call on three levels, in order:
  1. Name validation — the invoked tool exists in the list of declared tools in the request. Catches hallucinated tool names.
  2. Schema validation — the arguments the model passed comply with the tool’s declared JSON schema. Catches type errors, missing required fields, and out-of-range values.
  3. Semantic validation — a secondary LLM evaluates whether the call is coherent and safe in the context of the ongoing conversation. Catches prompt-injection-driven tool hijacking where the call is schematically valid but contextually wrong.
Name and schema validation are deterministic and incur no additional cost. Semantic validation is optional but recommended for critical applications — it’s the layer that closes the gap against manipulated conversations.

Where it lives in the picker

Tool Selection sits under the Agent Security category in Create Policy → When, alongside Tool Permission and Tool Guard. Attach it to a policy and set the outcome in the Then step:
  • Log — observe validation failures without blocking.
  • Block — reject the LLM response (and therefore the tool call) when a layer fails.
Scope the detection with the policy’s Where filters so it only runs on routes that actually expose tool-calling.

Configuration

Tool Selection is automatically configured when enabled — all three validations are active by default: name, schema, and semantic.
RequirementPurpose
OpenAI API keyThe semantic-validation layer uses an OpenAI model as the secondary evaluator. A valid OpenAI API key must be configured in your team settings; if it isn’t, semantic validation is skipped and only name + schema validation run.

How it fits with the other tool controls

Tool permissionTool guardTool selection
WhenBefore the LLM plansBefore the LLM plansAfter the LLM emits a call
WatchesThe tools array in the requestSystem prompt + tool descriptions in the requestThe tool call in the response
CatchesUnauthorized tools reaching the modelJailbreaks planted in the agent’s definitionHallucinated tools, schema violations, manipulated calls
Running all three in sequence is the full defence for an agentic workflow — permission narrows the catalog, Tool Guard keeps the definitions clean, and Tool Selection verifies that whatever the model eventually emits is a legitimate call.

Pairs well with

  • Tool permission — eliminate unauthorized tools so Tool Selection only has to validate calls to approved ones.
  • Tool guard — upstream scan of system prompt and tool descriptions.
  • Observability — trace which validation layer (name / schema / semantic) caused a block.