
What this covers

Agents built with the Strands Agents SDK, AWS’s open-source, model-driven agent SDK. Because the agentic loop runs in your code, every model invocation the agent makes is a standard SDK call that you can route through a TrustGate Gateway. This is fundamentally different from the managed AWS Bedrock Agents service, where orchestration happens inside AWS and TrustGate has to sit outside via the API surface.
  • Surface: Gateway on the model hop; optionally API on the deployed agent endpoint.
  • Who is this for: Python apps using strands-agents with any upstream model the Gateway supports (Bedrock, Anthropic, OpenAI, Azure OpenAI, Google, etc.).

Architecture

your Strands agent ──► OpenAIModel ──► TrustGate Gateway ──► upstream model
        │                                      │               (Bedrock, Anthropic,
        ├─ tools (MCP / @tool functions)       │                OpenAI, Azure, …)
        │                                      │
        └─ optionally expose the agent         └── inspects every model hop
           behind an API (Lambda / Fargate /
           EC2) protected by a TrustGate API engine
TrustGate sees every iteration of the Strands agent loop — the messages sent to the model (user prompt, prior tool results, full history), the model’s response on each hop (reasoning and any tool-call requests), and the final answer. One Explorer entry is created per model hop.

The loop itself (plan → tool select → tool run → reflect) stays in your process. TrustGate sees what the model asks each tool to do and what the tool returns (via the next hop’s messages), but not the tool’s internal behavior.

The Gateway speaks the OpenAI wire format to clients regardless of the upstream provider. That means you use Strands’ OpenAIModel as the single integration shape for any upstream — Bedrock, Anthropic, OpenAI, or anything else the Gateway is configured to route to.

Step-by-step setup

The upstream model provider’s credentials (Bedrock IAM, OpenAI API key, etc.) live on a provider Integration — your agent process never sees them. The Gateway Route just references that Integration. This is the same setup flow as any Gateway guide; the difference is only on the client (Step 5), where Strands’ OpenAIModel replaces a direct provider SDK.
1. Install Strands

pip install strands-agents strands-agents-tools python-dotenv
Add bedrock-agentcore if you plan to deploy on BedrockAgentCore.
2. Register the upstream model provider as an Integration

Integrations → Add Integration and pick the provider your agent’s model hops should go to:
  • Bedrock — IAM role / access key / workload identity, region, models allowed.
  • OpenAI — API key.
  • Anthropic / Google / Azure OpenAI / Mistral / … — the provider’s native credentials.
No credentials live on the agent process — only on this Integration.
3. Create a Gateway Integration

Integrations → Add Integration → Gateway. Pick Serverless or Dedicated, name it (e.g. strands-agents-prod), save, and copy the Endpoint from Gateway → Overview. A default Route for the provider Integration from Step 2 is created automatically — it exposes /v1/chat/completions (the OpenAI Chat Completions path OpenAIModel targets) and handles OpenAI↔provider translation on the wire. Optionally add a Use Case like strands-agent or Tags on the Route for policy scoping; no manual Route creation is needed.
4. Issue a Gateway API key

On the Gateway Integration’s API Keys tab, create a key. This is TG_API_KEY in the snippets below.
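For reference, a .env file consumed by the client snippet in this guide might look like the following. All values are placeholders; the variable names match the snippet, and the endpoint shape and model ID are examples:

```
# Placeholder values; substitute your Gateway endpoint, TrustGate key, and model ID.
GATEWAY_BASE_URL=https://<gateway>.neuraltrust.ai/v1
TG_API_KEY=<your-trustgate-api-key>
GATEWAY_MODEL_ID=anthropic.claude-sonnet-4-20250514-v1:0
```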
5. Point OpenAIModel at the Gateway

Set base_url to https://<gateway>.neuraltrust.ai/v1 and pass the TrustGate API key. model_id is whatever the upstream expects (Bedrock model ID for a Bedrock Integration, OpenAI model name for an OpenAI Integration, etc.). See the snippet below.
6. Verify in Runtime → Explorer

Run the agent with a tool-using prompt. In Explorer, expect one entry per model hop (plan → tool call → reflect → answer). Confirm the Route matched and detector decisions are recorded.

Client code

Point Strands’ OpenAIModel at the Gateway’s /v1 endpoint and authenticate with your TrustGate API key. The Gateway handles provider-specific translation and upstream auth:
import os
from pathlib import Path

from dotenv import load_dotenv
from strands import Agent
from strands.models.openai import OpenAIModel

load_dotenv(Path(__file__).parent / ".env", override=True)

GATEWAY_BASE_URL = os.getenv("GATEWAY_BASE_URL", "http://localhost:8081/v1")
TG_API_KEY = os.getenv("TG_API_KEY", "")
GATEWAY_MODEL_ID = os.getenv(
    "GATEWAY_MODEL_ID", "anthropic.claude-sonnet-4-20250514-v1:0"
)

model = OpenAIModel(
    client_args={
        "base_url": GATEWAY_BASE_URL,
        "api_key": TG_API_KEY or "not-used",
        "default_headers": {"X-TG-API-Key": TG_API_KEY} if TG_API_KEY else {},
    },
    model_id=GATEWAY_MODEL_ID,
)

agent = Agent(
    model=model,
    system_prompt="You are a helpful assistant.",
    tools=[],
)

agent("Hello! How can you help me?")
A few details worth calling out:
  • GATEWAY_BASE_URL points at the Gateway’s /v1 prefix (the OpenAI-compatible base path).
  • model_id is whatever the upstream expects. For Bedrock upstreams, use the Bedrock model ID (anthropic.claude-sonnet-4-20250514-v1:0); for OpenAI upstreams, use the OpenAI model name (gpt-4o). The provider Integration bound to the Route decides which provider actually receives the call.
  • api_key / X-TG-API-Key — the Gateway accepts the TrustGate key on either the standard Authorization: Bearer header (via OpenAI SDK’s api_key) or the X-TG-API-Key custom header. Passing both is safe and lets one config work across Gateway auth modes.

With tools

Wiring in tools is unchanged from standard Strands (the imports below assume a local tools module defining these @tool functions):
from tools import check_inventory, get_current_time, initiate_return, lookup_order

agent = Agent(
    model=model,
    system_prompt=(
        "You are a customer support agent for an electronics store. "
        "Look up orders, check inventory, and initiate returns when asked."
    ),
    tools=[lookup_order, check_inventory, initiate_return, get_current_time],
)

agent("What is the status of order ORD-1001?")
TrustGate sees the planner’s reasoning that leads to each tool call and the model’s consumption of the tool result, since both go through the model hop. To also inspect what the tools themselves do over HTTP, see Tools below.
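To make that per-hop visibility concrete, here is a sketch (hypothetical values, standard OpenAI Chat Completions shape) of the request body the Gateway sees on the hop that follows a tool call. The tool result arrives as an ordinary role="tool" message, which is why policies can inspect it like any other input:

```python
# Illustrative sketch (hypothetical values): the OpenAI-format request body the
# Gateway sees on the hop AFTER a tool runs. The tool's output travels back to
# the model as a role="tool" message, so input policies apply to it too.
hop_after_tool_call = {
    "model": "anthropic.claude-sonnet-4-20250514-v1:0",
    "messages": [
        {"role": "system", "content": "You are a customer support agent..."},
        {"role": "user", "content": "What is the status of order ORD-1001?"},
        {  # previous hop: the model requested a tool call
            "role": "assistant",
            "tool_calls": [{
                "id": "call_1",
                "type": "function",
                "function": {
                    "name": "lookup_order",
                    "arguments": '{"order_id": "ORD-1001"}',
                },
            }],
        },
        {  # the tool result, fed back as a message on this hop
            "role": "tool",
            "tool_call_id": "call_1",
            "content": '{"status": "shipped"}',
        },
    ],
}

roles = [m["role"] for m in hop_after_tool_call["messages"]]
print(roles)  # ['system', 'user', 'assistant', 'tool']
```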

Deploy on AWS with BedrockAgentCore

If you’re deploying on AWS, the BedrockAgentCore runtime wraps a Strands agent in a production HTTP server with one decorator. The TrustGate integration is unchanged — the Gateway is still in front of the model call:
from bedrock_agentcore import BedrockAgentCoreApp

app = BedrockAgentCoreApp()


@app.entrypoint
def invoke(payload):
    prompt = payload.get("prompt", "Hello! How can I help you today?")
    result = agent(prompt)
    return {"result": result.message}


if __name__ == "__main__":
    app.run()
Call it locally:
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the status of order ORD-1001?"}'
Traffic flow: your client → BedrockAgentCore entrypoint → Strands Agent → OpenAIModel → TrustGate Gateway → upstream model. Optionally, front the BedrockAgentCore endpoint itself with a TrustGate API engine to also inspect requests coming into the agent.

Tools

Strands tools are either MCP servers or Python functions decorated with @tool. Tool execution runs in your process, so TrustGate doesn’t see it automatically. Two ways to protect tools:
  • Tools that call HTTP APIs — put those APIs behind their own Gateway route or API engine. Swap the tool’s base URL to point at TrustGate.
  • MCP tools — if the MCP server you register with the agent is one you host, front it with a Gateway route (enable the streaming/SSE profile on the route for MCP’s transport).
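For the first option, the only change to the tool itself is its base URL. A minimal sketch, assuming a hypothetical HTTP-backed order-lookup tool (ORDERS_API_BASE and the /orders/ path are illustrative, not real endpoints):

```python
import os
from urllib.parse import urljoin

# Hypothetical example: an HTTP-backed tool whose base URL is configurable, so
# the same code can call the vendor API directly or go through a TrustGate
# route. Set ORDERS_API_BASE to the TrustGate route URL to protect the tool.
ORDERS_API_BASE = os.getenv("ORDERS_API_BASE", "https://api.example.com/")

def order_status_url(order_id: str) -> str:
    """Build the request URL; only the base changes when TrustGate fronts the API."""
    return urljoin(ORDERS_API_BASE, f"orders/{order_id}")

print(order_status_url("ORD-1001"))  # https://api.example.com/orders/ORD-1001
```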

Correlate hops into one conversation

Every model hop is an independent Explorer entry. To thread them into one logical session, pass a stable conversation ID on OpenAIModel.client_args.default_headers:
client_args={
    "base_url": GATEWAY_BASE_URL,
    "api_key": TG_API_KEY,
    "default_headers": {
        "X-TG-API-Key": TG_API_KEY,
        "x-conversation-id": conversation_id,
    },
}
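A minimal sketch of minting one conversation ID per agent session and reusing it across every hop. The header names come from this guide; the UUID format is an assumption, any stable string works:

```python
import os
import uuid

# Mint one conversation ID per agent session (UUID is an assumed format; any
# stable string works) and reuse it on every model hop made by this client.
conversation_id = str(uuid.uuid4())
tg_api_key = os.getenv("TG_API_KEY", "")

client_args = {
    "base_url": os.getenv("GATEWAY_BASE_URL", "http://localhost:8081/v1"),
    "api_key": tg_api_key or "not-used",
    "default_headers": {
        "X-TG-API-Key": tg_api_key,
        "x-conversation-id": conversation_id,
    },
}
# Pass client_args to OpenAIModel; all hops now share one Explorer conversation.
```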

Policies to apply

Because the Strands loop runs in your process and every model hop goes through the Gateway, policies fire on every iteration — the user prompt, each planner step, each tool-call request the model emits, and the final answer. Read the Policies & Enforcement page for the Where / When / Then authoring model and precedence rules. Scope policies with the Gateways or Routes filter so the agent’s model route can have rules distinct from other Gateway traffic.

Block prompt injection on every hop

  • Where: Gateway + filter Gateways = <your-gateway>
  • When: Input · Triggers · Prompt Injection, Jailbreak
  • Then: Block
Because prior tool results are fed back as messages, this policy also catches injection that arrives indirectly — for example, a malicious substring returned by a web-search tool.

Mask PII on inputs and outputs

  • Where: Gateway + filter Gateways = <your-gateway>
  • When: Input or Output · Triggers · Email Address, Phone Number, Credit Card, Social Security Number
  • Then: Mask
Prevents grounded data pulled in via retrieve, current_time, or similar built-in tools from making it back to the user or to subsequent model hops.

Block credential leakage

  • Where: Gateway + filter Gateways = <your-gateway>
  • When: Input or Output · Triggers · API Key / Secret
  • Then: Block

Guard tool-call arguments

  • Where: Gateway + filter Gateways = <your-gateway>
  • When: Tool Call · Triggers · Suspicious Arguments, Prompt Injection
  • Then: Block
Especially important with the pre-built http_request tool, where the model can synthesize arbitrary URLs and payloads. The tool call is inspected before your code dispatches it.

Moderate the final response

  • Where: Gateway + filter Routes = <customer-facing-routes>
  • When: Output · Triggers · Toxicity, Harmful Content
  • Then: Block
Author each policy in Log mode first, review hits across multiple agent runs in Runtime → Logs, and promote to Mask / Block once the false-positive rate is acceptable. Mask / Block precedence means a narrow team policy can never weaken a broader organization-wide rule.

Limitations

  • Per-hop inspection, not loop-wide — TrustGate inspects each model call independently. Correlate them with an x-conversation-id header if you want one logical session in Explorer.
  • Tool execution is local — only HTTP-based tools can be protected by adding their own Gateway/API route. Pure Python tools (@tool functions with no network calls) are invisible to TrustGate by design.
  • Streaming — Strands streams model responses by default. Enable the streaming profile on your Gateway route so the Gateway can inspect chunks in order; Mask and Block are applied when the stream completes.
  • Multi-agent tools (workflow, graph, swarm) — sub-agent model hops produce additional Explorer entries. Use a shared conversation ID across sub-agents to keep them grouped.

References