Models

NeuralTrust runs internal model calls on your behalf. Detectors that rely on an LLM (for example LLM-as-judge evaluations, semantic jailbreak classifiers, some data-protection analyzers) and features that rely on embeddings (for example semantic similarity, clustering, memory) all need a provider to call. The Models panel in Team Settings is where you pick which provider handles those internal calls. It has two independent configurations:
  1. Model provider — the LLM used for reasoning and classification.
  2. Embeddings provider — the embedding model used for vector operations.
Open it from Team settings → Models.
This page does not change which models your applications can call through a TrustGate Gateway. Those upstreams are configured per-route on the Gateway itself — see Routes.

Model provider

Configures which LLM NeuralTrust uses for its own internal decisions — judge calls, classification, and any detector that asks a model a question.

Available providers

| Provider | Setup | Notes |
| --- | --- | --- |
| NeuralTrust (default) | Pre-configured, no additional settings. | NeuralTrust-managed models hosted in NeuralTrust infrastructure. Recommended for SaaS deployments. |
Selecting NeuralTrust is the right choice for virtually every team. Additional providers (for example bring-your-own OpenAI, Azure OpenAI, or Bedrock keys) are available on hybrid and self-hosted plans, where teams want internal model calls to run against their own account and billing.

To change the provider
  1. Go to Team settings → Models.
  2. Open the Provider dropdown.
  3. Pick a provider. Fill any credentials the provider exposes.
  4. Click Save.
Changes take effect on the next internal model call. In-flight evaluations finish on the previous provider.
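As a sketch of the behavior described above (this assumes nothing about NeuralTrust's actual internals), the usual way to get "the next call uses the new provider, in-flight work finishes on the old one" is to snapshot the provider configuration when a call starts:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProviderConfig:
    name: str
    api_key: str = ""

class ProviderRegistry:
    """Holds the current provider; updates apply to the next call only."""

    def __init__(self, initial: ProviderConfig):
        self._current = initial

    def set(self, cfg: ProviderConfig):
        # The swap is a single reference assignment; in-flight work
        # keeps whatever snapshot it took when it started.
        self._current = cfg

    def snapshot(self) -> ProviderConfig:
        return self._current

def run_evaluation(registry: ProviderRegistry) -> str:
    cfg = registry.snapshot()  # pin the provider at call start
    # ... long-running judge call using cfg ...
    return cfg.name            # finishes on the provider it started with

registry = ProviderRegistry(ProviderConfig("neuraltrust"))
eval_a = registry.snapshot()            # an evaluation starts
registry.set(ProviderConfig("openai"))  # admin saves a new provider mid-flight
# The in-flight evaluation still uses its snapshot ("neuraltrust"),
# while the next evaluation picks up the new provider ("openai").
```

The names here (`ProviderRegistry`, `run_evaluation`) are hypothetical; the point is only the snapshot-at-start pattern, which is one common way to implement this guarantee.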

Embeddings provider

Configures the embedding model NeuralTrust uses for vector operations — semantic similarity in detectors, red team test clustering, memory retrieval, and anywhere the platform needs to compare text by meaning rather than exact match.

Available providers

| Provider | Setup | Notes |
| --- | --- | --- |
| NeuralTrust (default) | Pre-configured, no additional settings. | NeuralTrust-managed embeddings. Recommended for SaaS. |
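To illustrate what "compare text by meaning rather than exact match" looks like in practice, here is a minimal cosine-similarity sketch over toy vectors. The numbers are invented for illustration; real embedding models return vectors with hundreds or thousands of dimensions:

```python
import math

# Toy 3-dimensional "embeddings" standing in for a real model's output.
EMBEDDINGS = {
    "reset my password":   [0.9, 0.1, 0.0],
    "I forgot my login":   [0.8, 0.2, 0.1],
    "what is the weather": [0.1, 0.9, 0.3],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

query = EMBEDDINGS["reset my password"]
scores = {text: cosine_similarity(query, vec)
          for text, vec in EMBEDDINGS.items()}
# "I forgot my login" scores far higher than "what is the weather",
# even though it shares no keywords with the query.
```

This is the operation underneath semantic similarity detectors and clustering: texts are mapped to vectors once, then compared geometrically instead of by string matching.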
To change it
  1. Go to Team settings → Models.
  2. Open the Embeddings Provider dropdown.
  3. Pick a provider and fill any credentials.
  4. Click Save.
Re-embedding existing content is not automatic. Vectors already stored against the previous provider are still queryable but are not mixed with new vectors — comparisons across providers are not meaningful. If you change providers, plan a back-fill for the corpora that need semantic continuity (typically memory stores and red team test indices).
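A back-fill means re-embedding every stored record with the new provider so that all vectors live in one space. The sketch below is hypothetical: `embed_with` and the store schema stand in for whatever API and storage your deployment actually uses; the fake vectors exist only to make the sketch runnable.

```python
def embed_with(provider: str, text: str) -> list[float]:
    # Placeholder: deterministic fake vectors, not a real embedding call.
    return [float(len(text) % 7), float(hash((provider, text)) % 5)]

# Records tagged with the provider that produced their vector.
store = [
    {"text": "hello world", "provider": "old-provider",
     "vector": embed_with("old-provider", "hello world")},
    {"text": "goodbye", "provider": "old-provider",
     "vector": embed_with("old-provider", "goodbye")},
]

def backfill(store, new_provider):
    """Re-embed every record produced by another provider, so the
    corpus never mixes vectors from two embedding spaces."""
    for record in store:
        if record["provider"] != new_provider:
            record["vector"] = embed_with(new_provider, record["text"])
            record["provider"] = new_provider

backfill(store, "new-provider")
```

Tagging each vector with its provider, as above, is also what makes the "not mixed" behavior enforceable: similarity queries can filter to a single provider's vectors.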

What changes when you switch provider

| Area | Effect of changing Model provider | Effect of changing Embeddings provider |
| --- | --- | --- |
| Detector decisions (e.g., LLM judge, semantic prompt injection) | Yes — new calls go to the new provider. | No. |
| Runtime throughput and latency | Depends on the new provider’s region and quota. | Minor — embeddings are typically smaller and cached. |
| Cost accounting | Calls show up against the new provider’s account, not NeuralTrust’s. | Same. |
| Data locality | Request payloads for internal decisions hit the new provider. | Text submitted for embedding hits the new provider. |
On hybrid and self-hosted deployments, configuring your own provider is usually the whole point — it keeps internal model traffic inside your account and region. On SaaS, the default (NeuralTrust) is the normal choice.
  • Deployment modes — when to consider bring-your-own-model vs NeuralTrust-managed.
  • Security features — which detectors consume the model provider and which are fully deterministic.
  • Routes — not the same thing; routes configure which LLMs your applications call, this panel configures which LLM NeuralTrust itself calls.