Models
NeuralTrust runs internal model calls on your behalf. Detectors that rely on an LLM (for example LLM-as-judge evaluations, semantic jailbreak classifiers, and some data-protection analyzers) and features that rely on embeddings (for example semantic similarity, clustering, and memory) all need a provider to call.
The Models panel in Team Settings is where you pick which provider handles those internal calls. It has two independent configurations:
- Model provider — the LLM used for reasoning and classification.
- Embeddings provider — the embedding model used for vector operations.
Open it from Team settings → Models.
This page does not change which models your applications can call through a TrustGate Gateway. Those upstreams are configured per-route on the Gateway itself — see Routes.
Model provider
Configures which LLM NeuralTrust uses for its own internal decisions — judge calls, classification, and any detector that asks a model a question.
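A minimal sketch of the LLM-as-judge pattern described above. The prompt wording, function names, and verdict format here are illustrative only, not NeuralTrust's actual implementation; an internal classification call amounts to building a constrained prompt for the configured provider and reading a verdict back:

```python
# Illustrative judge-call sketch. `build_judge_prompt` and `parse_verdict`
# are hypothetical names; the configured Model provider receives the
# prompt and returns the raw completion.

def build_judge_prompt(policy: str, text: str) -> str:
    """Assemble a classification prompt that asks the model for a verdict."""
    return (
        f"You are a safety judge. Policy: {policy}\n"
        f"Input: {text}\n"
        "Answer with exactly one word: ALLOW or BLOCK."
    )

def parse_verdict(raw: str) -> bool:
    """Return True when the judge blocked the input."""
    return raw.strip().upper().startswith("BLOCK")
```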
Available providers
| Provider | Setup | Notes |
|---|---|---|
| NeuralTrust (default) | Pre-configured, no additional settings. | NeuralTrust-managed models hosted in NeuralTrust infrastructure. Recommended for SaaS deployments. |
Selecting NeuralTrust is the right choice for virtually every team. Additional providers (for example bring-your-own OpenAI / Azure OpenAI / Bedrock keys) are made available on hybrid and self-hosted plans where the team wants internal model calls to run against their own account and billing.
To change the provider
- Go to Team settings → Models.
- Open the Provider dropdown.
- Pick a provider. Fill any credentials the provider exposes.
- Click Save.
Changes take effect on the next internal model call. In-flight evaluations finish on the previous provider.
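The in-flight behavior corresponds to a snapshot-at-start pattern: each evaluation resolves the provider once when it begins, so a saved change only affects calls that start afterwards. A minimal sketch of that pattern, with all names hypothetical:

```python
# Snapshot-at-start sketch. `ModelConfig` and `run_evaluation` are
# hypothetical names; the point is that the provider is read once,
# when the call begins.

class ModelConfig:
    """Mutable config holding the team's current provider choice."""
    def __init__(self, provider: str):
        self.provider = provider

def run_evaluation(config: ModelConfig, steps: int):
    """Resolve the provider at call start; later changes don't affect this run."""
    provider = config.provider  # snapshot taken when the call begins
    for _ in range(steps):
        yield provider

config = ModelConfig("neuraltrust")
run = run_evaluation(config, 3)
first = next(run)                          # evaluation starts on the old provider
config.provider = "byo-openai"             # operator clicks Save mid-run
rest = list(run)                           # in-flight run finishes on the old provider
new_run = list(run_evaluation(config, 1))  # the next call uses the new provider
```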
Embeddings provider
Configures the embedding model NeuralTrust uses for vector operations — semantic similarity in detectors, red team test clustering, memory retrieval, and anywhere the platform needs to compare text by meaning rather than exact match.
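"Comparing text by meaning" typically reduces to cosine similarity between embedding vectors. A self-contained sketch with toy three-dimensional vectors (real providers return hundreds of dimensions, and the values below are made up for illustration):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for three inputs.
v_refund = [0.9, 0.1, 0.0]   # "I want my money back"
v_return = [0.8, 0.2, 0.1]   # "please refund my order"
v_weather = [0.0, 0.1, 0.9]  # "is it raining today?"

high = cosine_similarity(v_refund, v_return)   # similar meaning, near 1.0
low = cosine_similarity(v_refund, v_weather)   # unrelated, near 0.0
```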
Available providers
| Provider | Setup | Notes |
|---|---|---|
| NeuralTrust (default) | Pre-configured, no additional settings. | NeuralTrust-managed embeddings. Recommended for SaaS. |
To change the embeddings provider
- Go to Team settings → Models.
- Open the Embeddings Provider dropdown.
- Pick a provider and fill any credentials.
- Click Save.
Re-embedding existing content is not automatic. Vectors already stored against the previous provider are still queryable but are not mixed with new vectors — comparisons across providers are not meaningful. If you change providers, plan a back-fill for the corpora that need semantic continuity (typically memory stores and red team test indices).
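A back-fill can be as simple as re-embedding every stored text with the new provider and tagging each vector with the provider that produced it, so queries never mix embedding spaces. A sketch under stated assumptions: `embed` stands in for whichever embedding API you call, and the storage layout is hypothetical.

```python
# Back-fill sketch after switching embeddings provider.
# `embed` and the index layout are hypothetical stand-ins.

def backfill(texts: dict[str, str], embed, provider: str) -> dict[str, dict]:
    """Re-embed every stored text with the new provider, tagging each
    vector so old- and new-provider vectors are never compared."""
    return {
        doc_id: {"provider": provider, "vector": embed(text)}
        for doc_id, text in texts.items()
    }

def queryable(index: dict[str, dict], provider: str) -> list[str]:
    """Only vectors from the same provider are comparable in a query."""
    return [doc_id for doc_id, rec in index.items() if rec["provider"] == provider]
```

Running the back-fill per corpus (memory stores first, then red team test indices) keeps semantic continuity where it matters.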
What changes when you switch provider
| Area | Effect of changing Model provider | Effect of changing Embeddings provider |
|---|---|---|
| Detector decisions (e.g., LLM judge, semantic prompt injection) | Yes — new calls go to the new provider. | No. |
| Runtime throughput and latency | Depends on the new provider’s region and quota. | Minor — embeddings are typically smaller and cached. |
| Cost accounting | Calls show up against the new provider’s account, not NeuralTrust’s. | Same — embedding calls are billed to the new embeddings provider’s account. |
| Data locality | Request payloads for internal decisions hit the new provider. | Text submitted for embedding hits the new provider. |
On hybrid and self-hosted deployments, configuring your own provider is usually the whole point — it keeps internal model traffic inside your account and region. On SaaS, the default (NeuralTrust) is the normal choice.
Related
- Deployment modes — when to consider bring-your-own-model vs NeuralTrust-managed.
- Security features — which detectors consume the model provider and which are fully deterministic.
- Routes — not the same thing; routes configure which LLMs your applications call, while this panel configures which LLM NeuralTrust itself calls.