TrustTest is designed to be completely agnostic to the LLM you want to test. This flexibility allows you to evaluate any LLM, regardless of its underlying implementation or hosting environment.

Currently, only LLMs that expose a REST API endpoint can be tested through the web UI.

Key Features

  • LLM Agnostic: Works with any LLM that exposes a REST API endpoint
  • Web UI Integration: Run and manage all your tests through an intuitive web interface

Deployment Recommendations

TrustTest is optimized for evaluating LLMs that run outside its own process. For best performance and reliability, we recommend:

  • Running TrustTest in a separate Python process from your LLM, or on a different server entirely
  • Using the HTTP model interface to connect to your LLM’s API endpoint
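To make the recommendations above concrete, the sketch below shows the kind of JSON-over-HTTP round trip involved when a test harness talks to an LLM through a REST endpoint. It is illustrative only and does not use the TrustTest API: the `/generate` path, the `prompt` and `completion` field names, and the stand-in server are all assumptions, chosen to mimic a typical LLM endpoint using only the Python standard library.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen


class FakeLLMHandler(BaseHTTPRequestHandler):
    """Stand-in LLM endpoint. A real deployment would serve an actual
    model here; the request/response field names are illustrative."""

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"completion": f"echo: {payload['prompt']}"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet


def query_model(url: str, prompt: str) -> str:
    """POST a prompt to an LLM REST endpoint and return its completion."""
    req = Request(
        url,
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())["completion"]


if __name__ == "__main__":
    # Run the stand-in endpoint in a background thread, as if it were a
    # model hosted in a separate process or on another server.
    server = HTTPServer(("127.0.0.1", 0), FakeLLMHandler)  # port 0 = any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_port}/generate"
    print(query_model(url, "hello"))  # prints "echo: hello"
    server.shutdown()
```

Because the client only depends on the endpoint's URL and JSON contract, the model behind it can be swapped or redeployed without touching the test side, which is the property the HTTP model interface relies on.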

The following sections will guide you through the process of connecting your LLM to TrustTest and configuring it for optimal testing.