LLMs & Embeddings
LLM Clients
TrustTest provides a flexible abstraction layer for working with different LLM providers through its `LLMClient` interface. This architecture allows seamless integration with various LLM services while maintaining a consistent interface for generating questions, evaluations, and other LLM-powered features.
Architecture
The core of this system is the `LLMClient` abstract base class, which defines two main methods:
- `complete(instructions, system_prompt)`: for single-prompt completions
- `complete_conversation(messages)`: for multi-turn conversations
Each implementation handles provider-specific details while exposing a unified interface.
Supported Providers
TrustTest currently supports the following LLM providers:
- OpenAI
- Anthropic
- Ollama
- vLLM
- Azure OpenAI
- DeepSeek
Usage Example
The abstraction allows for easy switching between providers while maintaining consistent behavior across the application.
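TrustTest's concrete provider client names aren't shown here, so the following self-contained sketch uses hypothetical stub classes to illustrate the pattern: application code written against `LLMClient` works unchanged with any provider.

```python
from abc import ABC, abstractmethod


class LLMClient(ABC):
    # Stand-in for TrustTest's base class, reduced to one method.
    @abstractmethod
    def complete(self, instructions, system_prompt=None): ...


class StubOpenAIClient(LLMClient):
    # Hypothetical stand-in for a real OpenAI-backed client.
    def complete(self, instructions, system_prompt=None):
        return f"[openai] {instructions}"


class StubAnthropicClient(LLMClient):
    # Hypothetical stand-in for a real Anthropic-backed client.
    def complete(self, instructions, system_prompt=None):
        return f"[anthropic] {instructions}"


def generate_question(client: LLMClient) -> str:
    # Application code depends only on the LLMClient interface,
    # so the provider can be swapped without changing this function.
    return client.complete("Write one question about unit testing.")


for client in (StubOpenAIClient(), StubAnthropicClient()):
    print(generate_question(client))
```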
Embeddings Clients
TrustTest provides a flexible abstraction layer for working with different embedding providers through its `EmbeddingsModel` interface. This architecture allows seamless integration with various embedding services while maintaining a consistent interface for generating vector representations of text.
Architecture
The core of this system is the `EmbeddingsModel` abstract base class, which defines one main method:
- `embed(texts)`: converts a sequence of texts into numerical vector representations
Each implementation handles provider-specific details while exposing a unified interface.
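A minimal sketch of this base class (the return type and annotations are assumptions; only the `embed(texts)` method name comes from the text above):

```python
from abc import ABC, abstractmethod
from typing import Sequence


class EmbeddingsModel(ABC):
    """Sketch of the abstract embeddings interface."""

    @abstractmethod
    def embed(self, texts: Sequence[str]) -> list[list[float]]:
        """Convert a sequence of texts into numerical vectors."""
```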
Supported Providers
TrustTest currently supports the following embedding providers:
- OpenAI
- Ollama
Usage Example
The abstraction allows for easy switching between providers while maintaining consistent behavior across the application.
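As a self-contained illustration, the sketch below defines a toy stand-in model and a similarity helper written purely against the `embed` interface; all class and function names here are hypothetical, not TrustTest's API:

```python
import math
from abc import ABC, abstractmethod


class EmbeddingsModel(ABC):
    # Stand-in for TrustTest's base class.
    @abstractmethod
    def embed(self, texts): ...


class CharCountEmbeddings(EmbeddingsModel):
    # Toy model: embeds a text as counts of each vowel.
    def embed(self, texts):
        return [[float(t.count(c)) for c in "aeiou"] for t in texts]


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


# Code written against the interface works with any backing model.
model = CharCountEmbeddings()
vec_a, vec_b = model.embed(["banana", "bandana"])
print(cosine_similarity(vec_a, vec_b))
```

Swapping `CharCountEmbeddings` for any other `EmbeddingsModel` implementation leaves `cosine_similarity` and the calling code untouched.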
Global Configuration
TrustTest provides a global configuration system to manage LLM and embeddings settings across your application. The configuration can be set using the `set_config()` function, which accepts a dictionary with settings for different components.
The configuration supports the following components:
- `evaluator`: LLM settings for evaluation tasks
- `question_generator`: LLM settings for generating test questions
- `embeddings`: settings for the embeddings model
- `topic_summarizer`: LLM settings for topic summarization
Each component accepts:
- `provider`: one of "openai", "azure", "google", "anthropic", "ollama" (for LLMs) or "openai", "azure", "google", "ollama" (for embeddings)
- `model`: the specific model name for the chosen provider
- `temperature`: (LLMs only) controls randomness in model outputs (0.0 to 1.0)
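A sketch of what such a configuration might look like. The component and setting keys come from the lists above; the import path and the specific model names are assumptions chosen for illustration:

```python
from trusttest import set_config  # import path is an assumption

set_config({
    "evaluator": {
        "provider": "openai",
        "model": "gpt-4o",  # model names below are examples only
        "temperature": 0.0,
    },
    "question_generator": {
        "provider": "anthropic",
        "model": "claude-3-5-sonnet",
        "temperature": 0.7,
    },
    "embeddings": {
        "provider": "openai",
        "model": "text-embedding-3-small",
    },
    "topic_summarizer": {
        "provider": "ollama",
        "model": "llama3",
        "temperature": 0.2,
    },
})
```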
Implementing Custom Clients
Both LLM and embeddings clients can be extended with custom providers. The base classes define the clear interface you need to implement.
Custom LLM Client
To create a custom LLM client, inherit from `LLMClient` and implement the required methods:
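A sketch under the assumption that `LLMClient` exposes exactly the two methods described earlier. The base class is redefined inline so the snippet is self-contained; in real code you would import it from TrustTest instead:

```python
from abc import ABC, abstractmethod


# Stand-in for TrustTest's LLMClient base class.
class LLMClient(ABC):
    @abstractmethod
    def complete(self, instructions, system_prompt=None): ...

    @abstractmethod
    def complete_conversation(self, messages): ...


class UppercaseLLMClient(LLMClient):
    """Toy custom client that 'completes' by shouting the prompt back."""

    def complete(self, instructions, system_prompt=None):
        prefix = f"{system_prompt}: " if system_prompt else ""
        return (prefix + instructions).upper()

    def complete_conversation(self, messages):
        # Reply based on the content of the last message.
        return messages[-1]["content"].upper()
```

A real custom client would wrap an actual model or HTTP endpoint in these two methods; the point is only that both must be implemented.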
Custom Embeddings Client
To create a custom embeddings client, inherit from `EmbeddingsModel` and implement the required method:
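Likewise, a self-contained sketch; the base class here is a stand-in for the real import, and the toy model is purely illustrative:

```python
from abc import ABC, abstractmethod
from typing import Sequence


# Stand-in for TrustTest's EmbeddingsModel base class.
class EmbeddingsModel(ABC):
    @abstractmethod
    def embed(self, texts: Sequence[str]) -> list[list[float]]: ...


class VowelRatioEmbeddings(EmbeddingsModel):
    """Toy model: one dimension per vowel, normalized by text length."""

    def embed(self, texts):
        return [
            [text.lower().count(v) / max(len(text), 1) for v in "aeiou"]
            for text in texts
        ]
```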
Once implemented, you are ready to use them:
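For example, custom clients are called through the same unified interfaces as the built-in ones. The toy classes below are hypothetical stand-ins (redefined so the snippet runs on its own); how instances are wired into TrustTest components depends on the library's API, which isn't shown here:

```python
# Toy implementations of the two interfaces, for illustration only.
class EchoLLMClient:
    def complete(self, instructions, system_prompt=None):
        return f"echo: {instructions}"

    def complete_conversation(self, messages):
        return f"echo: {messages[-1]['content']}"


class LengthEmbeddings:
    def embed(self, texts):
        return [[float(len(t))] for t in texts]


llm = EchoLLMClient()
embeddings = LengthEmbeddings()

# Any code written against the LLMClient / EmbeddingsModel interfaces
# can now be pointed at the custom implementations.
print(llm.complete("Generate a question about caching."))
print(embeddings.embed(["short", "a bit longer"]))
```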