HTTP Model
In this guide we will see how to configure and use the HttpModel class to interact with any HTTP-based LLM API endpoint.
Basic Configuration
The HttpModel class requires a few essential parameters to work:
Key Parameters
- `url`: The endpoint URL for the LLM API
- `headers`: HTTP headers to include in requests
- `payload_config`: Configuration for request payload formatting
- `concatenate_field`: Path used to extract the response content from the JSON response
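The four parameters above fit together as sketched below. Since the exact HttpModel constructor signature isn't shown here, this stand-in class is an assumption; it only illustrates how the pieces relate, including how a dot-separated `concatenate_field` path (e.g. `choices.0.message.content`) is walked through a JSON response:

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class HttpModelSketch:
    # Stand-in for HttpModel; field names mirror the list above,
    # but the real class's signature may differ.
    url: str
    headers: dict
    payload_config: dict
    concatenate_field: str  # dot-separated path into the JSON response

    def extract(self, response_json: Any) -> Any:
        """Walk the concatenate_field path; numeric parts index into lists."""
        node = response_json
        for part in self.concatenate_field.split("."):
            node = node[int(part)] if part.isdigit() else node[part]
        return node


model = HttpModelSketch(
    url="https://api.example.com/v1/chat/completions",  # placeholder endpoint
    headers={"Content-Type": "application/json"},
    payload_config={"model": "example-model"},
    concatenate_field="choices.0.message.content",
)

fake_response = {"choices": [{"message": {"content": "Hello!"}}]}
print(model.extract(fake_response))  # -> Hello!
```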
Validate Configuration
To verify that your HttpModel is properly configured and working, you can test it with a simple message:
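A minimal smoke test can look like the following. The real HttpModel's send method isn't named here, so the `send` callable is a stand-in; a lambda plays the role of a configured model:

```python
def validate_model(send) -> str:
    """Send a trivial message; print the reply, or raise on any failure."""
    try:
        reply = send("Hello World")
    except Exception as exc:
        # Surface misconfiguration (bad URL, headers, payload) loudly.
        raise RuntimeError(f"HttpModel configuration problem: {exc}") from exc
    print(reply)
    return reply


# Stand-in for a configured model's send method (assumption: the real
# model exposes some callable that takes a message string).
validate_model(lambda message: f"echo: {message}")  # -> echo: Hello World
```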
This will:
- Send a simple “Hello World” message to your configured endpoint
- Print the response if successful
- Raise an exception if there are any configuration issues
Advanced Configuration
Token Authentication
For APIs that require token-based authentication on each request, you can use the TokenConfig:
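Conceptually, a token config injects an auth header into every outgoing request. The field names below (`token`, `header_name`, `prefix`) are assumptions used to illustrate the idea, not the real TokenConfig API:

```python
from dataclasses import dataclass


@dataclass
class TokenConfigSketch:
    # Hypothetical stand-in for TokenConfig; field names are assumptions.
    token: str
    header_name: str = "Authorization"
    prefix: str = "Bearer "

    def apply(self, headers: dict) -> dict:
        """Return a copy of the headers with the auth token added."""
        return {**headers, self.header_name: self.prefix + self.token}


auth = TokenConfigSketch(token="sk-example")  # placeholder token
print(auth.apply({"Content-Type": "application/json"}))
```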
Error Handling
With error handling enabled, the model returns the error message instead of raising an exception. This is useful for detecting firewall responses.
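The behavior can be sketched as a wrapper that converts exceptions into ordinary string responses, so a firewall's block page can be inspected like any other reply (the actual HttpModel option name is not shown here and is therefore an assumption):

```python
def send_with_soft_errors(send, message: str) -> str:
    """Return the error text instead of raising, so blocked or rejected
    requests surface as inspectable responses."""
    try:
        return send(message)
    except Exception as exc:
        return f"ERROR: {exc}"


def blocked(message: str) -> str:
    # Simulates a WAF/firewall rejecting the request.
    raise ValueError("403 Forbidden: blocked by firewall")


print(send_with_soft_errors(blocked, "hi"))  # -> ERROR: 403 Forbidden: blocked by firewall
```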
Retry Configuration
Add retry logic for failed requests:
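Retry behavior typically means re-sending on transient failures with exponential backoff. The helper and parameter names below are illustrative assumptions, not the real HttpModel retry options:

```python
import time


def send_with_retries(send, message, max_retries=3, backoff_seconds=1.0):
    """Retry failed requests, doubling the wait between attempts."""
    for attempt in range(max_retries):
        try:
            return send(message)
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            time.sleep(backoff_seconds * 2 ** attempt)


# Usage: a send function that fails twice, then succeeds on the third try.
calls = {"n": 0}

def flaky(message):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"


print(send_with_retries(flaky, "hi", backoff_seconds=0.0))  # -> ok
```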
Using HttpModel in an Evaluation Scenario
Here’s how to use the HttpModel in an evaluation scenario:
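The evaluation loop can be sketched as: send each test prompt through the model and record the responses for scoring. The scoring criterion below (non-empty reply) and the lambda standing in for a configured model are both illustrative assumptions:

```python
def evaluate(send, prompts):
    """Run each prompt through the model and collect per-prompt results."""
    results = []
    for prompt in prompts:
        reply = send(prompt)
        # Placeholder criterion: any non-empty response counts as answered.
        results.append({"prompt": prompt, "response": reply, "ok": bool(reply.strip())})
    return results


prompts = ["What is 2+2?", "Name a primary color."]

# Stand-in for a configured model's send method.
report = evaluate(lambda p: f"answer to: {p}", prompts)
print(sum(r["ok"] for r in report), "of", len(report), "prompts answered")  # -> 2 of 2 prompts answered
```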