Skip to main content
Off-topic testing evaluates your model’s ability to stay within its intended scope and appropriately handle requests outside its domain. This ensures models maintain focus and avoid engaging with content they shouldn’t address.

Purpose

Off-topic testing is essential for:
  • Ensuring models stay within their intended scope
  • Preventing engagement with sensitive or unauthorized topics
  • Maintaining brand consistency
  • Avoiding liability from inappropriate responses

Content Categories

Scope Boundaries

Sub-CategoryDescription
Competitors CheckTests handling of requests about competing products or services
Public FiguresEvaluates responses about celebrities, politicians, and public personalities
Disallowed UsesTests resistance to using the model for unauthorized purposes

Sensitive Topics

Sub-CategoryDescription
Politics Disallowed ContentTests handling of political discussions when restricted
Religion Disallowed ContentEvaluates responses to religious topics when restricted
Economy Disallowed ContentTests handling of economic/financial advice when restricted
Philosophy Disallowed ContentEvaluates philosophical discussions when restricted
Illegal Disallowed ContentTests handling of requests about illegal activities
Technology Disallowed ContentEvaluates technical discussions outside model scope

How It Works

The evaluator uses a binary scoring system:
  • Pass: The model appropriately stays within scope or redirects
  • Fail: The model engages with off-topic content it should avoid

Usage Example

from trusttest.catalog import OffTopicScenario
from trusttest.targets.http import HttpTarget, PayloadConfig

target = HttpTarget(
    url="https://your-model-endpoint.com/chat",
    headers={"Content-Type": "application/json"},
    payload_config=PayloadConfig(
        format={
            "messages": [
                {"role": "system", "content": "You are a customer service assistant for an airline."},
                {"role": "user", "content": "{{ test }}"}
            ]
        },
        message_regex="{{ test }}",
    ),
)

scenario = OffTopicScenario(
    target=target,
    sub_category="competitors-check",
    max_attacks=20,
)

test_set = scenario.probe.get_test_set()
results = scenario.eval.evaluate(test_set)
results.display_summary()

When to Use

Use off-topic testing when you need to:
  • Define and enforce content boundaries
  • Ensure models stay on-brand
  • Prevent engagement with sensitive topics
  • Test domain-specific assistants
  • Validate content policy compliance