Input leakage testing evaluates whether your model inadvertently reveals information from previous conversations or user inputs. This is critical for multi-user systems where conversation isolation is required.

Purpose

Input leakage testing helps ensure:
  • User privacy between conversations
  • Proper session isolation
  • No cross-contamination of user data
  • Compliance with data handling requirements

Attack Categories

Sub-Category | Description
Input Persistence Check | Tests if inputs persist across conversation boundaries
Sensitive Information Echo Test | Evaluates if sensitive data from previous turns is echoed
Contextual Memory Leak Check | Assesses if context leaks between sessions
Inadvertent Contextual Reference | Tests for unintended references to previous inputs
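
Each check is selected through the sub_category argument shown in the usage example below. Only the identifier for Input Persistence Check appears in that example; the other slugs listed here are assumptions inferred from the category names, so verify them against your installed version of trusttest.

# Sub-category identifiers for InputLeakageScenario.
# "input-persistence-check" is taken from the usage example below;
# the remaining slugs are assumed from the category names and may differ.
INPUT_LEAKAGE_SUB_CATEGORIES = [
    "input-persistence-check",
    "sensitive-information-echo-test",
    "contextual-memory-leak-check",
    "inadvertent-contextual-reference",
]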

How It Works

The evaluator checks for information leakage:
  • Pass: The model maintains proper input isolation
  • Fail: The model reveals information from previous inputs inappropriately
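
Conceptually, a failure means that material supplied in one session resurfaces in a response produced in a different, supposedly isolated session. The sketch below is not the trusttest evaluator; it is a minimal illustration of that pass/fail criterion using a naive substring match.

def leaks_previous_input(previous_input: str, new_session_response: str) -> bool:
    # Naive criterion: any sufficiently long fragment of the earlier input
    # appearing verbatim in the new session's response counts as leakage.
    fragments = [
        previous_input[i:i + 20]
        for i in range(0, max(len(previous_input) - 19, 1), 10)
    ]
    return any(fragment in new_session_response for fragment in fragments)

# A secret planted in one session should never surface in another.
secret = "My account number is 4520-7781-9932"
fresh_response = "I don't have access to any previous conversations."
print("Fail" if leaks_previous_input(secret, fresh_response) else "Pass")  # Pass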

Usage Example

from trusttest.catalog import InputLeakageScenario
from trusttest.targets.http import HttpTarget, PayloadConfig

# Describe the HTTP endpoint under test. The {{ test }} placeholder marks
# where each generated attack prompt is substituted into the payload.
target = HttpTarget(
    url="https://your-model-endpoint.com/chat",
    headers={"Content-Type": "application/json"},
    payload_config=PayloadConfig(
        format={
            "messages": [
                {"role": "user", "content": "{{ test }}"}
            ]
        },
        message_regex="{{ test }}",
    ),
)

# Probe the target for the input-persistence-check sub-category,
# generating up to 15 attacks.
scenario = InputLeakageScenario(
    target=target,
    sub_category="input-persistence-check",
    max_attacks=15,
)

# Build the attack test set, evaluate the target's responses, and print a summary.
test_set = scenario.probe.get_test_set()
results = scenario.eval.evaluate(test_set)
results.display_summary()
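
To cover all four checks in one run, the same target can be reused for each sub-category. This sketch assumes the slug list defined under Attack Categories (only the first identifier is confirmed by the example above) and the same probe/eval interface as the usage example.

for sub_category in INPUT_LEAKAGE_SUB_CATEGORIES:
    scenario = InputLeakageScenario(
        target=target,
        sub_category=sub_category,
        max_attacks=15,
    )
    results = scenario.eval.evaluate(scenario.probe.get_test_set())
    results.display_summary()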

When to Use

Use input leakage testing when you need to:
  • Validate session isolation
  • Ensure user privacy in multi-tenant systems
  • Test conversation boundary handling
  • Audit data handling practices
  • Meet privacy compliance requirements