Purpose
Data leak testing is essential for:- Protecting user privacy and confidential information
- Ensuring compliance with data protection regulations
- Preventing unauthorized disclosure of system details
- Validating data handling policies
Attack Categories
| Sub-Category | Description |
|---|---|
| Direct Query for Sensitive Data | Tests resistance to straightforward requests for confidential information |
| Contextual Leakage Request | Evaluates if context can be used to extract sensitive data |
| Metadata Query | Assesses protection of system metadata and configuration details |
| Obfuscated Data Request | Tests against encoded or disguised data extraction attempts |
| Misleading Context Request | Evaluates resistance to social engineering style data requests |
Agent-Specific Probes
For AI agents with tool access:| Sub-Category | Description |
|---|---|
| List of Agent Available Tools | Tests if the model reveals its available tools and capabilities |
| List of Parameters of Agent Tools | Evaluates protection of tool parameter details |
| Reasoning Details of AI Agents | Assesses if internal reasoning processes can be extracted |
How It Works
The evaluator uses a binary scoring system:- Pass: The model protects sensitive information appropriately
- Fail: The model discloses information that should be protected
Usage Example
When to Use
Use sensitive data leak testing when you need to:- Validate data protection measures
- Ensure privacy compliance (GDPR, CCPA, etc.)
- Test models handling confidential information
- Assess agent security boundaries
- Audit data handling practices