| Category | Description |
|---|---|
| Hate | Content expressing hatred or discrimination |
| Violence | Content depicting or promoting violence |
| SelfHarm | Content related to self-harm behaviors |
| Sexual | Sexually explicit or inappropriate content |
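
These are the categories the integration asks Azure Content Safety to analyze. For orientation, the sketch below shows how they might appear in a `text:analyze` request body; the field names follow the public Azure AI Content Safety REST API, while the example text and the use of all four categories are illustrative assumptions:

```jsonc
// Illustrative text:analyze request body. Field names follow the public
// Azure AI Content Safety REST API; the values are placeholders.
{
  "text": "content extracted from the incoming request",
  "categories": ["Hate", "Violence", "SelfHarm", "Sexual"],
  "outputType": "FourSeverityLevels" // or "EightSeverityLevels", see the table below
}
```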
| Severity Level Description | FourSeverityLevels | EightSeverityLevels |
|---|---|---|
| Safe / Very Low Risk Content | 0 | 0, 1 |
| Low Risk Content | 2 | 2, 3 |
| Medium Risk Content | 4 | 4, 5 |
| High Risk Content | 6 | 6, 7 |
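
In the service's response, each analyzed category comes back with a severity value on the scale selected above. The sketch below shows what that might look like with `"FourSeverityLevels"`; the response shape follows the public Azure AI Content Safety REST API, and the severity values are made-up examples:

```jsonc
// Illustrative text:analyze response with outputType = "FourSeverityLevels":
// severities are limited to 0, 2, 4, and 6. With "EightSeverityLevels" the
// same field can take any value from 0 to 7.
{
  "blocklistsMatch": [],
  "categoriesAnalysis": [
    { "category": "Hate", "severity": 0 },
    { "category": "Violence", "severity": 2 },
    { "category": "SelfHarm", "severity": 0 },
    { "category": "Sexual", "severity": 4 }
  ]
}
```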
- `API_KEY`: Your Azure Content Safety API key
- `ENDPOINTS`: Configuration for text and image analysis endpoints
  - `TEXT`: Azure endpoint for text content analysis
  - `IMAGE`: Azure endpoint for image content analysis
- `OUTPUT_TYPE`: Severity level format (`"FourSeverityLevels"` or `"EightSeverityLevels"`)
- `CONTENT_TYPES`: Array of content type configurations
  - `TYPE`: Content type (`"text"` or `"image"`)
  - `PATH`: JSON path to extract content from the request
- `CATEGORY_SEVERITY`: Threshold configuration for each category
  - Values for `FourSeverityLevels`: 0, 2, 4, or 6
  - Values for `EightSeverityLevels`: 0 to 7
- `ACTIONS`: Response configuration for detected violations
  - `TYPE`: Action type (e.g., `"block"`)
  - `MESSAGE`: Custom error message
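
Putting the fields together, a minimal configuration might look like the sketch below. The key names come from the reference above; the exact nesting, the JSON paths, and the endpoint URLs are assumptions made for illustration, so adjust them to your deployment:

```jsonc
{
  // Azure Content Safety credentials and analysis endpoints.
  "API_KEY": "<your-azure-content-safety-key>",
  "ENDPOINTS": {
    "TEXT": "https://<resource-name>.cognitiveservices.azure.com/contentsafety/text:analyze",
    "IMAGE": "https://<resource-name>.cognitiveservices.azure.com/contentsafety/image:analyze"
  },

  // Use the four-level scale (0, 2, 4, 6) for severity scores.
  "OUTPUT_TYPE": "FourSeverityLevels",

  // Where to find the content to analyze in the incoming request.
  // The JSON paths below are hypothetical examples.
  "CONTENT_TYPES": [
    { "TYPE": "text", "PATH": "$.messages[-1].content" },
    { "TYPE": "image", "PATH": "$.image.data" }
  ],

  // Per-category thresholds on the FourSeverityLevels scale (0, 2, 4, or 6).
  "CATEGORY_SEVERITY": {
    "Hate": 2,
    "Violence": 2,
    "SelfHarm": 2,
    "Sexual": 4
  },

  // Action taken when a violation is detected.
  "ACTIONS": {
    "TYPE": "block",
    "MESSAGE": "Request blocked: the content violates the configured safety policy."
  }
}
```

With this configuration, for example, a `Violence` severity of 2 or higher would trigger the `block` action and return the configured `MESSAGE`, assuming the threshold is treated as the minimum severity that counts as a violation (the reference above calls it a threshold but does not spell out the comparison).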