OpenAI Toxicity Detection

Technical Overview

The OpenAI Toxicity Detection plugin implements real-time content moderation using OpenAI's moderation API. It processes both text and image content through a multi-stage analysis pipeline.

Core Components

  1. Content Extractor

    • Processes multiple message types
    • Handles text and image URL content
    • Supports structured message formats
    • Maintains content context
  2. Moderation Engine

    • Real-time API integration
    • Batch processing capability
    • Configurable thresholds
    • Category-specific scoring
  3. Response Analyzer

    • Score evaluation
    • Threshold comparison
    • Category aggregation
    • Detailed violation reporting
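
The three components might be wired together along these lines; a minimal Go sketch in which every type and method name is an illustrative assumption, not the plugin's actual API:

import "context"

// ContentExtractor pulls moderatable text and image URLs out of a request body.
type ContentExtractor interface {
    Extract(body []byte) (texts []string, imageURLs []string, err error)
}

// ModerationEngine submits extracted content to the moderation API.
type ModerationEngine interface {
    Moderate(ctx context.Context, texts, imageURLs []string) (*ModerationResult, error)
}

// ResponseAnalyzer compares returned scores against configured thresholds.
type ResponseAnalyzer interface {
    Analyze(result *ModerationResult, thresholds map[string]float64) (blocked bool, violations []string)
}

// ModerationResult holds only the fields the analyzer needs.
type ModerationResult struct {
    Flagged        bool
    CategoryScores map[string]float64
}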

Implementation Details

Message Processing

The plugin processes messages in the following format:

{
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "message content"
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://example.com/image.jpg"
          }
        }
      ]
    }
  ]
}
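
A sketch of how the extraction step could walk this structure; the struct shapes mirror the JSON above, while the function and type names are illustrative:

import (
    "encoding/json"
    "fmt"
)

// ContentPart is one element of a message's content array.
type ContentPart struct {
    Type     string `json:"type"`
    Text     string `json:"text,omitempty"`
    ImageURL *struct {
        URL string `json:"url"`
    } `json:"image_url,omitempty"`
}

type Message struct {
    Role    string        `json:"role"`
    Content []ContentPart `json:"content"`
}

type RequestBody struct {
    Messages []Message `json:"messages"`
}

// extractContent collects all text parts and image URLs from a request body.
func extractContent(body []byte) (texts, imageURLs []string, err error) {
    var req RequestBody
    if err := json.Unmarshal(body, &req); err != nil {
        return nil, nil, fmt.Errorf("invalid request body: %w", err)
    }
    for _, msg := range req.Messages {
        for _, part := range msg.Content {
            switch part.Type {
            case "text":
                texts = append(texts, part.Text)
            case "image_url":
                if part.ImageURL != nil {
                    imageURLs = append(imageURLs, part.ImageURL.URL)
                }
            }
        }
    }
    return texts, imageURLs, nil
}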

Content Types

  1. Text Content

    • Direct text analysis
    • Multi-message support
    • UTF-8 encoding
    • Length validation
  2. Image Content

    • URL-based processing
    • Image format validation
    • Size restrictions
    • Accessibility checks

Moderation Categories

The plugin supports comprehensive content analysis across multiple categories:

| Category | Description | Implementation Details |
| --- | --- | --- |
| Sexual | Sexual content detection | Base category scoring; sub-category detection; context analysis |
| Violence | Violence and threats | Direct violence detection; graphic content analysis; threat assessment |
| Hate | Hate speech and bias | Bias detection; discriminatory content; hate speech patterns |
| Self-harm | Self-harm content | Intent detection; instruction filtering; risk assessment |
| Harassment | Harassment detection | Personal attacks; threatening behavior; bullying patterns |
| Illicit | Illegal activity | Criminal content; prohibited activities; legal compliance |

API Integration

The plugin integrates with OpenAI's moderation API:

  1. Request Formation

     {
       "input": [
         {
           "type": "text",
           "text": "content to moderate"
         }
       ],
       "model": "omni-moderation-latest"
     }

  2. Response Processing

     {
       "id": "modr-123",
       "model": "omni-moderation-latest",
       "results": [
         {
           "flagged": true,
           "categories": {
             "sexual": false,
             "violence": true
           },
           "category_scores": {
             "sexual": 0.01,
             "violence": 0.92
           }
         }
       ]
     }
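
The endpoint and payload shape follow OpenAI's documented moderation API; the helper below is an illustrative sketch of forming the request and decoding only the fields the plugin needs:

import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "net/http"
)

// ModerationResponse mirrors only the response fields the plugin reads.
type ModerationResponse struct {
    Results []struct {
        Flagged        bool               `json:"flagged"`
        Categories     map[string]bool    `json:"categories"`
        CategoryScores map[string]float64 `json:"category_scores"`
    } `json:"results"`
}

// moderate sends the extracted text to OpenAI's moderation endpoint.
func moderate(ctx context.Context, apiKey string, texts []string) (*ModerationResponse, error) {
    type textInput struct {
        Type string `json:"type"`
        Text string `json:"text"`
    }
    inputs := make([]textInput, 0, len(texts))
    for _, t := range texts {
        inputs = append(inputs, textInput{Type: "text", Text: t})
    }
    payload, err := json.Marshal(map[string]any{
        "input": inputs,
        "model": "omni-moderation-latest",
    })
    if err != nil {
        return nil, err
    }
    req, err := http.NewRequestWithContext(ctx, http.MethodPost,
        "https://api.openai.com/v1/moderations", bytes.NewReader(payload))
    if err != nil {
        return nil, err
    }
    req.Header.Set("Authorization", "Bearer "+apiKey)
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("moderation API returned %s", resp.Status)
    }
    var out ModerationResponse
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        return nil, err
    }
    return &out, nil
}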

Error Handling

The plugin implements comprehensive error handling:

  1. Configuration Validation

    • API key verification
    • Action type validation
    • Threshold validation
    • Category validation
  2. Runtime Error Handling

    • API connection errors
    • Response parsing errors
    • Timeout handling
    • Rate limit management
  3. Content Processing Errors

    • Invalid content format
    • Missing required fields
    • Size limit violations
    • Encoding issues
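
As an illustration of the runtime handling above, a bounded retry wrapper around the moderate helper sketched earlier; the attempt count and backoff policy are assumptions, not the plugin's actual behavior:

import (
    "context"
    "fmt"
    "time"
)

// moderateWithRetry retries transient failures (connection errors,
// timeouts, rate limits) with a simple linear backoff.
func moderateWithRetry(ctx context.Context, apiKey string, texts []string) (*ModerationResponse, error) {
    const maxAttempts = 3
    var lastErr error
    for attempt := 1; attempt <= maxAttempts; attempt++ {
        out, err := moderate(ctx, apiKey, texts)
        if err == nil {
            return out, nil
        }
        lastErr = err
        if attempt == maxAttempts {
            break
        }
        // Back off before the next attempt; a production version would
        // also honor the Retry-After header on HTTP 429 responses.
        select {
        case <-ctx.Done():
            return nil, ctx.Err()
        case <-time.After(time.Duration(attempt*500) * time.Millisecond):
        }
    }
    return nil, fmt.Errorf("moderation failed after %d attempts: %w", maxAttempts, lastErr)
}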

Performance Optimizations

  1. Request Processing

    • Batch message processing
    • Efficient JSON parsing
    • Minimal memory allocation
    • Request pooling
  2. Response Handling

    • Streaming response processing
    • Efficient score calculation
    • Early termination
    • Result caching
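
The result-caching idea can be as simple as memoizing responses keyed by a hash of the input; a sketch under the assumption that unbounded in-memory storage is acceptable:

import (
    "crypto/sha256"
    "sync"
)

// moderationCache memoizes API responses by a hash of the input text so
// identical content is only scored once. No eviction is implemented here;
// a real cache would bound its size and expire entries.
type moderationCache struct {
    mu    sync.Mutex
    items map[[32]byte]*ModerationResponse
}

func newModerationCache() *moderationCache {
    return &moderationCache{items: make(map[[32]byte]*ModerationResponse)}
}

func (c *moderationCache) get(text string) (*ModerationResponse, bool) {
    key := sha256.Sum256([]byte(text))
    c.mu.Lock()
    defer c.mu.Unlock()
    res, ok := c.items[key]
    return res, ok
}

func (c *moderationCache) put(text string, res *ModerationResponse) {
    key := sha256.Sum256([]byte(text))
    c.mu.Lock()
    defer c.mu.Unlock()
    c.items[key] = res
}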

Configuration Reference

Required Settings

{
  "openai_key": "YOUR_API_KEY",
  "actions": {
    "type": "block",
    "message": "Content violation detected"
  },
  "categories": ["sexual", "violence", "hate"],
  "thresholds": {
    "sexual": 0.3,
    "violence": 0.5,
    "hate": 0.4
  }
}
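
A sketch of how these settings might be decoded and validated, following the Configuration Validation checks listed under Error Handling (the allow action type is inferred from the Features table below):

import (
    "errors"
    "fmt"
)

// Actions and Settings mirror the JSON block above.
type Actions struct {
    Type    string `json:"type"`
    Message string `json:"message"`
}

type Settings struct {
    OpenAIKey  string             `json:"openai_key"`
    Actions    Actions            `json:"actions"`
    Categories []string           `json:"categories"`
    Thresholds map[string]float64 `json:"thresholds"`
}

// Validate checks key presence, action type, category selection,
// and threshold ranges.
func (s *Settings) Validate() error {
    if s.OpenAIKey == "" {
        return errors.New("openai_key is required")
    }
    if s.Actions.Type != "block" && s.Actions.Type != "allow" {
        return fmt.Errorf("unsupported action type %q", s.Actions.Type)
    }
    if len(s.Categories) == 0 {
        return errors.New("at least one category must be enabled")
    }
    for category, threshold := range s.Thresholds {
        if threshold < 0 || threshold > 1 {
            return fmt.Errorf("threshold for %q must be in [0, 1], got %v", category, threshold)
        }
    }
    return nil
}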

Advanced Options

  • Custom error messages
  • Category-specific actions
  • Threshold adjustments
  • Logging configuration

Monitoring and Metrics

The plugin provides detailed monitoring capabilities:

  • Request/response logging
  • Category score tracking
  • Error rate monitoring
  • Performance metrics

Features

| Feature | Capabilities |
| --- | --- |
| Multi-Category Detection | Comprehensive content analysis across multiple categories (sexual, violence, hate, etc.); real-time detection with configurable sensitivity levels; customizable thresholds per category |
| Flexible Actions | Configurable response actions; custom error messages; block or allow decisions |
| OpenAI Integration | Powered by OpenAI's moderation API; real-time content analysis; high accuracy detection |
| Request Stage Processing | Pre-request content analysis; configurable priority in plugin chain; non-blocking architecture |

How It Works

Content Analysis

The plugin analyzes incoming requests by examining both text and image content for various types of toxic or inappropriate material. For text content, it processes the content directly through OpenAI's moderation API. For images, it can analyze image URLs provided in the request. The results are evaluated against configured thresholds:

// Example Request Content - Text
{
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Let's discuss this topic respectfully"
        }
      ]
    }
  ]
}

// OpenAI Moderation API Response (Internal)
{
  "results": [
    {
      "category_scores": {
        "sexual": 0.0001,
        "violence": 0.0002,
        "hate": 0.0001
      }
    }
  ]
}

Threshold Evaluation

Each category has its own configurable threshold. Content is blocked if any category's score exceeds its threshold:

{
  "thresholds": {
    "sexual": 0.3,    // Block if sexual content score > 0.3
    "violence": 0.5,  // Block if violence score > 0.5
    "hate": 0.4       // Block if hate speech score > 0.4
  }
}
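
The evaluation rule reduces to one comparison per configured category; a minimal sketch:

import "fmt"

// evaluate flags any configured category whose score exceeds its
// threshold, and blocks if there is at least one violation.
func evaluate(scores, thresholds map[string]float64) (blocked bool, violations []string) {
    for category, threshold := range thresholds {
        if score, ok := scores[category]; ok && score > threshold {
            violations = append(violations,
                fmt.Sprintf("%s: score %.2f exceeds threshold %.2f", category, score, threshold))
        }
    }
    return len(violations) > 0, violations
}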

Action Execution

Based on the evaluation results, the plugin can take configured actions:

{
  "actions": {
    "type": "block",
    "message": "Content contains inappropriate content."
  }
}

Configuration Examples

Basic Configuration

A simple configuration that enables toxicity detection with default settings:

{
  "name": "toxicity_detection",
  "enabled": true,
  "stage": "pre_request",
  "priority": 1,
  "settings": {
    "openai_key": "${OPENAI_API_KEY}",
    "actions": {
      "type": "block",
      "message": "Content contains inappropriate content."
    },
    "categories": [
      "sexual",
      "violence",
      "hate"
    ],
    "thresholds": {
      "sexual": 0.3,
      "violence": 0.5,
      "hate": 0.4
    }
  }
}

Key components of the basic configuration:

Plugin Settings

| Property | Description | Required | Default |
| --- | --- | --- | --- |
| name | Plugin identifier | Yes | "toxicity_detection" |
| enabled | Enable/disable plugin | Yes | true |
| stage | Processing stage | Yes | "pre_request" |
| priority | Plugin execution priority | Yes | 1 |

Category Thresholds

| Category | Description | Default Threshold | Impact |
| --- | --- | --- | --- |
| sexual | Sexual content detection | 0.3 | Lower values = stricter filtering |
| violence | Violence detection | 0.5 | Higher values = more permissive |
| hate | Hate speech detection | 0.4 | Balance based on needs |
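
Violence-Focused Configuration

A variant that enables only the violence category might look like the following sketch, which assumes the same settings schema as the basic configuration (the log_level field is an assumption based on the logging configuration listed under Advanced Options):

{
  "name": "toxicity_detection",
  "enabled": true,
  "stage": "pre_request",
  "priority": 1,
  "settings": {
    "openai_key": "${OPENAI_API_KEY}",
    "actions": {
      "type": "block",
      "message": "Content contains violent content and cannot be processed."
    },
    "categories": ["violence"],
    "thresholds": {
      "violence": 0.5
    },
    "log_level": "warning"
  }
}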

This configuration:

  • Focuses solely on violence detection
  • Sets a moderate threshold of 0.5 for violent content
  • Provides a specific error message for violent content
  • Enables warning-level logging for monitoring

Best Practices

Threshold Configuration

  1. Content Policy Alignment

    • Set thresholds according to your content policy
    • Consider your audience and use case
    • Test thresholds with sample content
  2. Category Selection

    • Enable relevant categories for your use case
    • Consider regulatory requirements
    • Balance between safety and usability
  3. Performance Considerations

    • Set appropriate plugin priority
    • Consider API rate limits
    • Monitor response times

Security Considerations

  1. API Key Management

    • Secure storage of the OpenAI API key
    • Regular key rotation
    • Access control for configuration changes
  2. Logging and Monitoring

    • Enable appropriate logging
    • Monitor blocked content patterns
    • Regular threshold adjustments

Performance Considerations

The Toxicity Detection plugin uses a straightforward HTTP client implementation to interact with OpenAI's moderation API. The plugin processes requests sequentially, making direct API calls to OpenAI's moderation endpoint for each incoming request. The implementation includes comprehensive logging at various levels (debug, info, error) to help track and diagnose the plugin's behavior.

The plugin performs efficient JSON processing by unmarshaling only the required fields from the request and response bodies. It concatenates multiple messages with newlines when needed and processes them in a single API call to OpenAI, which helps reduce the number of API requests when handling multi-message content.
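
For instance, the batching described above might reduce to a small helper that reuses the moderate sketch from the API Integration section (illustrative, not the plugin's actual code):

import (
    "context"
    "strings"
)

// batchModerate joins all extracted text parts with newlines and scores
// them in a single call, so a multi-message request costs one API round trip.
func batchModerate(ctx context.Context, apiKey string, texts []string) (*ModerationResponse, error) {
    combined := strings.Join(texts, "\n")
    return moderate(ctx, apiKey, []string{combined})
}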

The plugin's architecture is designed to be lightweight, with minimal memory overhead as it doesn't maintain any state between requests. However, be aware that each request will incur the latency of an HTTP call to OpenAI's API. Consider this when planning your rate limits and timeout configurations, as the total processing time will largely depend on OpenAI's API response time.