# Installing with Docker

## Prerequisites
Before installing NeuralTrust with Docker, ensure you have:
- Docker Engine 20.10+
- Docker Compose v2.0+
- 4GB RAM minimum
- 10GB disk space
- OpenAI API key (for AI model access)
- HuggingFace token (provided by NeuralTrust)
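The Docker-related prerequisites above can be checked from a shell before proceeding:

```shell
# Check Docker Engine version (needs 20.10+)
docker --version

# Check Docker Compose version (needs v2.0+)
docker compose version

# Check free disk space on the current filesystem (needs ~10GB)
df -h .
```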
## Quick Start
### 1. Clone the Repository
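A minimal sketch of this step; the repository URL shown is illustrative, so use the URL provided with your NeuralTrust onboarding:

```shell
# Clone the NeuralTrust gateway repository (URL is illustrative)
git clone https://github.com/NeuralTrust/TrustGate.git
cd TrustGate
```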
### 2. Configure Environment
Edit the `.env` file with your configuration:
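A minimal `.env` might look like the following. The variable names here are assumptions for illustration; match them against the sample env file shipped with the repository:

```shell
# OpenAI API key (for AI model access)
OPENAI_API_KEY=sk-...

# HuggingFace token (provided by NeuralTrust)
HUGGINGFACE_TOKEN=hf_...

# Ports for the two APIs (variable names are illustrative)
ADMIN_API_PORT=8080
PROXY_API_PORT=8081
```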
### 3. Start the Services
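With the environment configured, the stack can be brought up with standard Compose commands:

```shell
# Build and start all services in the background
docker compose up -d

# Confirm the Admin (8080) and Proxy (8081) APIs are running
docker compose ps
```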
## Architecture Components
The deployment consists of two main APIs:
- **Admin API (Port 8080)**
  - Gateway management
  - Configuration management
  - API key management
  - Plugin configuration
- **Proxy API (Port 8081)**
  - Request routing
  - Load balancing
  - Plugin execution
## Monitoring
Enable monitoring in your configuration:
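A sketch of what the monitoring settings might look like in `.env`; the variable names and metrics port are assumptions, so check your compose file for the exact flags:

```shell
# Illustrative settings; verify against your deployment's configuration
ENABLE_METRICS=true
METRICS_PORT=9090
```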
### Available Metrics

| Metric | Description |
|---|---|
| `trustgate_requests_total` | Request counts by gateway, method, and status |
| `trustgate_latency_ms` | Overall request processing time |
| `trustgate_detailed_latency_ms` | Granular latency by service/route |
| `trustgate_upstream_latency_ms` | Upstream service latency |
| `trustgate_connections` | Active connection tracking |
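Once monitoring is enabled, the metrics above can be inspected directly. The `/metrics` path and port below are assumptions based on typical Prometheus-style exporters; adjust them to your deployment:

```shell
# Fetch Prometheus-format metrics and filter for request counts
# (endpoint path and port are assumptions, not documented values)
curl -s http://localhost:9090/metrics | grep trustgate_requests_total
```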
## Plugin System
NeuralTrust includes several built-in plugins:
### Rate Limiting
- Basic rate limiter
- Token rate limiter
### Prompt Moderation
- Data masking (entities, keywords, regex)
- Prompt moderation (keywords, regex)
### Toxicity Detection
- OpenAI Moderation API
- Azure Content Safety API
### Other Plugins
- External API calls
- Custom integrations
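Plugins are configured through the Admin API on port 8080. The following is only a sketch of what enabling a rate limiter might look like; the endpoint path, gateway name, and payload shape are assumptions, not the documented schema:

```shell
# Hypothetical example: enable a basic rate limiter on a gateway
# (URL path and JSON fields are illustrative assumptions)
curl -X POST http://localhost:8080/api/v1/gateways/my-gateway/plugins \
  -H "Content-Type: application/json" \
  -d '{
    "name": "rate_limiter",
    "enabled": true,
    "settings": { "limit": 100, "window": "1m" }
  }'
```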
## Managing the Deployment
Common management commands:
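The day-to-day management tasks map to standard Compose commands (the service name below is illustrative; use the names from your compose file):

```shell
# Show running services and their ports
docker compose ps

# Follow logs for all services
docker compose logs -f

# Restart a single service (service name is illustrative)
docker compose restart admin-api

# Stop and remove all containers
docker compose down
```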
## Troubleshooting
Common issues and solutions:
- Service Status
- View Logs
- Check Configuration
- Resource Usage
- Network Issues
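Each of the checks above maps to a standard Docker command (service and network names are illustrative):

```shell
# Service status
docker compose ps

# View logs for a specific service
docker compose logs proxy-api

# Validate and print the rendered Compose configuration
docker compose config

# Snapshot of CPU/memory usage per container
docker stats --no-stream

# Inspect the Compose network for connectivity issues
docker network ls
docker network inspect <project>_default
```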
## Security Features
The deployment includes:
- Authentication and authorization
- Rate limiting and token management
- CORS protection
- SQL injection prevention
- Cross-site injection protection
- Prompt moderation and jailbreak protection
## Additional Resources

For more information, see the NeuralTrust documentation.

For support, contact support@neuraltrust.ai.