# DatomsDB Oracle Agent Setup Guide

## Overview

This guide walks you through setting up the DatomsDB Oracle AI Agent system, including the local LLM service running in Docker.

## Prerequisites

- Docker and Docker Compose installed
- Node.js 14+ and npm
- At least 8GB RAM (for running local LLM models)
- 10GB+ free disk space (for LLM models)
## Step 1: Install Dependencies

```bash
# Install the required Node.js packages
npm install axios sqlite3 uuid

# Verify installation
npm list axios sqlite3 uuid
```
## Step 2: Set Up Local LLM Service with Docker

### Option A: Using Ollama (Recommended)

- Pull and run the Ollama container:

```bash
# Create a directory for Ollama data
mkdir -p ./ollama-data

# Run the Ollama container (docker run needs an absolute host path for bind mounts)
docker run -d \
  --name ollama \
  -p 11434:11434 \
  -v "$(pwd)/ollama-data:/root/.ollama" \
  --restart unless-stopped \
  ollama/ollama:latest

# Verify Ollama is running
curl http://localhost:11434/api/version
```
- Download a model:

```bash
# Download Mistral (recommended for balanced performance)
docker exec ollama ollama pull mistral

# Alternative models:
# docker exec ollama ollama pull llama3     # Larger, more capable
# docker exec ollama ollama pull gemma      # Smaller, faster
# docker exec ollama ollama pull codellama  # Code-focused

# List available models
docker exec ollama ollama list
```
- Test the model:

```bash
# Test with a simple prompt
curl -X POST http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral",
    "prompt": "Hello, how are you?",
    "stream": false
  }'
```
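For reference, the agent system issues this kind of request programmatically. Here is a minimal sketch of the equivalent call with axios (installed in Step 1); the endpoint, model, and payload match the curl test above:

```js
// test-ollama.js - sketch of the curl test above, using axios
const axios = require('axios');

async function generate(prompt) {
  // Same endpoint and payload as the curl test; stream: false returns a single JSON object
  const res = await axios.post('http://localhost:11434/api/generate', {
    model: 'mistral',
    prompt,
    stream: false,
  });
  return res.data.response; // Ollama returns the generated text in "response"
}

generate('Hello, how are you?')
  .then(console.log)
  .catch((err) => console.error('LLM request failed:', err.message));
```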
### Option B: Using Docker Compose

Create a docker-compose.llm.yml file:

```yaml
version: '3.8'

services:
  ollama:
    image: ollama/ollama:latest
    container_name: datoms-ollama
    ports:
      - "11434:11434"
    volumes:
      - ./ollama-data:/root/.ollama
    restart: unless-stopped
    environment:
      - OLLAMA_HOST=0.0.0.0
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:11434/api/version"]
      interval: 30s
      timeout: 10s
      retries: 3

  # Optional: Web UI for Ollama
  ollama-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: datoms-ollama-webui
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
    restart: unless-stopped
```
Run with Docker Compose:

```bash
# Start services
docker-compose -f docker-compose.llm.yml up -d

# Download model
docker-compose -f docker-compose.llm.yml exec ollama ollama pull mistral

# Check logs
docker-compose -f docker-compose.llm.yml logs -f ollama
```
## Step 3: Configure the Agent System

- Update the agent configuration:

```bash
# Edit the configuration file
nano src/agents/config/agent_config.json
```

Ensure the LLM service URL is correct:

```json
{
  "llm_service_url": "http://localhost:11434/api/generate",
  "llm_model_name": "mistral",
  "llm_chat_endpoint": "http://localhost:11434/api/chat"
}
```
- Set environment variables (optional):

```bash
export LLM_SERVICE_URL="http://localhost:11434/api/generate"
export LLM_MODEL_NAME="mistral"
```
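How the two configuration sources combine depends on the agent's config loader; a common pattern, shown here as an unverified sketch, is for environment variables to override the JSON file:

```js
// config.js - illustrative sketch only; the actual loader lives in the agent code.
// Assumption (not verified): environment variables take precedence over agent_config.json.
const fs = require('fs');

const fileConfig = JSON.parse(
  fs.readFileSync('src/agents/config/agent_config.json', 'utf8')
);

module.exports = {
  llmServiceUrl: process.env.LLM_SERVICE_URL || fileConfig.llm_service_url,
  llmModelName: process.env.LLM_MODEL_NAME || fileConfig.llm_model_name,
};
```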
## Step 4: Initialize the Agent System

- Create required directories:

```bash
# Ensure data and log directories exist
mkdir -p data logs

# Set proper permissions
chmod 755 data logs
```

- Test agent health:

```bash
# Start the DatomsDB server
npm start

# In another terminal, test agent health
curl http://localhost:9000/api/agent/v1/health
```
Expected response:

```json
{
  "status": "healthy",
  "components": {
    "chatAgent": "available",
    "masterAgent": "healthy",
    "config": "loaded"
  },
  "timestamp": "2024-01-15T10:30:00.000Z"
}
```
## Step 5: Test the Agent

### Test with Natural Language

```bash
curl -X POST http://localhost:9000/api/agent/v1/interact \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -d '{
    "userId": "test-user",
    "sessionId": "test-session",
    "type": "natural_language",
    "query": "List all my data sources"
  }'
```
### Test with Structured Command

```bash
curl -X POST http://localhost:9000/api/agent/v1/interact \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -d '{
    "userId": "test-user",
    "sessionId": "test-session",
    "type": "structured_command",
    "command": {
      "action": "list_data_sources",
      "params": {}
    }
  }'
```
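The same requests can be sent from Node.js. A minimal client sketch with axios, mirroring the natural-language curl example above (YOUR_API_TOKEN remains a placeholder for your real token):

```js
// interact.js - minimal client sketch for the /interact endpoint shown above
const axios = require('axios');

async function interact(query) {
  const res = await axios.post(
    'http://localhost:9000/api/agent/v1/interact',
    {
      userId: 'test-user',
      sessionId: 'test-session',
      type: 'natural_language',
      query,
    },
    // YOUR_API_TOKEN is a placeholder, exactly as in the curl examples
    { headers: { Authorization: 'Bearer YOUR_API_TOKEN' } }
  );
  return res.data;
}

interact('List all my data sources')
  .then((data) => console.log(JSON.stringify(data, null, 2)))
  .catch((err) => console.error('Agent request failed:', err.message));
```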
## Step 6: Production Deployment

### Docker Integration

Add the agent system to your existing docker-compose.yml:

```yaml
version: '3.8'

services:
  datoms-server:
    # ... existing configuration ...
    environment:
      - LLM_SERVICE_URL=http://ollama:11434/api/generate
      - LLM_MODEL_NAME=mistral
    depends_on:
      - ollama
    networks:
      - datoms-network

  ollama:
    image: ollama/ollama:latest
    container_name: datoms-ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama
    restart: unless-stopped
    networks:
      - datoms-network

volumes:
  ollama-data:

networks:
  datoms-network:
    driver: bridge
```

Note that inside the Compose network the server reaches Ollama by service name (http://ollama:11434), not localhost.
### Environment Configuration

Create a .env file for production:

```bash
# LLM Configuration
LLM_SERVICE_URL=http://ollama:11434/api/generate
LLM_MODEL_NAME=mistral

# Agent Configuration
AGENT_RATE_LIMIT=100
AGENT_MAX_TOKENS=4096
AGENT_TEMPERATURE=0.1

# Security
AGENT_ENABLE_DEBUG=false
AGENT_LOG_LEVEL=info
```
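Note that a .env file is not picked up automatically: either point Docker Compose at it with the env_file option on the datoms-server service, or load it at startup in Node.js (for example with the dotenv package), depending on how your deployment is wired.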
## Troubleshooting

### Common Issues

- Ollama not responding:

```bash
# Check if container is running
docker ps | grep ollama

# Check logs
docker logs ollama

# Restart if needed
docker restart ollama
```

- Model not found:

```bash
# List available models
docker exec ollama ollama list

# Pull the model if missing
docker exec ollama ollama pull mistral
```

- Memory issues:

```bash
# Check system resources
docker stats ollama

# Reduce model size or increase system RAM
# Consider using smaller models like gemma
```

- Agent connection errors:

```bash
# Test LLM service directly
curl http://localhost:11434/api/version

# Check agent configuration
cat src/agents/config/agent_config.json

# Check agent logs
tail -f logs/server.log | grep Agent
```
### Performance Optimization

- Model Selection:
  - gemma: fastest, smallest (2GB RAM)
  - mistral: balanced performance (4GB RAM)
  - llama3: best quality (8GB RAM)

- Configuration Tuning (see the sketch after this list):

```json
{
  "llm_temperature": 0.1,
  "max_llm_tokens": 2048,
  "max_context_history_length": 3
}
```
- Resource Limits:

```yaml
# In docker-compose.yml
ollama:
  deploy:
    resources:
      limits:
        memory: 8G
        cpus: '4'
      reservations:
        memory: 4G
        cpus: '2'
```
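As a rough illustration of what max_context_history_length presumably controls (an assumption about the agent's internals, not confirmed behavior), trimming the conversation history to the most recent turns keeps each LLM prompt small and fast:

```js
// Illustrative only: assumes max_context_history_length caps the number of
// prior turns sent to the LLM; check the agent source for the real behavior.
function trimHistory(history, maxLength) {
  // Keep only the most recent `maxLength` turns
  return maxLength > 0 ? history.slice(-maxLength) : [];
}

const history = [
  { role: 'user', content: 'List my data sources' },
  { role: 'assistant', content: 'You have 4 data sources.' },
  { role: 'user', content: 'Show the first one' },
  { role: 'assistant', content: 'Source 1: telemetry feed.' },
  { role: 'user', content: 'Summarize it' },
];

console.log(trimHistory(history, 3)); // last 3 turns only
```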
## Monitoring and Maintenance

### Health Checks

Set up regular health checks:

```bash
#!/bin/bash
# health-check.sh

# Check Ollama
if ! curl -s http://localhost:11434/api/version > /dev/null; then
    echo "Ollama is down"
    exit 1
fi

# Check Agent
if ! curl -s http://localhost:9000/api/agent/v1/health > /dev/null; then
    echo "Agent is down"
    exit 1
fi

echo "All services healthy"
```
### Log Monitoring

Monitor agent logs:

```bash
# Real-time agent logs
tail -f logs/server.log | grep -E '\[.*Agent\]'

# Error monitoring
tail -f logs/server.log | grep ERROR
```
### Database Maintenance

```bash
# Check SQLite database size
ls -lh data/agent_memory.sqlite

# Back up conversation history
cp data/agent_memory.sqlite data/agent_memory_backup_$(date +%Y%m%d).sqlite

# Clean old interactions (optional)
sqlite3 data/agent_memory.sqlite "DELETE FROM interactions WHERE timestamp < datetime('now', '-30 days');"
```
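The cleanup can also run from Node.js via the sqlite3 package installed in Step 1. A minimal sketch, assuming the interactions table and timestamp column used by the CLI command above:

```js
// prune-interactions.js - sketch of the cleanup above via the sqlite3 package
const sqlite3 = require('sqlite3');

const db = new sqlite3.Database('data/agent_memory.sqlite');

// Same statement as the sqlite3 CLI example; the 30-day window is parameterized
db.run(
  "DELETE FROM interactions WHERE timestamp < datetime('now', ?)",
  ['-30 days'],
  function (err) {
    if (err) return console.error('Cleanup failed:', err.message);
    console.log(`Deleted ${this.changes} old interactions`);
  }
);

db.close();
```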
## Security Considerations

- Network Security:
  - Keep Ollama on internal network only
  - Use a reverse proxy for external access
  - Enable HTTPS in production

- Access Control:
  - Ensure proper authentication on agent endpoints
  - Monitor rate limiting effectiveness
  - Regular security audits

- Data Privacy:
  - Review conversation logging policies
  - Implement data retention policies
  - Consider encryption for sensitive data
## Next Steps

- Customize Prompts: Edit the prompt templates in src/agents/prompts/
- Add Tools: Create new tool schemas and implement them in ExecAgent
- Monitor Usage: Set up analytics and monitoring dashboards
- Scale: Consider distributed deployment for high-load scenarios

For more detailed information, see the main Agent README.