Docker + uv Development Setup Guide¶
Date: February 2, 2026
Environment: Development → Production
Status: Step-by-step guide
Part 1: Development Environment Setup¶
Step 1: Create Environment File¶
Required Changes in .env:
- Set DEBUG=True for development
- Set database passwords (DB_PASSWORD, DB_ROOT_PASSWORD, DATABASE_PASS)
- Copy API keys from .env.dev if you have them:
  - PRESTASHOP_API_KEY
  - EU_PRESTASHOP_API_KEY
  - FEDEX credentials
  - MAILGUN credentials
  - XERO credentials
  - All other business-specific values
Quick setup (minimal .env for testing):
DEBUG=True
SECRET_KEY=dev-secret-key-not-for-production
ALLOWED_HOSTS=localhost,127.0.0.1
DB_NAME=uplink
DB_USER=uplink
DB_PASSWORD=dev123
DB_ROOT_PASSWORD=root123
DATABASE_HOST=db
DATABASE_NAME=uplink
DATABASE_USER=uplink
DATABASE_PASS=dev123
DATABASE_PORT=3306
Step 2: Build Docker Images¶
# Build all services (this will take a few minutes on first run)
docker-compose build
# Expected output:
# - Building web service
# - Installing uv
# - Installing Python dependencies via uv
# - Copying application code
# - Building db, redis, daphne, huey services
What happens:
- Downloads the python:3.9-slim base image
- Installs the uv package manager
- Installs all dependencies from pyproject.toml
- Prepares all 5 services (web, db, redis, daphne, huey)
If build fails:
- Check Docker is running: docker ps
- Check disk space: df -h
- See ROLLBACK.md for reverting to pipenv
Step 3: Start Docker Services¶
# Start all services in detached mode
docker-compose up -d
Expected output:
NAME COMMAND SERVICE STATUS PORTS
uplink-db-1 "docker-entrypoint.s…" db running 3306/tcp, 33060/tcp
uplink-daphne-1 "daphne -b 0.0.0.0 -…" daphne running 0.0.0.0:9000->9000/tcp
uplink-huey-1 "python manage.py ru…" huey running
uplink-redis-1 "docker-entrypoint.s…" redis running 6379/tcp
uplink-web-1 "/app/docker-entrypo…" web running 0.0.0.0:8000->8000/tcp
All services should show "running" status.
View logs:
# All services
docker-compose logs -f
# Specific service
docker-compose logs -f web
docker-compose logs -f huey
Step 4: Run Database Migrations¶
# Run migrations inside the web container
docker-compose exec web python manage.py migrate
# Expected output:
# Operations to perform:
# Apply all migrations: admin, auth, catalogue, contacts, devices, forecasting, orders, prestashop, services, stock, etc.
# Running migrations:
# Applying contenttypes.0001_initial... OK
# Applying auth.0001_initial... OK
# ... (many more) ...
If migrations fail:
- Check database is healthy: docker-compose ps db
- Check database logs: docker-compose logs db
- Verify DATABASE_ env vars in .env match DB_ vars
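The DB_/DATABASE_ parity requirement above is easy to get wrong by hand. A small illustrative sketch (not part of the project) that parses the simple KEY=value format used in this guide and flags mismatched pairs — the pair names are the ones this guide uses:

```python
# Illustrative sketch: verify that the duplicated DB_*/DATABASE_* values agree.
# Assumes the plain KEY=value format shown in this guide (no quoting or export).

def parse_env(text):
    """Parse KEY=value lines, skipping blank lines and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

# Pairs this guide expects to hold the same value.
PAIRS = [
    ("DB_NAME", "DATABASE_NAME"),
    ("DB_USER", "DATABASE_USER"),
    ("DB_PASSWORD", "DATABASE_PASS"),
]

def find_mismatches(env):
    """Return the (a, b) pairs whose values differ."""
    return [(a, b) for a, b in PAIRS if env.get(a) != env.get(b)]
```

Run it against the contents of .env before building; any pair it reports will produce exactly the connection errors described above.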
Step 5: Create Superuser¶
# Create admin account
docker-compose exec web python manage.py createsuperuser
# Follow prompts:
# Username: admin
# Email: your@email.com
# Password: ********
Step 6: Test Web Application¶
Open in browser: http://localhost:8000
Expected:
- ✅ Homepage loads
- ✅ No 500 errors
- ✅ Static files load correctly
- ✅ Navigation works
Check running processes inside the container:
docker-compose top web
Step 7: Test Admin Panel¶
Open in browser: http://localhost:8000/admin
Test checklist:
- ✅ Login with superuser credentials
- ✅ View dashboard
- ✅ Access a model (e.g., Catalogue > Products)
- ✅ Create/edit a record
- ✅ Check that database writes work
Step 8: Test Huey Background Workers¶
Check Huey is running:
docker-compose logs -f huey
Expected output:
huey-1 | [INFO] Huey consumer started with 1 thread, PID 7
huey-1 | [INFO] Scheduler running, PID 8
huey-1 | [INFO] Huey consumer is running
Test a background task:
# In Django shell
docker-compose exec web python manage.py shell
# Run this:
from devices.tasks import test_huey_task # or any huey task in your app
test_huey_task()
# Check huey logs for execution:
# docker-compose logs huey
If Huey is not processing tasks:
- Check Redis connection: docker-compose exec web python -c "import redis; r=redis.Redis(host='redis'); print(r.ping())"
- Check HUEY config in settings.py
- Verify database name matches in settings.py HUEY config
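For reference when checking the HUEY config, a settings fragment typically looks like the sketch below. This is an illustrative assumption based on huey's Django integration, not this project's actual settings — verify the keys against the huey version pinned in pyproject.toml:

```python
# settings.py — illustrative HUEY configuration sketch (key names assumed
# from huey's Django contrib docs; confirm against your installed version).
HUEY = {
    "huey_class": "huey.RedisHuey",
    "name": "uplink",  # the queue name; should match what this guide calls the database name
    "connection": {"host": "redis", "port": 6379},  # "redis" is the compose service name
    "immediate": False,  # False = tasks run in the consumer container, not inline
}
```

The "connection" host must be the Docker Compose service name (redis), not localhost, since the consumer resolves it over the compose network.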
Step 9: Test Daphne WebSocket Server¶
Check Daphne is running:
docker-compose logs -f daphne
Expected output:
daphne-1 | 2026-02-02 10:00:00,000 INFO Starting server at tcp:port=9000:interface=0.0.0.0
daphne-1 | 2026-02-02 10:00:00,001 INFO HTTP/2 support enabled
daphne-1 | 2026-02-02 10:00:00,002 INFO Configuring endpoint tcp:port=9000:interface=0.0.0.0
daphne-1 | 2026-02-02 10:00:00,003 INFO Listening on TCP address 0.0.0.0:9000
Test WebSocket connection:
- If you have WebSocket clients in your app, test them
- Check Daphne logs for connection attempts
- Verify port 9000 is accessible: curl http://localhost:9000
Step 10: Verify Service Health¶
Check all container health:
docker-compose ps
docker inspect uplink-web-1 | grep -A 10 Health
docker inspect uplink-db-1 | grep -A 10 Health
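Rather than grepping, the Health block can be read from docker inspect's JSON output. A hedged stdlib sketch — the field names follow Docker's inspect schema, which may vary slightly between versions:

```python
import json

def health_status(inspect_output):
    """Extract State.Health.Status from `docker inspect <container>` output.

    docker inspect prints a JSON array with one object per container.
    Status is "starting", "healthy" or "unhealthy" when a healthcheck
    is defined; the Health key is absent otherwise.
    """
    data = json.loads(inspect_output)
    state = data[0].get("State", {})
    return state.get("Health", {}).get("Status", "no healthcheck")
```

Feed it the output of `docker inspect uplink-web-1` to get a single, scriptable status string instead of a page of JSON.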
Check disk usage:
df -h
docker system df
Check network connectivity:
# Web can reach db
docker-compose exec web nc -zv db 3306
# Web can reach redis
docker-compose exec web nc -zv redis 6379
# Huey can reach redis
docker-compose exec huey nc -zv redis 6379
Performance check:
docker stats --no-stream
Part 2: Development Testing Checklist¶
Functional Tests¶
- [ ] Homepage loads at http://localhost:8000
- [ ] Admin panel accessible at http://localhost:8000/admin
- [ ] Login/logout works
- [ ] CRUD operations work (create, read, update, delete records)
- [ ] Static files load (CSS, JS, images)
- [ ] Media file uploads work
- [ ] API endpoints respond correctly
- [ ] Background tasks execute (Huey)
- [ ] WebSocket connections work (Daphne)
- [ ] Database queries perform well
Integration Tests¶
- [ ] Run test suite: docker-compose exec web python manage.py test
- [ ] Check for migration issues: docker-compose exec web python manage.py makemigrations --check --dry-run
- [ ] Validate external API integrations (PrestaShop, FedEx, etc.)
- [ ] Test email sending (Mailgun)
- [ ] Test file uploads to media directory
Performance Tests¶
- [ ] Page load times acceptable
- [ ] Database queries optimized (check Django Debug Toolbar if installed)
- [ ] Memory usage reasonable (docker stats)
- [ ] No memory leaks over time
Part 3: Common Development Commands¶
# Restart all services
docker-compose restart
# Restart specific service
docker-compose restart web
# Stop all services
docker-compose down
# Stop and remove volumes (DESTRUCTIVE - deletes database!)
docker-compose down -v
# View logs
docker-compose logs -f web
docker-compose logs --tail=100 huey
# Execute commands inside container
docker-compose exec web python manage.py shell
docker-compose exec web python manage.py dbshell
# Install new Python package
docker-compose exec web uv pip install package-name
# Then rebuild: docker-compose build web
# Access container shell
docker-compose exec web bash
# Check Django settings
docker-compose exec web python manage.py diffsettings
# Collect static files manually
docker-compose exec web python manage.py collectstatic --noinput
Part 4: Troubleshooting¶
Issue: Containers won't start¶
# Check logs
docker-compose logs
# Check specific service
docker-compose logs db
docker-compose logs web
# Rebuild from scratch
docker-compose down -v
docker-compose build --no-cache
docker-compose up -d
Issue: Database connection errors¶
# Check database is healthy
docker-compose ps db
docker-compose exec db mysql -u uplink -p uplink
# Verify environment variables
docker-compose exec web env | grep DATABASE
# Check settings.py picks up env vars
docker-compose exec web python -c "from django.conf import settings; print(settings.DATABASES)"
Issue: Static files not loading¶
# Collect static files
docker-compose exec web python manage.py collectstatic --noinput
# Check static volume
docker volume ls
docker volume inspect uplink_static_volume
Issue: Huey not processing tasks¶
# Check Huey logs
docker-compose logs huey
# Check Redis connection
docker-compose exec web python -c "import redis; r=redis.Redis(host='redis'); print(r.ping())"
# Restart Huey
docker-compose restart huey
Issue: Port already in use¶
# Find what's using port 8000
sudo lsof -i :8000
# Option 1: Stop the other process
# Option 2: Change port in docker-compose.yml
# ports:
# - "8001:8000" # External:Internal
Part 5: Development Best Practices¶
Daily Workflow¶
- Start services: docker-compose up -d
- Check logs: docker-compose logs -f
- Make code changes (auto-reloads in dev mode)
- Test changes in browser
- Stop services: docker-compose down (or leave running)
Adding Dependencies¶
- Add the package to the pyproject.toml dependencies list
- Rebuild image: docker-compose build web
- Restart services: docker-compose up -d
Database Changes¶
- Make model changes
- Create migrations: docker-compose exec web python manage.py makemigrations
- Review the generated migration file
- Apply migration: docker-compose exec web python manage.py migrate
Backup Development Data¶
# Backup database
docker-compose exec db mysqldump -u uplink -pdev123 uplink > backup_$(date +%Y%m%d).sql
# Restore database
docker-compose exec -T db mysql -u uplink -pdev123 uplink < backup_20260202.sql
Part 6: When Development Testing is Complete¶
Before Moving to Production¶
- [ ] All tests pass: docker-compose exec web python manage.py test
- [ ] No system check errors: docker-compose exec web python manage.py check
- [ ] Performance is acceptable
- [ ] All features work as expected
- [ ] Background tasks execute correctly
- [ ] WebSocket connections stable
- [ ] No error logs: docker-compose logs | grep ERROR
Next Steps¶
- Document any issues found in docs/Uplink2.0_UpgradeLogs.md
- Update dependencies if needed
- Review production deployment plan (see Part 7)
- Test rollback procedure (see ROLLBACK.md)
- Prepare production environment
Part 7: Production Deployment Plan¶
Overview¶
Strategy: Blue-Green Deployment with Zero Downtime
Timeline: 2-4 hours (including testing and rollback capability)
Risk Level: Low (rollback available within 5 minutes)
Pre-Deployment Checklist¶
1. Server Requirements¶
- [ ] Docker Engine 24.0+ installed
- [ ] Docker Compose 2.20+ installed
- [ ] Minimum 4GB RAM available
- [ ] Minimum 20GB disk space free
- [ ] Port 80/443 available for nginx
- [ ] SSL certificates ready (Let's Encrypt or custom)
2. Code Preparation¶
- [ ] All tests passing on development
- [ ] All migrations tested and working
- [ ] No uncommitted changes
- [ ] Create git tag: git tag -a v3.0.0-docker -m "Docker + uv deployment"
- [ ] Push tag: git push origin v3.0.0-docker
3. Configuration Files¶
- [ ] .env file prepared with production values
- [ ] SECRET_KEY generated (50+ random characters)
- [ ] DEBUG=False confirmed
- [ ] ALLOWED_HOSTS set correctly
- [ ] Database credentials secured
- [ ] All API keys updated
- [ ] SENTRY_DSN configured for error tracking
4. Database Backup¶
# Backup current production database
ssh production-server
mysqldump -u uplink -p uplink > backup_pre_docker_$(date +%Y%m%d_%H%M%S).sql
gzip backup_pre_docker_*.sql
# Store backup safely
cp backup_pre_docker_*.sql.gz /backups/
5. Current Environment Snapshot¶
# Document current pipenv state
cp Pipfile.lock Pipfile.lock.backup
pipenv run pip freeze > requirements_backup.txt
# Backup current .env
cp .env .env.backup.$(date +%Y%m%d)
# Note current Python version
python --version > python_version_backup.txt
Deployment Steps (Production)¶
Step 1: Prepare Production Server¶
# SSH to production server
ssh production-server
# Navigate to application directory
cd /var/www/uplink # or your production path
# Pull latest code
git fetch --all
git checkout 1127-install-uv-and-deploy-using-uv-on-uplink
git pull origin 1127-install-uv-and-deploy-using-uv-on-uplink
Step 2: Create Production Environment File¶
Critical Production Settings:
DEBUG=False
SECRET_KEY=<generate-50-character-random-string>
ALLOWED_HOSTS=uplink.sensational.systems,www.uplink.sensational.systems
# Use strong passwords
DB_PASSWORD=<strong-random-password>
DB_ROOT_PASSWORD=<strong-random-password>
DATABASE_PASS=<same-as-DB_PASSWORD>
# Production database host
DATABASE_HOST=db
DATABASE_NAME=uplink
DATABASE_USER=uplink
# All production API keys
SENTRY_DSN=<your-sentry-dsn>
PRESTASHOP_API_KEY=<production-key>
FEDEX_PROD_CLIENT_ID=<production-fedex>
MAILGUN_API_KEY=<production-mailgun>
# ... etc
Generate SECRET_KEY:
python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"
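If Django isn't importable from the host shell, the stdlib can generate an equivalent key. A hedged sketch — the character set below mirrors the one Django's get_random_secret_key draws from, and the default length matches the 50-character target above:

```python
import secrets
import string

# Character classes matching Django's secret key generator.
ALPHABET = string.ascii_lowercase + string.digits + "!@#$%^&*(-_=+)"

def make_secret_key(length=50):
    """Generate a cryptographically random key of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(make_secret_key())
```

Either approach is fine; the important part is that the key is random, at least 50 characters, and never committed to git.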
Step 3: Build Production Images¶
# Build using production compose file
docker-compose -f docker-compose.yml -f docker-compose.prod.yml build
# This will:
# - Use Python 3.9-slim
# - Install all dependencies via uv
# - Configure for production (gunicorn, 4 workers)
# - Take 5-10 minutes on first run
Step 4: Database Migration Strategy¶
Option A: Migrate Existing Database (Recommended)
# Stop current application (pipenv version)
sudo systemctl stop gunicorn
sudo systemctl stop daphne
sudo systemctl stop huey
# Import existing database to Docker
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d db
sleep 10 # Wait for MySQL to start
# Import data
docker-compose exec -T db mysql -u root -p${DB_ROOT_PASSWORD} -e "CREATE DATABASE IF NOT EXISTS uplink;"
gunzip -c backup_pre_docker_*.sql.gz | docker-compose exec -T db mysql -u uplink -p${DB_PASSWORD} uplink
# Run any new migrations
docker-compose -f docker-compose.yml -f docker-compose.prod.yml run --rm web python manage.py migrate
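The fixed sleep 10 in Option A is a guess: MySQL can take longer to initialise on a cold volume. A stdlib sketch of a readiness poll that could replace it — note that accepting a TCP connection is only a proxy for readiness; a stricter check is mysqladmin ping inside the db container:

```python
import socket
import time

def wait_for_port(host, port, timeout=60.0, interval=1.0):
    """Poll until a TCP connection to host:port succeeds or timeout elapses.

    Returns True as soon as a connect succeeds, False if the deadline
    passes first. A port accept only proves the listener is up, not that
    MySQL has finished initialising.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

# e.g. wait_for_port("127.0.0.1", 3306) before running the import
```

Gating the import and migrations on this check removes the race without slowing down the happy path.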
Option B: Fresh Database (Only if starting fresh)
# Start database
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d db
# Run migrations
docker-compose -f docker-compose.yml -f docker-compose.prod.yml run --rm web python manage.py migrate
# Create superuser
docker-compose -f docker-compose.yml -f docker-compose.prod.yml run --rm web python manage.py createsuperuser
Step 5: Start Production Services¶
# Start all services
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
# Check status
docker-compose ps
# Verify all containers are healthy
docker inspect uplink-web-1 | grep -A 5 '"Health"'
Expected services running:
- uplink-db-1 (MySQL)
- uplink-redis-1 (Redis)
- uplink-web-1 (Gunicorn)
- uplink-daphne-1 (ASGI/WebSocket)
- uplink-huey-1 (Background worker)
Step 6: Nginx Configuration¶
Production nginx config:
upstream uplink_web {
server 127.0.0.1:8000;
}
upstream uplink_ws {
server 127.0.0.1:9000;
}
server {
listen 80;
server_name uplink.sensational.systems;
# Redirect to HTTPS
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
server_name uplink.sensational.systems;
ssl_certificate /etc/letsencrypt/live/uplink.sensational.systems/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/uplink.sensational.systems/privkey.pem;
client_max_body_size 100M;
# Static files
location /static/ {
alias /var/www/uplink/static/;
expires 30d;
}
# Media files
location /media/ {
alias /var/www/uplink/media/;
expires 7d;
}
# WebSocket
location /ws/ {
proxy_pass http://uplink_ws;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Main application
location / {
proxy_pass http://uplink_web;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Step 7: Verify Production Deployment¶
Health Checks:
# Check all containers running
docker-compose ps
# Check web application
curl -I https://uplink.sensational.systems
# Check logs for errors
docker-compose logs --tail=100 web
docker-compose logs --tail=100 huey
docker-compose logs --tail=100 daphne
# Check database connectivity
docker-compose exec web python manage.py dbshell
Functional Tests:
- [ ] Homepage loads: https://uplink.sensational.systems
- [ ] Admin panel works: https://uplink.sensational.systems/admin
- [ ] Login/logout functional
- [ ] CRUD operations work
- [ ] Static files load
- [ ] Media uploads work
- [ ] API endpoints respond
- [ ] Background tasks execute (check Huey logs)
- [ ] WebSocket connections work (check Daphne logs)
Performance Tests:
# Check resource usage
docker stats
# Check response times
curl -w "@-" -o /dev/null -s https://uplink.sensational.systems <<'EOF'
time_namelookup: %{time_namelookup}\n
time_connect: %{time_connect}\n
time_appconnect: %{time_appconnect}\n
time_redirect: %{time_redirect}\n
time_starttransfer: %{time_starttransfer}\n
----------\n
time_total: %{time_total}\n
EOF
Step 8: Monitor for Issues¶
First Hour Monitoring:
# Watch logs continuously
docker-compose logs -f
# Watch for errors
docker-compose logs -f | grep -i error
# Check Sentry for errors (if configured)
# Visit your Sentry dashboard
# Monitor server resources
htop
docker stats
First Day Monitoring:
- Check Sentry error reports every 2 hours
- Review application logs: docker-compose logs web | grep ERROR
- Monitor database performance
- Check background task completion rates
- Verify scheduled tasks run correctly
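Beyond grepping for ERROR, tallying log lines by level gives a quick health signal over time. An illustrative stdlib sketch — it assumes lines contain a plain level token like Django's default "LEVEL message" style; adjust it to your LOGGING format:

```python
from collections import Counter

LEVELS = ("DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL")

def tally_levels(log_text):
    """Count log lines per level (first level token found wins per line)."""
    counts = Counter()
    for line in log_text.splitlines():
        for level in LEVELS:
            if level in line:
                counts[level] += 1
                break
    return counts
```

Pipe `docker-compose logs --since 24h` into it and compare the ERROR/WARNING counts day over day; a rising trend is worth investigating even if no single error looks alarming.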
Post-Deployment Tasks¶
1. Update Documentation¶
# Update deployment logs
nano docs/Uplink2.0_UpgradeLogs.md
# Add entry:
# ## 2026-02-02: Docker + uv Production Deployment
# - Successfully migrated from pipenv to Docker
# - All services running in containers
# - Zero downtime achieved
# - Performance: [note any metrics]
# - Issues encountered: [note any issues]
2. Disable Old Services¶
# Only after 24-48 hours of successful operation
sudo systemctl disable gunicorn
sudo systemctl disable daphne
sudo systemctl disable huey
# Keep pipenv environment for 30 days as backup
# Don't delete /home/hannah/.local/share/virtualenvs/uplink-zdKcNoqD/
3. Setup Automated Backups¶
#!/bin/bash
# Automated Docker Uplink Backup
BACKUP_DIR="/backups/docker-uplink"
DATE=$(date +%Y%m%d_%H%M%S)
mkdir -p $BACKUP_DIR
# Backup database
docker-compose exec -T db mysqldump -u uplink -p${DB_PASSWORD} uplink | gzip > $BACKUP_DIR/db_$DATE.sql.gz
# Backup media files
tar -czf $BACKUP_DIR/media_$DATE.tar.gz media/
# Backup .env
cp .env.prod $BACKUP_DIR/env_$DATE
# Keep only last 30 days
find $BACKUP_DIR -type f -mtime +30 -delete
echo "Backup completed: $DATE"
# Make executable
chmod +x /usr/local/bin/backup-docker-uplink.sh
# Add to cron (daily at 2 AM)
crontab -e
# Add: 0 2 * * * /usr/local/bin/backup-docker-uplink.sh >> /var/log/docker-backup.log 2>&1
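The find -mtime +30 rotation above can equally be expressed in Python, which is easier to extend (e.g. to keep monthly snapshots). A hedged sketch of the same keep-30-days policy:

```python
import time
from pathlib import Path

def prune_old_backups(backup_dir, max_age_days=30):
    """Delete regular files under backup_dir older than max_age_days.

    Mirrors: find $BACKUP_DIR -type f -mtime +30 -delete
    Returns the names of removed files for logging.
    """
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for path in Path(backup_dir).iterdir():
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return sorted(removed)
```

Either variant works; whichever you use, log the removed filenames so a missing backup can be traced back to the rotation job.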
4. Setup Monitoring Alerts¶
Using systemd for container monitoring:
[Unit]
Description=Docker Uplink Application
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/var/www/uplink
ExecStart=/usr/local/bin/docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
ExecStop=/usr/local/bin/docker-compose -f docker-compose.yml -f docker-compose.prod.yml down
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target
# Enable service
sudo systemctl daemon-reload
sudo systemctl enable docker-uplink.service
# Start on boot
sudo systemctl start docker-uplink.service
Rollback Procedure (If Needed)¶
If issues are detected within the first 24 hours:
See ROLLBACK.md for detailed 5-minute rollback procedure.
Quick rollback steps:
# 1. Stop Docker services
docker-compose down
# 2. Restore database backup
gunzip -c backup_pre_docker_*.sql.gz | mysql -u uplink -p uplink
# 3. Revert nginx config
sudo nano /etc/nginx/sites-available/uplink
# Change proxy_pass back to old gunicorn socket
# 4. Start old services
source /home/hannah/.local/share/virtualenvs/uplink-zdKcNoqD/bin/activate
sudo systemctl start gunicorn
sudo systemctl start daphne
sudo systemctl start huey
# 5. Reload nginx
sudo systemctl reload nginx
# 6. Verify old version works
curl -I https://uplink.sensational.systems
Success Criteria¶
Deployment is successful when:
- ✅ All containers running healthy for 24 hours
- ✅ No increase in error rates (check Sentry)
- ✅ Response times equal or better than before
- ✅ All background tasks completing
- ✅ No database connection issues
- ✅ WebSocket connections stable
- ✅ No user-reported issues
After 7 days of stable operation:
- Archive old pipenv environment
- Update production deployment documentation
- Close migration ticket #1127
- Celebrate successful migration! 🎉
Part 8: Production Maintenance¶
Regular Tasks¶
Daily:
# Check logs for errors
docker-compose logs --since 24h | grep -i error
# Check container health
docker-compose ps
# Check disk space
df -h
docker system df
Weekly:
# Update Docker images (security patches)
docker-compose pull
docker-compose up -d
# Clean up unused images
docker image prune -a -f --filter "until=168h"
# Review performance metrics
docker stats --no-stream
Monthly:
# Update dependencies
# 1. Update pyproject.toml
# 2. Test in development
# 3. Deploy to production
# Rotate logs
find logs/ -type f -mtime +90 -delete
# Review and cleanup database
docker-compose exec web python manage.py clearsessions
Updating Application Code¶
# 1. Pull changes
git pull origin main
# 2. Rebuild if dependencies changed
docker-compose -f docker-compose.yml -f docker-compose.prod.yml build
# 3. Run migrations
docker-compose -f docker-compose.yml -f docker-compose.prod.yml run --rm web python manage.py migrate
# 4. Restart services
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
# 5. Collect static files if needed
docker-compose exec web python manage.py collectstatic --noinput
Scaling Services¶
If you need more Gunicorn workers:
# Edit docker-compose.prod.yml
# Change: --workers 4
# To: --workers 8
# Restart
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d web
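Before doubling the worker count, note Gunicorn's documented rule of thumb of (2 × CPU cores) + 1 workers. This sketch is only a starting point, not a rule — measure before and after changing the count:

```python
import os

def suggested_workers(cpu_count=None):
    """Gunicorn's rule-of-thumb worker count: (2 * cores) + 1."""
    cores = cpu_count if cpu_count is not None else os.cpu_count() or 1
    return 2 * cores + 1
```

For example, on a 4-core host the heuristic suggests 9 workers; going far beyond that usually trades memory for no extra throughput unless requests are I/O-bound.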
If you need more Huey workers:
# Edit docker-compose.yml huey service
# Add: -w 4 (4 worker threads)
# command: python manage.py run_huey -w 4
# Restart
docker-compose restart huey
Summary¶
Development Environment¶
- ✅ Clean isolated environment via Docker
- ✅ Fast dependency installation via uv
- ✅ All services containerized
- ✅ Easy to reset and restart
- ✅ Matches production closely
Production Deployment¶
- ✅ Zero-downtime deployment strategy
- ✅ Database migration handled safely
- ✅ Rollback available within 5 minutes
- ✅ Automated backups configured
- ✅ Monitoring in place
Benefits Achieved¶
- 🚀 10-100x faster dependency installation (uv vs pip)
- 🔒 Isolated environment (no system package pollution)
- 📦 Consistent dev/prod environments
- 🔄 Easy rollback capability
- 📊 Better resource monitoring via Docker
- 🎯 Production-ready from day one
Need Help?
- See ROLLBACK.md for emergency procedures
- See docs/Uplink2.0_PLAN.md for overall strategy
- See docs/Uplink2.0_UpgradeLogs.md for history
- Check logs: docker-compose logs -f
- Ask GitHub Copilot for specific issues!