Docker Deployment
Deploy Deenruv using Docker and Docker Compose with PostgreSQL, Redis, and MinIO
Docker provides a consistent and reproducible way to deploy Deenruv. This guide walks through setting up a production-ready Docker deployment with PostgreSQL, Redis, and optional S3-compatible object storage.
Local development with Docker Compose
Deenruv ships with a docker-compose.yml file in the repository root that spins up the required infrastructure services for local development:
```yaml
name: deenruv-local
services:
  deenruv-postgres:
    image: ghcr.io/cloudnative-pg/postgresql:16.3
    environment:
      POSTGRES_USER: deenruv
      POSTGRES_PASSWORD: deenruv
      POSTGRES_DB: deenruv
    ports:
      - 5432:5432
  deenruv-redis:
    image: bitnami/redis:7.2
    environment:
      # Quoted so YAML does not coerce it to a boolean
      ALLOW_EMPTY_PASSWORD: "yes"
    ports:
      - 6379:6379
  deenruv-minio:
    image: minio/minio:latest
    ports:
      - 9000:9000
      - 9090:9090
    environment:
      MINIO_ROOT_USER: root
      MINIO_ROOT_PASSWORD: password
    command: minio server /data/minio --console-address ":9090"
```

Start the services:

```shell
pnpm server-docker-up
```

Then populate the database and start the server:

```shell
pnpm server-populate
pnpm start
```

Creating a production Dockerfile
Create a Dockerfile in your project root:
```dockerfile
FROM node:18-slim AS build
WORKDIR /usr/src/app

# Install pnpm via corepack
RUN corepack enable && corepack prepare pnpm@latest --activate

# Copy dependency manifests
COPY package.json pnpm-lock.yaml ./

# Install all dependencies (dev dependencies are needed for the build)
RUN pnpm install --frozen-lockfile

# Copy source code and build the application
COPY . .
RUN pnpm build

# Production stage
FROM node:18-slim AS production
WORKDIR /usr/src/app
RUN corepack enable && corepack prepare pnpm@latest --activate
COPY package.json pnpm-lock.yaml ./

# Install production dependencies only
RUN pnpm install --frozen-lockfile --prod

# Copy built artifacts from the build stage
COPY --from=build /usr/src/app/dist ./dist

EXPOSE 3000
CMD ["node", "dist/index.js"]
```

The multi-stage build ensures that your production image includes neither development dependencies nor source code, resulting in a smaller and more secure image.
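Because the build stage runs `COPY . .`, everything in the project directory is sent to the build context. A `.dockerignore` file next to the Dockerfile keeps local artifacts out of the image; a minimal sketch (the exact entries depend on your project layout):

```text
node_modules
dist
.git
.env
Dockerfile
docker-compose*.yml
```

Excluding `node_modules` and `dist` also speeds up builds, since the context upload is smaller and dependencies are always installed fresh inside the image.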
Production Docker Compose
Here is an example docker-compose.prod.yml for a complete production deployment:
```yaml
version: "3.8"
services:
  server:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      DB_HOST: database
      DB_PORT: 5432
      DB_NAME: deenruv
      DB_USERNAME: postgres
      DB_PASSWORD: ${DB_PASSWORD}
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_PASSWORD: ${REDIS_PASSWORD}
      COOKIE_SECRET: ${COOKIE_SECRET}
      SUPERADMIN_IDENTIFIER: ${SUPERADMIN_IDENTIFIER:-superadmin}
      SUPERADMIN_PASSWORD: ${SUPERADMIN_PASSWORD}
      S3_BUCKET: ${S3_BUCKET:-deenruv-assets}
      S3_ACCESS_KEY_ID: ${S3_ACCESS_KEY_ID}
      S3_SECRET_ACCESS_KEY: ${S3_SECRET_ACCESS_KEY}
      S3_REGION: ${S3_REGION:-us-east-1}
      ASSET_URL_PREFIX: ${ASSET_URL_PREFIX}
    depends_on:
      database:
        condition: service_healthy
      redis:
        condition: service_started
    restart: unless-stopped
  worker:
    build:
      context: .
      dockerfile: Dockerfile
    command: ["node", "dist/index-worker.js"]
    environment:
      DB_HOST: database
      DB_PORT: 5432
      DB_NAME: deenruv
      DB_USERNAME: postgres
      DB_PASSWORD: ${DB_PASSWORD}
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_PASSWORD: ${REDIS_PASSWORD}
    depends_on:
      database:
        condition: service_healthy
      redis:
        condition: service_started
    restart: unless-stopped
  database:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: deenruv
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped
  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis-data:/data
    restart: unless-stopped
volumes:
  db-data:
  redis-data:
```

Environment file
Create a .env file alongside your compose file (never commit this to version control):

```shell
DB_PASSWORD=your-secure-database-password
REDIS_PASSWORD=your-secure-redis-password
COOKIE_SECRET=your-secure-cookie-secret
SUPERADMIN_IDENTIFIER=superadmin
SUPERADMIN_PASSWORD=your-secure-admin-password
S3_BUCKET=deenruv-assets
S3_ACCESS_KEY_ID=your-s3-key
S3_SECRET_ACCESS_KEY=your-s3-secret
S3_REGION=us-east-1
ASSET_URL_PREFIX=https://your-cdn.example.com/assets/
```

Deploying
```shell
# Build and start all services
docker compose -f docker-compose.prod.yml up -d --build

# Run database migrations
docker compose -f docker-compose.prod.yml exec server node dist/migrate.js

# View logs
docker compose -f docker-compose.prod.yml logs -f server
```

Separating server and worker
In production, it is recommended to run the Deenruv server and worker as separate processes. The server handles HTTP requests (GraphQL API, admin UI), while the worker processes background jobs (search indexing, email sending, etc.).
This separation allows you to:
- Scale the server and worker independently
- Restart the worker without affecting API availability
- Monitor resource usage separately
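Because the worker has no published ports, it can be scaled independently of the server. A sketch assuming a recent Compose v2 release, which honors the `deploy.replicas` field outside of Swarm mode:

```yaml
  worker:
    # ... build, command, and environment as in docker-compose.prod.yml
    deploy:
      replicas: 2
```

If your Compose version does not support `deploy.replicas`, `docker compose -f docker-compose.prod.yml up -d --scale worker=2` achieves the same result at run time.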
In your Deenruv configuration, the server and worker are bootstrapped separately:
```ts
// index.ts (server entry point)
import { bootstrap } from '@deenruv/core';
import { config } from './deenruv-config';

bootstrap(config);
```

```ts
// index-worker.ts (worker entry point)
import { bootstrapWorker } from '@deenruv/core';
import { config } from './deenruv-config';

bootstrapWorker(config);
```

Health checks
Deenruv supports health check endpoints for container orchestration. Configure a health check strategy in your config:
```ts
import { DeenruvConfig, TypeORMHealthCheckStrategy } from '@deenruv/core';

export const config: DeenruvConfig = {
  // ...
  systemOptions: {
    healthChecks: [new TypeORMHealthCheckStrategy()],
  },
};
```

This exposes a /health endpoint that Docker or your orchestrator can use to verify the service is running correctly.
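The same /health endpoint can back a container-level health check on the server service in the production compose file. A sketch assuming curl is available in the image (node:18-slim does not ship it, so you would need to install it in the Dockerfile or substitute a Node-based probe):

```yaml
  server:
    # ... as in docker-compose.prod.yml
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 30s
```

The `start_period` gives the server time to boot and run its startup work before failed probes count against the retry limit.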
Tips for production Docker deployments
- Use named volumes for database and Redis data to persist data across container restarts.
- Set resource limits on containers to prevent a single service from consuming all host resources.
- Use a reverse proxy (Nginx, Caddy, or Traefik) in front of the Deenruv server for TLS termination, rate limiting, and caching.
- Back up your database regularly using `pg_dump` or a managed database service.
- Pin image versions in production to avoid unexpected breaking changes.
- Monitor logs using Docker's logging drivers or a centralized logging solution.
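As an illustration of the reverse-proxy tip above, here is a minimal Nginx sketch for TLS termination in front of the server; the hostname and certificate paths are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name shop.example.com;

    ssl_certificate     /etc/ssl/certs/shop.example.com.pem;
    ssl_certificate_key /etc/ssl/private/shop.example.com.key;

    location / {
        # Forward to the Deenruv server container published on port 3000
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The `X-Forwarded-*` headers let the application generate correct absolute URLs even though TLS terminates at the proxy.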
For horizontal scaling and running multiple server instances behind a load balancer, ensure you are using Redis-backed session caching and an external asset storage strategy so that all instances share the same state.