Docker for Developers: Containers, Images, and Compose


Understanding Docker: Why It Matters

Docker has fundamentally changed how developers build, ship, and run applications. Before Docker, setting up development environments was a nightmare of dependency conflicts, version mismatches, and the infamous "works on my machine" problem.

Docker solves this by packaging your application with everything it needs to run—code, runtime, system tools, libraries, and settings—into a standardized unit called a container. This container runs identically on your laptop, your colleague's machine, and production servers.

The benefits are immediate and tangible.

For developers, Docker means you can spin up a complete application stack—web server, database, cache, message queue—with a single command. No more spending hours installing PostgreSQL or debugging Redis configuration issues.

Core Concepts Explained

Understanding Docker's core concepts is essential before diving into practical usage. These building blocks work together to create the Docker ecosystem.

| Concept | What It Is | Analogy | Key Characteristics |
| --- | --- | --- | --- |
| Image | Read-only template with app + dependencies | A class definition in OOP | Immutable, layered, shareable |
| Container | Running instance of an image | An object (instance of a class) | Isolated, ephemeral, stateless |
| Dockerfile | Instructions to build an image | A recipe or blueprint | Text file, version controlled |
| Volume | Persistent storage outside the container | An external hard drive | Survives container deletion |
| Network | Communication between containers | A local area network (LAN) | Isolated, configurable, secure |
| Registry | Storage for images (e.g. Docker Hub) | npm/PyPI for containers | Public or private, versioned |

Images vs Containers: The Critical Distinction

This is where many beginners get confused. An image is a static snapshot—think of it as a frozen template. A container is what you get when you run that image—it's a live, running process.

You can create unlimited containers from a single image, just like you can create multiple objects from one class. Each container is isolated from the others, even if they're all based on the same image.
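As a quick illustration (assuming a locally built image named myapp:1.0, which is hypothetical), one image can back any number of isolated containers:

```shell
# Three isolated containers from the same image, each on its own host port
docker run -d --name myapp-a -p 3001:3000 myapp:1.0
docker run -d --name myapp-b -p 3002:3000 myapp:1.0
docker run -d --name myapp-c -p 3003:3000 myapp:1.0

# Each container has its own filesystem, process tree, and network stack
docker ps --filter "name=myapp-"
```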

Pro tip: Images are built in layers. Each instruction in your Dockerfile creates a new layer. Docker caches these layers, so rebuilding an image only rebuilds the layers that changed. This makes builds incredibly fast.

Volumes: Solving the Persistence Problem

Containers are ephemeral by design—when you delete a container, everything inside it disappears. This is great for stateless applications but problematic for databases or any data you need to keep.

Volumes solve this by storing data outside the container's writable layer. The data persists even when containers are deleted and recreated. There are three types of mounts: named volumes (created and managed by Docker), bind mounts (a host directory mapped into the container), and tmpfs mounts (held in memory, never written to disk).
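A sketch of the docker run flags for each mount type (image and path names are illustrative):

```shell
# Named volume: created and managed by Docker, survives container removal
docker run -d -v pgdata:/var/lib/postgresql/data postgres:16-alpine

# Bind mount: maps a directory from the host into the container
docker run -d -v "$(pwd)/config:/app/config" myapp:1.0

# tmpfs mount: held in memory only, gone when the container stops
docker run -d --tmpfs /app/tmp myapp:1.0
```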

Dockerfile Anatomy and Structure

A Dockerfile is a text document containing instructions for building a Docker image. Each instruction creates a layer in the image, and Docker caches these layers for efficiency.

Here's a breakdown of the most common Dockerfile instructions:

| Instruction | Purpose | Example | Best Practice |
| --- | --- | --- | --- |
| FROM | Base image to build from | `FROM node:22-alpine` | Use specific versions; prefer alpine |
| WORKDIR | Set working directory | `WORKDIR /app` | Use absolute paths |
| COPY | Copy files from host to image | `COPY package.json .` | Copy dependency files first for caching |
| RUN | Execute commands during build | `RUN npm install` | Chain commands with `&&` to reduce layers |
| EXPOSE | Document which port the app listens on | `EXPOSE 3000` | Documentation only; doesn't publish the port |
| ENV | Set environment variables | `ENV NODE_ENV=production` | Use for configuration |
| USER | Set user for subsequent commands | `USER node` | Never run as root in production |
| CMD | Default command when the container starts | `CMD ["node", "server.js"]` | Use JSON array (exec) form |

Understanding Layer Caching

Docker builds images layer by layer, from top to bottom. Each instruction creates a new layer. If a layer hasn't changed, Docker reuses the cached version instead of rebuilding it.

This is why you should structure your Dockerfile to put rarely-changing instructions at the top and frequently-changing ones at the bottom. For example, your dependencies change less often than your source code, so install dependencies before copying source files.

# Bad: Copies everything first, then installs
COPY . .
RUN npm install

# Good: Installs dependencies first, leverages cache
COPY package*.json ./
RUN npm install
COPY . .

Dockerfile Best Practices

Writing efficient Dockerfiles is an art. Here's a production-ready example that demonstrates multiple best practices:

# Multi-stage build: builder stage
FROM node:22-alpine AS builder
WORKDIR /app

# Copy dependency files first for better caching
COPY package*.json ./
# Install all dependencies here: dev dependencies are needed for the build step
RUN npm ci

# Copy source code and build
COPY . .
RUN npm run build

# Multi-stage build: production stage
FROM node:22-alpine
WORKDIR /app

# Install only production dependencies in the final image
COPY package*.json ./
RUN npm ci --omit=dev

# Copy only the build output from the builder stage
COPY --from=builder /app/dist ./dist

# Security: run as non-root user
USER node

# Document the port (doesn't actually publish it)
EXPOSE 3000

# Health check for container orchestration
HEALTHCHECK --interval=30s --timeout=3s \
  CMD node healthcheck.js || exit 1

# Start the application
CMD ["node", "dist/server.js"]

Multi-Stage Builds: Dramatic Size Reduction

Multi-stage builds let you use multiple FROM statements in one Dockerfile. Each FROM starts a new stage, and you can copy artifacts from previous stages.

The magic happens because only the final stage becomes your image. Build tools, compilers, and intermediate files stay in earlier stages and never make it to production. This can reduce image sizes by 10x or more.

Quick tip: Name your stages with AS builder so you can reference them later with COPY --from=builder. This makes your Dockerfile more readable and maintainable.

Alpine Images: Small but Mighty

Alpine Linux is a minimal Linux distribution whose base image is only about 5MB. Compare that to full Debian-based images like node:22, which weigh in at around 1GB. For most applications, Alpine provides everything you need.

The tradeoff is that Alpine uses musl libc instead of glibc, which can occasionally cause compatibility issues with pre-compiled binaries and native modules. For the vast majority of use cases, though, Alpine works perfectly and dramatically reduces image size, download time, and attack surface.
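When a native module does hit the musl issue, the usual workaround is to install a small build toolchain with apk so the module compiles against musl. A hedged sketch, assuming node-gyp-style native dependencies:

```
FROM node:22-alpine
WORKDIR /app

# Native addons often need a compiler toolchain to build against musl
RUN apk add --no-cache python3 make g++

COPY package*.json ./
RUN npm ci
```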

The .dockerignore File

Just like .gitignore, a .dockerignore file tells Docker which files to exclude when building images. This speeds up builds and reduces image size.

# .dockerignore example
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.env.local
dist
coverage
.vscode
.idea
*.log
.DS_Store

Security Best Practices

Security should be baked into your Dockerfiles from the start: use minimal, version-pinned base images, run as a non-root user, never bake secrets into image layers, and scan images for known vulnerabilities before shipping.


Essential Docker Commands

Mastering these commands will cover 90% of your daily Docker usage. Each command includes practical examples and common flags.

Building and Running

# Build an image from Dockerfile in current directory
docker build -t myapp:1.0 .

# Build with build arguments
docker build --build-arg NODE_ENV=production -t myapp:1.0 .

# Build without cache (force rebuild)
docker build --no-cache -t myapp:1.0 .

# Run a container in detached mode
docker run -d -p 3000:3000 --name myapp myapp:1.0

# Run with environment variables
docker run -d -p 3000:3000 -e NODE_ENV=production --name myapp myapp:1.0

# Run with volume mount
docker run -d -p 3000:3000 -v $(pwd)/data:/app/data --name myapp myapp:1.0

# Run interactively with shell access
docker run -it --rm myapp:1.0 sh

Managing Containers

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# View container logs
docker logs myapp

# Follow logs in real-time
docker logs -f myapp

# View last 100 lines of logs
docker logs --tail 100 myapp

# Execute command in running container
docker exec -it myapp sh

# Run a one-off command
docker exec myapp npm test

# Stop a container gracefully
docker stop myapp

# Kill a container immediately
docker kill myapp

# Remove a stopped container
docker rm myapp

# Stop and remove in one command
docker stop myapp && docker rm myapp

# Remove all stopped containers
docker container prune

Working with Images

# List all images
docker images

# List images with custom format
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"

# Remove an image
docker rmi myapp:1.0

# Remove all unused images
docker image prune -a

# Tag an image
docker tag myapp:1.0 myapp:latest

# Push to registry
docker push myregistry.com/myapp:1.0

# Pull from registry
docker pull myregistry.com/myapp:1.0

# Save image to tar file
docker save myapp:1.0 > myapp.tar

# Load image from tar file
docker load < myapp.tar

System Management

# View disk usage
docker system df

# Remove all unused data (containers, networks, images, cache)
docker system prune -a

# View real-time container stats
docker stats

# Inspect container details (JSON output)
docker inspect myapp

# View container processes
docker top myapp

Pro tip: Add --rm flag when running containers for testing. This automatically removes the container when it stops, keeping your system clean: docker run --rm -it myapp:1.0 sh

Docker Compose for Multi-Container Apps

Docker Compose is a tool for defining and running multi-container applications. Instead of running multiple docker run commands, you define everything in a single YAML file. (Modern Docker installations ship Compose v2, invoked as docker compose with a space; the docker-compose commands below work the same way with either form.)

This is essential for modern applications that typically consist of multiple services: a web server, database, cache, message queue, and more.

Complete Docker Compose Example

# docker-compose.yml
version: '3.8'

services:
  # Web application
  app:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        NODE_ENV: development
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp
      - REDIS_URL=redis://cache:6379
      - NODE_ENV=development
    volumes:
      - .:/app
      - /app/node_modules
    depends_on:
      - db
      - cache
    networks:
      - app-network
    restart: unless-stopped

  # PostgreSQL database
  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=myapp
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"
    networks:
      - app-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Redis cache
  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    networks:
      - app-network
    restart: unless-stopped
    command: redis-server --appendonly yes

  # Nginx reverse proxy
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - app
    networks:
      - app-network
    restart: unless-stopped

volumes:
  postgres-data:
  redis-data:

networks:
  app-network:
    driver: bridge

Docker Compose Commands

# Start all services in background
docker-compose up -d

# Start and rebuild images
docker-compose up -d --build

# View logs from all services
docker-compose logs

# Follow logs from specific service
docker-compose logs -f app

# Stop all services
docker-compose stop

# Stop and remove containers, networks
docker-compose down

# Stop and remove everything including volumes
docker-compose down -v

# List running services
docker-compose ps

# Execute command in service
docker-compose exec app sh

# Run one-off command
docker-compose run app npm test

# View service logs
docker-compose logs app

# Scale a service (remove fixed host-port mappings like "3000:3000" first,
# since only one replica can bind a given host port)
docker-compose up -d --scale app=3

# Restart a service
docker-compose restart app

Environment Variables in Compose

Docker Compose supports environment variables in multiple ways. You can define them directly in the YAML file, use an .env file, or pass them at runtime.

# .env file
DATABASE_URL=postgresql://postgres:password@db:5432/myapp
REDIS_URL=redis://cache:6379
NODE_ENV=development
API_KEY=your-secret-key

Then reference them in your docker-compose.yml:

services:
  app:
    environment:
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
      - NODE_ENV=${NODE_ENV}
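Alternatively, Compose can load the entire file into the service with env_file instead of listing each variable. A minimal sketch:

```
services:
  app:
    env_file:
      - .env
```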

Quick tip: Never commit .env files to version control. Add them to .gitignore and provide a .env.example file with dummy values instead.

Development Workflow and Tips

Docker shines in development environments by providing consistency and eliminating setup time. Here's how to optimize your workflow.

Hot Reloading with Volume Mounts

During development, you want code changes to reflect immediately without rebuilding images. Use bind mounts to sync your local code with the container:

services:
  app:
    volumes:
      - .:/app              # Mount current directory
      - /app/node_modules   # Prevent overwriting node_modules

The second volume mount is crucial—it prevents your local node_modules (which might be for a different OS) from overwriting the container's node_modules.

Development vs Production Compose Files

Maintain separate compose files for different environments:

# docker-compose.yml (base configuration)
version: '3.8'
services:
  app:
    build: .
    environment:
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp

# docker-compose.dev.yml (development overrides)
version: '3.8'
services:
  app:
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
    command: npm run dev

# docker-compose.prod.yml (production overrides)
version: '3.8'
services:
  app:
    environment:
      - NODE_ENV=production
    restart: always
    command: npm start

Then run with:

# Development
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

# Production
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
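Compose merges the files left to right, with later files overriding earlier ones. You can inspect the merged result without starting anything:

```shell
# Print the effective, merged configuration
docker-compose -f docker-compose.yml -f docker-compose.dev.yml config
```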

Database Seeding and Migrations

Initialize your database with seed data using the docker-entrypoint-initdb.d directory:

services:
  db:
    image: postgres:16-alpine
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - postgres-data:/var/lib/postgresql/data

For migrations, run them as part of your application startup or as a separate service:

services:
  migrate:
    build: .
    command: npm run migrate
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp

Debugging Inside Containers

When things go wrong, you need to inspect what's happening inside containers:

# Open a shell in running container
docker exec -it myapp sh

# Check environment variables
docker exec myapp env

# View file contents
docker exec myapp cat /app/config.json

# Check network connectivity
docker exec myapp ping db

# View process list
docker exec myapp ps aux

# Check disk usage
docker exec myapp df -h

For Node.js debugging, expose the debug port and use Chrome DevTools:

services:
  app:
    command: node --inspect=0.0.0.0:9229 server.js
    ports:
      - "3000:3000"
      - "9229:9229"

Debugging and Troubleshooting

Even with Docker's consistency, issues arise. Here's how to diagnose and fix common problems.

Container Won't Start

If a container exits immediately, check the logs first:

# View logs of stopped container
docker logs myapp

# View last 50 lines
docker logs --tail 50 myapp

# Check exit code
docker ps -a --filter name=myapp

Common causes include a typo in CMD or ENTRYPOINT, a missing environment variable the app requires, the application crashing on startup, or the main process exiting immediately (a container stops as soon as its main process does).

Network Connectivity Issues

Containers can't communicate? Check network configuration:

# List networks
docker network ls

# Inspect network details
docker network inspect app-network

# Check which networks a container is attached to
docker inspect myapp --format '{{json .NetworkSettings.Networks}}'

# Test connectivity between containers
docker exec app ping db
docker exec app nc -zv db 5432

Pro tip: In Docker Compose, services can reach each other using the service name as hostname. So if you have a service named db, connect to it at db:5432, not localhost:5432.

Volume Permission Problems

Permission errors often occur when mounting volumes, especially on Linux. The container user's UID might not match your host user's UID:

# Check container user
docker exec myapp id

# Fix by matching UIDs in Dockerfile
RUN addgroup -g 1000 appuser && \
    adduser -D -u 1000 -G appuser appuser
USER appuser
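Alternatively, for bind mounts during development, you can override the user at runtime so files created in the mount belong to your host user (the image name is illustrative):

```shell
# Run as your host UID/GID so bind-mounted files keep your ownership
docker run --rm -v "$(pwd):/app" --user "$(id -u):$(id -g)" myapp:1.0
```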

Image Build Failures

Build failing? Common issues include files missing from the build context (often excluded by .dockerignore), network failures while packages download, stale cached layers (retry with --no-cache), and base image tags that don't exist or have been removed.

Performance Problems

Container running slowly? Check resource usage:

# Real-time stats
docker stats

# Detailed container info
docker inspect myapp

# Check for resource limits
docker inspect myapp | grep -A 10 HostConfig

On Mac and Windows, Docker Desktop runs in a VM which can cause I/O performance issues with bind mounts. Consider using named volumes for better performance:

volumes:
  - node-modules:/app/node_modules  # Named volume (fast)
  - .:/app                          # Bind mount (slower on Mac/Windows)

Security Considerations

Docker security is critical, especially in production. Follow these practices to harden your containers.

Image Security

Start with secure base images:

# Scan image for vulnerabilities (docker scan is deprecated; newer CLIs ship Docker Scout)
docker scout cves myapp:1.0

# Use Trivy for comprehensive scanning
docker run aquasec/trivy image myapp:1.0

Runtime Security

Limit what containers can do at runtime:

# Run as non-root user
USER node

# Drop unnecessary capabilities
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp

# Use read-only filesystem
docker run --read-only myapp

# Limit resources
docker run --memory=512m --cpus=1 myapp

# Use security profiles
docker run --security-opt=no-new-privileges myapp

Secrets Management

Never hardcode secrets in Dockerfiles or images. Use environment variables, Docker secrets, or external secret management:

# Bad: Secret in Dockerfile
ENV API_KEY=sk_live_abc123

# Good: Secret from environment
docker run -e API_KEY=$API_KEY myapp

# Better: Docker secrets (Swarm mode)
echo "sk_live_abc123" | docker secret create api_key -
docker service create --secret api_key myapp

# Best: External secret manager
docker run -e AWS_SECRETS_MANAGER_ARN=arn:aws:... myapp

Network Security

Isolate containers using custom networks:

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true  # No external access

services:
  web:
    networks:
      - frontend
      - backend
  
  db:
    networks:
      - backend  # Only accessible from backend network

Performance Optimization

Optimizing Docker performance improves build times, reduces resource usage, and speeds up deployments.

Build Performance

Speed up image builds with these techniques:

# Enable BuildKit for faster builds (already the default since Docker Engine 23.0)
export DOCKER_BUILDKIT=1

# Use cache from registry
docker build --cache-from myregistry.com/myapp:latest -t myapp:1.0 .

# Build with inline cache
docker build --build-arg BUILDKIT_INLINE_CACHE=1 -t myapp:1.0 .
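BuildKit also supports cache mounts, which persist package-manager caches across builds even when the layer itself is rebuilt. A sketch for npm:

```
# syntax=docker/dockerfile:1
FROM node:22-alpine
WORKDIR /app
COPY package*.json ./

# The npm download cache survives between builds, even when this layer rebuilds
RUN --mount=type=cache,target=/root/.npm npm ci
```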

Image Size Optimization

Smaller images mean faster pulls, less storage, and reduced attack surface: