Docker for Beginners: Containers Made Simple
What Is Docker?
Docker is a platform that lets you package applications and their dependencies into lightweight, portable containers. Think of a container as a tiny, self-contained box that holds everything your app needs to run: code, runtime, libraries, and system tools. No more "it works on my machine" problems.
Before Docker, deploying software meant manually installing dependencies, configuring servers, and hoping nothing conflicted. Docker eliminates this by ensuring your app runs identically everywhere: your laptop, a teammate's machine, staging servers, or production.
Docker has become the standard tool for modern software development. Whether you're building microservices, setting up CI/CD pipelines, or just want a consistent development environment, Docker makes it possible with minimal overhead.
Containers vs Virtual Machines
Containers and virtual machines both provide isolation, but they work very differently:
Virtual Machines run a full operating system with its own kernel on top of a hypervisor. Each VM needs its own OS, consuming gigabytes of disk and significant memory. Boot times are measured in minutes.
Containers share the host OS kernel and only package the application layer. They're megabytes in size (not gigabytes), start in seconds (not minutes), and you can run dozens on a single machine.
# VM approach: Each app gets a full OS
App A → Guest OS → Hypervisor → Host OS → Hardware
App B → Guest OS → Hypervisor → Host OS → Hardware
# Container approach: Apps share the kernel
App A → Container Runtime → Host OS → Hardware
App B → Container Runtime → Host OS → Hardware
This lightweight architecture makes containers ideal for microservices, development environments, and CI/CD pipelines where you need fast startup and efficient resource usage.
Core Docker Concepts
Understanding these four concepts is key to working with Docker:
- Image: A read-only template that contains your application code, runtime, libraries, and configuration. Images are built from Dockerfiles and stored in registries like Docker Hub.
- Container: A running instance of an image. You can start, stop, move, and delete containers. Each container is isolated from others and from the host.
- Dockerfile: A text file with instructions for building an image. It specifies the base image, copies your code, installs dependencies, and defines how to start your app.
- Registry: A storage and distribution service for Docker images. Docker Hub is the default public registry, but you can run private registries too.
Essential Docker Commands
Here are the Docker commands you'll use most often:
# Pull an image from Docker Hub
docker pull nginx
docker pull node:20-alpine
# List downloaded images
docker images
# Run a container
docker run nginx # Foreground
docker run -d nginx # Detached (background)
docker run -d -p 8080:80 nginx # Map port 8080 to container's 80
docker run -d --name my-nginx -p 8080:80 nginx # Named container
# List running containers
docker ps # Running only
docker ps -a # All (including stopped)
# Stop and remove containers
docker stop my-nginx
docker rm my-nginx
docker rm -f my-nginx # Force stop and remove
# View container logs
docker logs my-nginx
docker logs -f my-nginx # Follow (tail)
# Execute commands inside a running container
docker exec -it my-nginx bash
docker exec my-nginx cat /etc/nginx/nginx.conf
# Remove unused resources
docker system prune # Remove stopped containers, unused networks, dangling images
docker system prune -a # Also remove unused images
Writing Dockerfiles
A Dockerfile is a recipe for building your application image. Here's a practical example for a Node.js application:
# Use an official Node.js runtime as the base image
FROM node:20-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy package files first (for better caching)
COPY package*.json ./
# Install production dependencies only (--only=production is deprecated in modern npm)
RUN npm ci --omit=dev
# Copy application code
COPY . .
# Expose the port your app listens on
EXPOSE 3000
# Define the command to run your app
CMD ["node", "server.js"]
Build and run this image:
# Build the image
docker build -t my-node-app .
# Run it
docker run -d -p 3000:3000 --name app my-node-app
# Test it
curl http://localhost:3000
Multi-stage Builds
Multi-stage builds keep your final image small by separating build and runtime dependencies:
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/server.js"]
The final image only contains the compiled output and production dependencies, resulting in a much smaller image.
Docker Compose
Docker Compose defines multi-container applications in a single YAML file. Perfect for development environments with databases, caches, and other services:
# docker-compose.yml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      - db
      - cache
    volumes:
      - .:/app              # Mount source code for development
      - /app/node_modules   # Preserve container's node_modules
  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"
volumes:
  pgdata:
# Start all services
docker compose up -d
# View logs
docker compose logs -f app
# Stop all services
docker compose down
# Rebuild and restart
docker compose up -d --build
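The compose file passes configuration through environment variables. A sketch of how the Node app might read them (the variable names come from the compose file above; the defaults and the loadConfig helper are illustrative):

```javascript
// config.js — read service endpoints from the environment with local defaults
function loadConfig(env = process.env) {
  return {
    // Inside the compose network these point at the db and cache services
    databaseUrl: env.DATABASE_URL || 'postgres://localhost:5432/myapp',
    redisUrl: env.REDIS_URL || 'redis://localhost:6379',
    port: parseInt(env.PORT || '3000', 10),
  };
}

const config = loadConfig();
console.log(`Starting with database at ${config.databaseUrl}`);
```

Keeping all configuration in environment variables means the same image runs unchanged in development, staging, and production.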
Docker Best Practices
- Use official base images. Start with official images from Docker Hub (node:20-alpine, python:3.12-slim). They're maintained, secure, and optimized.
- Use Alpine variants. Alpine Linux base images are 5-10MB compared to 100MB+ for Debian-based images. Use -alpine or -slim tags when possible.
- Leverage layer caching. Copy package.json before copying source code. Dependencies change less frequently, so Docker can cache that layer.
- Use .dockerignore. Exclude node_modules, .git, and other unnecessary files from the build context:
  node_modules
  .git
  .env
  *.md
  dist
- Don't run as root. Create a non-root user in your Dockerfile:
  RUN addgroup -S appgroup && adduser -S appuser -G appgroup
  USER appuser
- Use environment variables for configuration. Never hardcode secrets, database URLs, or API keys. Pass them via -e flags or .env files.
- Tag images properly. Use semantic versioning (v1.2.3) instead of latest. This ensures reproducible deployments.
- Keep images small. Fewer layers, multi-stage builds, and minimal base images reduce attack surface and speed up deployments.
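Putting several of these practices together, a hardened version of the earlier multi-stage Dockerfile might look like this (a sketch: the paths and build script assume the earlier Node.js example):

```dockerfile
# Stage 1: Build with full dev dependencies
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Drop dev dependencies before copying into the final stage
RUN npm prune --omit=dev

# Stage 2: Minimal production image running as a non-root user
FROM node:20-alpine
WORKDIR /app
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --from=builder --chown=appuser:appgroup /app/dist ./dist
COPY --from=builder --chown=appuser:appgroup /app/node_modules ./node_modules
USER appuser
EXPOSE 3000
CMD ["node", "dist/server.js"]
```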
Frequently Asked Questions
What's the difference between Docker and Kubernetes?
Docker creates and runs containers. Kubernetes orchestrates containers at scale: handling deployment, scaling, load balancing, and self-healing across clusters of machines. You typically use Docker to build images and Kubernetes to run them in production.
Is Docker free to use?
Docker Engine (the core runtime) is free and open source. Docker Desktop is free for personal use and small businesses (under 250 employees and $10M revenue). Larger organizations need a paid subscription.
Can Docker containers communicate with each other?
Yes. Containers on the same Docker network can communicate using container names as hostnames. Docker Compose automatically creates a network for all services defined in the compose file.
How do I persist data in Docker containers?
Use Docker volumes. Volumes store data outside the container's filesystem, so data persists even when containers are removed. Define them in docker-compose.yml or use the -v flag with docker run.