================================================================================
DOCKER A TO Z - COMPLETE GUIDE (0 to 100)
Everything You Need to Know About Docker - From Beginner to Pro
================================================================================

================================================================================
TABLE OF CONTENTS
================================================================================

1. WHAT IS DOCKER? (BASICS)
2. DOCKER INSTALLATION & SETUP
3. DOCKER TERMINOLOGY
4. DOCKER COMMANDS (COMPLETE A-Z)
5. DOCKERFILE - COMPLETE GUIDE
6. DOCKER COMPOSE - COMPLETE GUIDE
7. DOCKER IMAGES
8. DOCKER CONTAINERS
9. DOCKER VOLUMES
10. DOCKER NETWORKS
11. DOCKER BEST PRACTICES
12. TROUBLESHOOTING
13. PRODUCTION DEPLOYMENT
14. REAL-WORLD EXAMPLES


================================================================================
1. WHAT IS DOCKER? (BASICS)
================================================================================

WHAT IS DOCKER?
Docker is a containerization platform that packages your application with all
its dependencies into a single unit called a "container".

WHY USE DOCKER?
✅ Same environment everywhere (local, staging, production)
✅ No more "it works on my machine" problems
✅ Easy to deploy and scale
✅ Lightweight compared to virtual machines
✅ Microservices architecture friendly
✅ Easy CI/CD pipeline integration

DOCKER VS VIRTUAL MACHINE:
- Virtual Machine: Full guest OS + kernel + application = gigabytes, minutes to boot
- Docker Container: Application + dependencies, sharing the host kernel = megabytes, seconds to start

DOCKER ARCHITECTURE:
┌─────────────────────────────────────────────────┐
│         DOCKER DAEMON (Server)                  │
│  - Manages containers, images, networks, volumes│
└─────────────────────────────────────────────────┘
        ↑
        │ (Commands)
        ↓
┌─────────────────────────────────────────────────┐
│         DOCKER CLIENT (CLI)                     │
│  - Your commands (docker run, docker build, etc)│
└─────────────────────────────────────────────────┘


================================================================================
2. DOCKER INSTALLATION & SETUP
================================================================================

INSTALL DOCKER:

Option 1: Docker Desktop (Easiest - Recommended)
- Download from: https://www.docker.com/products/docker-desktop
- Works on: Windows, Mac, Linux
- Includes Docker Engine + Docker Compose

Option 2: Docker Engine on Linux
Ubuntu/Debian:
sudo apt-get update
sudo apt-get install docker.io docker-compose

Fedora/RHEL:
sudo dnf install docker docker-compose

Option 3: Docker on Windows (WSL2)
- Enable WSL2 (Windows Subsystem for Linux 2)
- Install Docker Desktop for Windows
- Configure to use WSL2 backend

VERIFY INSTALLATION:
docker --version
# Output: Docker version 20.10.x, build xxxxxxx

docker-compose --version
# Output: Docker Compose version 1.29.x, build xxxxxxx
# Note: Newer installs ship Compose v2 as a CLI plugin; check with: docker compose version

TEST DOCKER:
docker run hello-world
# If successful, you see: "Hello from Docker!"

CONFIGURE DOCKER (Optional):
docker info  # Shows Docker configuration and stats


================================================================================
3. DOCKER TERMINOLOGY
================================================================================

IMAGE:
- A blueprint/template for containers
- Contains: Application code + Runtime + Libraries + Dependencies
- Immutable (doesn't change)
- Can be built from a Dockerfile or pulled from Docker Hub

CONTAINER:
- Running instance of an image
- Has: File system, network interface, process space
- Mutable (can change while running)
- Multiple containers can run from the same image

DOCKERFILE:
- Text file with instructions to build an image
- Contains: FROM, RUN, COPY, EXPOSE, CMD, ENTRYPOINT, etc.
- Used by: docker build command

DOCKER HUB:
- Public registry for Docker images
- Like GitHub but for Docker images
- Pull images: docker pull ubuntu:20.04
- Push your images: docker push username/myimage:tag

REGISTRY:
- Repository of Docker images
- Public: Docker Hub, GitHub Container Registry, Quay.io
- Private: AWS ECR, Azure Container Registry, GitLab Registry

TAG:
- Version identifier for images
- Format: imagename:tag (e.g., node:20-alpine, ubuntu:22.04)
- Latest: Default tag if not specified

VOLUME:
- Persistent storage for containers
- Data persists even after container stops
- Types: Named volumes, Bind mounts, tmpfs

NETWORK:
- Communication between containers
- Types: Bridge (default), Host, Overlay, Macvlan

COMPOSE:
- Tool to define and run multi-container applications
- Uses docker-compose.yml file
- Orchestrates multiple containers as one unit


================================================================================
4. DOCKER COMMANDS (COMPLETE A-Z)
================================================================================

A. IMAGE COMMANDS
==================

Build an image:
docker build -t myimage:1.0 .
docker build -t myimage:1.0 -f Dockerfile .
docker build --no-cache -t myimage:1.0 .  # Don't use cache

Pull an image:
docker pull ubuntu:20.04
docker pull node:20-alpine
docker pull postgres:16

List images:
docker images
docker images --all  # Include intermediate images
docker images --filter "dangling=true"  # Unused images

Remove image:
docker rmi myimage:1.0
docker rmi -f myimage:1.0  # Force remove
docker image prune  # Remove unused images

Tag image:
docker tag myimage:1.0 myimage:latest
docker tag myimage:1.0 registry.com/myimage:1.0

Push image:
docker push myimage:1.0
docker push registry.com/myimage:1.0

Inspect image:
docker inspect myimage:1.0
docker inspect --format='{{json .}}' myimage:1.0

Search images:
docker search ubuntu
docker search node --limit 10

Get image history:
docker history myimage:1.0

Save image to file:
docker save -o myimage.tar myimage:1.0

Load image from file:
docker load -i myimage.tar


B. CONTAINER COMMANDS
======================

Run a container:
docker run -d -p 8008:8008 --name myapp myimage:1.0

Flags explained:
-d              # Run in detached mode (background)
-p 8008:8008    # Port mapping (host:container)
--name myapp    # Container name
-e VAR=value    # Environment variable
-v /data:/data  # Volume mount
--rm            # Auto remove when stopped
-i              # Interactive mode
-t              # Terminal
-it             # Interactive terminal

Examples:
docker run -d -p 3000:3000 -e NODE_ENV=production node:20-alpine
docker run -it ubuntu:20.04 /bin/bash
docker run --rm -v $(pwd):/app -w /app node:20 npm install

List containers:
docker ps              # Only running containers
docker ps -a          # All containers (running and stopped)
docker ps -q          # Only container IDs
docker ps --filter "status=exited"

Stop container:
docker stop myapp              # Graceful stop (SIGTERM, 10 sec default timeout)
docker stop -t 30 myapp       # Custom timeout
docker stop $(docker ps -q)    # Stop all containers

Start container:
docker start myapp
docker start myapp myapp2

Restart container:
docker restart myapp
docker restart -t 30 myapp

Remove container:
docker rm myapp
docker rm -f myapp             # Force remove
docker container prune         # Remove all stopped containers

View container logs:
docker logs myapp
docker logs -f myapp                  # Follow logs (like tail -f)
docker logs --tail 100 myapp         # Last 100 lines
docker logs --timestamps myapp       # With timestamps
docker logs --since 2024-01-01 myapp # Since specific time

Execute command in container:
docker exec -it myapp /bin/bash
docker exec -it myapp sh
docker exec myapp npm run migrate
docker exec myapp npm list

Inspect container:
docker inspect myapp
docker inspect --format='{{.State.Running}}' myapp

Get container stats:
docker stats
docker stats myapp
docker stats --no-stream

View container processes:
docker top myapp

Attach to container:
docker attach myapp

Copy files:
docker cp myapp:/app/file.txt ./file.txt  # From container to host
docker cp ./file.txt myapp:/app/file.txt  # From host to container

Commit container (create image):
docker commit myapp myimage:new

Export container:
docker export myapp -o myapp.tar

View container changes:
docker diff myapp


C. DOCKER SYSTEM COMMANDS
==========================

Show Docker info:
docker info
docker version

Manage resources:
docker system df           # Disk usage
docker system prune        # Remove stopped containers, unused networks, dangling images, build cache
docker system prune -a     # Also remove ALL unused images (not just dangling)
docker system prune -a --volumes  # Also remove unused volumes (NOT removed by default)

Clean up:
docker container prune     # Remove stopped containers
docker image prune         # Remove unused images
docker volume prune        # Remove unused volumes
docker network prune       # Remove unused networks

Get events:
docker events
docker events --filter "type=container"


D. NETWORK COMMANDS
===================

List networks:
docker network ls

Create network:
docker network create mynetwork
docker network create -d bridge mynetwork
docker network create -d overlay mynetwork  # For Swarm

Inspect network:
docker network inspect mynetwork

Connect container to network:
docker network connect mynetwork myapp

Disconnect container from network:
docker network disconnect mynetwork myapp

Remove network:
docker network rm mynetwork


E. VOLUME COMMANDS
==================

List volumes:
docker volume ls

Create volume:
docker volume create myvolume

Inspect volume:
docker volume inspect myvolume

Remove volume:
docker volume rm myvolume
docker volume prune  # Remove unused volumes

Run container with volume:
docker run -v myvolume:/data myimage:1.0
docker run -v $(pwd):/data myimage:1.0  # Bind mount


F. REGISTRY/LOGIN COMMANDS
==========================

Login to Docker Hub:
docker login
echo "$DOCKER_PASSWORD" | docker login -u username --password-stdin  # Safer than -p (avoids shell history)

Login to private registry:
docker login registry.example.com

Logout:
docker logout
docker logout registry.example.com

Push image:
docker push myusername/myimage:1.0

Pull image:
docker pull myusername/myimage:1.0


================================================================================
5. DOCKERFILE - COMPLETE GUIDE
================================================================================

WHAT IS DOCKERFILE?
A text file with instructions to build a Docker image.
Name: Dockerfile (no extension)
Used by: docker build command

DOCKERFILE STRUCTURE:

FROM ubuntu:20.04
RUN apt-get update && apt-get install -y curl
COPY ./app /app
WORKDIR /app
ENV NODE_ENV=production
EXPOSE 8008
CMD ["npm", "start"]


DOCKERFILE INSTRUCTIONS (A-Z):

ADD
- Copy files/folders; automatically extracts local tar archives
- Usage: ADD ./archive.tar.gz /app/
- Also supports remote URLs (unlike COPY)
- Prefer COPY unless you need these extra features

ARG
- Build-time variable (not available at runtime)
- Usage: ARG NODE_VERSION=20
- Access: Use ${NODE_VERSION} in Dockerfile
- Pass during build: docker build --build-arg NODE_VERSION=18 -t myimage .

CMD
- Default command when container starts
- Can be overridden by docker run command
- Only one CMD per Dockerfile (last one wins)
- Usage: CMD ["npm", "start"]

COPY
- Copy files from host to container
- Usage: COPY ./src /app/src
- Recommended over ADD for most use cases

ENTRYPOINT
- Configure container to run as executable
- Not easily overridden (unlike CMD; requires docker run --entrypoint)
- Usage: ENTRYPOINT ["node", "app.js"]
- Can combine with CMD, which then supplies default arguments:
    ENTRYPOINT ["node"]
    CMD ["app.js"]

ENV
- Set environment variable
- Available during build and at runtime
- Usage: ENV NODE_ENV=production
- Access in container: $NODE_ENV

EXPOSE
- Document which ports the container listens on
- Doesn't actually publish the port
- Usage: EXPOSE 8008
- Publish ports with: docker run -p 8008:8008

FROM
- Set base image
- Must be first instruction (except ARG)
- Usage: FROM node:20-alpine
- Multi-stage: FROM node:20 AS builder

HEALTHCHECK
- Define how Docker should check container health
- Usage: HEALTHCHECK --interval=30s CMD curl -f http://localhost:8008 || exit 1
- States: starting, healthy, unhealthy
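A fuller form combines the timing options with a command that fails on HTTP errors. The /health path below is an assumed endpoint, not a Docker convention - use whatever route your app actually serves:

```
# Assumed: the app exposes a health endpoint on port 8008
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
  CMD curl -f http://localhost:8008/health || exit 1
```

curl -f exits non-zero on 4xx/5xx responses, so the container is marked unhealthy after 3 consecutive failures.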

LABEL
- Add metadata to image
- Usage: LABEL maintainer="you@example.com"
- Query: docker images --filter "label=maintainer"

RUN
- Execute command during build
- Creates layer in image
- Usage: RUN npm install
- Multi-line: RUN apt-get update && apt-get install -y curl

SHELL
- Override default shell
- Default: /bin/sh -c on Linux, cmd /S /C on Windows
- Usage: SHELL ["/bin/bash", "-c"]

USER
- Set user for subsequent commands
- Usage: USER nodejs
- Security best practice: run as non-root

VOLUME
- Create mount point
- Usage: VOLUME ["/data"]
- Similar to docker run -v

WORKDIR
- Set working directory
- Usage: WORKDIR /app
- Creates directory if not exists

DOCKERFILE EXAMPLE (YOUR APP):

# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
COPY tsconfig.json ./
RUN npm ci
COPY src ./src
RUN npm run build

# Stage 2: Runtime
FROM node:20-alpine
WORKDIR /app
RUN adduser -S nodejs
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist
USER nodejs
EXPOSE 8008
CMD ["node", "dist/app.js"]

BUILD DOCKERFILE:
docker build -t myapp:1.0 .
docker build -t myapp:1.0 -f Dockerfile .
docker build -t myapp:1.0 --build-arg NODE_VERSION=18 .
docker build -t myapp:1.0 --no-cache .

BUILD WITH LABELS:
docker build -t myapp:1.0 \
  --label "maintainer=you@example.com" \
  --label "version=1.0" .


================================================================================
6. DOCKER COMPOSE - COMPLETE GUIDE
================================================================================

WHAT IS DOCKER COMPOSE?
Tool to define and run multi-container Docker applications.
File: docker-compose.yml
Define: Services, volumes, networks, environment variables

WHY DOCKER COMPOSE?
✅ Run multiple containers with one command
✅ Define entire application in one file
✅ Easy networking between containers
✅ Persistent volumes management
✅ Environment variables management
✅ Service dependencies

DOCKER COMPOSE FILE STRUCTURE:

version: '3.9'

services:
  web:
    image: nginx:latest
    ports:
      - "8008:80"
    environment:
      - NODE_ENV=production
    volumes:
      - ./html:/usr/share/nginx/html
    networks:
      - mynetwork

  database:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - mynetwork

volumes:
  postgres_data:

networks:
  mynetwork:
    driver: bridge


DOCKER COMPOSE SERVICES (DETAILED):

services:
  servicename:
    # Image to use
    image: node:20-alpine
    
    # Or build from Dockerfile
    build:
      context: .
      dockerfile: Dockerfile
      args:
        NODE_VERSION: 20
    
    # Container name
    container_name: myapp
    
    # Ports
    ports:
      - "8008:8008"           # host:container
      - "3000:3000/udp"       # With protocol
    
    # Environment variables
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
    
    # Or from file
    env_file: .env
    
    # Volumes
    volumes:
      - myvolume:/data              # Named volume
      - ./src:/app/src              # Bind mount
      - /tmp:/tmp:ro                # Read-only
    
    # Networks
    networks:
      - mynetwork
    
    # Command override
    command: npm start
    
    # Entrypoint override
    entrypoint: /bin/bash
    
    # Restart policy
    restart: unless-stopped  # always, on-failure, no, unless-stopped
    
    # Depends on
    depends_on:
      - database
    
    # Health check
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8008"]
      interval: 10s
      timeout: 5s
      retries: 5
    
    # Resource limits
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 256M
    
    # Logging
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"


DOCKER COMPOSE COMMANDS:

Start services:
docker-compose up                    # Foreground
docker-compose up -d                 # Background
docker-compose up --build            # Build before starting
docker-compose up -d --build backend # Specific service

Stop services:
docker-compose stop              # Graceful stop
docker-compose stop -t 30        # 30 second timeout
docker-compose stop backend      # Specific service

Start services:
docker-compose start
docker-compose start backend

Restart services:
docker-compose restart
docker-compose restart backend

Remove containers:
docker-compose down              # Remove containers
docker-compose down -v           # Also remove volumes
docker-compose down --remove-orphans

View logs:
docker-compose logs              # All services
docker-compose logs -f           # Follow logs
docker-compose logs backend      # Specific service
docker-compose logs -f backend --tail 100

View running services:
docker-compose ps

Build services:
docker-compose build
docker-compose build --no-cache
docker-compose build backend

Execute command:
docker-compose exec backend npm run migrate
docker-compose exec backend sh   # Allocates a TTY by default; use -T to disable

Push services:
docker-compose push

Pull images:
docker-compose pull

List images:
docker-compose images

Remove images:
docker-compose down --rmi all

Validate compose file:
docker-compose config
docker-compose config --resolve-image-digests

Run one-off command:
docker-compose run backend npm run seed

Pause/Unpause:
docker-compose pause
docker-compose unpause

Events:
docker-compose events


DOCKER COMPOSE WITH MULTIPLE FILES:

Development:
docker-compose -f docker-compose.yml up

Production:
docker-compose -f docker-compose.yml -f docker-compose.production.yml up

Staging:
docker-compose -f docker-compose.yml -f docker-compose.staging.yml up

Custom file:
docker-compose -f custom-compose.yml up

Multiple files:
docker-compose \
  -f docker-compose.yml \
  -f docker-compose.production.yml \
  -f docker-compose.monitoring.yml \
  up -d


ENVIRONMENT VARIABLES IN COMPOSE:

.env file (automatically loaded):
NODE_ENV=production
PORT=8008
DATABASE_URL=postgres://user:pass@db/mydb

Access in docker-compose.yml:
services:
  app:
    environment:
      - NODE_ENV=${NODE_ENV}
      - PORT=${PORT}
      - DATABASE_URL=${DATABASE_URL}

With defaults:
- NODE_ENV=${NODE_ENV:-development}  # Default: development

Command line (set variables in the shell environment):
NODE_ENV=production docker-compose up
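The ${VAR:-default} syntax Compose interpolates is borrowed from shell parameter expansion, so you can try the fallback behavior directly in any POSIX shell:

```shell
# ${VAR:-default} expands to the default only when VAR is unset or empty
unset NODE_ENV
echo "NODE_ENV=${NODE_ENV:-development}"    # prints NODE_ENV=development

NODE_ENV=production
echo "NODE_ENV=${NODE_ENV:-development}"    # prints NODE_ENV=production
```

The same rule applies inside docker-compose.yml: if NODE_ENV is missing from the environment and the .env file, the default after :- is used.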


NETWORKS IN DOCKER COMPOSE:

Default network (automatic):
- All services connected to default network
- Can access each other by service name

Custom network:
networks:
  mynetwork:
    driver: bridge

Connect service:
services:
  web:
    networks:
      - mynetwork
  db:
    networks:
      - mynetwork


VOLUMES IN DOCKER COMPOSE:

Named volume:
volumes:
  postgres_data:
    driver: local

Use in service:
services:
  db:
    volumes:
      - postgres_data:/var/lib/postgresql/data

Bind mount:
services:
  app:
    volumes:
      - ./src:/app/src

Temporary volume (tmpfs):
services:
  app:
    tmpfs:
      - /tmp
      - /run


DEPENDENCIES IN DOCKER COMPOSE:

services:
  app:
    depends_on:
      db:
        condition: service_healthy

Note: depends_on lives inside a service definition. With condition
service_healthy, Compose waits for the dependency's healthcheck to pass
before starting the service (db must define a healthcheck for this to work).


DOCKER COMPOSE EXAMPLE (YOUR APP):

version: '3.9'

services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: ${DB_USER:-postgres}
      POSTGRES_PASSWORD: ${DB_PASSWORD:-postgres}
      POSTGRES_DB: ${DB_NAME:-mydb}
    ports:
      - "${DB_PORT:-5432}:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-postgres}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    ports:
      - "${REDIS_PORT:-6379}:6379"
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "PING"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - app-network

  backend:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      NODE_ENV: ${NODE_ENV:-production}
      PORT: 8008
      DATABASE_URL: postgresql://${DB_USER:-postgres}:${DB_PASSWORD:-postgres}@postgres:5432/${DB_NAME:-mydb}
      REDIS_URL: redis://redis:6379
    ports:
      - "${PORT:-8008}:8008"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    volumes:
      - app_logs:/app/logs
    networks:
      - app-network
    restart: unless-stopped

volumes:
  postgres_data:
  redis_data:
  app_logs:

networks:
  app-network:
    driver: bridge


================================================================================
7. DOCKER IMAGES - DEEP DIVE
================================================================================

WHAT IS AN IMAGE?
Read-only template to create containers.
Contains: Code + Runtime + Libraries + Tools + Configuration

IMAGE STRUCTURE (Layers):
Base Layer (FROM ubuntu)
    ↓
Install Tools (RUN apt-get install)
    ↓
Copy Code (COPY ./src /app)
    ↓
Install Dependencies (RUN npm install)
    ↓
Set Environment (ENV NODE_ENV=production)
    ↓
Final Image

Each instruction creates a layer (immutable).
Layers are cached and reused.

IMAGE NAMING:
Format: [REGISTRY/]REPOSITORY:TAG
Examples:
- node:20-alpine
- docker.io/library/node:20-alpine
- gcr.io/my-project/myapp:1.0
- registry.example.com/myapp:latest

IMAGE TAGS:
- Latest (default if not specified)
- Version: 1.0, 1.0.0, 1.0.1
- Environment: production, staging, dev
- OS: alpine, ubuntu, debian
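To make the [REGISTRY/]REPOSITORY:TAG pattern concrete, here is a small illustrative shell helper (not a Docker command) that splits off the tag. It is deliberately simplified: it ignores digests (@sha256:...) and registries with a custom port (host:5000/...):

```shell
# Illustrative only: split repo and tag from an image reference
parse_image() {
  ref="$1"
  case "$ref" in
    *:*) repo="${ref%:*}"; tag="${ref##*:}" ;;   # tag given explicitly
    *)   repo="$ref";      tag="latest" ;;       # no tag -> :latest
  esac
  echo "repo=$repo tag=$tag"
}

parse_image node:20-alpine   # repo=node tag=20-alpine
parse_image ubuntu           # repo=ubuntu tag=latest
```

This mirrors what the docker CLI does when you omit a tag: `docker pull ubuntu` is shorthand for `docker pull ubuntu:latest`.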

CREATE IMAGES:

Method 1: From Dockerfile
docker build -t myimage:1.0 .

Method 2: From existing container
docker commit mycontainer myimage:1.0

Method 3: From file
docker load -i myimage.tar

PULL IMAGES:
docker pull node:20-alpine
docker pull node:20  # Latest in 20.x
docker pull node     # Latest version
docker pull registry.example.com/myimage:1.0

PUSH IMAGES:
docker login
docker tag myimage:1.0 myusername/myimage:1.0
docker push myusername/myimage:1.0

IMAGE INSPECTION:
docker inspect myimage:1.0
docker history myimage:1.0
docker inspect --format='{{.Config.Cmd}}' myimage:1.0

MULTI-STAGE BUILDS:

Reduce image size by building in stages:

FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY src ./src
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/app.js"]

Benefits:
- Remove build dependencies
- Smaller final image (~200MB vs 500MB)
- Faster deployments


================================================================================
8. DOCKER CONTAINERS - DEEP DIVE
================================================================================

WHAT IS A CONTAINER?
Running instance of an image.
Isolated file system, network, process space.
Lightweight virtual environment.

CONTAINER LIFECYCLE:

Created  →  Running    →  Paused   →  Stopped  →  Removed
(create)    (start/run)   (pause)     (stop)      (rm)

CREATE CONTAINER:
docker create myimage:1.0
# Returns container ID

RUN CONTAINER (Create + Start):
docker run myimage:1.0
docker run -d myimage:1.0         # Background
docker run -it myimage:1.0        # Interactive
docker run -it -p 8008:8008 myimage:1.0

START CONTAINER:
docker start container_id
docker start mycontainer

STOP CONTAINER:
docker stop container_id                    # Graceful (SIGTERM)
docker stop -t 30 container_id             # 30 second timeout
docker kill container_id                    # Force stop (SIGKILL)

VIEW CONTAINER STATUS:
docker ps              # Running only
docker ps -a          # All
docker ps -q          # IDs only
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Status}}"

CONTAINER PROCESSES:
docker top mycontainer
docker stats mycontainer
docker stats  # All containers

LOGS:
docker logs mycontainer              # All logs
docker logs -f mycontainer          # Follow
docker logs --tail 50 mycontainer   # Last 50 lines
docker logs --timestamps mycontainer

EXECUTE COMMANDS:
docker exec mycontainer echo "Hello"
docker exec -it mycontainer /bin/bash
docker exec mycontainer npm run migrate

INSPECT CONTAINER:
docker inspect mycontainer
docker inspect --format='{{.State.Running}}' mycontainer
docker inspect --format='{{json .}}' mycontainer | jq

CONTAINER COPY:
docker cp mycontainer:/app/file.txt ./file.txt
docker cp ./file.txt mycontainer:/app/

CONTAINER FILES:
docker diff mycontainer

EXPORT CONTAINER:
docker export mycontainer > mycontainer.tar

RESTART CONTAINER:
docker restart mycontainer
docker restart -t 30 mycontainer


================================================================================
9. DOCKER VOLUMES - DEEP DIVE
================================================================================

WHAT IS A VOLUME?
Persistent data storage for containers.
Survives container deletion.
Can be shared between containers.

VOLUME TYPES:

1. NAMED VOLUME (Managed by Docker)
   Location: /var/lib/docker/volumes/
   Created: docker volume create myvolume
   Used: docker run -v myvolume:/data

2. BIND MOUNT (Mount host directory)
   Location: Any directory on host
   Used: docker run -v /host/path:/container/path
   Used: docker run -v $(pwd):/app

3. TMPFS MOUNT (Temporary in-memory storage)
   Used: docker run --tmpfs /tmp
   Used: tmpfs: [/tmp] in docker-compose


VOLUME COMMANDS:

Create volume:
docker volume create myvolume

List volumes:
docker volume ls
docker volume ls --filter "name=myvolume"

Inspect volume:
docker volume inspect myvolume

Remove volume:
docker volume rm myvolume
docker volume prune  # Remove unused

Use volume in container:
docker run -v myvolume:/data myimage:1.0

Mount with read-only:
docker run -v myvolume:/data:ro myimage:1.0

Bind mount:
docker run -v $(pwd):/app myimage:1.0

Multiple volumes:
docker run -v vol1:/data -v vol2:/logs myimage:1.0

VOLUME IN DOCKER COMPOSE:

Define volume:
volumes:
  postgres_data:
    driver: local

Use in service:
services:
  db:
    volumes:
      - postgres_data:/var/lib/postgresql/data

Bind mount:
services:
  app:
    volumes:
      - ./src:/app/src

VOLUME DRIVER:

Default: local
docker run -v myvolume:/data myimage:1.0

Other drivers:
- nfs: Network File System (via the local driver's nfs options)
- cifs/smb: Samba/Windows share
- Custom: third-party volume plugins

Example with NFS (server address and export path are placeholders):
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/exports/data \
  nfsvolume
docker run -v nfsvolume:/data myimage:1.0


BACKUP VOLUME:

Backup:
docker run --rm -v myvolume:/data -v $(pwd):/backup alpine tar czf /backup/backup.tar.gz /data

Restore:
docker run --rm -v myvolume:/data -v $(pwd):/backup alpine tar xzf /backup/backup.tar.gz -C /
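The alpine helper above only runs tar; stripped of Docker, the same backup/restore round-trip looks like this (paths under /tmp are illustrative):

```shell
# Same czf / xzf round-trip the alpine helper performs, run locally
mkdir -p /tmp/demo-data
echo "hello" > /tmp/demo-data/file.txt

tar czf /tmp/backup.tar.gz -C /tmp demo-data   # backup: compress the directory
rm -rf /tmp/demo-data                          # simulate losing the volume
tar xzf /tmp/backup.tar.gz -C /tmp             # restore from the archive

cat /tmp/demo-data/file.txt                    # prints hello
```

In the Docker version, -v myvolume:/data puts the volume's contents where tar can read them, and -v $(pwd):/backup gives the archive a path that survives the throwaway container.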


================================================================================
10. DOCKER NETWORKS - DEEP DIVE
================================================================================

WHAT IS A DOCKER NETWORK?
Communication channel between containers.
Isolate containers from each other.
Multiple containers can share a network.

NETWORK TYPES:

1. BRIDGE (Default)
   - Default network for containers
   - Each container gets an IP address
   - On user-defined bridge networks, containers resolve each other by name (DNS)
     (the default bridge network does NOT provide name resolution)
   - Isolated from host network unless ports are mapped

2. HOST
   - Container uses host network stack
   - No IP address assigned
   - Direct access to host ports
   - Limited isolation
   - Usage: docker run --network host

3. OVERLAY
   - For Docker Swarm
   - Containers communicate across multiple hosts
   - Encrypted communication

4. MACVLAN
   - Assign MAC address to container
   - Appear as physical devices
   - Advanced use case

5. NONE
   - No network
   - Isolated container
   - Usage: docker run --network none


NETWORK COMMANDS:

List networks:
docker network ls

Create network:
docker network create mynetwork
docker network create -d bridge mynetwork
docker network create -d overlay mynetwork

Inspect network:
docker network inspect mynetwork

Remove network:
docker network rm mynetwork
docker network prune  # Remove unused

Connect container:
docker network connect mynetwork mycontainer

Disconnect container:
docker network disconnect mynetwork mycontainer


CONTAINER COMMUNICATION:

Same user-defined network - containers communicate by name:
Container1 → ping container2  # Works (DNS by name, user-defined networks only)
Container1 → ping 172.20.0.3  # Works (IP address)

Different networks - cannot communicate:
Container1 (network1) → ping container2 (network2)  # Fails

Port mapping:
docker run -p 8008:8008 myimage:1.0
External: localhost:8008 → Container: 8008


DOCKER COMPOSE NETWORKING:

Default: All services on same network (service name)

services:
  web:
    image: nginx
    # Automatically connected to default network
    # Can access: postgres:5432 (by service name)

  postgres:
    image: postgres
    # Automatically connected to default network

Custom network:
networks:
  mynetwork:

services:
  web:
    networks:
      - mynetwork
  postgres:
    networks:
      - mynetwork


================================================================================
11. DOCKER BEST PRACTICES
================================================================================

DOCKERFILE BEST PRACTICES:

1. Use specific base image versions
   ✅ FROM node:20-alpine
   ❌ FROM node  or FROM node:latest

2. Use Alpine Linux for smaller images
   ✅ FROM node:20-alpine (~150MB)
   ❌ FROM node:20 (~1GB)

3. Use .dockerignore
   Exclude: node_modules, .git, dist, .env
   Reduces build context
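A starting-point .dockerignore for a typical Node project might look like this (adjust to your own layout):

```
node_modules
dist
.git
.env
*.log
Dockerfile
docker-compose*.yml
```

Everything listed is excluded from the build context, which speeds up `docker build` and keeps secrets like .env out of image layers.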

4. Minimize layers
   ✅ RUN apt-get update && apt-get install -y curl
   ❌ RUN apt-get update
      RUN apt-get install -y curl

5. Remove unnecessary files
   RUN apt-get clean && rm -rf /var/lib/apt/lists/*

6. Use multi-stage builds
   Separate build and runtime stages
   Remove build dependencies from final image

7. Set working directory
   WORKDIR /app  # Not /

8. Use non-root user
   RUN addgroup -g 1001 -S nodejs
   RUN adduser -S nodejs -u 1001
   USER nodejs

9. Add health checks
   HEALTHCHECK --interval=30s CMD curl -f http://localhost:8008 || exit 1

10. Don't run as root
    Security risk
    Always: USER non-root-user

11. Expose ports
    EXPOSE 8008
    But still need -p flag to publish

12. Use environment variables
    ENV NODE_ENV=production
    ENV PORT=8008

13. Document with LABEL
    LABEL maintainer="you@example.com"
    LABEL version="1.0"

14. Copy in correct order
    ✅ COPY package*.json ./
       RUN npm install
       COPY src ./src
    ❌ COPY . .  # Rebuild every time


CONTAINER BEST PRACTICES:

1. One process per container
   Not: web server + database + cache in one container
   Yes: Separate containers for each

2. Use meaningful names
   ✅ docker run --name myapp
   ❌ docker run  (auto-generated name)

3. Set restart policy
   docker run --restart unless-stopped myimage:1.0
   Policies: always, unless-stopped, on-failure

4. Resource limits
   docker run -m 512M --cpus 1 myimage:1.0
   Prevent resource exhaustion

5. Use read-only volumes
   docker run -v myvolume:/data:ro myimage:1.0
   Prevent accidental writes

6. Don't use :latest tag in production
   ✅ myimage:1.0.0
   ❌ myimage:latest

7. Use environment variables
   docker run -e NODE_ENV=production myimage:1.0
   Not hardcoded in image


IMAGE BEST PRACTICES:

1. Keep images small
   Use Alpine Linux
   Multi-stage builds
   Remove unnecessary dependencies

2. Use semantic versioning
   v1.0.0, v1.0.1, v1.1.0
   Not: latest, master, current

3. Tag images appropriately
   myimage:1.0.0 (Production)
   myimage:staging (Staging)
   myimage:dev (Development)

4. Scan images for vulnerabilities
   docker scout cves myimage:1.0   # docker scan was retired in favor of docker scout
   trivy image myimage:1.0

5. Use official base images
   ✅ FROM node:20-alpine
   ❌ FROM random-user/unknown-image

6. Keep images updated
   Patch security vulnerabilities
   Update dependencies


COMPOSE BEST PRACTICES:

1. Use version control
   Commit docker-compose.yml to git
   Track changes

2. Use .env files
   docker-compose loads .env automatically
   Secret management

3. Use separate files for environments
   docker-compose.yml (base)
   docker-compose.production.yml (override)
   docker-compose.staging.yml (override)

4. Set resource limits
   deploy:
     resources:
       limits:
         memory: 1G
         cpus: '1'

5. Use health checks
   healthcheck:
     test: ["CMD", "curl", "-f", "http://localhost"]
     interval: 10s

6. Proper logging
   logging:
     driver: json-file
     options:
       max-size: 10m
       max-file: "3"

7. Restart policies
   restart: unless-stopped

8. Dependencies
   depends_on with condition service_healthy

9. Use named volumes
   Don't lose data when containers removed

10. Documentation
    Comments in compose file
    README with setup instructions
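Put together, a service definition following the practices above might look like this sketch (service and image names are illustrative; it assumes a `database` service with its own healthcheck is defined in the same file, and that the image includes curl):

```yaml
services:
  app:
    image: myimage:1.0.0
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: '1'
    healthcheck:
      # curl -f makes HTTP errors fail the check
      test: ["CMD", "curl", "-f", "http://localhost:8008"]
      interval: 10s
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: "3"
    depends_on:
      database:
        condition: service_healthy
    volumes:
      - app_data:/data   # named volume survives container removal

volumes:
  app_data:
```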


SECURITY BEST PRACTICES:

1. Don't run as root
   USER non-root-user

2. Use read-only filesystem
   read_only: true
   tmpfs for /tmp, /run

3. Don't store secrets in images
   Use environment variables
   Use secrets management tools

4. Use private registry for proprietary code
   Docker Hub for public
   Private registry for private

5. Scan images regularly
   docker scout cves myimage:1.0  (replaces the deprecated docker scan)

6. Keep base images updated
   Regular docker pull
   Rebuild images

7. Use network policies
   Restrict communication between containers

8. Don't publish unnecessary ports
   Only expose required ports

9. Use strong credentials
   For private registries
   For Docker Hub

10. Enable content trust
    DOCKER_CONTENT_TRUST=1
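Several of these hardening options can be expressed directly in a compose file. A sketch with illustrative values:

```yaml
services:
  app:
    image: myimage:1.0.0
    user: "1000:1000"          # don't run as root
    read_only: true            # read-only root filesystem
    tmpfs:                     # writable scratch space only where needed
      - /tmp
      - /run
    ports:
      - "8008:8008"            # publish only the required port
    environment:
      API_KEY: ${API_KEY}      # secret comes from the environment, not the image
```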


================================================================================
12. TROUBLESHOOTING
================================================================================

COMMON ERRORS AND SOLUTIONS:

ERROR: Cannot connect to Docker daemon
Solution:
  The Docker daemon is not running
  Start Docker Desktop, or on Linux:
  sudo systemctl start docker

ERROR: Permission denied while trying to connect to Docker daemon socket
Solution:
  Add user to docker group
  sudo usermod -aG docker $USER
  Logout and login

ERROR: Image not found
Solution:
  Pull image first
  docker pull node:20-alpine

ERROR: Port already in use
Solution:
  Change the host port: docker run -p 9000:8008 myimage:1.0
  Or free the port: lsof -i :8008 to find the PID, then kill <PID>

ERROR: Container exits immediately
Solution:
  Check logs: docker logs mycontainer
  Interactive mode: docker run -it myimage:1.0

ERROR: Container cannot reach other containers
Solution:
  Same network: docker network connect mynetwork container1
  Check DNS: docker exec container1 nslookup container2

ERROR: Out of disk space
Solution:
  docker system prune -a
  Remove unused: docker image prune

ERROR: Out of memory
Solution:
  Increase Docker memory limit
  Or set container limit: docker run -m 512M

ERROR: Logs not showing up
Solution:
  Check logs: docker logs mycontainer
  Follow logs: docker logs -f mycontainer

DEBUGGING:

View all processes:
docker ps -a

View container details:
docker inspect mycontainer

View logs:
docker logs mycontainer
docker logs -f mycontainer
docker logs --tail 50 mycontainer

Execute command:
docker exec -it mycontainer /bin/bash
docker exec mycontainer npm list

Check network:
docker network inspect mynetwork
docker exec mycontainer ping other-container

Check volumes:
docker volume inspect myvolume
docker run --rm -v myvolume:/data alpine ls /data

Resource usage:
docker stats mycontainer
docker top mycontainer

Clean up:
docker system prune -a
docker container prune
docker image prune
docker volume prune


================================================================================
13. PRODUCTION DEPLOYMENT
================================================================================

PRODUCTION CHECKLIST:

Before deploying to production:

✅ Image security
  - No secrets in image
  - Run as non-root user
  - Minimal base image (Alpine)
  - Scanned for vulnerabilities

✅ Configuration
  - Environment variables for config
  - Secrets management
  - Proper logging

✅ Reliability
  - Health checks defined
  - Restart policies set
  - Resource limits defined
  - Error handling

✅ Performance
  - Multi-stage builds (smaller image)
  - Proper caching strategy
  - Resource optimization

✅ Documentation
  - README with setup instructions
  - Dockerfile comments
  - Environment variables documented

✅ Testing
  - Tested locally
  - Tested in staging
  - All services working


PRODUCTION COMPOSE FILE:

version: '3.9'

services:
  app:
    image: registry.example.com/myapp:1.0.0  # Specific version
    restart: always                           # Auto-restart
    environment:
      NODE_ENV: production
      LOG_LEVEL: warn
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '1'
          memory: 1G
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: "3"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8008"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - app-network

  database:
    image: postgres:16-alpine
    restart: always
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}  # From secrets
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - app-network

volumes:
  postgres_data:

networks:
  app-network:
    driver: bridge


DEPLOYMENT PLATFORMS:

1. Docker Swarm (Built-in orchestration)
   Initialize: docker swarm init
   Deploy: docker stack deploy -c docker-compose.yml myapp

2. Kubernetes (Industry standard)
   Convert: kompose convert -f docker-compose.yml
   Deploy: kubectl apply -f deployment.yaml

3. Cloud platforms
   AWS ECS: Amazon Elastic Container Service
   Azure ACI: Azure Container Instances
   Google Cloud Run: Serverless containers
   DigitalOcean App Platform

4. VPS with docker-compose
   SSH to server
   Clone repository
   docker-compose up -d

5. Managed container services
   Heroku: Container deployment
   Railway: Modern PaaS
   Render: Container hosting


DEPLOYMENT STEPS (VPS EXAMPLE):

1. Setup server
   SSH to server
   Install Docker and Docker Compose

2. Clone repository
   git clone your-repo.git
   cd your-repo

3. Configure environment
   cp .env.example .env
   Edit .env with production values

4. Deploy
   docker-compose up -d

5. Monitor
   docker-compose ps
   docker-compose logs -f

6. Update
   git pull
   docker-compose build
   docker-compose up -d


MONITORING:

Check status:
docker-compose ps
docker stats

View logs:
docker-compose logs -f

Alerts:
Set up monitoring (Prometheus, Grafana)
Alert on container restarts
Alert on disk space

Backup:
Backup volumes: docker run --rm -v myvolume:/data -v $(pwd):/backup alpine tar czf /backup/backup.tar.gz /data

Database backup:
docker-compose exec postgres pg_dump -U postgres mydb > backup.sql
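The reverse of the two backups above, restoring into a (possibly new) volume and database — volume and database names are illustrative:

```shell
# Volume restore: the backup tar stores paths as data/..., so unpack at /
docker run --rm -v myvolume:/data -v $(pwd):/backup alpine \
  tar xzf /backup/backup.tar.gz -C /

# Database restore from the pg_dump output (-T disables the TTY for stdin)
docker-compose exec -T postgres psql -U postgres mydb < backup.sql
```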


================================================================================
14. REAL-WORLD EXAMPLES
================================================================================

EXAMPLE 1: SIMPLE WEB APP (Node.js + Nginx)

Dockerfile:
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY src ./src
RUN npm run build
RUN npm ci --omit=dev
EXPOSE 8008
CMD ["node", "dist/app.js"]

docker-compose.yml:
version: '3.9'
services:
  app:
    build: .
    ports:
      - "8008:8008"
    restart: unless-stopped
  
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - app

Commands:
docker-compose up --build -d
docker-compose logs -f app
docker-compose down
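The nginx service above mounts ./nginx.conf over the container's main config, so the file needs complete events and http blocks. A minimal reverse-proxy sketch, assuming the app service listens on 8008:

```nginx
events {}

http {
  server {
    listen 80;

    location / {
      # "app" resolves via Docker's embedded DNS on the compose network
      proxy_pass http://app:8008;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}
```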


EXAMPLE 2: FULL STACK (Frontend + Backend + Database)

docker-compose.yml:
version: '3.9'
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    environment:
      REACT_APP_API_URL: http://localhost:8008  # CRA reads this at build/dev-server time
  
  backend:
    build: ./backend
    ports:
      - "8008:8008"
    environment:
      DATABASE_URL: postgresql://user:pass@postgres:5432/mydb
      REDIS_URL: redis://redis:6379
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
  
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
  
  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "PING"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:

Commands:
docker-compose up --build -d
docker-compose exec backend npm run migrate
docker-compose logs -f backend


EXAMPLE 3: MICROSERVICES

docker-compose.yml:
version: '3.9'
services:
  users-service:
    build: ./services/users
    environment:
      DATABASE_URL: postgresql://user:pass@postgres:5432/users
    depends_on:
      - postgres
    networks:
      - microservices
  
  products-service:
    build: ./services/products
    environment:
      DATABASE_URL: postgresql://user:pass@postgres:5432/products
    depends_on:
      - postgres
    networks:
      - microservices
  
  api-gateway:
    build: ./api-gateway
    ports:
      - "8008:8008"
    environment:
      USERS_SERVICE: http://users-service:3000
      PRODUCTS_SERVICE: http://products-service:3000
    depends_on:
      - users-service
      - products-service
    networks:
      - microservices
  
  postgres:
    image: postgres:16
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - microservices

networks:
  microservices:

volumes:
  postgres_data:


EXAMPLE 4: PYTHON APP

Dockerfile:
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]

docker-compose.yml:
version: '3.9'
services:
  app:
    build: .
    ports:
      - "5000:5000"
    environment:
      FLASK_ENV: production
      DATABASE_URL: postgresql://user:pass@postgres:5432/mydb
    depends_on:
      postgres:
        condition: service_healthy
  
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:


================================================================================
QUICK REFERENCE CARD
================================================================================

ESSENTIAL COMMANDS:

# Build and run
docker build -t myapp:1.0 .
docker run -d -p 8008:8008 myapp:1.0
docker-compose up --build -d

# Monitor
docker ps
docker logs -f myapp
docker stats

# Stop and cleanup
docker stop myapp
docker rm myapp
docker-compose down

# Images and registries
docker pull node:20-alpine
docker push username/myapp:1.0
docker images
docker rmi myapp:1.0

# Debugging
docker exec -it myapp /bin/bash
docker inspect myapp
docker top myapp

# Cleanup
docker system prune -a
docker volume prune

================================================================================
END OF DOCKER A TO Z GUIDE
================================================================================

You now have complete knowledge of Docker from basics to production!

Key Takeaways:
✅ Docker simplifies deployment
✅ Containers are lightweight and portable
✅ Docker Compose orchestrates multiple containers
✅ Images are blueprints, containers are instances
✅ Volumes provide persistent storage
✅ Networks enable container communication
✅ Multi-stage builds optimize image size
✅ Best practices ensure security and performance
✅ Production deployment requires planning
✅ Troubleshooting skills are essential

Happy Dockering! 🐳
