
Docker Interview Questions: 30 Important Questions + 20 Scenario-Based Questions


Part 1: 30 Important Interview Questions with Detailed Answers


1. What is Docker and how does it differ from virtual machines?

  • Answer:

    Docker is a containerization platform that packages applications and their dependencies into lightweight, portable containers that run on a shared host operating system kernel.

    Key Differences from VMs:

    Aspect         | Docker Containers                            | Virtual Machines
    OS             | Shares host OS kernel                        | Each VM runs a full guest OS
    Size           | MBs (typically 10s-100s MB)                  | GBs (typically 1-10+ GB)
    Startup        | Seconds (milliseconds for simple containers) | Minutes
    Resource Usage | Minimal overhead, efficient                  | Significant overhead per VM
    Isolation      | Process-level (namespaces)                   | Hardware-level (hypervisor)
    Portability    | High - runs anywhere with Docker             | Medium - requires hypervisor compatibility

    Why containers are faster: Containers don’t boot an OS; they’re just isolated processes. When you run docker run ubuntu, you’re not booting Ubuntu - you’re starting a process with Ubuntu’s filesystem, using your host’s Linux kernel.
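
    A quick way to observe this on a Linux host (a minimal sketch; assumes Docker is installed and uses the public alpine image):

    Terminal window
    # The container reports the HOST's kernel - no guest OS is booted
    docker run --rm alpine uname -r
    # Compare with the host itself
    uname -r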


2. Explain the difference between CMD and ENTRYPOINT in Dockerfile.

  • Answer:

    Both define what command executes when a container starts, but they handle runtime arguments differently:

    CMD (Default Command - Overridable):

    • Provides defaults that can be completely replaced
    • If multiple CMD instructions exist, only the last one takes effect
    • Syntax: CMD ["executable", "param1", "param2"] (exec form) or CMD command param1 param2 (shell form)

    ENTRYPOINT (Fixed Executable - Overridable only with --entrypoint):

    • Defines the executable that will always run
    • Any docker run arguments are appended, not replaced
    • Best practice: Use ENTRYPOINT for the main executable, CMD for default arguments

    Practical Example:

    # Dockerfile
    FROM ubuntu
    ENTRYPOINT ["ping"]
    CMD ["google.com"]
    Terminal window
    # Uses default CMD: ping google.com
    docker run my-ping
    # Overrides CMD: ping yahoo.com
    docker run my-ping yahoo.com
    # Cannot replace ENTRYPOINT without --entrypoint flag
    docker run --entrypoint echo my-ping "hello"

    The Combined Pattern (Most Common):

    ENTRYPOINT ["python"]
    CMD ["app.py"]
    • docker run myapp → python app.py
    • docker run myapp script.py → python script.py

3. What are Docker namespaces and cgroups?

  • Answer:

    Namespaces (What a process can SEE):

    Namespaces give each container the illusion of running on its own machine by providing an isolated view of system resources such as process IDs, the network stack, and the filesystem:

    Namespace | Purpose
    PID       | Process IDs - container sees only its own processes
    NET       | Network - each container gets its own network stack
    MNT       | Mount points - isolated filesystem views
    UTS       | Hostname - container can have its own hostname
    IPC       | Inter-process communication isolation
    USER      | User IDs - root in container ≠ root on host
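
    A quick illustration of namespace isolation (a sketch; assumes the public alpine image):

    Terminal window
    # PID namespace: the container sees only its own processes (its entrypoint is PID 1)
    docker run --rm alpine ps
    # UTS namespace: the container carries its own hostname, independent of the host
    docker run --rm --hostname demo alpine hostname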

    cgroups (What a process can USE): Limit hardware resources to prevent one container from starving others:

    Terminal window
    # Memory limit
    docker run -m 512m --memory-reservation 256m myapp
    # CPU limit (25% of one core)
    docker run --cpus=0.25 myapp
    # CPU pinning to specific cores
    docker run --cpuset-cpus="0,1" myapp
    # CPU shares (priority when under load)
    docker run --cpu-shares=2048 high-priority-app

    Why this matters: Understanding this helps debug “my container feels slow” issues - check if you’ve set appropriate limits or if cgroup constraints are too restrictive.


4. How do you persist data in Docker containers?

  • Answer:

    Docker provides three primary ways to persist data:

    1. Volumes (Recommended for Production): Managed entirely by Docker, stored in /var/lib/docker/volumes/

    Terminal window
    # Create named volume
    docker volume create postgres-data
    # Use with container
    docker run -d \
    --name postgres \
    --mount type=volume,source=postgres-data,target=/var/lib/postgresql/data \
    postgres:13
    # Volume management
    docker volume ls
    docker volume inspect postgres-data
    docker volume prune # Remove unused volumes

    2. Bind Mounts (Development/Config Files): Direct mapping from host filesystem

    Terminal window
    # Recommended syntax
    docker run -d \
    --name nginx \
    --mount type=bind,source=/host/config/nginx.conf,target=/etc/nginx/nginx.conf,readonly \
    nginx
    # Shortcut syntax
    docker run -v /host/data:/container/data:ro myapp

    3. tmpfs Mounts (In-Memory, Ephemeral): Data stored in RAM, lost when container stops

    Terminal window
    # Store sensitive data in memory
    docker run -d \
    --name secure-app \
    --mount type=tmpfs,target=/tmp/secrets,tmpfs-mode=0700 \
    myapp

    Best Practices:

    • Use volumes for databases and persistent application data
    • Use bind mounts for development and configuration files
    • Use tmpfs for secrets or temporary high-performance writes
    • Always backup volume data; containers are disposable


5. Explain the different Docker network types and their use cases.

  • Answer:

    Docker provides several network drivers with different use cases:

    1. Bridge (Default) Creates private internal network; containers communicate via IP or name

    Terminal window
    # Create custom bridge (better isolation)
    docker network create --driver bridge my-network
    # Run containers in custom network
    docker run -d --network my-network --name web nginx
    docker run -d --network my-network --name db postgres
    # Web can reach db via hostname "db"

    2. Host Removes network isolation; container uses host’s network directly

    Terminal window
    docker run -d --network host nginx
    # Access at http://localhost:80 (no port mapping needed)

    3. Overlay Multi-host networking for Docker Swarm/Kubernetes

    4. Macvlan Assigns physical MAC address; container appears as physical device on network

    5. None Complete network isolation; only loopback

    Terminal window
    docker run --network none isolated-app

    Network Management Commands:

    Terminal window
    # List networks
    docker network ls
    # Connect running container to network
    docker network connect my-network web
    # Inspect network (see connected containers)
    docker network inspect my-network
    # Disconnect
    docker network disconnect my-network web

6. What’s the difference between COPY and ADD in Dockerfile?

  • Answer:

    Feature          | COPY      | ADD
    Local files      | ✅ Yes    | ✅ Yes
    Remote URLs      | ❌ No     | ✅ Yes (downloads)
    Auto-extract tar | ❌ No     | ✅ Yes (.tar, .tar.gz, etc.)
    Best practice    | Preferred | Use only when needed

    COPY (Simple, Transparent):

    COPY ./app /app
    COPY package.json /app/
    COPY --chown=node:node . /app

    ADD (Powerful but Unpredictable):

    # Downloads remote file (can break builds if URL unavailable)
    ADD https://example.com/file.tar.gz /tmp/
    # Auto-extracts tar (may have unintended behavior)
    ADD app.tar.gz /app/ # Extracts contents automatically
    # Preferred: Use wget/curl in RUN for more control
    RUN wget https://example.com/file.tar.gz && tar -xzf file.tar.gz

    Rule: Use COPY for local files; use ADD only when you specifically need URL fetching or automatic extraction. The unpredictability of ADD (especially extraction) makes builds less reproducible.



7. How do you reduce Docker image size?

  • Answer:

    1. Use Alpine or Slim Base Images:

    # Instead of: FROM ubuntu:22.04 (77MB)
    FROM alpine:3.18 (7MB)
    # Or: FROM node:20-slim (better than full node)

    2. Multi-stage Builds:

    # Build stage (with build tools)
    FROM golang:1.20 AS builder
    WORKDIR /app
    COPY go.mod go.sum ./
    RUN go mod download
    COPY . .
    RUN CGO_ENABLED=0 go build -o myapp
    # Runtime stage (only binary, no build tools)
    FROM alpine:latest
    RUN apk --no-cache add ca-certificates
    COPY --from=builder /app/myapp /usr/local/bin/
    ENTRYPOINT ["myapp"]
    # Result: 500MB → 15MB

    3. Combine RUN Commands (Reduce Layers):

    # Bad: Creates 3 layers
    RUN apt-get update
    RUN apt-get install -y python3
    RUN apt-get clean
    # Good: Single layer
    RUN apt-get update && \
    apt-get install -y python3 && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

    4. Use .dockerignore:

    .git
    node_modules
    *.log
    __pycache__
    .DS_Store

    5. Clean Up After Package Managers:

    RUN pip install --no-cache-dir -r requirements.txt
    RUN apt-get install -y --no-install-recommends package

    6. Specific Version Tags Instead of Latest:

    # Instead of: FROM node:latest
    FROM node:18.17.0-alpine

8. Explain Docker Compose and its use cases.

  • Answer:

    Docker Compose is a tool for defining and running multi-container Docker applications using YAML.

    Key Concepts:

    • Services: Define containers (web, db, redis)
    • Networks: Automatic DNS resolution between services
    • Volumes: Persistent storage for databases

    Example docker-compose.yml:

    version: '3.8'
    services:
      web:
        build: ./web
        ports:
          - "3000:3000"
        environment:
          - NODE_ENV=production
          - DB_HOST=postgres
        depends_on:
          - postgres
        volumes:
          - ./web:/app
        restart: unless-stopped
      postgres:
        image: postgres:15-alpine
        environment:
          POSTGRES_DB: myapp
          POSTGRES_USER: user
          POSTGRES_PASSWORD: secret
        volumes:
          - postgres-data:/var/lib/postgresql/data
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -U user"]
          interval: 10s
          timeout: 5s
          retries: 5
      redis:
        image: redis:7-alpine
        command: redis-server --appendonly yes
        volumes:
          - redis-data:/data
    volumes:
      postgres-data:
      redis-data:

    Common Commands:

    Terminal window
    # Start services in background
    docker compose up -d
    # View logs
    docker compose logs -f web
    # Scale a service
    docker compose up -d --scale web=3
    # Execute command in service
    docker compose exec web npm run migrate
    # Stop and remove everything
    docker compose down -v # -v removes volumes

    Use Cases:

    • Development environments with multiple services
    • CI/CD testing with dependencies
    • Production deployments (though Kubernetes often better for complex setups)
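
    For the CI/CD use case, a minimal sketch using the service names from the example file above (the test command itself is an assumption):

    Terminal window
    # Start only the dependencies, run the test suite once, then clean up
    docker compose up -d postgres redis
    docker compose run --rm web npm test
    docker compose down -v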

9. How do you manage secrets in Docker?

Answer:

For Docker Compose (BuildKit required):

version: '3.8'
services:
  app:
    build: .
    secrets:
      - db_password
      - api_key
secrets:
  db_password:
    file: ./secrets/db_password.txt
  api_key:
    external: true # Created via `docker secret create`

For Swarm Mode (Production):

Terminal window
# Create secrets
echo "MySecurePassword123" | docker secret create db_password -
docker secret create api_key ./api_key.txt
# Use in service
docker service create \
--name app \
--secret db_password \
--secret api_key \
--publish 80:3000 \
myapp:latest

Environment Variables (Less Secure):

Terminal window
# Not recommended for production secrets
docker run -e DB_PASSWORD="secret123" myapp

Best Practices:

  1. Never hardcode secrets in Dockerfiles or images
  2. Use Docker secrets (Swarm) or external secret stores (HashiCorp Vault)
  3. Avoid passing secrets via environment variables (can be inspected)
  4. For development, use .env files with .gitignore
  5. Consider tools like Docker Secrets or Kubernetes Secrets for orchestration
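
Related: BuildKit can also inject build-time secrets without writing them into any image layer. A minimal sketch (the secret id `api_key` and the script it feeds are illustrative assumptions; older Docker releases also need a `# syntax=docker/dockerfile:1` first line):

# Dockerfile snippet - the secret is mounted at /run/secrets/api_key only for this RUN step
RUN --mount=type=secret,id=api_key \
    API_KEY="$(cat /run/secrets/api_key)" ./fetch-private-deps.sh

Terminal window
# Pass the secret file at build time (BuildKit)
DOCKER_BUILDKIT=1 docker build --secret id=api_key,src=./api_key.txt -t myapp .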

10. What are the differences between Docker Swarm and Kubernetes?


Answer:

Aspect            | Docker Swarm                           | Kubernetes
Complexity        | Simple, easy to set up                 | Complex, steep learning curve
Installation      | Built into Docker (1-click)            | Requires separate installation
Scaling           | Simple command                         | More complex but powerful
Service Discovery | Built-in DNS                           | DNS, also supports Ingress
Load Balancing    | Built-in (L4)                          | L4 and L7 (Ingress)
Rolling Updates   | Yes, simpler                           | Yes, more configurable
Networking        | Overlay network                        | CNI plugins (Calico, Flannel)
Storage           | Volume plugins                         | CSI (Container Storage Interface)
Self-healing      | Basic                                  | Advanced (health probes)
Auto-scaling      | Limited                                | CPU/memory, custom metrics
Learning Curve    | Low                                    | High
Best For          | Small to medium deployments, startups  | Large-scale, complex microservices

Example Swarm vs K8s Deployment:

Docker Swarm:

Terminal window
# Initialize
docker swarm init
# Deploy stack
docker stack deploy -c docker-compose.yml myapp
# Scale
docker service scale myapp_web=5

Kubernetes (YAML):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer

11. Explain Docker’s layered filesystem and how it works.


Answer:

Docker images use Union Filesystems (AUFS, overlay2, etc.) with a layered architecture:

Layer Structure:

Container Layer (Read-Write) ← Changes here
├── Image Layer 3 (Read-Only)
├── Image Layer 2 (Read-Only)
└── Image Layer 1 (Read-Only) ← Base image

How it works:

  1. Each Dockerfile instruction creates a layer
  2. Layers are cached and reused across images
  3. When container runs, thin read-write layer added on top
  4. Copy-on-Write: When container modifies file, it’s copied from read-only to writable layer
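
A small way to observe copy-on-write in practice (a sketch; assumes a running nginx container named web):

Terminal window
# Modify a file inside the running container
docker exec web sh -c 'echo "# tweak" >> /etc/nginx/nginx.conf'
# docker diff lists only what changed in the thin writable layer (C = changed, A = added)
docker diff web
# C /etc/nginx
# C /etc/nginx/nginx.conf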

Benefits:

  • Faster builds (cached layers)
  • Smaller storage (shared layers)
  • Efficient transfers (only missing layers pulled)

Example showing layers:

Terminal window
# Build image with layers
docker build -t myapp .
# View image layers
docker history myapp
# IMAGE CREATED CREATED BY SIZE
# abc123 2 mins ago CMD ["python","app.py"] 0B
# def456 2 mins ago RUN pip install -r req.txt 150MB
# ghi789 3 mins ago COPY requirements.txt . 1.2kB
# jkl012 3 mins ago WORKDIR /app 0B

Layer Caching in Practice:

# Optimized for caching - dependencies first
FROM python:3.9
WORKDIR /app
# Copy only requirements first (cached unless requirements changes)
COPY requirements.txt .
RUN pip install -r requirements.txt # This layer caches
# Copy application code (changes often)
COPY . .
CMD ["python", "app.py"]

12. What are the different container states in Docker?


Answer:

Container States:

State      | Description                          | Example
Created    | Container created but not started    | docker create nginx
Running    | Container actively executing         | docker run -d nginx
Paused     | Processes suspended (freezer cgroup) | docker pause container
Restarting | In restart process                   | --restart always
Exited     | Stopped with or without error        | Container finished or stopped
Dead       | Failed to stop properly (rare)       | Can be manually removed

Transitions:

Terminal window
# Create → Running
docker create --name nginx nginx
docker start nginx
# Running → Paused → Running
docker pause nginx
docker unpause nginx
# Running → Exited
docker stop nginx # SIGTERM (graceful)
docker kill nginx # SIGKILL (immediate)
# Exited → Running
docker start nginx
# Any state → Removed
docker rm nginx

Checking State:

Terminal window
# List with status
docker ps -a
# Filter by status
docker ps --filter status=exited
docker ps --filter status=running
# Get detailed status
docker inspect container | jq '.[0].State.Status'

Why containers exit immediately:

Terminal window
# This exits because no foreground process
docker run ubuntu
# This runs 10 seconds then exits
docker run ubuntu sleep 10
# This stays running
docker run -d nginx # nginx runs in foreground

13. How do you troubleshoot a container that won’t start?


Answer:

Systematic Troubleshooting Approach:

1. Check Logs:

Terminal window
# Get last 100 lines
docker logs --tail 100 failing-container
# Follow logs in real-time
docker logs -f failing-container
# Logs from last 10 minutes
docker logs --since 10m failing-container

2. Inspect Container Details:

Terminal window
# Full JSON details
docker inspect failing-container
# Specific exit code
docker inspect --format='{{.State.ExitCode}}' failing-container
# Check error message
docker inspect --format='{{.State.Error}}' failing-container

3. Try Running Without Detach:

Terminal window
# Run in foreground to see immediate errors
docker run --rm myapp
# With interactive shell if possible
docker run -it myapp /bin/sh

4. Override Entrypoint/Command:

Terminal window
# Override to get shell access
docker run --rm -it --entrypoint /bin/sh myapp
# Or override command
docker run --rm myapp ls -la

5. Check Resource Limits:

Terminal window
# Container might be OOM killed
docker inspect container | grep -A 5 "OOMKilled"
# Check system logs
dmesg | grep -i kill
journalctl -u docker | grep -i error

6. Verify Configuration:

Terminal window
# Check port conflicts
docker run --rm -p 80:80 nginx # Port 80 already in use?
# Check volume permissions
ls -la /host/mount # Permissions must allow container user

7. Docker Daemon Logs:

Terminal window
# Linux
journalctl -u docker.service -f
# Mac/Windows
docker logs --tail 100 docker-desktop

14. Explain the difference between docker run, docker start, and docker exec.


Answer:

docker run = Create + Start

  • Creates a new container from image
  • Starts it immediately
  • Most common command
Terminal window
# Basic run
docker run nginx
# With options
docker run -d --name web -p 80:80 nginx
# One-off commands (container stops after)
docker run --rm ubuntu ls -la

docker start = Start existing

  • Restarts a previously created/stopped container
  • Preserves all configuration from original run
Terminal window
# Create but don't start
docker create --name web nginx
# Start it later
docker start web
# Start with attach
docker start -a web

docker exec = Execute in running

  • Runs command in already running container
  • Requires container to be running
  • Most common for debugging
Terminal window
# Get shell in running container
docker exec -it web /bin/bash
# Run single command
docker exec web cat /etc/hosts
# Run as different user
docker exec -u root web whoami

Comparison Table:

Command      | Creates New Container | Requires Running Container | Preserves State
docker run   | ✅ Yes                | ❌ No                      | ❌ No (fresh)
docker start | ❌ No                 | ❌ No (can be stopped)     | ✅ Yes
docker exec  | ❌ No                 | ✅ Yes                     | ✅ Yes

Practical Scenario:

Terminal window
# 1. Run container in background
docker run -d --name redis redis
# 2. Execute command inside
docker exec redis redis-cli ping # Returns PONG
# 3. Stop container
docker stop redis
# 4. Start it again (same data)
docker start redis
# 5. Can't exec when stopped
docker exec redis redis-cli ping # Error: container not running

15. How do you implement health checks in Docker?


Answer:

Health Checks in Dockerfile:

FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
# Define health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD node health.js || exit 1
EXPOSE 3000
CMD ["npm", "start"]

Health Check Options:

  • --interval: How often to check (default 30s)
  • --timeout: Max time for check to complete (default 30s)
  • --start-period: Time to wait before first check (default 0s)
  • --retries: Consecutive failures needed to mark unhealthy (default 3)

Docker Compose Health Check:

version: '3.8'
services:
  postgres:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
  app:
    build: .
    depends_on:
      postgres:
        condition: service_healthy

Check Health Status:

Terminal window
# View container health
docker ps
# CONTAINER ID IMAGE STATUS
# abc123 app Up 2 minutes (healthy)
# Detailed health status
docker inspect --format='{{json .State.Health}}' container | jq
# Poll until the container reports healthy
until [ "$(docker inspect -f '{{.State.Health.Status}}' container)" = "healthy" ]; do sleep 2; done

Custom Health Check Script (Node.js example):

health.js
const http = require('http');
const options = {
host: 'localhost',
port: 3000,
path: '/health',
timeout: 2000
};
const request = http.request(options, (res) => {
console.log(`STATUS:${res.statusCode}`);
if (res.statusCode === 200) {
process.exit(0); // Healthy
} else {
process.exit(1); // Unhealthy
}
});
request.on('error', (err) => {
console.error('Health check failed:', err);
process.exit(1);
});
request.end();

16. What are Docker image tags and how do you use them?


Answer:

Tags are identifiers that point to specific image versions, following the format: [registry/]repository[:tag]

Tag Structure:

Terminal window
# Full format
docker pull docker.io/library/nginx:1.21-alpine
# ^registry ^repo ^tag
# Common patterns
nginx:latest # Default, not recommended for production
nginx:1.21 # Major version
nginx:1.21.6 # Full version
nginx:1.21-alpine # Version with variant
myapp:v2.0.1 # Custom semantic version
myapp:abc1234 # Git commit hash

Tagging Images:

Terminal window
# Tag existing image
docker tag myapp:latest myapp:v1.0.0
docker tag myapp:latest myregistry.com/myapp:v1.0.0
# Build with tag
docker build -t myapp:2.0.0 .
# Multiple tags for same image
docker tag myapp:latest myapp:stable
docker tag myapp:latest myapp:2.0.0

Best Practices:

Terminal window
# Production - Use specific versions
FROM node:18.17.0-alpine3.18
# Never use 'latest' in production
# Bad: FROM node:latest
# CI/CD - Use commit hash or build ID
docker build -t myapp:${GIT_COMMIT} .
docker tag myapp:${GIT_COMMIT} myapp:latest
# Semantic versioning
myapp:1.0.0 # Specific version
myapp:1.0 # Minor version latest
myapp:1 # Major version latest

Tag Management:

Terminal window
# List tags (requires registry API)
curl -X GET https://registry.hub.docker.com/v2/repositories/library/nginx/tags
# Remove tag (untag)
docker rmi myapp:v1.0.0 # Removes tag, not the image if other tags exist
# Filter images by tag
docker images | grep myapp

17. Explain the Docker build cache and how to optimize it.


Answer:

Docker caches each layer during build. If a layer hasn’t changed, Docker reuses the cached layer.

Cache Invalidation Rules:

  1. FROM - Always invalidates if base image changes
  2. COPY/ADD - Invalidates if file contents change
  3. RUN - Invalidates if command changes
  4. Previous layer changes cascade to all subsequent layers

Optimization Strategies:

1. Order Layers by Change Frequency:

# Optimized - dependencies first
FROM node:18
WORKDIR /app
# Rarely changes
COPY package*.json ./
RUN npm ci # Cached until package.json changes
# Changes frequently
COPY . .
RUN npm run build
# Better than:
# COPY . . # Any file change invalidates everything
# RUN npm ci

2. Combine Commands to Reduce Layers:

# Bad - multiple layers
RUN apt-get update
RUN apt-get install -y python3
RUN apt-get clean
# Good - single layer
RUN apt-get update && \
apt-get install -y python3 && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

3. Use .dockerignore:

node_modules
.git
*.log
.env
.DS_Store

4. Multi-stage Builds:

# Build stage - large tools
FROM golang:1.20 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o myapp
# Runtime stage - minimal
FROM alpine:latest
RUN apk --no-cache add ca-certificates
COPY --from=builder /app/myapp /myapp
CMD ["/myapp"]

5. Leverage BuildKit for Better Caching:

Terminal window
# Enable BuildKit
export DOCKER_BUILDKIT=1
# Use cache mounts
RUN --mount=type=cache,target=/root/.cache/pip \
pip install -r requirements.txt

View Cache Usage:

Terminal window
# Show cache disk usage
docker system df
# Prune cache
docker builder prune -a
# Build without cache
docker build --no-cache -t myapp .

18. How do you implement logging in Docker containers?

Answer:

1. Docker Logging Drivers:

Terminal window
# Default json-file (stores on disk)
docker run --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 nginx
# syslog
docker run --log-driver syslog --log-opt syslog-address=udp://localhost:514 nginx
# fluentd
docker run --log-driver fluentd --log-opt fluentd-address=localhost:24224 nginx
# awslogs (CloudWatch)
docker run --log-driver awslogs --log-opt awslogs-region=us-east-1 nginx
# none (disable logging)
docker run --log-driver none nginx

2. Container Logging Best Practices:

docker-compose.yml
services:
  app:
    image: myapp
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
        labels: "production"
        env: "env"

3. Application Logging Patterns:

# Python - Log to stdout/stderr
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logger = logging.getLogger(__name__)
logger.info("Application started") # Goes to docker logs

4. Log Management Commands:

Terminal window
# View logs
docker logs container
# Tail logs
docker logs -f --tail 100 container
# Logs since timestamp
docker logs --since 2023-01-01T10:00:00 container
# Show timestamps on each line
docker logs -t container

5. Centralized Logging Setup:

# docker-compose with ELK
version: '3.8'
services:
  app:
    image: myapp
    logging:
      driver: "fluentd"
      options:
        fluentd-address: "fluentd:24224"
        tag: "app.{{.Name}}"
  fluentd:
    image: fluent/fluentd
    volumes:
      - ./fluentd.conf:/fluentd/etc/fluent.conf
    ports:
      - "24224:24224"
  elasticsearch:
    image: elasticsearch:7.17
    environment:
      - discovery.type=single-node
  kibana:
    image: kibana:7.17
    ports:
      - "5601:5601"

19. What are Docker contexts and how do you use them?


Answer:

Docker contexts allow you to manage multiple Docker environments (local, remote, cloud) from a single CLI.

Creating Contexts:

Terminal window
# List all contexts
docker context ls
# Create context for remote Docker daemon
docker context create remote \
--docker "host=ssh://user@remote-server"
# Create context for Docker Desktop
docker context create desktop \
--docker "host=unix:///var/run/docker.sock"
# Create context for cloud (AWS ECS)
docker context create ecs \
--description "AWS ECS" \
--from-env

Using Contexts:

Terminal window
# Switch context
docker context use remote
# Run commands on remote
docker ps # Shows remote containers
# Use context temporarily
docker --context remote ps
# Show current context
docker context show
# Remove context
docker context rm remote

Use Cases:

1. Multi-environment Management:

Terminal window
# Create contexts for different environments
docker context create staging --docker "host=ssh://staging-server"
docker context create production --docker "host=ssh://prod-server"
# Quick switches
docker context use staging
docker compose up -d
docker context use production
docker compose up -d

2. Docker Desktop Context:

Terminal window
# Docker Desktop includes default context
docker context use default
docker ps # Local containers

3. Cloud Integration:

Terminal window
# AWS ECS context (experimental)
docker context create ecs myecs
docker context use myecs
docker compose up # Deploys to ECS

4. CI/CD Pipeline:

Terminal window
# In CI pipeline
docker context create remote \
--docker "host=ssh://${DEPLOY_USER}@${DEPLOY_HOST}"
docker --context remote compose up -d

Context Configuration Location:

Terminal window
# Contexts stored in
~/.docker/contexts/

20. Explain Docker resource limits and monitoring.


Answer:

Setting Resource Limits:

1. Memory Limits:

Terminal window
# Hard limit
docker run -m 512m --memory-reservation 256m myapp
# Swap limit
docker run -m 512m --memory-swap 1g myapp # 512M RAM + 512M swap
# Memory swappiness
docker run --memory-swappiness=60 myapp # 0-100, default 60

2. CPU Limits:

Terminal window
# CPU cores
docker run --cpus=1.5 myapp # 1.5 cores
# CPU shares (relative weight)
docker run --cpu-shares=2048 high-priority-app
docker run --cpu-shares=512 low-priority-app
# CPU pinning
docker run --cpuset-cpus="0,1" myapp # Use only cores 0 and 1
# CPU quota (CFS)
docker run --cpu-period=100000 --cpu-quota=25000 myapp # 25% of one core

3. I/O Limits:

Terminal window
# Device read/write speed
docker run --device-read-bps /dev/sda:1mb --device-write-bps /dev/sda:1mb myapp
# IOPS limits
docker run --device-read-iops /dev/sda:100 --device-write-iops /dev/sda:100 myapp

Monitoring Commands:

Terminal window
# Real-time stats
docker stats
# Format output
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
# Specific container
docker stats container1 container2
# Get all container stats programmatically
docker stats --no-stream --format '{{json .}}'

Docker Compose Limits:

version: '3.8'
services:
  app:
    image: myapp
    deploy:
      resources:
        limits:
          cpus: '1.5'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 256M

Monitoring with Prometheus:

Terminal window
# Install cAdvisor for container metrics
docker run -d \
--name=cadvisor \
--volume=/:/rootfs:ro \
--volume=/var/run:/var/run:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--publish=8080:8080 \
gcr.io/cadvisor/cadvisor:latest

Check Resource Usage:

Terminal window
# Check disk usage
docker system df
# Detailed disk usage
docker system df -v
# Prune unused resources
docker system prune -a --volumes

21. What are Docker security best practices?

Answer:

1. Image Security:

Terminal window
# Scan images for vulnerabilities
docker scan myapp
docker scan --accept-license myapp
# Use official/verified images
FROM node:18-alpine # Official
FROM myregistry.com/verified/node:18 # Verified
# Use specific versions
FROM node:18.17.0-alpine # Never 'latest'

2. Runtime Security:

Terminal window
# Drop all capabilities and add only needed
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx
# Read-only root filesystem
docker run --read-only myapp
# No new privileges
docker run --security-opt=no-new-privileges myapp
# Limit kernel capabilities
docker run --security-opt=seccomp=path/to/seccomp.json myapp
# User namespace remapping: map root in the container to a non-root user on the host
# Enable in /etc/docker/daemon.json: { "userns-remap": "default" }
# (--userns=host opts a single container OUT of remapping once it is enabled)

3. Resource Restrictions:

Terminal window
# Prevent fork bombs
docker run --pids-limit=100 myapp
# Restrict devices
docker run --device-cgroup-rule='c 1:3 rmw' myapp

4. Network Security:

Terminal window
# Custom network with strict isolation
docker network create \
--driver bridge \
--opt com.docker.network.bridge.enable_icc=false \
secure-network
# Use internal network (no external access)
docker network create --internal internal-network
# Limit exposed ports
docker run -p 127.0.0.1:8080:80 myapp # Only localhost

5. Secrets Management:

Terminal window
# Never use environment variables for secrets
# Bad: -e DB_PASSWORD=secret
# Use Docker secrets (Swarm)
echo "secret" | docker secret create db_password -
docker service create --secret db_password myapp
# Or external vault
docker run -e VAULT_TOKEN=token myapp

6. Dockerfile Security:

# Use non-root user
FROM node:18-alpine
RUN addgroup -g 1001 -S nodejs && \
adduser -S nodejs -u 1001 -G nodejs
USER nodejs
# Copy files with correct ownership
COPY --chown=nodejs:nodejs . /app
# Avoid secrets in build
# Never: RUN echo "secret" > file.txt

7. Audit and Compliance:

Terminal window
# Docker Bench Security (CIS benchmarks)
docker run --net host --pid host --userns host \
--cap-add audit_control \
-e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
-v /var/lib:/var/lib \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/lib/systemd:/usr/lib/systemd \
-v /etc:/etc --label docker_bench_security \
docker/docker-bench-security

22. Explain multi-stage builds and their benefits.


Answer:

Multi-stage builds use multiple FROM statements in a single Dockerfile, allowing you to copy artifacts between stages.

Basic Example:

# Stage 1: Build
FROM golang:1.20 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .
# Stage 2: Runtime
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/main .
EXPOSE 8080
CMD ["./main"]

Benefits:

1. Smaller Images:

# Without multi-stage: ~500MB
FROM node:18
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
CMD ["node", "dist/index.js"]
# With multi-stage: ~50MB
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package*.json ./
RUN npm ci --only=production
CMD ["node", "dist/index.js"]

2. Multiple Build Contexts:

# Build frontend and backend in same Dockerfile
FROM node:18 AS frontend-builder
WORKDIR /frontend
COPY frontend/package*.json ./
RUN npm ci
COPY frontend/ .
RUN npm run build
FROM maven:3.8 AS backend-builder
WORKDIR /backend
COPY backend/pom.xml ./
RUN mvn dependency:go-offline
COPY backend/ .
RUN mvn package
FROM openjdk:11-jre-slim
COPY --from=backend-builder /backend/target/app.jar /app.jar
COPY --from=frontend-builder /frontend/dist /static
CMD ["java", "-jar", "/app.jar"]

3. Testing Stage:

# Development stage
FROM node:18 AS development
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["npm", "run", "dev"]
# Testing stage
FROM development AS testing
RUN npm run test
# Production stage
FROM node:18-alpine AS production
COPY --from=development /app/dist ./dist
COPY --from=development /app/package*.json ./
RUN npm ci --only=production
CMD ["node", "dist/index.js"]

4. Buildkit Optimizations:

# Leverage buildkit for better caching
FROM node:18 AS builder
RUN --mount=type=cache,target=/root/.npm \
npm ci
FROM builder AS test
RUN --mount=type=cache,target=/root/.npm \
npm run test
FROM builder AS prod
RUN npm run build

23. What are Docker volumes and how do they differ from bind mounts?


Answer:

Aspect      | Volumes                                    | Bind Mounts
Location    | Docker-managed (/var/lib/docker/volumes/)  | User-managed (any path on host)
Creation    | docker volume create or automatically      | Manually or automatically
Portability | High - works across hosts                  | Low - depends on host path
Backup      | Built-in commands available                | Manual file copy
Performance | Good                                       | Same as host filesystem
Permissions | Docker manages                             | Host permissions apply
Use Case    | Production data, databases                 | Development, config files

Volumes Example:

Terminal window
# Create volume
docker volume create postgres-data
# Use volume
docker run -d \
--name postgres \
--mount type=volume,source=postgres-data,target=/var/lib/postgresql/data \
postgres:13
# Inspect volume
docker volume inspect postgres-data
# Shows mountpoint: /var/lib/docker/volumes/postgres-data/_data
# Backup volume
docker run --rm \
-v postgres-data:/source \
-v $(pwd):/backup \
alpine tar czf /backup/postgres-backup.tar.gz -C /source .
# Restore volume
docker run --rm \
-v postgres-data:/target \
-v $(pwd):/backup \
alpine tar xzf /backup/postgres-backup.tar.gz -C /target

Bind Mounts Example:

Terminal window
# Development with code sync
docker run -d \
--name dev \
--mount type=bind,source=$(pwd)/app,target=/app \
node:18 npm run dev
# Configuration files
docker run -d \
--name nginx \
--mount type=bind,source=/host/nginx.conf,target=/etc/nginx/nginx.conf,readonly \
nginx

Docker Compose:

version: '3.8'
services:
  postgres:
    image: postgres:13
    volumes:
      - postgres-data:/var/lib/postgresql/data # Named volume
  app:
    image: node:18
    volumes:
      - ./app:/app # Bind mount
      - /app/node_modules # Anonymous volume (prevents overwrite)
volumes:
  postgres-data: # Named volume declaration

Performance Considerations:

  • Volumes: ~same as host filesystem (native)
  • Bind mounts: ~same as host filesystem
  • For databases: Volumes often preferred for portability
  • For code: Bind mounts during development

24. How do you manage Docker logs and rotate them?


Answer:

1. Docker Logging Driver Configuration:

Terminal window
# Default json-file with rotation
docker run \
--log-driver json-file \
--log-opt max-size=10m \
--log-opt max-file=3 \
--log-opt compress=true \
nginx
# Local driver (faster, less overhead)
docker run \
--log-driver local \
--log-opt max-size=10m \
--log-opt max-file=3 \
nginx

2. Docker Daemon Configuration (daemon.json):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "labels": "production",
    "env": "env"
  },
  "log-level": "info"
}

Location: /etc/docker/daemon.json (Linux) or Docker Desktop settings

3. Docker Compose Logging:

version: '3.8'
services:
  app:
    image: myapp
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "5"
        tag: "{{.Name}}|{{.ImageName}}"
  nginx:
    image: nginx
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://192.168.0.1:514"
        syslog-facility: "daemon"
        tag: "nginx"

4. External Log Rotation (logrotate):

/etc/logrotate.d/docker-containers
/var/lib/docker/containers/*/*.log {
rotate 7
daily
compress
delaycompress
missingok
copytruncate
maxsize 100M
}

5. Centralized Logging:

Terminal window
# Send all container logs to fluentd
docker run \
--log-driver fluentd \
--log-opt fluentd-address=localhost:24224 \
--log-opt tag="app.{{.Name}}" \
myapp

6. Manage Logs Programmatically:

Terminal window
# Truncate logs for a specific container (run on the host; may need sudo)
sudo truncate -s 0 "$(docker inspect -f '{{.LogPath}}' container)"
# Remove stopped containers older than 24h (their logs go with them)
docker container prune --filter "until=24h"
# List log file sizes for running containers (run on the host)
for c in $(docker ps -q); do sudo du -h "$(docker inspect -f '{{.LogPath}}' "$c")"; done

7. Best Practices:

  • Always set log rotation limits (prevents disk full)
  • Use external log aggregation for production
  • Don’t log sensitive data
  • Log to stdout/stderr, not files in container
  • Consider log levels (debug vs info vs error)

25. Explain the concept of “Docker Hub” and private registries.


Answer:

Docker Hub: Public registry with official images, automated builds, and webhooks.

Registry Types:

  • Public Registry: Docker Hub, Quay.io, Google Container Registry
  • Private Registry: Docker Registry, AWS ECR, Azure ACR, GCR

Using Docker Hub:

Terminal window
# Login
docker login
# Search
docker search nginx
# Pull
docker pull nginx:alpine
# Push (requires repository)
docker tag myapp:latest myusername/myapp:latest
docker push myusername/myapp:latest

Private Registry (Docker Registry):

Terminal window
# Run private registry
docker run -d -p 5000:5000 --name registry registry:2
# Push to private registry
docker tag myapp localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest
# Pull from private registry
docker pull localhost:5000/myapp:latest
# Registry with authentication
docker run -d -p 5000:5000 \
-e REGISTRY_AUTH=htpasswd \
-e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-v /path/to/auth:/auth \
registry:2

Private Registry Storage:

# docker-compose.yml for registry
version: '3.8'
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    volumes:
      - registry-data:/var/lib/registry
      - ./auth:/auth
      - ./certs:/certs
    environment:
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
      REGISTRY_HTTP_TLS_KEY: /certs/domain.key
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
volumes:
  registry-data:

AWS ECR Example:

Terminal window
# Authenticate
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Create repository
aws ecr create-repository --repository-name myapp
# Tag and push
docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest

Registry Management:

Terminal window
# List images in registry (via API)
curl -X GET http://localhost:5000/v2/_catalog
# Delete image (requires garbage collection)
curl -X DELETE http://localhost:5000/v2/myapp/manifests/sha256:...
# Registry garbage collection
docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml

26. What is Docker content trust and how do you use it?


Answer:

Docker Content Trust (DCT) enables digital signatures for images, ensuring authenticity and integrity.

Enable DCT:

Terminal window
# Enable globally
export DOCKER_CONTENT_TRUST=1
# Or enable for specific commands
DOCKER_CONTENT_TRUST=1 docker pull nginx:latest

Signing Images:

Terminal window
# Initialize delegation keys (first time)
docker trust key generate mykey
# Sign and push image
docker trust sign myregistry.com/myapp:latest
# Or push with signing
docker trust push myregistry.com/myapp:latest

Verifying Signed Images:

Terminal window
# Pull will verify signature
docker pull myregistry.com/myapp:latest
# View trust data
docker trust inspect myregistry.com/myapp:latest
# List all signed tags
docker trust inspect --pretty myregistry.com/myapp

Managing Delegations:

Terminal window
# Add signer delegation
docker trust signer add --key cert.pem myteam myregistry.com/myapp
# Remove signer
docker trust signer remove myteam myregistry.com/myapp
# Revoke delegation
docker trust revoke myregistry.com/myapp

Dockerfile with Trust:

# DOCKER_CONTENT_TRUST=1 docker build
FROM myregistry.com/trusted-base:1.0 # Will verify signature
RUN apt-get update && apt-get install -y curl

Notary Server (Advanced):

Terminal window
# Run notary server
docker run -d -p 4443:4443 \
-v notary-data:/var/lib/notary \
notary:server
# Configure Docker to use custom notary
{
  "notary": {
    "rootPassphrase": "your-root-passphrase",
    "serverURL": "https://notary.example.com",
    "delegationPassphrase": "your-delegation-passphrase"
  }
}

Best Practices:

  • Enable DCT in CI/CD pipelines
  • Use separate keys for signing vs. delegation
  • Store keys securely (HSM, KMS)
  • Rotate keys regularly
  • Audit signing events

27. How do you perform rolling updates with Docker?


Answer:

Docker Swarm Rolling Updates:

Terminal window
# Deploy service with update config
docker service create \
--name web \
--replicas 5 \
--update-parallelism 2 \
--update-delay 10s \
--update-failure-action pause \
--update-monitor 30s \
--update-max-failure-ratio 0.3 \
nginx:1.21
# Update to new version
docker service update --image nginx:1.22 web
# Control update
docker service update --rollback web # Rollback
docker service update --detach web # Detach from update

Update Configuration Options:

Terminal window
--update-parallelism 2 # Update 2 replicas at once
--update-delay 10s # Wait 10s between batches
--update-failure-action pause # Stop on failure
--update-monitor 30s # Monitor for 30s after update
--update-max-failure-ratio 0.3 # Max 30% failure allowed

Docker Compose with Swarm:

version: '3.8'
services:
  web:
    image: nginx:1.21
    deploy:
      replicas: 5
      update_config:
        parallelism: 2
        delay: 10s
        failure_action: rollback
        monitor: 30s
        max_failure_ratio: 0.3
      rollback_config:
        parallelism: 1
        delay: 5s

Health Checks for Updates:

services:
  web:
    image: myapp:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 30s
    deploy:
      update_config:
        parallelism: 1
        delay: 30s
        failure_action: rollback
        monitor: 60s # Wait for health check after update

Blue-Green Deployment Pattern:

Terminal window
# Deploy blue (current) version
docker service create --name app-blue --label version=blue nginx:1.21
# Deploy green (new) version
docker service create --name app-green --label version=green nginx:1.22
# Switch load balancer
docker service update --label-add version=green app-lb
# Remove old version
docker service rm app-blue

Zero-Downtime Update Script:

#!/bin/bash
SERVICE_NAME="web"
OLD_TAG=$(docker service inspect -f '{{.Spec.TaskTemplate.ContainerSpec.Image}}' $SERVICE_NAME)
NEW_TAG="myapp:v2.0.0"

# Start update
docker service update \
  --image $NEW_TAG \
  --update-parallelism 1 \
  --update-delay 10s \
  --update-monitor 30s \
  --update-failure-action rollback \
  $SERVICE_NAME

# Monitor update (count running replicas already on the new image)
while true; do
  UPDATED=$(docker service ps $SERVICE_NAME --filter "desired-state=running" --format '{{.Image}}' | grep -c "$NEW_TAG")
  TOTAL=$(docker service ps $SERVICE_NAME --filter "desired-state=running" --format '{{.Image}}' | wc -l)
  if [ "$UPDATED" -eq "$TOTAL" ]; then
    echo "Update complete"
    break
  fi
  echo "Updated: $UPDATED/$TOTAL replicas"
  sleep 5
done

28. Explain Docker Swarm mode and its features.


Answer:

Docker Swarm is Docker’s native clustering and orchestration solution.

Initialization:

Terminal window
# Initialize swarm on manager
docker swarm init --advertise-addr 192.168.1.10
# Add worker nodes
docker swarm join --token SWMTKN-1-xxx 192.168.1.10:2377
# Add manager nodes
docker swarm join-token manager
docker swarm join --token SWMTKN-1-yyy 192.168.1.10:2377

Key Concepts:

1. Nodes:

Terminal window
# List nodes
docker node ls
# Promote worker to manager
docker node promote node2
# Demote manager
docker node demote node3
# Add labels
docker node update --label-add environment=production node1

2. Services:

Terminal window
# Create service
docker service create \
--name web \
--replicas 3 \
--publish 80:80 \
--constraint "node.labels.environment==production" \
nginx:latest
# Scale service
docker service scale web=5
# Update service
docker service update --image nginx:1.22 web
# Rollback
docker service rollback web
# Remove service
docker service rm web

3. Stacks (Compose for Swarm):

docker-stack.yml
version: '3.8'
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - db-data:/var/lib/postgresql/data
    deploy:
      placement:
        constraints:
          - node.role == worker
volumes:
  db-data:
Terminal window
# Deploy stack
docker stack deploy -c docker-stack.yml myapp
# List stacks
docker stack ls
# List services in stack
docker stack services myapp
# Remove stack
docker stack rm myapp

4. Networking:

Terminal window
# Create overlay network
docker network create --driver overlay my-network
# Use in service
docker service create --network my-network nginx

5. Secrets and Configs:

Terminal window
# Create secret
echo "db_password" | docker secret create db_password -
# Use in service
docker service create \
--secret db_password \
--secret src=db_password,target=/run/secrets/db_pass \
postgres
# Config (non-sensitive data)
echo "config.json" | docker config create app_config -

Swarm Features:

  • Self-healing: Failed containers are rescheduled
  • Rolling updates: Zero-downtime updates
  • Load balancing: Built-in L4 load balancer
  • Service discovery: DNS-based service discovery
  • Encrypted overlay networks: Secure multi-host communication
  • Secrets management: Encrypted secrets storage
  • Multi-platform: Linux and Windows containers

29. How do you backup and restore Docker volumes?


Answer:

Volume Backup:

1. Using Alpine Container:

Terminal window
# Backup volume to tar
docker run --rm \
-v volume_name:/source \
-v $(pwd):/backup \
alpine \
tar czf /backup/volume_backup.tar.gz -C /source .
# Backup with timestamp
docker run --rm \
-v volume_name:/source \
-v $(pwd):/backup \
alpine \
tar czf /backup/volume_backup_$(date +%Y%m%d_%H%M%S).tar.gz -C /source .

2. Database-Specific Backup:

Terminal window
# PostgreSQL
docker exec postgres pg_dump -U postgres dbname > backup.sql
# MySQL
docker exec mysql mysqldump -u root -p database > backup.sql
# MongoDB
docker exec mongodb mongodump --out /data/backup
docker cp mongodb:/data/backup ./backup

3. Volume Restore:

Terminal window
# Restore from tar
docker run --rm \
-v volume_name:/target \
-v $(pwd):/backup \
alpine \
tar xzf /backup/volume_backup.tar.gz -C /target
# Restore with verification
docker run --rm \
-v volume_name:/target \
-v $(pwd):/backup \
alpine \
sh -c "tar xzf /backup/volume_backup.tar.gz -C /target && ls -la /target"

4. Incremental Backup Script:

backup-volumes.sh
#!/bin/bash
BACKUP_DIR="/backups"
DATE=$(date +%Y%m%d)
VOLUMES=$(docker volume ls -q)

for VOLUME in $VOLUMES; do
  echo "Backing up $VOLUME..."
  docker run --rm \
    -v $VOLUME:/source \
    -v $BACKUP_DIR:/backup \
    alpine \
    tar czf "/backup/${VOLUME}_${DATE}.tar.gz" -C /source .
done

# Clean old backups (keep 7 days)
find $BACKUP_DIR -name "*.tar.gz" -mtime +7 -delete

5. Docker Compose Backup:

version: '3.8'
services:
  backup:
    image: alpine
    volumes:
      - postgres-data:/source:ro
      - ./backups:/backup
    command: >
      sh -c "tar czf /backup/postgres_$$(date +%Y%m%d_%H%M%S).tar.gz -C /source . &&
      find /backup -name '*.tar.gz' -mtime +7 -delete"
    restart: "no"
volumes:
  postgres-data:

6. Automated Backup with Cron:

Terminal window
# Add to crontab
0 2 * * * /usr/local/bin/backup-volumes.sh

7. Volume Migration:

Terminal window
# Migrate volume to another host
# On source host
docker run --rm \
-v volume_name:/source \
-v $(pwd):/backup \
alpine \
tar czf /backup/volume.tar.gz -C /source .
# Transfer to destination host
scp volume.tar.gz user@destination:/tmp/
# On destination host
docker run --rm \
-v volume_name:/target \
-v /tmp:/backup \
alpine \
tar xzf /backup/volume.tar.gz -C /target

Best Practices:

  • Always test restore process
  • Use compression (gzip) for backups
  • Encrypt sensitive backups
  • Store backups off-host
  • Monitor backup success/failure
  • Document restore procedures
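
For the encryption point, one possible approach (a sketch; assumes gpg is available and the archive from the earlier example exists):

Terminal window
# Encrypt the archive with a symmetric passphrase before shipping it off-host
gpg --symmetric --cipher-algo AES256 volume_backup.tar.gz
# Produces volume_backup.tar.gz.gpg; restore later with:
gpg --decrypt volume_backup.tar.gz.gpg > volume_backup.tar.gz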

30. How do you debug Docker networking issues?


Answer:

Systematic Debugging Approach:

1. Check Network Connectivity:

Terminal window
# List networks
docker network ls
# Inspect network
docker network inspect bridge
docker network inspect my-network
# Check container IP
docker inspect container | grep IPAddress
# Test connectivity
docker exec container ping other-container
docker exec container ping 8.8.8.8

2. DNS Resolution:

Terminal window
# Check DNS resolution
docker exec container cat /etc/resolv.conf
docker exec container nslookup google.com
docker exec container nslookup other-container
# Test DNS server
docker exec container dig @8.8.8.8 google.com
# Override DNS
docker run --dns 8.8.8.8 --dns 8.8.4.4 myapp

3. Port Mapping Issues:

Terminal window
# Check port mappings
docker port container
docker inspect container | grep -A 10 PortBindings
# Verify host port availability
netstat -tulpn | grep :8080
lsof -i :8080
# Test from host
curl localhost:8080
telnet localhost 8080

4. Network Traffic Capture:

Terminal window
# Capture traffic on container
docker exec container tcpdump -i eth0 -w /tmp/capture.pcap
# Copy capture file
docker cp container:/tmp/capture.pcap .
# Analyze with wireshark or tcpdump
tcpdump -r capture.pcap
# Monitor specific port
docker exec container tcpdump -i eth0 port 80

5. Debug Network Namespace:

Terminal window
# Get container PID
docker inspect -f '{{.State.Pid}}' container
# Enter network namespace
nsenter -t <PID> -n
# View routes
ip route show
netstat -rn
# Check iptables
iptables -t nat -L -n

6. Common Issues and Solutions:

Issue: Containers can’t reach each other

Terminal window
# Check if on same network
docker inspect container1 | grep Networks -A 10
docker inspect container2 | grep Networks -A 10
# Connect to network
docker network connect my-network container1
docker network connect my-network container2
# Create custom network
docker network create --driver bridge my-network

Issue: Port already in use

Terminal window
# Find process using port
sudo lsof -i :8080
# Kill process or use different port
docker run -p 8081:80 nginx

Issue: No internet from container

Terminal window
# Check iptables rules
sudo iptables -L -n | grep DOCKER
# Enable IP forwarding
sudo sysctl -w net.ipv4.ip_forward=1
# Check Docker daemon config
cat /etc/docker/daemon.json

7. Network Debugging Container:

Terminal window
# Create debug container in same network
docker run -it --rm \
--network container:target-container \
--pid container:target-container \
nicolaka/netshoot \
/bin/bash
# Use network tools
# - nslookup, dig for DNS
# - curl, wget for HTTP
# - ping, traceroute for connectivity
# - netstat, ss for sockets
# - tcpdump for packet capture

8. Check Docker Daemon Logs:

Terminal window
# Linux
journalctl -u docker.service -f | grep -i network
# Mac/Windows
docker logs --tail 100 docker-desktop

Part 2: 20 Scenario-Based Questions with Answers


1. Scenario: Production container keeps restarting with OOMKilled


Situation: Your container keeps restarting with OOMKilled status.

Analysis:

Terminal window
# Check container status
docker ps -a
docker inspect container | jq '.[0].State.OOMKilled'
# Check memory usage history
docker stats --no-stream container

Solution:

Terminal window
# 1. Increase memory limit
docker update --memory 2g --memory-swap 3g container
# 2. Set memory reservation (pre-allocation)
docker update --memory-reservation 1g container
# 3. For new container, set appropriate limits
# (keep the OOM killer enabled - better to kill than hang)
docker run -d \
-m 2g \
--memory-reservation 1g \
--memory-swap 3g \
--oom-kill-disable=false \
myapp
# 4. Investigate memory leak in application
docker exec container jmap -heap <pid> # Java
docker exec container top -b -n 1 | head -20

2. Scenario: Docker build takes 20 minutes, how to optimize?


Situation: Building your Docker image takes 20 minutes, slowing down CI/CD.

Analysis:

Terminal window
# View build time per layer
docker build --progress=plain -t myapp . 2>&1 | tee build.log

Optimizations:

# 1. Optimize layer order
FROM node:18
# Dependencies first (changes rarely)
COPY package*.json ./
RUN npm ci # Cached unless package.json changes
# Source code (changes often)
COPY . .
RUN npm run build
# 2. Use multi-stage builds
FROM node:18 AS builder
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:18-alpine
COPY --from=builder /app/dist ./dist
COPY package*.json ./
RUN npm ci --only=production
# 3. Leverage BuildKit cache mounts
RUN --mount=type=cache,target=/root/.npm \
npm ci
# 4. Use .dockerignore
node_modules/
.git/
*.log

3. Scenario: Container can’t connect to database


Situation: Your application container can’t connect to the PostgreSQL database.

Debugging Steps:

Terminal window
# 1. Check network connectivity
docker exec app ping db-container
# If fails, check network
docker network ls
docker network inspect app-network
# 2. Check DNS resolution
docker exec app nslookup db-container
# 3. Check database container status
docker ps | grep postgres
docker logs postgres
# 4. Test connection with dedicated tool
docker exec app psql -h db-container -U user -d dbname
# 5. Verify network isolation
docker run -it --rm --network app-network postgres:13 psql -h db-container -U user

Solutions:

Terminal window
# Create custom network
docker network create app-network
# Connect both containers
docker network connect app-network app
docker network connect app-network db
# Use service names in connection strings
# Instead of: localhost:5432
# Use: db-container:5432

4. Scenario: Disk is full because of Docker images, containers, and logs

Situation: Your server’s disk is full because of Docker images and containers.

Analysis:

Terminal window
# Check disk usage
df -h
docker system df -v
# Find large files
docker system df --verbose

Cleanup Commands:

Terminal window
# 1. Remove all unused resources
docker system prune -a --volumes
# 2. Clean specific resources
docker container prune --filter "until=24h"
docker image prune -a --filter "until=24h"
docker volume prune
docker builder prune
# 3. Remove all stopped containers
docker rm $(docker ps -a -q)
# 4. Remove dangling images
docker rmi $(docker images -f "dangling=true" -q)
# 5. Truncate logs
truncate -s 0 /var/lib/docker/containers/*/*.log
# 6. Set up log rotation in /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
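
Log rotation can also be set per container without touching the daemon config; a minimal sketch:

Terminal window
# Rotate this container's json-file logs at 10 MB, keeping 3 files
docker run -d \
--log-driver json-file \
--log-opt max-size=10m \
--log-opt max-file=3 \
myapp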

5. Scenario: Application needs to share data between containers

Section titled “5. Scenario: Application needs to share data between containers”

Situation: You have a web app and a backup service that need to access the same files.

Solution with Volumes:

Terminal window
# Create shared volume
docker volume create shared-data
# Web container (write)
docker run -d \
--name web \
--mount type=volume,source=shared-data,target=/data \
webapp
# Backup container (read-only)
docker run -d \
--name backup \
--mount type=volume,source=shared-data,target=/backup-data,readonly \
backup-service
# Verify permissions
docker exec web touch /data/test.txt
docker exec backup ls -la /backup-data

Docker Compose:

version: '3.8'
services:
  web:
    image: webapp
    volumes:
      - shared-data:/data
  backup:
    image: backup-service
    volumes:
      - shared-data:/backup-data:ro
volumes:
  shared-data:

6. Scenario: Container running but not responding to requests

Section titled “6. Scenario: Container running but not responding to requests”

Situation: Your container is running but not responding to HTTP requests.

Debugging:

Terminal window
# 1. Check if container is actually running
docker ps
# 2. Check logs
docker logs --tail 50 container
# 3. Check port mapping
docker port container
docker inspect container | grep -A 5 PortBindings
# 4. Test inside container
docker exec container curl localhost:80
docker exec container netstat -tulpn
# 5. Check process inside
docker exec container ps aux
# 6. Check health status
docker inspect container | jq '.[0].State.Health'

Solutions:

  • Application binding only to localhost instead of all interfaces (0.0.0.0) - see the sketch below
  • Incorrect port mapping (-p host:container)
  • Firewall blocking the published port
  • Application crashed while the container (its PID 1) keeps running
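
A minimal sketch of diagnosing the first and most common cause; the port numbers are illustrative:

Terminal window
# Inside the container: a listener on 127.0.0.1:80 is unreachable from the host
docker exec container netstat -tulpn | grep 80
# The fix lives in the app config: listen on 0.0.0.0 (all interfaces),
# and make sure -p maps to the port the app actually listens on
docker run -d -p 8080:80 myapp
curl -v http://localhost:8080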

7. Scenario: Need to run one-time database migration

Section titled “7. Scenario: Need to run one-time database migration”

Situation: You need to run a database migration script before starting the main application.

Solution with Init Container:

version: '3.8'
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
    volumes:
      - db-data:/var/lib/postgresql/data
  migration:
    image: myapp
    depends_on:
      - db
    environment:
      DB_HOST: db
    command: ["npm", "run", "migrate"]
    restart: "no"
  app:
    image: myapp
    depends_on:
      - db
    environment:
      DB_HOST: db
    command: ["npm", "start"]
volumes:
  db-data:

Manual Approach:

Terminal window
# Run migration container
docker run --rm \
--network app-network \
-e DB_HOST=db \
myapp npm run migrate
# Then start main container
docker start app

8. Scenario: Container has incorrect timezone

Section titled “8. Scenario: Container has incorrect timezone”

Situation: Your application logs show wrong timestamps because the container defaults to UTC.

Solutions:

Terminal window
# 1. Mount host's /etc/localtime
docker run -v /etc/localtime:/etc/localtime:ro myapp
# 2. Set environment variable
docker run -e TZ=America/New_York myapp
# 3. In Dockerfile
FROM alpine
RUN apk add --no-cache tzdata
ENV TZ=America/New_York
# 4. Docker Compose
services:
  app:
    environment:
      - TZ=America/New_York
    volumes:
      - /etc/localtime:/etc/localtime:ro

9. Scenario: Multiple containers need to share environment variables

Section titled “9. Scenario: Multiple containers need to share environment variables”

Situation: You have multiple containers that need to share the same configuration (database credentials, API keys).

Solutions:

1. Use .env file:

.env
DB_HOST=postgres
DB_USER=admin
DB_PASSWORD=secret
API_KEY=123456
docker-compose.yml
version: '3.8'
services:
  web:
    image: myapp
    env_file:
      - .env
  worker:
    image: myapp-worker
    env_file:
      - .env

2. Docker secrets for sensitive data:

Terminal window
# Create secrets (requires swarm mode: docker swarm init)
echo "secret" | docker secret create db_password -
docker secret create api_key ./api_key.txt
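
A hedged sketch of how swarm services would consume these secrets - Docker mounts each one as a file under /run/secrets/ inside the container:

services:
  web:
    image: myapp
    secrets:
      - db_password
    environment:
      # Many images support the *_FILE convention; otherwise the app
      # reads /run/secrets/db_password itself
      DB_PASSWORD_FILE: /run/secrets/db_password
secrets:
  db_password:
    external: true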

3. Use Docker config for non-sensitive:

Terminal window
# Create config
docker config create app_config ./config.json

10. Scenario: Docker daemon not starting after reboot

Section titled “10. Scenario: Docker daemon not starting after reboot”

Situation: After server reboot, Docker daemon fails to start.

Debugging:

Terminal window
# Check service status
systemctl status docker
# Check logs
journalctl -u docker.service -n 50
# Check daemon config
cat /etc/docker/daemon.json
# Test daemon manually
dockerd --debug
# Check for corrupted overlay
ls -la /var/lib/docker/overlay2/

Common Fixes:

Terminal window
# 1. Clean corrupted overlay (DESTRUCTIVE: wipes all images and containers)
sudo systemctl stop docker
sudo rm -rf /var/lib/docker/overlay2/*
sudo systemctl start docker
# 2. Fix socket permissions (prefer root:docker with 660 over a world-writable socket)
sudo chown root:docker /var/run/docker.sock
sudo chmod 660 /var/run/docker.sock
# 3. Check disk space
df -h /var/lib/docker
# 4. Re-enable service
sudo systemctl enable docker

11. Scenario: Need to debug application inside container

Section titled “11. Scenario: Need to debug application inside container”

Situation: Your application behaves differently inside the container than it does locally.

Debugging Approach:

Terminal window
# 1. Get shell access
docker exec -it container /bin/bash
docker exec -it container sh
# 2. Check environment variables
docker exec container env | sort
# 3. Copy files out for analysis
docker cp container:/app/logs ./logs
# 4. Run with debug flags (publish the inspector port)
docker run -it --rm \
-e DEBUG=true \
-p 9229:9229 \
-v $(pwd):/app \
-w /app \
myapp node --inspect-brk=0.0.0.0:9229 app.js
# 5. Compare with local environment
docker run -it --rm \
-v $(pwd):/app \
-w /app \
node:18 /bin/bash
# Then run your app

12. Scenario: Container running out of inodes

Section titled “12. Scenario: Container running out of inodes”

Situation: Your container keeps failing with “no space left on device” but disk usage is low.

Analysis:

Terminal window
# Check inode usage
df -i
# Check inside container
docker exec container df -i
# Find directories with many small files
docker exec container find / -type f | cut -d/ -f2 | sort | uniq -c | sort -nr

Solutions:

Terminal window
# Clean up old logs
docker exec container find /var/log -name "*.log" -mtime +7 -delete
# Clean Docker's overlay inodes
docker system prune -a --volumes
# Limit the container's writable layer size (only on storage drivers
# that support it, e.g. overlay2 on xfs with pquota)
docker run --storage-opt size=10G myapp

13. Scenario: Need to run Docker commands without sudo

Section titled “13. Scenario: Need to run Docker commands without sudo”

Situation: You want to run Docker commands without sudo for development.

Solution:

Terminal window
# Add user to docker group
sudo usermod -aG docker $USER
# Verify group membership
groups $USER
# Logout and login again, or run
newgrp docker
# Test
docker ps
# Security implications: docker group gives root-equivalent access
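
If that root-equivalent access is unacceptable, rootless mode is the safer alternative; a minimal sketch, assuming the docker-ce-rootless-extras package is installed:

Terminal window
# Set up a per-user rootless daemon
dockerd-rootless-setuptool.sh install
# Point the client at the user-level socket
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
docker ps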

14. Scenario: Container can’t write to mounted volume

Section titled “14. Scenario: Container can’t write to mounted volume”

Situation: Your container can’t write to bind-mounted volume due to permission issues.

Debugging:

Terminal window
# Check permissions on host
ls -la /host/path
# Check user inside container
docker exec container id
docker exec container whoami
# Check mount permissions
docker inspect container | grep -A 10 Mounts

Solutions:

Terminal window
# 1. Set permissions on host
sudo chown -R 1000:1000 /host/path # Match container user
# 2. Run container with specific user
docker run -u 1000:1000 -v /host/path:/data myapp
# 3. Use Docker volumes (better permissions handling)
docker volume create mydata
docker run -v mydata:/data myapp
# 4. Set SELinux context (if SELinux enabled)
chcon -Rt svirt_sandbox_file_t /host/path
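
Another option is to bake a user with the expected UID/GID into the image so file ownership matches the bind mount; a hedged Dockerfile sketch (alpine syntax, UID/GID 1000 assumed):

FROM alpine
# Create an unprivileged user whose UID/GID matches the host directory owner
RUN addgroup -g 1000 app && adduser -u 1000 -G app -D app
USER app
WORKDIR /data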

15. Scenario: Need to limit network bandwidth for container

Section titled “15. Scenario: Need to limit network bandwidth for container”

Situation: A container is consuming too much network bandwidth affecting other services.

Solution with tc (traffic control):

Terminal window
# Create a dedicated bridge network for the container
# (this alone does not limit bandwidth - the tc commands below do the shaping)
docker network create \
--driver bridge \
--opt com.docker.network.bridge.name=br-limit \
--opt com.docker.network.driver.mtu=1500 \
limited-net
# Get the container's network namespace
CONTAINER_PID=$(docker inspect -f '{{.State.Pid}}' container)
sudo mkdir -p /var/run/netns
sudo ln -s /proc/$CONTAINER_PID/ns/net /var/run/netns/$CONTAINER_PID
# Limit the container's host-side veth interface (shown here as veth-xxx) to 1 Mbit/s
sudo tc qdisc add dev veth-xxx root handle 1: htb default 30
sudo tc class add dev veth-xxx parent 1: classid 1:1 htb rate 1mbit
sudo tc filter add dev veth-xxx parent 1: protocol ip prio 1 u32 match ip dst 0.0.0.0/0 flowid 1:1

16. Scenario: Building images for multiple architectures

Section titled “16. Scenario: Building images for multiple architectures”

Situation: You need to build images for both amd64 and arm64 architectures.

Solution with Buildx:

Terminal window
# Create builder instance
docker buildx create --name multiarch-builder --use
docker buildx inspect --bootstrap
# Build for multiple architectures
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t myregistry/myapp:latest \
--push \
.
# Check supported platforms
docker buildx ls
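
To confirm the pushed tag actually carries both variants, inspect the manifest list:

Terminal window
# Lists the per-architecture entries behind the tag
docker buildx imagetools inspect myregistry/myapp:latest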

17. Scenario: Need to run GUI applications in Docker

Section titled “17. Scenario: Need to run GUI applications in Docker”

Situation: You need to run a GUI application (like Firefox, Chrome) in Docker.

Solution:

Terminal window
# Allow X11 connections
xhost +local:docker
# Run GUI container
docker run -it --rm \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix:rw \
-v $HOME/.Xauthority:/root/.Xauthority:ro \
--network host \
firefox
# For Wayland
docker run -it --rm \
-e WAYLAND_DISPLAY=$WAYLAND_DISPLAY \
-v $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY:/tmp/$WAYLAND_DISPLAY:rw \
--network host \
firefox

18. Scenario: Expose the Docker API securely for remote management

Section titled “18. Scenario: Expose the Docker API securely for remote management”

Situation: You need to expose the Docker API for remote management securely.

Solution with TLS:

Terminal window
# Generate CA key and certificate
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
# Generate server key and certificate
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=$HOST" -new -key server-key.pem -out server.csr
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem
# Configure the Docker daemon (/etc/docker/daemon.json)
{
  "hosts": ["tcp://0.0.0.0:2376", "unix:///var/run/docker.sock"],
  "tls": true,
  "tlsverify": true,
  "tlscert": "/etc/docker/server-cert.pem",
  "tlskey": "/etc/docker/server-key.pem",
  "tlscacert": "/etc/docker/ca.pem"
}
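
On the client side (assuming client certificates were issued from the same CA, analogous to the server steps above), the connection would look roughly like:

Terminal window
# One-off invocation with explicit certs
docker --tlsverify \
--tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
-H tcp://dockerhost:2376 ps
# Or via environment variables
export DOCKER_HOST=tcp://dockerhost:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker
docker ps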

19. Scenario: Rollback a failed deployment

Section titled “19. Scenario: Rollback a failed deployment”

Situation: A deployment failed and you need to roll back quickly.

Solutions:

Docker Swarm:

Terminal window
# Check service status
docker service ps web --no-trunc
# Rollback to previous version
docker service rollback web
# View rollback history
docker service inspect web --pretty

Docker Compose:

Terminal window
# Keep previous images tagged
docker tag myapp:latest myapp:backup
docker tag myapp:v2.0 myapp:latest
docker compose up -d
# If fails, rollback
docker tag myapp:backup myapp:latest
docker compose up -d

Custom Script:

deploy-with-rollback.sh
#!/bin/bash
NEW_IMAGE="myapp:v2.0"
# Record the image the service is currently running
CURRENT_IMAGE=$(docker service inspect -f '{{.Spec.TaskTemplate.ContainerSpec.Image}}' web)
# Deploy new version
docker service update --image $NEW_IMAGE web
# Wait, then check that tasks with the new image are actually running
sleep 30
if [ $(docker service ps web --filter "desired-state=running" --format "{{.Image}}" | grep -c "$NEW_IMAGE") -eq 0 ]; then
  echo "Deployment failed, rolling back..."
  docker service update --image $CURRENT_IMAGE web
fi

20. Scenario: Cache Docker builds in CI/CD

Section titled “20. Scenario: Cache Docker builds in CI/CD”

Situation: CI/CD pipeline takes too long because Docker images are built from scratch each time.

Solutions:

1. Use build cache mounts:

# GitHub Actions
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v2
- name: Build and push
  uses: docker/build-push-action@v4
  with:
    push: true
    tags: myapp:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max

2. Layer caching with registry:

Terminal window
# Pull latest version for cache
docker pull myregistry/myapp:latest || true
# Build with cache (with BuildKit, the cached image must carry inline cache
# metadata, i.e. have been built with BUILDKIT_INLINE_CACHE=1)
docker build \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from myregistry/myapp:latest \
-t myregistry/myapp:latest \
.

3. Docker layer caching:

# .gitlab-ci.yml
variables:
  DOCKER_BUILDKIT: 1
build:
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

4. Use BuildKit registry cache:

Terminal window
# Build with remote cache
docker buildx build \
--cache-to type=registry,ref=myregistry/cache:build \
--cache-from type=registry,ref=myregistry/cache:build \
-t myapp:latest \
.

This comprehensive guide covers the essential Docker concepts, commands, and troubleshooting scenarios that will help you succeed in Docker interviews and real-world implementations.
