Docker Interview Questions
Docker Interview Questions: 30 Important Questions + 20 Scenario-Based Questions
Part 1: 30 Important Interview Questions with Detailed Answers
1. What is Docker and how does it differ from virtual machines?
Answer:
Docker is a containerization platform that packages applications and their dependencies into lightweight, portable containers that run on a shared host operating system kernel.
Key Differences from VMs:
| Aspect | Docker Containers | Virtual Machines |
|---|---|---|
| OS | Shares host OS kernel | Each VM runs a full guest OS |
| Size | MBs (typically 10s-100s MB) | GBs (typically 1-10+ GB) |
| Startup | Seconds (milliseconds for simple containers) | Minutes |
| Resource Usage | Minimal overhead, efficient | Significant overhead per VM |
| Isolation | Process-level (namespaces) | Hardware-level (hypervisor) |
| Portability | High - runs anywhere with Docker | Medium - requires hypervisor compatibility |

Why containers are faster: Containers don't boot an OS; they're just isolated processes. When you run `docker run ubuntu`, you're not booting Ubuntu - you're starting a process with Ubuntu's filesystem, using your host's Linux kernel.
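A quick way to see this on a Linux host: the container's main process shows up in the host's process table, because no separate kernel or OS is booted (a minimal sketch; the container name `demo` is arbitrary):

```bash
# Start a container, then find its process on the host
docker run -d --name demo nginx

# The PID below is a regular process ID on the host
docker inspect -f '{{.State.Pid}}' demo
ps -p "$(docker inspect -f '{{.State.Pid}}' demo)" -o pid,comm
```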
2. Explain the difference between CMD and ENTRYPOINT in Dockerfile.
Answer:
Both define what command executes when a container starts, but they handle runtime arguments differently:
CMD (Default Command - Overridable):
- Provides defaults that can be completely replaced
- If multiple CMD instructions exist, only the last one takes effect
- Syntax: `CMD ["executable", "param1", "param2"]` (exec form) or `CMD command param1 param2` (shell form)
ENTRYPOINT (Fixed Executable - Not Replaced by Runtime Arguments):
- Defines the executable that will always run
- Any `docker run` arguments are appended, not replaced
- Best practice: Use ENTRYPOINT for the main executable, CMD for default arguments
Practical Example:
```dockerfile
# Dockerfile
FROM ubuntu
ENTRYPOINT ["ping"]
CMD ["google.com"]
```
```bash
# Uses default CMD: ping google.com
docker run my-ping

# Overrides CMD: ping yahoo.com
docker run my-ping yahoo.com

# Cannot replace ENTRYPOINT without --entrypoint flag
docker run --entrypoint echo my-ping "hello"
```
The Combined Pattern (Most Common):
```dockerfile
ENTRYPOINT ["python"]
CMD ["app.py"]
```
- `docker run myapp` → python app.py
- `docker run myapp script.py` → python script.py
3. What are Docker namespaces and cgroups?
Answer:
Namespaces (What a process can SEE):
Each namespace isolates one aspect of the system, giving a container its own view of that resource (see the sketch at the end of this answer):

| Namespace | Purpose |
|---|---|
| PID | Process IDs - container sees only its own processes |
| NET | Network - each container gets its own network stack |
| MNT | Mount points - isolated filesystem views |
| UTS | Hostname - container can have its own hostname |
| IPC | Inter-process communication isolation |
| USER | User IDs - root in container ≠ root on host |

cgroups (What a process can USE): Limit hardware resources to prevent one container from starving others:
```bash
# Memory limit
docker run -m 512m --memory-reservation 256m myapp

# CPU limit (25% of one core)
docker run --cpus=0.25 myapp

# CPU pinning to specific cores
docker run --cpuset-cpus="0,1" myapp

# CPU shares (priority when under load)
docker run --cpu-shares=2048 high-priority-app
```
Why this matters: Understanding this helps debug "my container feels slow" issues - check if you've set appropriate limits or if cgroup constraints are too restrictive.
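To see the namespace side directly, compare the process list inside a throwaway container with the host's (a minimal sketch):

```bash
# Inside the container only the container's own processes are visible,
# and the first of them is PID 1
docker run --rm alpine ps aux

# The host, by contrast, sees everything
ps aux | wc -l
```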
4. How do you persist data in Docker containers?
Answer:
Docker provides three primary ways to persist data:
1. Volumes (Recommended for Production): Managed entirely by Docker, stored in `/var/lib/docker/volumes/`
```bash
# Create named volume
docker volume create postgres-data

# Use with container
docker run -d \
  --name postgres \
  --mount type=volume,source=postgres-data,target=/var/lib/postgresql/data \
  postgres:13

# Volume management
docker volume ls
docker volume inspect postgres-data
docker volume prune   # Remove unused volumes
```
2. Bind Mounts (Development/Config Files): Direct mapping from host filesystem
```bash
# Recommended syntax
docker run -d \
  --name nginx \
  --mount type=bind,source=/host/config/nginx.conf,target=/etc/nginx/nginx.conf,readonly \
  nginx

# Shortcut syntax
docker run -v /host/data:/container/data:ro myapp
```
3. tmpfs Mounts (In-Memory, Ephemeral): Data stored in RAM, lost when container stops
```bash
# Store sensitive data in memory
docker run -d \
  --name secure-app \
  --mount type=tmpfs,target=/tmp/secrets,tmpfs-mode=0700 \
  myapp
```
Best Practices:
- Use volumes for databases and persistent application data
- Use bind mounts for development and configuration files
- Use tmpfs for secrets or temporary high-performance writes
- Always backup volume data; containers are disposable
5. Explain Docker networking modes.
Answer:
Docker provides several network drivers with different use cases:
1. Bridge (Default): Creates a private internal network; containers communicate via IP or name
```bash
# Create custom bridge (better isolation)
docker network create --driver bridge my-network

# Run containers in custom network
docker run -d --network my-network --name web nginx
docker run -d --network my-network --name db postgres
# Web can reach db via hostname "db"
```
2. Host: Removes network isolation; container uses the host's network directly
```bash
docker run -d --network host nginx
# Access at http://localhost:80 (no port mapping needed)
```
3. Overlay: Multi-host networking for Docker Swarm/Kubernetes
4. Macvlan: Assigns a physical MAC address; container appears as a physical device on the network
5. None: Complete network isolation; only loopback
```bash
docker run --network none isolated-app
```
Network Management Commands:
```bash
# List networks
docker network ls

# Connect running container to network
docker network connect my-network web

# Inspect network (see connected containers)
docker network inspect my-network

# Disconnect
docker network disconnect my-network web
```
6. What’s the difference between COPY and ADD in Dockerfile?
Answer:
| Feature | COPY | ADD |
|---|---|---|
| Local files | ✅ Yes | ✅ Yes |
| Remote URLs | ❌ No | ✅ Yes (downloads) |
| Auto-extract tar | ❌ No | ✅ Yes (.tar, .tar.gz, etc.) |
| Best practice | Preferred | Use only when needed |

COPY (Simple, Transparent):
```dockerfile
COPY ./app /app
COPY package.json /app/
COPY --chown=node:node . /app
```
ADD (Powerful but Unpredictable):
```dockerfile
# Downloads remote file (can break builds if URL unavailable)
ADD https://example.com/file.tar.gz /tmp/

# Auto-extracts tar (may have unintended behavior)
ADD app.tar.gz /app/

# Preferred: Use wget/curl in RUN for more control
RUN wget https://example.com/file.tar.gz && tar -xzf file.tar.gz
```
Rule: Use COPY for local files; use ADD only when you specifically need URL fetching or automatic extraction. The unpredictability of ADD (especially extraction) makes builds less reproducible.
7. How do you optimize Docker image size?
Answer:
1. Use Alpine or Slim Base Images:
```dockerfile
# Instead of: FROM ubuntu:22.04 (77MB)
# Use Alpine (about 7MB):
FROM alpine:3.18
# Or: FROM node:20-slim (better than full node)
```
2. Multi-stage Builds:
```dockerfile
# Build stage (with build tools)
FROM golang:1.20 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o myapp

# Runtime stage (only binary, no build tools)
FROM alpine:latest
RUN apk --no-cache add ca-certificates
COPY --from=builder /app/myapp /usr/local/bin/
ENTRYPOINT ["myapp"]
# Result: 500MB → 15MB
```
3. Combine RUN Commands (Reduce Layers):
```dockerfile
# Bad: Creates 3 layers
RUN apt-get update
RUN apt-get install -y python3
RUN apt-get clean

# Good: Single layer
RUN apt-get update && \
    apt-get install -y python3 && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
```
4. Use .dockerignore:
```
.git
node_modules
*.log
__pycache__
.DS_Store
```
5. Clean Up After Package Managers:
```dockerfile
RUN pip install --no-cache-dir -r requirements.txt
RUN apt-get install -y --no-install-recommends package
```
6. Specific Version Tags Instead of Latest:
```dockerfile
# Instead of: FROM node:latest
FROM node:18.17.0-alpine
```
8. Explain Docker Compose and its use cases.
Answer:
Docker Compose is a tool for defining and running multi-container Docker applications using YAML.
Key Concepts:
- Services: Define containers (web, db, redis)
- Networks: Automatic DNS resolution between services
- Volumes: Persistent storage for databases
Example docker-compose.yml:
```yaml
version: '3.8'
services:
  web:
    build: ./web
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DB_HOST=postgres
    depends_on:
      - postgres
    volumes:
      - ./web:/app
    restart: unless-stopped

  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: secret
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis-data:/data

volumes:
  postgres-data:
  redis-data:
```
Common Commands:
```bash
# Start services in background
docker compose up -d

# View logs
docker compose logs -f web

# Scale a service
docker compose up -d --scale web=3

# Execute command in service
docker compose exec web npm run migrate

# Stop and remove everything
docker compose down -v   # -v removes volumes
```
Use Cases:
- Development environments with multiple services
- CI/CD testing with dependencies
- Production deployments (though Kubernetes often better for complex setups)
9. How do you handle secrets in Docker?
Answer:
For Docker Compose (BuildKit required):
```yaml
version: '3.8'

services:
  app:
    build: .
    secrets:
      - db_password
      - api_key

secrets:
  db_password:
    file: ./secrets/db_password.txt
  api_key:
    external: true   # Created via `docker secret create`
```
For Swarm Mode (Production):
```bash
# Create secrets
echo "MySecurePassword123" | docker secret create db_password -
docker secret create api_key ./api_key.txt

# Use in service
docker service create \
  --name app \
  --secret db_password \
  --secret api_key \
  --publish 80:3000 \
  myapp:latest
```
Environment Variables (Less Secure):
```bash
# Not recommended for production secrets
docker run -e DB_PASSWORD="secret123" myapp
```
Best Practices:
- Never hardcode secrets in Dockerfiles or images (for build-time secrets, see the BuildKit sketch below)
- Use Docker secrets (Swarm) or external secret stores (HashiCorp Vault)
- Avoid passing secrets via environment variables (can be inspected)
- For development, use `.env` files with `.gitignore`
- Consider tools like Docker Secrets or Kubernetes Secrets for orchestration
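One way to keep build-time secrets (such as a private package token) out of image layers is a BuildKit secret mount - a hedged sketch, where the secret id `pip_token` and the index URL are placeholders:

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .

# The secret is mounted only for this RUN step and is never written to a layer
RUN --mount=type=secret,id=pip_token \
    PIP_INDEX_URL="https://user:$(cat /run/secrets/pip_token)@pypi.example.com/simple" \
    pip install --no-cache-dir -r requirements.txt
```
Build with `docker build --secret id=pip_token,src=./pip_token.txt -t myapp .` so the token never shows up in `docker history`.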
10. What are the differences between Docker Swarm and Kubernetes?
Answer:
| Aspect | Docker Swarm | Kubernetes |
|---|---|---|
| Complexity | Simple, easy to set up | Complex, steep learning curve |
| Installation | Built into Docker (1-click) | Requires separate installation |
| Scaling | Simple command | More complex but powerful |
| Service Discovery | Built-in DNS | DNS, also supports Ingress |
| Load Balancing | Built-in (L4) | L4 and L7 (Ingress) |
| Rolling Updates | Yes, simpler | Yes, more configurable |
| Networking | Overlay network | CNI plugins (Calico, Flannel) |
| Storage | Volume plugins | CSI (Container Storage Interface) |
| Self-healing | Basic | Advanced (health probes) |
| Auto-scaling | Limited | CPU/memory, custom metrics |
| Learning Curve | Low | High |
| Best For | Small to medium deployments, startups | Large-scale, complex microservices |
Example Swarm vs K8s Deployment:
Docker Swarm:
```bash
# Initialize
docker swarm init

# Deploy stack
docker stack deploy -c docker-compose.yml myapp

# Scale
docker service scale myapp_web=5
```
Kubernetes (YAML):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
```
11. Explain Docker’s layered filesystem and how it works.
Answer:
Docker images use Union Filesystems (AUFS, overlay2, etc.) with a layered architecture:
Layer Structure:
```
Container Layer (Read-Write)    ← Changes here
├── Image Layer 3 (Read-Only)
├── Image Layer 2 (Read-Only)
└── Image Layer 1 (Read-Only)   ← Base image
```
How it works:
- Each Dockerfile instruction creates a layer
- Layers are cached and reused across images
- When container runs, thin read-write layer added on top
- Copy-on-Write: When a container modifies a file, it's copied from the read-only layer into the writable layer (see the sketch below)
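Copy-on-write can be observed with `docker diff`, which lists files added (A), changed (C), or deleted (D) in a container's writable layer - a quick sketch, where `demo` is an arbitrary container name:

```bash
docker run -d --name demo nginx
docker exec demo touch /tmp/new-file   # modify the writable layer
docker diff demo                       # shows A /tmp/new-file (plus nginx's runtime files)
```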
Benefits:
- Faster builds (cached layers)
- Smaller storage (shared layers)
- Efficient transfers (only missing layers pulled)
Example showing layers:
```bash
# Build image with layers
docker build -t myapp .

# View image layers
docker history myapp
# IMAGE    CREATED      CREATED BY                   SIZE
# abc123   2 mins ago   CMD ["python","app.py"]      0B
# def456   2 mins ago   RUN pip install -r req.txt   150MB
# ghi789   3 mins ago   COPY requirements.txt .      1.2kB
# jkl012   3 mins ago   WORKDIR /app                 0B
```
Layer Caching in Practice:
```dockerfile
# Optimized for caching - dependencies first
FROM python:3.9
WORKDIR /app

# Copy only requirements first (cached unless requirements.txt changes)
COPY requirements.txt .
RUN pip install -r requirements.txt   # This layer caches

# Copy application code (changes often)
COPY . .
CMD ["python", "app.py"]
```
12. What are the different container states in Docker?
Answer:
Container States:
| State | Description | Example |
|---|---|---|
| Created | Container created but not started | docker create nginx |
| Running | Container actively executing | docker run -d nginx |
| Paused | Processes suspended (freezer cgroup) | docker pause container |
| Restarting | In restart process | --restart always |
| Exited | Stopped with or without error | Container finished or stopped |
| Dead | Failed to stop properly (rare) | Can be manually removed |
Transitions:
```bash
# Created → Running
docker create nginx → docker start nginx

# Running → Paused → Running
docker pause nginx → docker unpause nginx

# Running → Exited
docker stop nginx   # SIGTERM (graceful)
docker kill nginx   # SIGKILL (immediate)

# Exited → Running
docker start nginx

# Any state → Removed
docker rm nginx
```
Checking State:
```bash
# List with status
docker ps -a

# Filter by status
docker ps --filter status=exited
docker ps --filter status=running

# Get detailed status
docker inspect container | jq '.[0].State.Status'
```
Why containers exit immediately:
```bash
# This exits because there is no foreground process
docker run ubuntu

# This runs 10 seconds then exits
docker run ubuntu sleep 10

# This stays running
docker run -d nginx   # nginx runs in the foreground
```
13. How do you troubleshoot a container that won’t start?
Answer:
Systematic Troubleshooting Approach:
1. Check Logs:
# Get last 100 linesdocker logs --tail 100 failing-container
# Follow logs in real-timedocker logs -f failing-container
# Logs from last 10 minutesdocker logs --since 10m failing-container2. Inspect Container Details:
# Full JSON detailsdocker inspect failing-container
# Specific exit codedocker inspect --format='{{.State.ExitCode}}' failing-container
# Check error messagedocker inspect --format='{{.State.Error}}' failing-container3. Try Running Without Detach:
# Run in foreground to see immediate errorsdocker run --rm myapp
# With interactive shell if possibledocker run -it myapp /bin/sh4. Override Entrypoint/Command:
# Override to get shell accessdocker run --rm -it --entrypoint /bin/sh myapp
# Or override commanddocker run --rm myapp ls -la5. Check Resource Limits:
# Container might be OOM killeddocker inspect container | grep -A 5 "OOMKilled"
# Check system logsdmesg | grep -i killjournalctl -u docker | grep -i error6. Verify Configuration:
# Check port conflictsdocker run --rm -p 80:80 nginx # Port 80 already in use?
# Check volume permissionsls -la /host/mount # Permissions must allow container user7. Docker Daemon Logs:
# Linuxjournalctl -u docker.service -f
# Mac/Windows
docker logs --tail 100 docker-desktop
14. Explain the difference between docker run, docker start, and docker exec.
Answer:
docker run = Create + Start
- Creates a new container from image
- Starts it immediately
- Most common command
# Basic rundocker run nginx
# With optionsdocker run -d --name web -p 80:80 nginx
# One-off commands (container stops after)docker run --rm ubuntu ls -ladocker start = Start existing
- Restarts a previously created/stopped container
- Preserves all configuration from original run
# Create but don't startdocker create --name web nginx
# Start it laterdocker start web
# Start with attachdocker start -a webdocker exec = Execute in running
- Runs command in already running container
- Requires container to be running
- Most common for debugging
# Get shell in running containerdocker exec -it web /bin/bash
# Run single commanddocker exec web cat /etc/hosts
# Run as different userdocker exec -u root web whoamiComparison Table:
| Command | Creates New Container | Requires Running Container | Preserves State |
|---|---|---|---|
docker run | ✅ Yes | ❌ No | ❌ No (fresh) |
docker start | ❌ No | ❌ No (can be stopped) | ✅ Yes |
docker exec | ❌ No | ✅ Yes | ✅ Yes |
Practical Scenario:
# 1. Run container in backgrounddocker run -d --name redis redis
# 2. Execute command insidedocker exec redis redis-cli ping # Returns PONG
# 3. Stop containerdocker stop redis
# 4. Start it again (same data)docker start redis
# 5. Can't exec when stopped
docker exec redis redis-cli ping # Error: container not running
15. How do you implement health checks in Docker?
Answer:
Health Checks in Dockerfile:
FROM node:18-alpine
WORKDIR /appCOPY package*.json ./RUN npm installCOPY . .
# Define health checkHEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \ CMD node health.js || exit 1
EXPOSE 3000CMD ["npm", "start"]Health Check Options:
- `--interval`: How often to check (default 30s)
- `--timeout`: Max time for check to complete (default 30s)
- `--start-period`: Time to wait before first check (default 0s)
- `--retries`: Consecutive failures needed to mark unhealthy (default 3)
Docker Compose Health Check:
version:'3.8'
services:postgres:image: postgres:15healthcheck:test:["CMD-SHELL","pg_isready -U postgres"]interval: 10stimeout: 5sretries:5start_period: 30s
app:build: .depends_on:postgres:condition: service_healthyCheck Health Status:
# View container healthdocker ps# CONTAINER ID IMAGE STATUS# abc123 app Up 2 minutes (healthy)
# Detailed health statusdocker inspect --format='{{json .State.Health}}' container | jq
# Wait for healthy containerdocker wait --condition=healthy containerCustom Health Check Script (Node.js example):
const http = require('http');
const options = { host: 'localhost', port: 3000, path: '/health', timeout: 2000};
const request = http.request(options, (res) => { console.log(`STATUS:${res.statusCode}`); if (res.statusCode === 200) { process.exit(0); // Healthy } else { process.exit(1); // Unhealthy }});
request.on('error', (err) => { console.error('Health check failed:', err); process.exit(1);});
request.end();
16. What are Docker image tags and how do you use them?
Answer:
Tags are identifiers that point to specific image versions, following the format: [registry/]repository[:tag]
Tag Structure:
# Full formatdocker pull docker.io/library/nginx:1.21-alpine# ^registry ^repo ^tag
# Common patternsnginx:latest # Default, not recommended for productionnginx:1.21 # Major versionnginx:1.21.6 # Full versionnginx:1.21-alpine # Version with variantmyapp:v2.0.1 # Custom semantic versionmyapp:abc1234 # Git commit hashTagging Images:
# Tag existing imagedocker tag myapp:latest myapp:v1.0.0docker tag myapp:latest myregistry.com/myapp:v1.0.0
# Build with tagdocker build -t myapp:2.0.0 .
# Multiple tags for same imagedocker tag myapp:latest myapp:stabledocker tag myapp:latest myapp:2.0.0Best Practices:
# Production - Use specific versionsFROM node:18.17.0-alpine3.18
# Never use 'latest' in production# Bad: FROM node:latest
# CI/CD - Use commit hash or build IDdocker build -t myapp:${GIT_COMMIT} .docker tag myapp:${GIT_COMMIT} myapp:latest
# Semantic versioningmyapp:1.0.0 # Specific versionmyapp:1.0 # Minor version latestmyapp:1 # Major version latestTag Management:
# List tags (requires registry API)curl -X GET https://registry.hub.docker.com/v2/repositories/library/nginx/tags
# Remove tag (untag)docker rmi myapp:v1.0.0 # Removes tag, not the image if other tags exist
# Filter images by tag
docker images | grep myapp
17. Explain the Docker build cache and how to optimize it.
Answer:
Docker caches each layer during build. If a layer hasn’t changed, Docker reuses the cached layer.
Cache Invalidation Rules:
- FROM - Always invalidates if base image changes
- COPY/ADD - Invalidates if file contents change
- RUN - Invalidates if command changes
- Previous layer changes cascade to all subsequent layers
Optimization Strategies:
1. Order Layers by Change Frequency:
# Optimized - dependencies firstFROM node:18WORKDIR /app
# Rarely changesCOPY package*.json ./RUN npm ci # Cached until package.json changes
# Changes frequentlyCOPY . .RUN npm run build
# Better than:# COPY . . # Any file change invalidates everything# RUN npm ci2. Combine Commands to Reduce Layers:
# Bad - multiple layersRUN apt-get updateRUN apt-get install -y python3RUN apt-get clean
# Good - single layerRUN apt-get update && \ apt-get install -y python3 && \ apt-get clean && \ rm -rf /var/lib/apt/lists/*3. Use .dockerignore:
node_modules.git*.log.env.DS_Store4. Multi-stage Builds:
# Build stage - large toolsFROM golang:1.20 AS builderWORKDIR /appCOPY go.mod go.sum ./RUN go mod downloadCOPY . .RUN go build -o myapp
# Runtime stage - minimalFROM alpine:latestRUN apk --no-cache add ca-certificatesCOPY --from=builder /app/myapp /myappCMD ["/myapp"]5. Leverage BuildKit for Better Caching:
# Enable BuildKitexport DOCKER_BUILDKIT=1
# Use cache mountsRUN --mount=type=cache,target=/root/.cache/pip \ pip install -r requirements.txtView Cache Usage:
# Show cache disk usagedocker system df
# Prune cachedocker builder prune -a
# Build without cache
docker build --no-cache -t myapp .
18. How do you implement logging in Docker containers?
Answer:
1. Docker Logging Drivers:
# Default json-file (stores on disk)
docker run --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 nginx
# syslog
docker run --log-driver syslog --log-opt syslog-address=udp://localhost:514 nginx
# fluentd
docker run --log-driver fluentd --log-opt fluentd-address=localhost:24224 nginx
# awslogs (CloudWatch)
docker run --log-driver awslogs --log-opt awslogs-region=us-east-1 nginx
# none (disable logging)
docker run --log-driver none nginx
2. Container Logging Best Practices:
services:app:image: myapplogging:driver:"json-file"options:max-size:"10m"max-file:"3"labels:"production"env:"env"3. Application Logging Patterns:
# Python - Log to stdout/stderrimport loggingimport sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)logger = logging.getLogger(__name__)
logger.info("Application started") # Goes to docker logs4. Log Management Commands:
# View logsdocker logs container
# Tail logsdocker logs -f --tail 100 container
# Logs since timestampdocker logs --since 2023-01-01T10:00:00 container
# Filter containers by label, then tail their logs
docker ps -q --filter "label=production" | xargs -r -n1 docker logs --tail 50
5. Centralized Logging Setup:
# docker-compose with ELKversion:'3.8'
services:app:image: myapplogging:driver:"fluentd"options:fluentd-address:"fluentd:24224"tag:"app.{{.Name}}"
fluentd:image: fluent/fluentdvolumes:- ./fluentd.conf:/fluentd/etc/fluent.confports:-"24224:24224"
elasticsearch:image: elasticsearch:7.17environment:- discovery.type=single-node
kibana:
  image: kibana:7.17
  ports:
    - "5601:5601"
19. What are Docker contexts and how do you use them?
Answer:
Docker contexts allow you to manage multiple Docker environments (local, remote, cloud) from a single CLI.
Creating Contexts:
# List all contextsdocker context ls
# Create context for remote Docker daemondocker context create remote \ --docker "host=ssh://user@remote-server"
# Create context for Docker Desktopdocker context create desktop \ --docker "host=unix:///var/run/docker.sock"
# Create context for cloud (AWS ECS)docker context create ecs \ --description "AWS ECS" \ --from-envUsing Contexts:
# Switch contextdocker context use remote
# Run commands on remotedocker ps # Shows remote containers
# Use context temporarilydocker --context remote ps
# Show current contextdocker context show
# Remove contextdocker context rm remoteUse Cases:
1. Multi-environment Management:
# Create contexts for different environmentsdocker context create staging --docker "host=ssh://staging-server"docker context create production --docker "host=ssh://prod-server"
# Quick switchesdocker context use stagingdocker compose up -ddocker context use productiondocker compose up -d2. Docker Desktop Context:
# Docker Desktop includes default contextdocker context use defaultdocker ps # Local containers3. Cloud Integration:
# AWS ECS context (experimental)docker context create ecs myecsdocker context use myecsdocker compose up # Deploys to ECS4. CI/CD Pipeline:
# In CI pipelinedocker context create remote \ --docker "host=ssh://${DEPLOY_USER}@${DEPLOY_HOST}"
docker --context remote compose up -dContext Configuration Location:
# Contexts are stored in ~/.docker/contexts/
20. Explain Docker resource limits and monitoring.
Answer:
Setting Resource Limits:
1. Memory Limits:
# Hard limitdocker run -m 512m --memory-reservation 256m myapp
# Swap limitdocker run -m 512m --memory-swap 1g myapp # 512M RAM + 512M swap
# Memory swappinessdocker run --memory-swappiness=60 myapp # 0-100, default 602. CPU Limits:
# CPU coresdocker run --cpus=1.5 myapp # 1.5 cores
# CPU shares (relative weight)docker run --cpu-shares=2048 high-priority-appdocker run --cpu-shares=512 low-priority-app
# CPU pinningdocker run --cpuset-cpus="0,1" myapp # Use only cores 0 and 1
# CPU quota (CFS)docker run --cpu-period=100000 --cpu-quota=25000 myapp # 25% of one core3. I/O Limits:
# Device read/write speeddocker run --device-read-bps /dev/sda:1mb --device-write-bps /dev/sda:1mb myapp
# IOPS limitsdocker run --device-read-iops /dev/sda:100 --device-write-iops /dev/sda:100 myappMonitoring Commands:
# Real-time statsdocker stats
# Format outputdocker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
# Specific containerdocker stats container1 container2
# Get all container stats programmaticallydocker stats --no-stream --format '{{json .}}'Docker Compose Limits:
version:'3.8'
services:app:image: myappdeploy:resources:limits:cpus:'1.5'memory: 512Mreservations:cpus:'0.5'memory: 256MMonitoring with Prometheus:
# Install cAdvisor for container metricsdocker run -d \ --name=cadvisor \ --volume=/:/rootfs:ro \ --volume=/var/run:/var/run:ro \ --volume=/sys:/sys:ro \ --volume=/var/lib/docker/:/var/lib/docker:ro \ --publish=8080:8080 \ gcr.io/cadvisor/cadvisor:latestCheck Resource Usage:
# Check disk usagedocker system df
# Detailed disk usagedocker system df -v
# Prune unused resources
docker system prune -a --volumes
21. How do you secure Docker containers?
Answer:
1. Image Security:
# Scan images for vulnerabilitiesdocker scan myappdocker scan --accept-license myapp
# Use official/verified imagesFROM node:18-alpine # OfficialFROM myregistry.com/verified/node:18 # Verified
# Use specific versionsFROM node:18.17.0-alpine # Never 'latest'2. Runtime Security:
# Drop all capabilities and add only neededdocker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx
# Read-only root filesystemdocker run --read-only myapp
# No new privilegesdocker run --security-opt=no-new-privileges myapp
# Limit kernel capabilitiesdocker run --security-opt=seccomp=path/to/seccomp.json myapp
# User namespace remappingdocker run --userns=host myapp # Map root to non-root on host3. Resource Restrictions:
# Prevent fork bombsdocker run --pids-limit=100 myapp
# Restrict devicesdocker run --device-cgroup-rule='c 1:3 rmw' myapp4. Network Security:
# Custom network with strict isolationdocker network create \ --driver bridge \ --opt com.docker.network.bridge.enable_icc=false \ secure-network
# Use internal network (no external access)docker network create --internal internal-network
# Limit exposed portsdocker run -p 127.0.0.1:8080:80 myapp # Only localhost5. Secrets Management:
# Never use environment variables for secrets# Bad: -e DB_PASSWORD=secret
# Use Docker secrets (Swarm)echo "secret" | docker secret create db_password -docker service create --secret db_password myapp
# Or external vaultdocker run -e VAULT_TOKEN=token myapp6. Dockerfile Security:
# Use non-root userFROM node:18-alpineRUN addgroup -g 1001 -S nodejs && \ adduser -S nodejs -u 1001 -G nodejsUSER nodejs
# Copy files with correct ownershipCOPY --chown=nodejs:nodejs . /app
# Avoid secrets in build# Never: RUN echo "secret" > file.txt7. Audit and Compliance:
# Docker Bench Security (CIS benchmarks)
docker run --net host --pid host --userns host \
  --cap-add audit_control \
  -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
  -v /var/lib:/var/lib \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/lib/systemd:/usr/lib/systemd \
  -v /etc:/etc --label docker_bench_security \
  docker/docker-bench-security
22. Explain multi-stage builds and their benefits.
Answer:
Multi-stage builds use multiple FROM statements in a single Dockerfile, allowing you to copy artifacts between stages.
Basic Example:
# Stage 1: BuildFROM golang:1.20 AS builderWORKDIR /appCOPY go.mod go.sum ./RUN go mod downloadCOPY . .RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .
# Stage 2: RuntimeFROM alpine:latestRUN apk --no-cache add ca-certificatesWORKDIR /root/COPY --from=builder /app/main .EXPOSE 8080CMD ["./main"]Benefits:
1. Smaller Images:
# Without multi-stage: ~500MBFROM node:18COPY package*.json ./RUN npm ciCOPY . .RUN npm run buildCMD ["node", "dist/index.js"]
# With multi-stage: ~50MBFROM node:18 AS builderCOPY package*.json ./RUN npm ciCOPY . .RUN npm run build
FROM node:18-alpineCOPY --from=builder /app/dist ./distCOPY --from=builder /app/package*.json ./RUN npm ci --only=productionCMD ["node", "dist/index.js"]2. Multiple Build Contexts:
# Build frontend and backend in same DockerfileFROM node:18 AS frontend-builderWORKDIR /frontendCOPY frontend/package*.json ./RUN npm ciCOPY frontend/ .RUN npm run build
FROM maven:3.8 AS backend-builderWORKDIR /backendCOPY backend/pom.xml ./RUN mvn dependency:go-offlineCOPY backend/ .RUN mvn package
FROM openjdk:11-jre-slimCOPY --from=backend-builder /backend/target/app.jar /app.jarCOPY --from=frontend-builder /frontend/dist /staticCMD ["java", "-jar", "/app.jar"]3. Testing Stage:
# Development stageFROM node:18 AS developmentWORKDIR /appCOPY package*.json ./RUN npm ciCOPY . .CMD ["npm", "run", "dev"]
# Testing stageFROM development AS testingRUN npm run test
# Production stageFROM node:18-alpine AS productionCOPY --from=development /app/dist ./distCOPY --from=development /app/package*.json ./RUN npm ci --only=productionCMD ["node", "dist/index.js"]4. Buildkit Optimizations:
# Leverage buildkit for better cachingFROM node:18 AS builderRUN --mount=type=cache,target=/root/.npm \ npm ci
FROM builder AS testRUN --mount=type=cache,target=/root/.npm \ npm run test
FROM builder AS prod
RUN npm run build
23. What are Docker volumes and how do they differ from bind mounts?
Answer:
| Aspect | Volumes | Bind Mounts |
|---|---|---|
| Location | Docker-managed (/var/lib/docker/volumes/) | User-managed (any path on host) |
| Creation | docker volume create or automatically | Manually or automatically |
| Portability | High - works across hosts | Low - depends on host path |
| Backup | Built-in commands available | Manual file copy |
| Performance | Good | Same as host filesystem |
| Permissions | Docker manages | Host permissions apply |
| Use Case | Production data, databases | Development, config files |
Volumes Example:
# Create volumedocker volume create postgres-data
# Use volumedocker run -d \ --name postgres \ --mount type=volume,source=postgres-data,target=/var/lib/postgresql/data \ postgres:13
# Inspect volumedocker volume inspect postgres-data# Shows mountpoint: /var/lib/docker/volumes/postgres-data/_data
# Backup volumedocker run --rm \ -v postgres-data:/source \ -v $(pwd):/backup \ alpine tar czf /backup/postgres-backup.tar.gz -C /source .
# Restore volumedocker run --rm \ -v postgres-data:/target \ -v $(pwd):/backup \ alpine tar xzf /backup/postgres-backup.tar.gz -C /targetBind Mounts Example:
# Development with code syncdocker run -d \ --name dev \ --mount type=bind,source=$(pwd)/app,target=/app \ node:18 npm run dev
# Configuration filesdocker run -d \ --name nginx \ --mount type=bind,source=/host/nginx.conf,target=/etc/nginx/nginx.conf,readonly \ nginxDocker Compose:
version:'3.8'
services:postgres:image: postgres:13volumes:- postgres-data:/var/lib/postgresql/data # Named volume
app:image: node:18volumes:- ./app:/app # Bind mount- /app/node_modules # Anonymous volume (prevents overwrite)
volumes:
  postgres-data:   # Named volume declaration
Performance Considerations:
- Volumes: ~same as host filesystem (native)
- Bind mounts: ~same as host filesystem
- For databases: Volumes often preferred for portability
- For code: Bind mounts during development
24. How do you manage Docker logs and rotate them?
Answer:
1. Docker Logging Driver Configuration:
# Default json-file with rotationdocker run \ --log-driver json-file \ --log-opt max-size=10m \ --log-opt max-file=3 \ --log-opt compress=true \ nginx
# Local driver (faster, less overhead)docker run \ --log-driver local \ --log-opt max-size=10m \ --log-opt max-file=3 \ nginx2. Docker Daemon Configuration (daemon.json):
{ "log-driver": "json-file", "log-opts": { "max-size": "10m", "max-file": "3", "labels": "production", "env": "env" }, "log-level": "info"}Location: /etc/docker/daemon.json (Linux) or Docker Desktop settings
3. Docker Compose Logging:
version:'3.8'
services:app:image: myapplogging:driver:"json-file"options:max-size:"10m"max-file:"5"tag:"{{.Name}}|{{.ImageName}}"options:max-size:"10m"max-file:"3"
nginx:image: nginxlogging:driver:"syslog"options:syslog-address:"tcp://192.168.0.1:514"syslog-facility:"daemon"tag:"nginx"4. External Log Rotation (logrotate):
/var/lib/docker/containers/*/*.log { rotate 7 daily compress delaycompress missingok copytruncate maxsize 100M}5. Centralized Logging:
# Send all container logs to fluentddocker run \ --log-driver fluentd \ --log-opt fluentd-address=localhost:24224 \ --log-opt tag="app.{{.Name}}" \ myapp6. Manage Logs Programmatically:
# Clean logs for specific containerdocker run --rm -v /var/run/docker.sock:/var/run/docker.sock \ alpine sh -c "echo '' >$(docker inspect -f '{{.LogPath}}' container)"
# Prune logs for all stopped containersdocker container prune --filter "until=24h"
# List log sizesdocker ps -q | xargs -I {} sh -c "echo {}:$(docker inspect -f '{{.LogPath}}' {} | xargs du -h)"7. Best Practices:
- Always set log rotation limits (prevents disk full)
- Use external log aggregation for production
- Don’t log sensitive data
- Log to stdout/stderr, not files in container
- Consider log levels (debug vs info vs error)
25. Explain the concept of “Docker Hub” and private registries.
Answer:
Docker Hub: Public registry with official images, automated builds, and webhooks.
Registry Types:
- Public Registry: Docker Hub, Quay.io, Google Container Registry
- Private Registry: Docker Registry, AWS ECR, Azure ACR, GCR
Using Docker Hub:
# Logindocker login
# Searchdocker search nginx
# Pulldocker pull nginx:alpine
# Push (requires repository)docker tag myapp:latest myusername/myapp:latestdocker push myusername/myapp:latestPrivate Registry (Docker Registry):
# Run private registrydocker run -d -p 5000:5000 --name registry registry:2
# Push to private registrydocker tag myapp localhost:5000/myapp:latestdocker push localhost:5000/myapp:latest
# Pull from private registrydocker pull localhost:5000/myapp:latest
# Registry with authenticationdocker run -d -p 5000:5000 \ -e REGISTRY_AUTH=htpasswd \ -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \ -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \ -v /path/to/auth:/auth \ registry:2Private Registry Storage:
# docker-compose.yml for registryversion:'3.8'
services:registry:image: registry:2ports:-"5000:5000"volumes:- registry-data:/var/lib/registry- ./auth:/auth- ./certs:/certsenvironment:REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crtREGISTRY_HTTP_TLS_KEY: /certs/domain.keyREGISTRY_AUTH: htpasswdREGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswdREGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
volumes:registry-data:AWS ECR Example:
# Authenticateaws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Create repositoryaws ecr create-repository --repository-name myapp
# Tag and pushdocker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latestdocker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latestRegistry Management:
# List images in registry (via API)curl -X GET http://localhost:5000/v2/_catalog
# Delete image (requires garbage collection)curl -X DELETE http://localhost:5000/v2/myapp/manifests/sha256:...
# Registry garbage collection
docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml
26. What is Docker content trust and how do you use it?
Answer:
Docker Content Trust (DCT) enables digital signatures for images, ensuring authenticity and integrity.
Enable DCT:
# Enable globallyexport DOCKER_CONTENT_TRUST=1
# Or enable for specific commandsDOCKER_CONTENT_TRUST=1 docker pull nginx:latestSigning Images:
# Initialize delegation keys (first time)docker trust key generate mykey
# Sign and push imagedocker trust sign myregistry.com/myapp:latest
# Or push with signingdocker trust push myregistry.com/myapp:latestVerifying Signed Images:
# Pull will verify signaturedocker pull myregistry.com/myapp:latest
# View trust datadocker trust inspect myregistry.com/myapp:latest
# List all signed tagsdocker trust view myregistry.com/myappManaging Delegations:
# Add signer delegationdocker trust signer add --key cert.pem myteam myregistry.com/myapp
# Remove signerdocker trust signer remove myteam myregistry.com/myapp
# Revoke delegationdocker trust revoke myregistry.com/myappDockerfile with Trust:
# DOCKER_CONTENT_TRUST=1 docker buildFROM myregistry.com/trusted-base:1.0 # Will verify signatureRUN apt-get update && apt-get install -y curlNotary Server (Advanced):
# Run notary serverdocker run -d -p 4443:4443 \ -v notary-data:/var/lib/notary \ notary:server
# Configure Docker to use custom notary{ "notary": { "rootPassphrase": "your-root-passphrase", "serverURL": "https://notary.example.com", "delegationPassphrase": "your-delegation-passphrase" }}Best Practices:
- Enable DCT in CI/CD pipelines (see the sketch after this list)
- Use separate keys for signing vs. delegation
- Store keys securely (HSM, KMS)
- Rotate keys regularly
- Audit signing events
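For the first practice, the CI job only needs the environment variable set (plus access to the signing keys) before it builds and pushes - a hedged sketch, with a placeholder registry and tag:

```bash
# In the CI pipeline, before build-and-push
export DOCKER_CONTENT_TRUST=1   # signing key passphrases must be available to the job

docker build -t myregistry.com/myapp:1.4.0 .
docker push myregistry.com/myapp:1.4.0   # the push is signed because DCT is enabled
```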
27. How do you perform rolling updates with Docker?
Answer:
Docker Swarm Rolling Updates:
# Deploy service with update configdocker service create \ --name web \ --replicas 5 \ --update-parallelism 2 \ --update-delay 10s \ --update-failure-action pause \ --update-monitor 30s \ --update-max-failure-ratio 0.3 \ nginx:1.21
# Update to new versiondocker service update --image nginx:1.22 web
# Control updatedocker service update --rollback web # Rollbackdocker service update --detach web # Detach from updateUpdate Configuration Options:
--update-parallelism 2           # Update 2 replicas at once
--update-delay 10s               # Wait 10s between batches
--update-failure-action pause    # Stop on failure
--update-monitor 30s             # Monitor for 30s after update
--update-max-failure-ratio 0.3   # Max 30% failure allowed
Docker Compose with Swarm:
version:'3.8'
services:web:image: nginx:1.21deploy:replicas:5update_config:parallelism:2delay: 10sfailure_action: rollbackmonitor: 30smax_failure_ratio:0.3rollback_config:parallelism:1delay: 5sHealth Checks for Updates:
services:web:image: myapp:latesthealthcheck:test:["CMD","curl","-f","http://localhost/health"]interval: 10stimeout: 5sretries:3start_period: 30sdeploy:update_config:parallelism:1delay: 30sfailure_action: rollbackmonitor: 60s # Wait for health check after updateBlue-Green Deployment Pattern:
# Deploy blue (current) versiondocker service create --name app-blue --label version=blue nginx:1.21
# Deploy green (new) versiondocker service create --name app-green --label version=green nginx:1.22
# Switch load balancerdocker service update --label-add version=green app-lb
# Remove old versiondocker service rm app-blueZero-Downtime Update Script:
#!/bin/bashSERVICE_NAME="web"OLD_TAG=$(docker service inspect -f '{{.Spec.TaskTemplate.ContainerSpec.Image}}' $SERVICE_NAME)NEW_TAG="myapp:v2.0.0"
# Start updatedocker service update \ --image $NEW_TAG \ --update-parallelism 1 \ --update-delay 10s \ --update-monitor 30s \ --update-failure-action rollback \ $SERVICE_NAME
# Monitor updatewhile true; do UPDATED=$(docker service ps $SERVICE_NAME --filter "desired-state=running" | grep $NEW_TAG | wc -l) TOTAL=$(docker service ps $SERVICE_NAME --filter "desired-state=running" | wc -l)
if [ $UPDATED -eq $TOTAL ]; then echo "Update complete" break fi
  echo "Updated: $UPDATED/$TOTAL replicas"
  sleep 5
done
28. Explain Docker Swarm mode and its features.
Answer:
Docker Swarm is Docker’s native clustering and orchestration solution.
Initialization:
# Initialize swarm on managerdocker swarm init --advertise-addr 192.168.1.10
# Add worker nodesdocker swarm join --token SWMTKN-1-xxx 192.168.1.10:2377
# Add manager nodesdocker swarm join-token managerdocker swarm join --token SWMTKN-1-yyy 192.168.1.10:2377Key Concepts:
1. Nodes:
# List nodesdocker node ls
# Promote worker to managerdocker node promote node2
# Demote managerdocker node demote node3
# Add labelsdocker node update --label-add environment=production node12. Services:
# Create servicedocker service create \ --name web \ --replicas 3 \ --publish 80:80 \ --constraint "node.labels.environment==production" \ nginx:latest
# Scale servicedocker service scale web=5
# Update servicedocker service update --image nginx:1.22 web
# Rollbackdocker service rollback web
# Remove servicedocker service rm web3. Stacks (Compose for Swarm):
version:'3.8'
services:web:image: nginx:latestports:-"80:80"deploy:replicas:3placement:constraints:- node.role == managerresources:limits:cpus:'0.5'memory: 512Mrestart_policy:condition: on-failuredelay: 5smax_attempts:3
db:image: postgres:13environment:POSTGRES_PASSWORD: secretvolumes:- db-data:/var/lib/postgresql/datadeploy:placement:constraints:- node.role == worker
volumes:db-data:# Deploy stackdocker stack deploy -c docker-stack.yml myapp
# List stacksdocker stack ls
# List services in stackdocker stack services myapp
# Remove stackdocker stack rm myapp4. Networking:
# Create overlay networkdocker network create --driver overlay my-network
# Use in servicedocker service create --network my-network nginx5. Secrets and Configs:
# Create secretecho "db_password" | docker secret create db_password -
# Use in servicedocker service create \ --secret db_password \ --secret src=db_password,target=/run/secrets/db_pass \ postgres
# Config (non-sensitive data)echo "config.json" | docker config create app_config -Swarm Features:
- Self-healing: Failed containers are rescheduled
- Rolling updates: Zero-downtime updates
- Load balancing: Built-in L4 load balancer
- Service discovery: DNS-based service discovery
- Encrypted overlay networks: Secure multi-host communication
- Secrets management: Encrypted secrets storage
- Multi-platform: Linux and Windows containers
29. How do you backup and restore Docker volumes?
Answer:
Volume Backup:
1. Using Alpine Container:
# Backup volume to tardocker run --rm \ -v volume_name:/source \ -v $(pwd):/backup \ alpine \ tar czf /backup/volume_backup.tar.gz -C /source .
# Backup with timestampdocker run --rm \ -v volume_name:/source \ -v $(pwd):/backup \ alpine \ tar czf /backup/volume_backup_$(date +%Y%m%d_%H%M%S).tar.gz -C /source .2. Database-Specific Backup:
# PostgreSQLdocker exec postgres pg_dump -U postgres dbname > backup.sql
# MySQLdocker exec mysql mysqldump -u root -p database > backup.sql
# MongoDBdocker exec mongodb mongodump --out /data/backupdocker cp mongodb:/data/backup ./backup3. Volume Restore:
# Restore from tardocker run --rm \ -v volume_name:/target \ -v $(pwd):/backup \ alpine \ tar xzf /backup/volume_backup.tar.gz -C /target
# Restore with verificationdocker run --rm \ -v volume_name:/target \ -v $(pwd):/backup \ alpine \ sh -c "tar xzf /backup/volume_backup.tar.gz -C /target && ls -la /target"4. Incremental Backup Script:
#!/bin/bashBACKUP_DIR="/backups"DATE=$(date +%Y%m%d)VOLUMES=$(docker volume ls -q)
for VOLUME in $VOLUMES; do echo "Backing up$VOLUME..." docker run --rm \ -v $VOLUME:/source \ -v $BACKUP_DIR:/backup \ alpine \ tar czf "/backup/${VOLUME}_${DATE}.tar.gz" -C /source .done
# Clean old backups (keep 7 days)find $BACKUP_DIR -name "*.tar.gz" -mtime +7 -delete5. Docker Compose Backup:
version:'3.8'
services:backup:image: alpinevolumes:- postgres-data:/source:ro- ./backups:/backup command:> sh -c "tar czf /backup/postgres_$(date +%Y%m%d_%H%M%S).tar.gz -C /source . && find /backup -name '*.tar.gz' -mtime +7 -delete"restart:"no"
volumes:postgres-data:6. Automated Backup with Cron:
# Add to crontab0 2 * * * /usr/local/bin/backup-volumes.sh7. Volume Migration:
# Migrate volume to another host# On source hostdocker run --rm \ -v volume_name:/source \ -v $(pwd):/backup \ alpine \ tar czf /backup/volume.tar.gz -C /source .
# Transfer to destination hostscp volume.tar.gz user@destination:/tmp/
# On destination hostdocker run --rm \ -v volume_name:/target \ -v /tmp:/backup \ alpine \ tar xzf /backup/volume.tar.gz -C /targetBest Practices:
- Always test the restore process (see the sketch after this list)
- Use compression (gzip) for backups
- Encrypt sensitive backups
- Store backups off-host
- Monitor backup success/failure
- Document restore procedures
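For the first point, a restore can be rehearsed against a throwaway volume before touching the real one - a minimal sketch, where the archive name matches the backup commands above:

```bash
# Restore the backup into a scratch volume and verify the contents
docker volume create restore-test
docker run --rm \
  -v restore-test:/target \
  -v $(pwd):/backup \
  alpine sh -c "tar xzf /backup/volume_backup.tar.gz -C /target && ls -la /target"
docker volume rm restore-test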
30. How do you debug Docker networking issues?
Answer:
Systematic Debugging Approach:
1. Check Network Connectivity:
# List networksdocker network ls
# Inspect networkdocker network inspect bridgedocker network inspect my-network
# Check container IPdocker inspect container | grep IPAddress
# Test connectivitydocker exec container ping other-containerdocker exec container ping 8.8.8.82. DNS Resolution:
# Check DNS resolutiondocker exec container cat /etc/resolv.confdocker exec container nslookup google.comdocker exec container nslookup other-container
# Test DNS serverdocker exec container dig @8.8.8.8 google.com
# Override DNSdocker run --dns 8.8.8.8 --dns 8.8.4.4 myapp3. Port Mapping Issues:
# Check port mappingsdocker port containerdocker inspect container | grep -A 10 PortBindings
# Verify host port availabilitynetstat -tulpn | grep :8080lsof -i :8080
# Test from hostcurl localhost:8080telnet localhost 80804. Network Traffic Capture:
# Capture traffic on containerdocker exec container tcpdump -i eth0 -w /tmp/capture.pcap
# Copy capture filedocker cp container:/tmp/capture.pcap .
# Analyze with wireshark or tcpdumptcpdump -r capture.pcap
# Monitor specific portdocker exec container tcpdump -i eth0 port 805. Debug Network Namespace:
# Get container PIDdocker inspect -f '{{.State.Pid}}' container
# Enter network namespacensenter -t <PID> -n
# View routesip route shownetstat -rn
# Check iptables
iptables -t nat -L -n
6. Common Issues and Solutions:
Issue: Containers can’t reach each other
# Check if on same networkdocker inspect container1 | grep Networks -A 10docker inspect container2 | grep Networks -A 10
# Connect to networkdocker network connect my-network container1docker network connect my-network container2
# Create custom networkdocker network create --driver bridge my-networkIssue: Port already in use
# Find process using portsudo lsof -i :8080
# Kill process or use different portdocker run -p 8081:80 nginxIssue: No internet from container
# Check iptables rulessudo iptables -L -n | grep DOCKER
# Enable IP forwardingsudo sysctl net.ipv4.ip_forward=1sudo sysctl -w net.ipv4.ip_forward=1
# Check Docker daemon configcat /etc/docker/daemon.json7. Network Debugging Container:
# Create debug container in same networkdocker run -it --rm \ --network container:target-container \ --pid container:target-container \ nicolaka/netshoot \ /bin/bash
# Use network tools# - nslookup, dig for DNS# - curl, wget for HTTP# - ping, traceroute for connectivity# - netstat, ss for sockets# - tcpdump for packet capture8. Check Docker Daemon Logs:
# Linuxjournalctl -u docker.service -f | grep -i network
# Mac/Windows
docker logs --tail 100 docker-desktop
Part 2: 20 Scenario-Based Questions with Answers
1. Scenario: Production container keeps restarting with OOMKilled
Situation: Your container keeps restarting with OOMKilled status.
Analysis:
# Check container statusdocker ps -adocker inspect container | jq '.[0].State.OOMKilled'
# Check memory usage historydocker stats --no-stream containerSolution:
# 1. Increase memory limitdocker update --memory 2g --memory-swap 3g container
# 2. Set memory reservation (pre-allocation)docker update --memory-reservation 1g container
# 3. For new container, set appropriate limitsdocker run -d \ -m 2g \ --memory-reservation 1g \ --memory-swap 3g \ --oom-kill-disable=false \ # Better to kill than hang myapp
# 4. Investigate memory leak in application
docker exec container jmap -heap <pid>   # Java
docker exec container top -b -n 1 | head -20
2. Scenario: Docker build takes 20 minutes, how to optimize?
Situation: Building your Docker image takes 20 minutes, slowing down CI/CD.
Analysis:
# View build time per layer
docker build --progress=plain -t myapp . 2>&1 | tee build.log
Optimizations:
# 1. Optimize layer orderFROM node:18
# Dependencies first (changes rarely)COPY package*.json ./RUN npm ci # Cached unless package.json changes
# Source code (changes often)COPY . .RUN npm run build
# 2. Use multi-stage buildsFROM node:18 AS builderCOPY package*.json ./RUN npm ciCOPY . .RUN npm run build
FROM node:18-alpineCOPY --from=builder /app/dist ./distCOPY package*.json ./RUN npm ci --only=production
# 3. Leverage BuildKit cache mountsRUN --mount=type=cache,target=/root/.npm \ npm ci
# 4. Use .dockerignore
node_modules/
.git/
*.log
3. Scenario: Container can’t connect to database
Situation: Your application container can’t connect to the PostgreSQL database.
Debugging Steps:
# 1. Check network connectivitydocker exec app ping db-container# If fails, check networkdocker network lsdocker network inspect app-network
# 2. Check DNS resolutiondocker exec app nslookup db-container
# 3. Check database container statusdocker ps | grep postgresdocker logs postgres
# 4. Test connection with dedicated tooldocker exec app psql -h db-container -U user -d dbname
# 5. Verify network isolationdocker run -it --rm --network app-network postgres:13 psql -h db-container -U userSolutions:
# Create custom networkdocker network create app-network
# Connect both containersdocker network connect app-network appdocker network connect app-network db
# Use service names in connection strings
# Instead of: localhost:5432
# Use: db-container:5432
4. Scenario: Disk full due to Docker
Situation: Your server’s disk is full because of Docker images and containers.
Analysis:
# Check disk usagedf -hdocker system df -v
# Find large filesdocker system df --verboseCleanup Commands:
# 1. Remove all unused resourcesdocker system prune -a --volumes
# 2. Clean specific resourcesdocker container prune --filter "until=24h"docker image prune -a --filter "until=24h"docker volume prunedocker builder prune
# 3. Remove all stopped containersdocker rm $(docker ps -a -q)
# 4. Remove dangling imagesdocker rmi $(docker images -f "dangling=true" -q)
# 5. Truncate logstruncate -s 0 /var/lib/docker/containers/*/*.log
# 6. Set up log rotation in daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
5. Scenario: Application needs to share data between containers
Situation: You have a web app and a backup service that need to access the same files.
Solution with Volumes:
# Create shared volumedocker volume create shared-data
# Web container (write)docker run -d \ --name web \ --mount type=volume,source=shared-data,target=/data \ webapp
# Backup container (read-only)docker run -d \ --name backup \ --mount type=volume,source=shared-data,target=/backup-data,readonly \ backup-service
# Verify permissionsdocker exec web touch /data/test.txtdocker exec backup ls -la /backup-dataDocker Compose:
version:'3.8'
services:web:image: webappvolumes:- shared-data:/data
backup:image: backup-servicevolumes:- shared-data:/backup-data:ro
volumes:
  shared-data:
6. Scenario: Container running but not responding to requests
Situation: Your container is running but not responding to HTTP requests.
Debugging:
# 1. Check if container is actually runningdocker ps
# 2. Check logsdocker logs --tail 50 container
# 3. Check port mappingdocker port containerdocker inspect container | grep -A 5 PortBindings
# 4. Test inside containerdocker exec container curl localhost:80docker exec container netstat -tulpn
# 5. Check process insidedocker exec container ps aux
# 6. Check health status
docker inspect container | jq '.[0].State.Health'
Solutions:
- Application not binding to all interfaces (0.0.0.0) - see the sketch after this list
- Port mapping incorrect
- Firewall blocking
- Application crashed but container still running
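For the first cause, the quickest way to confirm it is to compare a server bound to 127.0.0.1 with one bound to 0.0.0.0 behind the same kind of port mapping - a minimal sketch using Python's built-in HTTP server as a stand-in for your app:

```bash
# Bound to 127.0.0.1 inside the container: the published port does NOT respond
docker run -d --name bad -p 8080:8000 python:3.11-slim \
  python -m http.server 8000 --bind 127.0.0.1
sleep 2
curl -m 2 localhost:8080 || echo "no response"

# Bound to 0.0.0.0: the same mapping works
docker run -d --name good -p 8081:8000 python:3.11-slim \
  python -m http.server 8000 --bind 0.0.0.0
sleep 2
curl -m 2 localhost:8081
```
The fix is in the application's listen address, not in Docker: make it listen on 0.0.0.0.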
7. Scenario: Need to run one-time database migration
Situation: You need to run a database migration script before starting the main application.
Solution with Init Container:
version:'3.8'
services:db:image: postgres:13environment:POSTGRES_DB: appPOSTGRES_USER: userPOSTGRES_PASSWORD: passvolumes:- db-data:/var/lib/postgresql/data
migration:image: myappdepends_on:- dbenvironment:DB_HOST: dbcommand:["npm","run","migrate"]restart:"no"
app:image: myappdepends_on:- dbenvironment:DB_HOST: dbcommand:["npm","start"]Manual Approach:
# Run migration containerdocker run --rm \ --network app-network \ -e DB_HOST=db \ myapp npm run migrate
# Then start main container
docker start app
8. Scenario: Container has incorrect timezone
Situation: Your application logs show wrong timestamps due to UTC timezone.
Solutions:
# 1. Mount host's /etc/localtimedocker run -v /etc/localtime:/etc/localtime:ro myapp
# 2. Set environment variabledocker run -e TZ=America/New_York myapp
# 3. In Dockerfile
FROM alpine
RUN apk add --no-cache tzdata
ENV TZ=America/New_York
# 4. Docker Compose
services:
  app:
    environment:
      - TZ=America/New_York
    volumes:
      - /etc/localtime:/etc/localtime:ro
9. Scenario: Multiple containers need to share environment variables
Situation: You have multiple containers that need to share the same configuration (database credentials, API keys).
Solutions:
1. Use .env file:
# .env
DB_HOST=postgres
DB_USER=admin
DB_PASSWORD=secret
API_KEY=123456

# docker-compose.yml
version: '3.8'
services:
  web:
    image: myapp
    env_file:
      - .env
  worker:
    image: myapp-worker
    env_file:
      - .env
2. Docker secrets for sensitive data:
# Create secrets (docker secret requires swarm mode: docker swarm init)
echo "secret" | docker secret create db_password -
docker secret create api_key ./api_key.txt
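Creating a secret only stores it; services still have to mount it. A sketch of a stack file that consumes the secret as a file under /run/secrets/ (service and image names are placeholders; the *_FILE convention is supported by many, but not all, images):
version: '3.8'
services:
  web:
    image: myapp
    secrets:
      - db_password
    environment:
      DB_PASSWORD_FILE: /run/secrets/db_password
secrets:
  db_password:
    external: true
# Deploy with: docker stack deploy -c docker-compose.yml mystack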
3. Use Docker config for non-sensitive data:
# Create config
docker config create app_config ./config.json
10. Scenario: Docker daemon not starting after reboot
Situation: After a server reboot, the Docker daemon fails to start.
Debugging:
# Check service status
systemctl status docker
# Check logs
journalctl -u docker.service -n 50
# Check daemon config
cat /etc/docker/daemon.json
# Test daemon manually
dockerd --debug
# Check for corrupted overlay
ls -la /var/lib/docker/overlay2/
Common Fixes:
# 1. Clean corrupted overlay (destructive: wipes all local images and containers)
sudo systemctl stop docker
sudo rm -rf /var/lib/docker/overlay2/*
sudo systemctl start docker
# 2. Fix permissions (chmod 666 is very permissive; prefer group access via the docker group)
sudo chown root:root /var/run/docker.sock
sudo chmod 666 /var/run/docker.sock
# 3. Check disk space
df -h /var/lib/docker
# 4. Re-enable service
sudo systemctl enable docker
11. Scenario: Need to debug application inside container
Situation: Your application behaves differently inside the container than it does locally.
Debugging Approach:
# 1. Get shell access
docker exec -it container /bin/bash
docker exec -it container sh
# 2. Check environment variables
docker exec container env | sort
# 3. Copy files out for analysis
docker cp container:/app/logs ./logs
# 4. Run with debug flags (publish 9229 so a debugger on the host can attach)
docker run -it --rm \
  -e DEBUG=true \
  -p 9229:9229 \
  -v $(pwd):/app \
  myapp node --inspect-brk=0.0.0.0:9229 app.js
# 5. Compare with local environment
docker run -it --rm \
  -v $(pwd):/app \
  -w /app \
  node:18 /bin/bash
# Then run your app
12. Scenario: Container running out of inodes
Situation: Your container keeps failing with “no space left on device” but disk usage is low.
Analysis:
# Check inode usage
df -i
# Check inside container
docker exec container df -i
# Find directories with many small files
docker exec container find / -type f | cut -d/ -f2 | sort | uniq -c | sort -nr
Solutions:
# Clean up old logs
docker exec container find /var/log -name "*.log" -mtime +7 -delete
# Clean Docker's overlay inodes
docker system prune -a --volumes
# Limit the container's filesystem size (only with storage drivers that support it,
# e.g. overlay2 on an xfs backing filesystem with pquota)
docker run --storage-opt size=10G myapp
13. Scenario: Need to run Docker commands without sudo
Situation: You want to run Docker commands without sudo for development.
Solution:
# Add user to docker group
sudo usermod -aG docker $USER
# Verify group membership
groups $USER
# Logout and login again, or run
newgrp docker
# Test
docker ps
# Security implications: membership in the docker group gives root-equivalent access to the host
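If root-equivalent access is a concern, rootless Docker is an alternative worth considering: the daemon itself runs under your user. A rough sketch on a systemd-based distro, assuming the docker-ce-rootless-extras package is installed:
# Run the setup tool as your normal (non-root) user
dockerd-rootless-setuptool.sh install
# Point the CLI at the per-user daemon and start it
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
systemctl --user enable --now docker
docker ps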
14. Scenario: Container can’t write to mounted volume
Situation: Your container can’t write to a bind-mounted volume due to permission issues.
Debugging:
# Check permissions on host
ls -la /host/path
# Check user inside container
docker exec container id
docker exec container whoami
# Check mount permissions
docker inspect container | grep -A 10 Mounts
Solutions:
# 1. Set permissions on host
sudo chown -R 1000:1000 /host/path   # Match container user
# 2. Run container with specific user
docker run -u 1000:1000 -v /host/path:/data myapp
# 3. Use Docker volumes (better permissions handling)
docker volume create mydata
docker run -v mydata:/data myapp
# 4. Set SELinux context (if SELinux enabled)
chcon -Rt svirt_sandbox_file_t /host/path
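When you control the image, another option is to bake in a non-root user whose UID matches the host directory owner, so no chown or -u flag is needed at run time. A minimal sketch (UID 1000 and the name appuser are assumptions; match them to ls -ln on the host path):
FROM alpine:3.19
# Create a user whose UID matches the host directory owner
RUN addgroup -g 1000 appgroup && adduser -D -u 1000 -G appgroup appuser
USER appuser
WORKDIR /data
CMD ["sh", "-c", "touch /data/write-test && ls -l /data"]
# Build and test: docker build -t uid-test . && docker run --rm -v /host/path:/data uid-test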
15. Scenario: Need to limit network bandwidth for container
Situation: A container is consuming too much network bandwidth, affecting other services.
Solution with tc (traffic control):
# Create a Docker network with a predictable bridge name (the actual limiting is done with tc below)
docker network create \
  --driver bridge \
  --opt com.docker.network.bridge.name=br-limit \
  --opt com.docker.network.driver.mtu=1500 \
  limited-net
# Get the container's PID (used to reach its network namespace)
CONTAINER_PID=$(docker inspect -f '{{.State.Pid}}' container)
# Expose the container's network namespace to ip netns
sudo mkdir -p /var/run/netns
sudo ln -s /proc/$CONTAINER_PID/ns/net /var/run/netns/$CONTAINER_PID
# Limit to 1 Mbps on the container's host-side veth interface (veth-xxx is a placeholder)
sudo tc qdisc add dev veth-xxx root handle 1: htb default 30
sudo tc class add dev veth-xxx parent 1: classid 1:1 htb rate 1mbit
sudo tc filter add dev veth-xxx parent 1: protocol ip prio 1 u32 match ip dst 0.0.0.0/0 flowid 1:1
16. Scenario: Building images for multiple architectures
Situation: You need to build images for both amd64 and arm64 architectures.
Solution with Buildx:
# Create builder instance
docker buildx create --name multiarch-builder --use
docker buildx inspect --bootstrap
# Build for multiple architectures
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myregistry/myapp:latest \
  --push \
  .
# Check supported platforms
docker buildx ls
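On an amd64 host, the arm64 half of the build typically runs under QEMU user-mode emulation. If linux/arm64 is missing from docker buildx ls, registering binfmt handlers first usually fixes it; the tonistiigi/binfmt helper image is one widely used way to do that:
# Register QEMU emulators for foreign architectures (re-run after a reboot)
docker run --privileged --rm tonistiigi/binfmt --install all
# linux/arm64 should now appear in the builder's platform list
docker buildx ls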
17. Scenario: Need to run GUI applications in Docker
Situation: You need to run a GUI application (like Firefox, Chrome) in Docker.
Solution:
# Allow X11 connections
xhost +local:docker
# Run GUI container
docker run -it --rm \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
  -v $HOME/.Xauthority:/root/.Xauthority:ro \
  --network host \
  firefox
# For Wayland
docker run -it --rm \
  -e WAYLAND_DISPLAY=$WAYLAND_DISPLAY \
  -v $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY:/tmp/$WAYLAND_DISPLAY:rw \
  --network host \
  firefox
18. Scenario: Secure Docker API access
Situation: You need to expose the Docker API for remote management securely.
Solution with TLS:
# Generate CA key and certificate
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
# Generate server key and certificate
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=$HOST" -new -key server-key.pem -out server.csr
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem
# Configure Docker daemon (daemon.json); tlsverify requires clients to present a CA-signed certificate
{
  "hosts": ["tcp://0.0.0.0:2376", "unix:///var/run/docker.sock"],
  "tls": true,
  "tlsverify": true,
  "tlscert": "/etc/docker/server-cert.pem",
  "tlskey": "/etc/docker/server-key.pem",
  "tlscacert": "/etc/docker/ca.pem"
}
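For mutual TLS the client also needs a key and a CA-signed certificate, and the CLI must be told to verify. A sketch that mirrors the server steps above (file names follow Docker's documented convention):
# Generate client key and certificate signed by the same CA
openssl genrsa -out key.pem 4096
openssl req -subj '/CN=client' -new -key key.pem -out client.csr
echo extendedKeyUsage = clientAuth > extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile-client.cnf
# Connect from the client machine
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H tcp://$HOST:2376 ps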
19. Scenario: Rollback failed deployment
Situation: A deployment failed and you need to roll back quickly.
Solutions:
Docker Swarm:
# Check service status
docker service ps web --no-trunc
# Rollback to previous version
docker service rollback web
# Inspect the service (shows update/rollback status)
docker service inspect web --pretty
Docker Compose:
# Keep previous images tagged
docker tag myapp:latest myapp:backup
docker tag myapp:v2.0 myapp:latest
docker compose up -d
# If it fails, roll back
docker tag myapp:backup myapp:latest
docker compose up -d
#!/bin/bash
NEW_IMAGE="myapp:v2.0"
# Record the image the service is currently running
CURRENT_IMAGE=$(docker service inspect -f '{{.Spec.TaskTemplate.ContainerSpec.Image}}' web)
# Deploy new version
docker service update --image $NEW_IMAGE web
# Wait and check health
sleep 30
if [ $(docker service ps web --filter "desired-state=running" --format "{{.Image}}" | grep $NEW_IMAGE | wc -l) -eq 0 ]; then
  echo "Deployment failed, rolling back..."
  docker service update --image $CURRENT_IMAGE web
fi
20. Scenario: Cache Docker builds in CI/CD
Situation: The CI/CD pipeline takes too long because Docker images are built from scratch each time.
Solutions:
1. Use BuildKit cache in GitHub Actions:
# GitHub Actions
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v2
- name: Build and push
  uses: docker/build-push-action@v4
  with:
    push: true
    tags: myapp:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max
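Independent of where the layer cache lives, BuildKit cache mounts keep package-manager caches warm across builds without baking them into the image. A hedged Dockerfile sketch (the npm cache path /root/.npm is the usual default; adjust for your toolchain):
# syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
# Reuse npm's download cache between builds instead of re-downloading everything
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
CMD ["npm", "start"]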
2. Layer caching with registry:
# Pull latest version for cache
docker pull myregistry/myapp:latest || true
# Build with cache (with BuildKit, the cached image must have been built with BUILDKIT_INLINE_CACHE=1)
docker build \
  --cache-from myregistry/myapp:latest \
  -t myregistry/myapp:latest \
  .
3. Docker layer caching:
# GitLab CI
variables:
  DOCKER_BUILDKIT: 1
build:
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
4. Use BuildKit registry cache:
# Build with remote cache
docker buildx build \
  --cache-to type=registry,ref=myregistry/cache:build \
  --cache-from type=registry,ref=myregistry/cache:build \
  -t myapp:latest \
  .
This comprehensive guide covers the essential Docker concepts, commands, and troubleshooting scenarios that will help you succeed in Docker interviews and real-world implementations.




