Thursday, April 2, 2026

Top 50 Docker & Kubernetes Interview Questions and Answers – Beginner to Advanced


📅 Published: April 2026  |  ⏱ Reading Time: ~22 minutes  |  🏷️ Docker · Kubernetes · DevOps · Containers · Interview

📌 TL;DR: This article covers 50 of the most frequently asked Docker and Kubernetes interview questions for 2026 — from Docker basics like images, containers, and Dockerfiles, to Kubernetes concepts like pods, deployments, services, ingress, ConfigMaps, Helm charts, and production best practices. Each question comes with a clear explanation, backed by real YAML/CLI examples and architecture diagrams throughout. Whether you are a developer, DevOps engineer, or SRE preparing for an interview, this guide covers everything you need.

Introduction

Containerization has fundamentally changed how software is built, shipped, and run. Docker and Kubernetes are now standard tools in almost every tech company's infrastructure stack — and knowledge of them is increasingly required even for developers, not just DevOps engineers.

According to the 2024 Stack Overflow Developer Survey, Docker is the #1 most used tool among developers for the third consecutive year, and Kubernetes adoption continues to grow rapidly in enterprise environments. If you are interviewing for any backend, DevOps, SRE, or cloud engineering role, expect these questions.

This guide covers 50 carefully selected questions with detailed answers, real YAML configurations, CLI commands, and architecture diagrams. Difficulty labels guide you through what level each question targets.

💡 How to use this guide: Install Docker Desktop and minikube locally and run every command yourself. Hands-on experience with these tools is what separates candidates who just read about containers from those who actually work with them.

Section 1 – Docker Fundamentals

These questions establish whether you understand what containers are, why they exist, and how Docker's core components work together. Expect at least 5–6 of these in any DevOps interview.
Q1. What is Docker and why is it used? Beginner

Docker is an open-source platform that enables developers to package applications and their dependencies into lightweight, portable units called containers. Containers run consistently across any environment — development, testing, staging, or production — regardless of the underlying host OS.

Why Docker is used:

  • Eliminates "works on my machine" problems — the container includes everything the app needs
  • Fast startup — containers start in milliseconds vs. minutes for VMs
  • Resource efficiency — containers share the host OS kernel, using far less memory than VMs
  • Consistency — same container runs identically in dev, CI/CD, and production
  • Microservices enablement — each service runs in its own container, independently deployable
Q2. What is the difference between a Container and a Virtual Machine? Beginner
VM Architecture:

┌──────────────────────────────────────────┐
│              Host Hardware               │
├──────────────────────────────────────────┤
│         Host OS (Linux/Windows)          │
├──────────────────────────────────────────┤
│     Hypervisor (VMware/VirtualBox)       │
├────────────┬─────────────┬───────────────┤
│  Guest OS  │  Guest OS   │   Guest OS    │
│  (Linux)   │  (Windows)  │   (Linux)     │
│  App + Lib │  App + Lib  │   App + Lib   │
└────────────┴─────────────┴───────────────┘

Container Architecture:

┌──────────────────────────────────────────┐
│              Host Hardware               │
├──────────────────────────────────────────┤
│         Host OS (Linux/Windows)          │
├──────────────────────────────────────────┤
│              Docker Engine               │
├────────────┬─────────────┬───────────────┤
│ Container  │  Container  │  Container    │
│ App + Libs │  App + Libs │  App + Libs   │
│ (no OS!)   │  (no OS!)   │  (no OS!)     │
└────────────┴─────────────┴───────────────┘
| Feature      | Virtual Machine            | Container                       |
|--------------|----------------------------|---------------------------------|
| OS           | Full guest OS per VM       | Shares host OS kernel           |
| Startup time | Minutes                    | Milliseconds                    |
| Size         | GBs (includes full OS)     | MBs (app + libs only)           |
| Isolation    | Strong (hardware-level)    | Process-level (namespaces)      |
| Performance  | Near-native, some overhead | Near-native, very low overhead  |
| Portability  | Less portable (OS-dependent) | Highly portable               |
Q3. What is the difference between a Docker Image and a Docker Container? Beginner

This is one of the most commonly asked Docker basics questions. The relationship is similar to a class and an object in OOP:

  • Docker Image — a read-only, immutable template that defines what the container will contain. It is built from a Dockerfile and stored in a registry. Think of it as a blueprint.
  • Docker Container — a running instance of an image. You can run multiple containers from the same image, each isolated from the others. Think of it as the house built from the blueprint.
# Pull an image from Docker Hub
docker pull nginx:latest

# Run a container from that image
docker run -d -p 8080:80 --name my-nginx nginx:latest

# Same image, different container
docker run -d -p 8081:80 --name my-nginx-2 nginx:latest

# List running containers
docker ps

# List all images
docker images
Q4. What is Docker Hub and what are alternatives? Beginner

Docker Hub is the default public registry for Docker images — a cloud-based repository where you can find, store, and share container images. When you run docker pull nginx, Docker fetches the image from Docker Hub by default.

Popular alternatives:

| Registry                  | Provider           | Best For                             |
|---------------------------|--------------------|--------------------------------------|
| Docker Hub                | Docker             | Public images, open source projects  |
| Amazon ECR                | AWS                | AWS-deployed applications            |
| Google Artifact Registry  | GCP                | GCP/GKE deployments                  |
| Azure Container Registry  | Microsoft          | Azure/AKS deployments                |
| GitHub Container Registry | GitHub             | Open source, GitHub Actions CI/CD    |
| Harbor                    | CNCF (self-hosted) | On-premises, private registries      |
Q5. What are Docker Volumes and why are they important? Beginner

By default, data written inside a container is ephemeral — it is lost when the container stops or is removed. Docker Volumes provide persistent storage that exists outside the container lifecycle.

Three types of storage in Docker:

| Type       | Description                                        | Best For                                    |
|------------|----------------------------------------------------|---------------------------------------------|
| Volume     | Managed by Docker, stored in Docker's storage area | Databases, persistent app data (recommended) |
| Bind Mount | Maps a host directory directly into the container  | Development (live code reload)              |
| tmpfs      | Stored in host memory only, not on disk            | Sensitive data, temporary files             |
# Create a named volume
docker volume create my-db-data

# Run a MySQL container with persistent volume
docker run -d \
  --name mysql-db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=myapp \
  -v my-db-data:/var/lib/mysql \
  mysql:8.0

# Bind mount for development (live reload)
docker run -d \
  --name my-app \
  -v $(pwd)/src:/app/src \
  -p 3000:3000 \
  my-node-app

# List volumes
docker volume ls

# Inspect a volume
docker volume inspect my-db-data
💡 Always use named volumes for databases in production. Never store critical data only inside a container — it will be lost when the container is removed.
Q6. What are Docker Networks and what are the network drivers? Intermediate

Docker networks allow containers to communicate with each other and with external systems. Docker provides several network drivers:

| Driver  | Description                                                          | Use Case                                  |
|---------|----------------------------------------------------------------------|-------------------------------------------|
| bridge  | Default for standalone containers; creates a private internal network | Most single-host container communication |
| host    | Container shares the host's network stack directly                   | High performance, no network isolation needed |
| none    | Container has no network access                                      | Maximum isolation, batch jobs             |
| overlay | Spans multiple Docker hosts (Docker Swarm)                           | Multi-host distributed applications       |
| macvlan | Container gets its own MAC address on the network                    | Legacy apps needing direct LAN access     |
# Create a custom bridge network
docker network create my-app-network

# Connect containers on the same network — they can reach each other by name
docker run -d --name api-server --network my-app-network my-api
docker run -d --name db-server --network my-app-network mysql:8.0

# Inside api-server, you can reach the DB by its container name:
# mysql -h db-server -u root -p

# List networks
docker network ls

# Inspect a network
docker network inspect my-app-network
Q7. What is the difference between EXPOSE and -p in Docker? Beginner
  • EXPOSE in a Dockerfile — documentation only. It declares which port the container intends to listen on. It does NOT actually publish the port to the host. Other containers on the same network can still reach it.
  • -p host:container at docker run — actually publishes the port, mapping a host port to a container port so external traffic can reach it.
# Dockerfile — EXPOSE is documentation only
FROM node:20-alpine
EXPOSE 3000
CMD ["node", "server.js"]

# At runtime, -p actually opens the port to the host
docker run -p 8080:3000 my-app
# Now host port 8080 → container port 3000

# -P (uppercase) auto-maps all EXPOSE'd ports to random host ports
docker run -P my-app
docker ps  # shows the random port assigned
Q8. What are the most important Docker CLI commands? Beginner
# ── Images ──────────────────────────────────────────
docker pull nginx:latest          # Download image from registry
docker images                     # List local images
docker build -t my-app:1.0 .      # Build image from Dockerfile
docker tag my-app:1.0 repo/my-app:1.0  # Tag image for pushing
docker push repo/my-app:1.0       # Push to registry
docker rmi my-app:1.0             # Remove image
docker image prune                # Remove unused images

# ── Containers ──────────────────────────────────────
docker run -d -p 8080:80 --name web nginx    # Run container (detached)
docker ps                         # List running containers
docker ps -a                      # List all containers (including stopped)
docker stop web                   # Gracefully stop container
docker start web                  # Start stopped container
docker restart web                # Restart container
docker rm web                     # Remove container
docker logs web -f                # Follow container logs
docker exec -it web bash          # Open shell inside running container
docker inspect web                # Detailed container info (JSON)
docker stats                      # Live resource usage

# ── System ──────────────────────────────────────────
docker system prune               # Remove all unused resources
docker system df                  # Disk usage by Docker
Q9. What is the difference between docker stop and docker kill? Beginner
  • docker stop — sends a SIGTERM signal to the container's main process, giving it time (default 10 seconds) to gracefully shut down. If it doesn't stop in time, sends SIGKILL.
  • docker kill — immediately sends SIGKILL (or a specified signal), terminating the process immediately without a graceful shutdown.
💡 Always prefer docker stop in production. Applications should handle SIGTERM to close database connections, finish in-flight requests, and flush logs before exiting.
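The graceful-shutdown behavior can be demonstrated with a plain shell trap — a minimal sketch (no Docker needed) of how an entrypoint script should react to the SIGTERM that docker stop delivers:

```shell
# Sketch of a SIGTERM-aware entrypoint. `docker stop` sends SIGTERM to
# PID 1, so a handler like this gets a chance to clean up. `docker kill`
# sends SIGKILL, which cannot be trapped — cleanup would never run.
got_term=0
cleanup() {
  got_term=1
  echo "SIGTERM received: closing connections, flushing logs"
}
trap cleanup TERM

echo "app running"
kill -TERM $$            # simulate what `docker stop` does
echo "graceful shutdown complete (got_term=$got_term)"
```

When the signal arrives, the trap handler runs before the next command, so the final line reports got_term=1 — exactly the window a real application uses to finish in-flight requests.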
Q10. What is Docker Layer Caching and how does it affect build performance? Intermediate

Docker builds images in layers — each instruction in a Dockerfile creates a new layer. Docker caches these layers and reuses them in subsequent builds if the instruction and its context haven't changed. This dramatically speeds up builds.

The cache is invalidated for a layer and all layers after it whenever the instruction or its input changes. This means layer order in your Dockerfile matters enormously for build performance.

# ❌ BAD — copies all source code before installing dependencies
# Any code change invalidates the npm install layer
FROM node:20-alpine
WORKDIR /app
COPY . .                    # ← cache busted on every code change
RUN npm install             # ← runs npm install every single time!
CMD ["node", "server.js"]

# ✅ GOOD — install dependencies first (they change rarely)
# Only re-runs npm install when package.json changes
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./       # ← only copy package files first
RUN npm install             # ← cached unless package.json changes
COPY . .                    # ← cache busted on code change (cheap operation)
CMD ["node", "server.js"]
💡 Rule: Put instructions that change most frequently (your source code) at the bottom of the Dockerfile. Put slow, rarely-changing steps (dependency installation) near the top.
Q11. What is a Docker Registry and how do you push and pull images? Beginner

A Docker Registry is a storage and distribution system for Docker images. Docker Hub is the default public registry. Here is the full workflow of building, tagging, pushing, and pulling an image:

# 1. Build your image
docker build -t my-aspnet-app:1.0 .

# 2. Tag it with your registry and repository
docker tag my-aspnet-app:1.0 yourusername/my-aspnet-app:1.0

# 3. Log in to Docker Hub
docker login

# 4. Push the image
docker push yourusername/my-aspnet-app:1.0

# 5. On another machine, pull and run it
docker pull yourusername/my-aspnet-app:1.0
docker run -d -p 8080:80 yourusername/my-aspnet-app:1.0

# For a private registry (e.g., AWS ECR):
aws ecr get-login-password | docker login --username AWS \
  --password-stdin 123456789.dkr.ecr.ap-southeast-1.amazonaws.com
docker tag my-app:1.0 123456789.dkr.ecr.ap-southeast-1.amazonaws.com/my-app:1.0
docker push 123456789.dkr.ecr.ap-southeast-1.amazonaws.com/my-app:1.0
Q12. What is docker exec used for and how is it different from docker attach? Beginner
  • docker exec — runs a new command inside an already running container. This is what you use to open a shell, run a diagnostic command, or inspect files inside a container.
  • docker attach — attaches your terminal to the container's main process (PID 1). Detaching carelessly (e.g., pressing Ctrl+C) can signal that process and stop the container.
# Open an interactive bash shell inside a running container
docker exec -it my-container bash

# Or sh for Alpine-based containers (no bash)
docker exec -it my-container sh

# Run a one-off command
docker exec my-container cat /etc/nginx/nginx.conf

# Check environment variables
docker exec my-container env

# Inspect file contents
docker exec my-container ls -la /app
💡 Use docker exec for debugging and diagnostics in production. Use docker attach only when you specifically need to interact with the main process.

Section 2 – Dockerfile & Image Optimization

Writing efficient Dockerfiles is a skill that separates beginners from professionals. These questions cover best practices that directly impact image size, build speed, and security.
Q13. What is a Dockerfile and what are its key instructions? Beginner

A Dockerfile is a text file containing instructions to build a Docker image. Each instruction creates a layer in the image.

| Instruction | Purpose                                               | Example                                        |
|-------------|-------------------------------------------------------|------------------------------------------------|
| FROM        | Base image to build from                              | FROM node:20-alpine                            |
| WORKDIR     | Set working directory                                 | WORKDIR /app                                   |
| COPY        | Copy files from host to image                         | COPY . .                                       |
| ADD         | Like COPY, but supports URLs and tar extraction       | ADD app.tar.gz /app                            |
| RUN         | Execute a command during build                        | RUN npm install                                |
| ENV         | Set environment variables                             | ENV NODE_ENV=production                        |
| ARG         | Build-time variable (not in final image)              | ARG BUILD_VERSION                              |
| EXPOSE      | Document the port the container uses                  | EXPOSE 3000                                    |
| VOLUME      | Create mount point for volumes                        | VOLUME /data                                   |
| CMD         | Default command when container starts (overridable)   | CMD ["node", "server.js"]                      |
| ENTRYPOINT  | Fixed command that always runs (not easily overridden) | ENTRYPOINT ["dotnet", "app.dll"]              |
| HEALTHCHECK | Define a health check command                         | HEALTHCHECK CMD curl -f http://localhost/health |
| USER        | Set user for subsequent instructions                  | USER appuser                                   |
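Tying these together, a minimal Dockerfile for a Node.js service might look like the sketch below (the image tag and file names such as server.js are illustrative):

```dockerfile
# Minimal example combining the common instructions above
FROM node:20-alpine           # base image
WORKDIR /app                  # working directory for later instructions
COPY package*.json ./         # copy dependency manifests first (cache-friendly)
RUN npm install               # install dependencies at build time
COPY . .                      # copy application source
ENV NODE_ENV=production       # runtime environment variable
EXPOSE 3000                   # document the listening port
USER node                     # drop root privileges
CMD ["node", "server.js"]     # default startup command (overridable)
```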
Q14. What is the difference between CMD and ENTRYPOINT? Intermediate
| Feature                | CMD                                  | ENTRYPOINT                                |
|------------------------|--------------------------------------|-------------------------------------------|
| Overridable at runtime | Yes (with docker run image <cmd>)    | Not easily (requires --entrypoint flag)   |
| Purpose                | Default command or arguments         | Fixed executable that always runs         |
| Combined use           | Provides default args to ENTRYPOINT  | The executable; CMD supplies default args |
# CMD only — full command, easily overridden
FROM node:20-alpine
CMD ["node", "server.js"]
# docker run my-image                → runs: node server.js
# docker run my-image node other.js  → runs: node other.js (override)

# ENTRYPOINT only — always runs node
FROM node:20-alpine
ENTRYPOINT ["node"]
# docker run my-image server.js  → runs: node server.js
# docker run my-image other.js   → runs: node other.js

# ENTRYPOINT + CMD — best pattern for flexible containers
FROM node:20-alpine
ENTRYPOINT ["node"]
CMD ["server.js"]             # default arg, overridable
# docker run my-image          → runs: node server.js
# docker run my-image other.js → runs: node other.js
Q15. What is Multi-Stage Build in Docker and why is it important? Intermediate

Multi-stage builds allow you to use multiple FROM instructions in a single Dockerfile. Each stage can copy artifacts from the previous one, letting you compile/build in a full environment and then copy only the output into a lean final image — drastically reducing image size.

# Multi-stage build for an ASP.NET Core application
# ── Stage 1: Build ───────────────────────────────────
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src

# Copy and restore dependencies
COPY *.csproj ./
RUN dotnet restore

# Copy source and build
COPY . .
RUN dotnet publish -c Release -o /app/publish

# ── Stage 2: Runtime ─────────────────────────────────
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS runtime
WORKDIR /app

# Only copy the published output from the build stage
COPY --from=build /app/publish .

EXPOSE 8080
ENTRYPOINT ["dotnet", "MyApp.dll"]

# Result:
# Build stage image: ~800MB (full SDK)
# Final image: ~200MB (runtime only — no SDK, no source code)
# Multi-stage for a Node.js app
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev         # production dependencies only

FROM node:20-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .                      # .dockerignore must exclude node_modules
USER node
CMD ["node", "server.js"]
💡 Multi-stage builds are a must-know for production. They reduce attack surface, improve security (no build tools in production image), and cut image size by 60–80% in most cases.
Q16. What is a .dockerignore file? Beginner

A .dockerignore file tells Docker which files and directories to exclude when building an image — similar to .gitignore for Git. This reduces build context size, speeds up builds, and prevents sensitive files from being copied into images.

# .dockerignore for a Node.js project
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.env.*
*.test.js
coverage/
.nyc_output
dist/
.DS_Store

# .dockerignore for an ASP.NET Core project
bin/
obj/
*.user
.vs/
.vscode/
*.md
.git/
.gitignore
**/*.env
⚠️ Never forget to add .env files to .dockerignore. Secrets and API keys baked into a Docker image are a serious security vulnerability, especially if pushed to a public registry.
Q17. How do you reduce Docker image size? Intermediate

Large images slow down CI/CD pipelines, increase storage costs, and expand the attack surface. Key techniques to reduce size:

  • Use Alpine-based images — node:20-alpine is ~130MB vs node:20 at ~1.1GB
  • Use multi-stage builds — only copy production artifacts to the final stage
  • Combine RUN commands — each RUN creates a layer; chain commands to minimize layers
  • Clean up in the same RUN — removing files in a subsequent RUN doesn't actually save space (the layer already has the data)
  • Use .dockerignore — exclude unnecessary files from the build context
  • Use specific base image tags — avoid :latest, use exact versions
# ❌ BAD — each RUN creates a separate layer
RUN apt-get update
RUN apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*   # this does NOT reduce size — too late!

# ✅ GOOD — single layer, cleanup in same command
RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*   # cleanup in same layer = smaller image

# ✅ Alpine is much smaller
FROM node:20-alpine    # ~130MB
# vs
FROM node:20           # ~1.1GB
Q18. What is the difference between COPY and ADD in a Dockerfile? Beginner

Both copy files into the image, but ADD has two extra capabilities that can cause unexpected behavior:

  • ADD can accept a URL as the source and download it directly
  • ADD automatically extracts tar archives (.tar, .tar.gz, etc.)
💡 Best Practice: Always use COPY unless you specifically need ADD's tar extraction or URL feature. COPY is explicit and predictable — you always know exactly what it does.
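The behavioral difference in a short sketch (paths, archive name, and URL are hypothetical):

```dockerfile
# COPY — copies exactly what you name, nothing more
COPY config/ /app/config/

# ADD — a local tar archive is extracted automatically into the target
ADD vendor.tar.gz /opt/vendor/     # /opt/vendor/ gets the archive's contents

# ADD — can download from a URL (note: remote files are NOT auto-extracted)
ADD https://example.com/file.txt /tmp/file.txt
```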
Q19. How do you write a production-ready Dockerfile for an ASP.NET Core application? Intermediate
# Production-ready ASP.NET Core 8 Dockerfile
# ── Stage 1: Build ───────────────────────────────────
FROM mcr.microsoft.com/dotnet/sdk:8.0-alpine AS build
WORKDIR /src

# Copy project files and restore (cached layer)
COPY ["MyApp/MyApp.csproj", "MyApp/"]
RUN dotnet restore "MyApp/MyApp.csproj"

# Copy everything and build
COPY . .
WORKDIR /src/MyApp
RUN dotnet build "MyApp.csproj" -c Release -o /app/build

# ── Stage 2: Publish ─────────────────────────────────
FROM build AS publish
RUN dotnet publish "MyApp.csproj" -c Release -o /app/publish \
    --no-restore \
    /p:UseAppHost=false

# ── Stage 3: Final Runtime ───────────────────────────
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine AS final
WORKDIR /app

# Security: create and use non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

# Copy only published output
COPY --from=publish --chown=appuser:appgroup /app/publish .

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:8080/health || exit 1

EXPOSE 8080
ENV ASPNETCORE_URLS=http://+:8080
ENTRYPOINT ["dotnet", "MyApp.dll"]

Section 3 – Docker Compose & Networking

Q20. What is Docker Compose and when would you use it? Beginner

Docker Compose is a tool for defining and running multi-container applications. You describe your entire application stack (web server, database, cache, message queue) in a single docker-compose.yml file and start everything with one command.

version: "3.9"

services:
  # ASP.NET Core API
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ConnectionStrings__Default=Server=db;Database=myapp;User=sa;Password=YourPass123!
    depends_on:
      db:
        condition: service_healthy
    networks:
      - app-network
    restart: unless-stopped

  # SQL Server database
  db:
    image: mcr.microsoft.com/mssql/server:2022-latest
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=YourPass123!
      # note: the mssql/server image does not auto-create databases via env vars
    volumes:
      - db-data:/var/opt/mssql
    networks:
      - app-network
    healthcheck:
      test: /opt/mssql-tools18/bin/sqlcmd -C -S localhost -U sa -P "YourPass123!" -Q "SELECT 1"
      interval: 10s
      retries: 5
      start_period: 30s

  # Redis cache
  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data
    networks:
      - app-network
    command: redis-server --appendonly yes

volumes:
  db-data:
  redis-data:

networks:
  app-network:
    driver: bridge
# Start all services
docker compose up -d

# View logs
docker compose logs -f api

# Stop all services
docker compose down

# Stop and remove volumes
docker compose down -v

# Rebuild and restart
docker compose up -d --build
Q21. What is the difference between docker compose up and docker compose start? Beginner
  • docker compose up — creates and starts containers. If they don't exist, it creates them. Pulls missing images and builds if needed.
  • docker compose start — starts already-created but stopped containers. Will not create new containers.
  • docker compose down — stops and removes containers, networks (but not volumes by default)
  • docker compose stop — stops containers but does NOT remove them
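For quick reference, the four lifecycle commands side by side (run against the compose project from Q20):

```shell
docker compose up -d       # create (if needed) and start all services
docker compose stop        # stop containers, but keep them for later
docker compose start       # restart previously created, stopped containers
docker compose down        # stop and remove containers + networks
docker compose down -v     # additionally remove named volumes (destructive!)
```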

Section 4 – Kubernetes Fundamentals

Kubernetes (K8s) is the industry standard for container orchestration. These questions cover the core building blocks every Kubernetes user must know deeply.
Q22. What is Kubernetes and what problems does it solve? Beginner

Kubernetes is an open-source container orchestration platform originally developed by Google. It automates the deployment, scaling, and management of containerized applications across clusters of machines.

Problems Kubernetes solves:

  • Self-healing — automatically restarts crashed containers, replaces failed nodes
  • Auto-scaling — scales pods up/down based on CPU, memory, or custom metrics
  • Load balancing — distributes traffic across healthy pods automatically
  • Rolling updates — deploy new versions with zero downtime
  • Service discovery — pods find each other by name without hardcoded IPs
  • Secret management — stores and injects credentials securely
  • Storage orchestration — automatically mounts the storage the app needs
Q23. What is the Kubernetes architecture? Intermediate
Kubernetes Cluster Architecture:

┌─────────────────────────────────────────────────────────┐
│                      Control Plane                      │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐   │
│  │  API Server  │  │     etcd     │  │  Controller  │   │
│  │   (kubectl   │  │   (cluster   │  │   Manager    │   │
│  │   gateway)   │  │   state DB)  │  │              │   │
│  └──────────────┘  └──────────────┘  └──────────────┘   │
│  ┌──────────────┐                                       │
│  │  Scheduler   │  (assigns pods to nodes)              │
│  └──────────────┘                                       │
└─────────────────────────────────────────────────────────┘
           │                 │                 │
   ┌───────▼──────┐  ┌───────▼──────┐  ┌───────▼──────┐
   │ Worker Node  │  │ Worker Node  │  │ Worker Node  │
   │ ┌──────────┐ │  │ ┌──────────┐ │  │ ┌──────────┐ │
   │ │ kubelet  │ │  │ │ kubelet  │ │  │ │ kubelet  │ │
   │ ├──────────┤ │  │ ├──────────┤ │  │ ├──────────┤ │
   │ │kube-proxy│ │  │ │kube-proxy│ │  │ │kube-proxy│ │
   │ ├──────────┤ │  │ ├──────────┤ │  │ ├──────────┤ │
   │ │Container │ │  │ │Container │ │  │ │Container │ │
   │ │ Runtime  │ │  │ │ Runtime  │ │  │ │ Runtime  │ │
   │ ├──┬───┬───┤ │  │ ├──┬───┬───┤ │  │ ├──┬───┬───┤ │
   │ │P1│P2 │P3 │ │  │ │P4│P5 │   │ │  │ │P6│   │   │ │
   │ └──┴───┴───┘ │  │ └──┴───┴───┘ │  │ └──┴───┴───┘ │
   └──────────────┘  └──────────────┘  └──────────────┘

P = Pod   (Container Runtime = containerd / CRI-O)

Control Plane components:

  • API Server — the gateway for all operations; all kubectl commands talk to this
  • etcd — distributed key-value store that holds the entire cluster state
  • Scheduler — assigns new pods to appropriate worker nodes based on resources and constraints
  • Controller Manager — runs controllers that maintain desired state (ReplicaSet, Deployment, etc.)

Worker Node components:

  • kubelet — agent on each node that communicates with the API server and manages pods
  • kube-proxy — maintains network rules for pod communication
  • Container Runtime — runs containers (containerd, CRI-O)
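You can inspect most of these components on a running cluster; the commands below assume kubectl is configured (minikube works locally), and <node-name> is a placeholder:

```shell
kubectl cluster-info                  # API server endpoint
kubectl get nodes -o wide             # worker nodes, runtime version, IPs
kubectl get pods -n kube-system       # control-plane and node components run as pods
kubectl describe node <node-name>     # capacity, allocated resources, conditions
```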
Q24. What is a Pod in Kubernetes? Beginner

A Pod is the smallest deployable unit in Kubernetes. It represents one or more containers that share the same network namespace (same IP address and port space) and storage volumes. Containers within a Pod communicate via localhost.

In practice, most pods run a single container. The multi-container pattern (sidecar, ambassador, adapter) is used for specific architectural needs.

# pod.yaml — simple single-container pod
apiVersion: v1
kind: Pod
metadata:
  name: my-api-pod
  labels:
    app: my-api
    tier: backend
spec:
  containers:
    - name: api
      image: myrepo/my-api:1.0
      ports:
        - containerPort: 8080
      env:
        - name: ASPNETCORE_ENVIRONMENT
          value: Production
      resources:
        requests:
          memory: "128Mi"
          cpu: "100m"
        limits:
          memory: "256Mi"
          cpu: "500m"
      livenessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 5
# Apply the pod
kubectl apply -f pod.yaml

# Get pod status
kubectl get pods

# Describe a pod (events, conditions)
kubectl describe pod my-api-pod

# View logs
kubectl logs my-api-pod -f

# Execute a command inside the pod
kubectl exec -it my-api-pod -- bash
Q25. What is the difference between a Deployment and a ReplicaSet? Intermediate
  • ReplicaSet — ensures a specified number of identical pod replicas are running at all times. If a pod dies, ReplicaSet creates a new one. But it cannot do rolling updates.
  • Deployment — manages ReplicaSets and adds rolling update and rollback capabilities. You almost never create a ReplicaSet directly — you create a Deployment which creates and manages the ReplicaSet for you.
# deployment.yaml — the standard way to deploy applications in K8s
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
  namespace: production
spec:
  replicas: 3                        # run 3 pods
  selector:
    matchLabels:
      app: my-api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                    # max 1 extra pod during update
      maxUnavailable: 0              # never go below 3 running pods
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: api
          image: myrepo/my-api:2.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
# Deploy
kubectl apply -f deployment.yaml

# Check rollout status
kubectl rollout status deployment/my-api

# View rollout history
kubectl rollout history deployment/my-api

# Roll back to previous version
kubectl rollout undo deployment/my-api

# Scale manually
kubectl scale deployment my-api --replicas=5
Q26. What are Kubernetes Services and what are the different types? Intermediate

A Service provides a stable network endpoint (IP + DNS name) to access a set of pods. Since pods are ephemeral and their IPs change, Services abstract that away with a stable address.

| Type         | Accessible From                 | Use Case                                           |
|--------------|---------------------------------|----------------------------------------------------|
| ClusterIP    | Within the cluster only         | Internal service communication (default)           |
| NodePort     | External via NodeIP:Port        | Development, simple external access                |
| LoadBalancer | External via cloud load balancer | Production external access on cloud (AWS/GCP/Azure) |
| ExternalName | Maps to an external DNS name    | Access external services by a K8s name             |
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-api-service
spec:
  type: ClusterIP           # internal only
  selector:
    app: my-api             # routes to pods with this label
  ports:
    - protocol: TCP
      port: 80              # service port
      targetPort: 8080      # pod port

---
# LoadBalancer service for production external access
apiVersion: v1
kind: Service
metadata:
  name: my-api-lb
spec:
  type: LoadBalancer
  selector:
    app: my-api
  ports:
    - port: 80
      targetPort: 8080
Q27. What is a Kubernetes Ingress? Intermediate

Ingress is a Kubernetes resource that manages external HTTP/HTTPS access to services within the cluster. It provides path-based and host-based routing, TLS termination, and virtual hosting — all through a single load balancer instead of one per service.

# ingress.yaml — route traffic based on path
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.triksbuddy.com
      secretName: triksbuddy-tls
  rules:
    - host: api.triksbuddy.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: my-api-service
                port:
                  number: 80
          - path: /admin
            pathType: Prefix
            backend:
              service:
                name: admin-service
                port:
                  number: 80
ℹ️ Ingress requires an Ingress Controller to be installed in the cluster (e.g., NGINX Ingress Controller, Traefik, HAProxy). The Ingress resource alone does nothing without a controller.
Q28. What are ConfigMaps and Secrets in Kubernetes? Intermediate

Both decouple configuration from container images, but serve different purposes:

  • ConfigMap — stores non-sensitive configuration data as key-value pairs (app settings, feature flags, config files)
  • Secret — stores sensitive data (passwords, API keys, TLS certificates). Values are base64-encoded (not encrypted by default — use encryption at rest in production)
# ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_ENV: "production"
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"
  appsettings.json: |
    {
      "Logging": { "LogLevel": { "Default": "Information" } }
    }

---
# Secret (values must be base64 encoded)
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  DB_PASSWORD: c2VjcmV0MTIz        # base64 of "secret123"
  JWT_SECRET: bXlqd3RzZWNyZXQ=    # base64 of "myjwtsecret"

---
# Use them in a Pod
spec:
  containers:
    - name: api
      image: myrepo/api:1.0
      envFrom:
        - configMapRef:
            name: app-config        # inject all ConfigMap keys as env vars
        - secretRef:
            name: app-secrets       # inject all Secret keys as env vars
      volumeMounts:
        - name: config-volume
          mountPath: /app/config    # mount config file into container
  volumes:
    - name: config-volume
      configMap:
        name: app-config
        items:
          - key: appsettings.json
            path: appsettings.json
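Rather than hand-encoding values, kubectl create secret generic accepts plain-text literals and base64-encodes them for you. A quick sketch (the literal values match the manifest above):

```shell
# kubectl encodes --from-literal values automatically (cluster command, for reference):
#   kubectl create secret generic app-secrets \
#     --from-literal=DB_PASSWORD=secret123 \
#     --from-literal=JWT_SECRET=myjwtsecret
# To produce the encoded values for a manifest by hand:
echo -n "secret123" | base64      # c2VjcmV0MTIz
echo -n "myjwtsecret" | base64    # bXlqd3RzZWNyZXQ=
```

The -n flag matters: without it, echo appends a trailing newline that gets encoded into the secret value.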

Section 5 – Kubernetes Advanced Topics

Q29. What is Horizontal Pod Autoscaler (HPA)? Intermediate

HPA automatically scales the number of pods in a Deployment or ReplicaSet based on observed CPU/memory utilization or custom metrics. It ensures your application scales out under load and scales in when load decreases, optimizing resource usage and cost.

# hpa.yaml — scale between 2 and 10 replicas based on CPU
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # scale up when avg CPU > 70%
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
# Check HPA status
kubectl get hpa
kubectl describe hpa my-api-hpa
💡 HPA requires the Metrics Server to be installed in the cluster to collect CPU and memory metrics. For custom metrics (requests per second, queue depth), you need Prometheus + KEDA or a custom metrics adapter.
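For quick experiments, a CPU-only HPA roughly equivalent to the manifest above can be created imperatively — note that kubectl autoscale supports only a CPU target, not the memory metric shown in the YAML:

```shell
# Imperative HPA: 2 to 10 replicas, target 70% average CPU
kubectl autoscale deployment my-api --cpu-percent=70 --min=2 --max=10
```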
Q30. What is the difference between Liveness, Readiness, and Startup Probes? Intermediate
Probe | Purpose | Action on Failure
Liveness | Is the container still running correctly? | Restart the container
Readiness | Is the container ready to serve traffic? | Remove from Service endpoints (no restart)
Startup | Has the container finished starting up? | Restart the container (disables liveness during startup)
containers:
  - name: api
    image: myrepo/api:1.0

    # Startup probe — give slow-starting apps time to initialize
    startupProbe:
      httpGet:
        path: /health
        port: 8080
      failureThreshold: 30    # 30 * 10s = 5 minutes to start
      periodSeconds: 10

    # Liveness probe — restart if the app deadlocks or crashes
    livenessProbe:
      httpGet:
        path: /health/live
        port: 8080
      initialDelaySeconds: 0
      periodSeconds: 10
      failureThreshold: 3

    # Readiness probe — only send traffic when app is ready
    readinessProbe:
      httpGet:
        path: /health/ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3     # remove from load balancer after 3 failures
Q31. What are Kubernetes Namespaces and why are they used? Beginner

Namespaces provide a mechanism for isolating groups of resources within a single cluster. They are used to divide cluster resources between multiple users, teams, or environments.

# Create namespaces for different environments
kubectl create namespace development
kubectl create namespace staging
kubectl create namespace production

# Deploy to a specific namespace
kubectl apply -f deployment.yaml -n production

# View resources in a namespace
kubectl get pods -n production
kubectl get all -n production

# Set default namespace for your kubectl context
kubectl config set-context --current --namespace=production
💡 A common pattern is to run all environments (dev/staging/prod) in the same cluster but in different namespaces, with Resource Quotas to limit each namespace's resource consumption.
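A ResourceQuota for one such namespace might look like this (the limits are illustrative, not recommendations):

```yaml
# quota.yaml — cap total resource consumption in the development namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```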
Q32. What is Helm and why is it used? Intermediate

Helm is the package manager for Kubernetes — often called "the apt/yum of Kubernetes." It bundles Kubernetes YAML files into reusable packages called charts, with templating support for variable substitution. This makes deploying complex applications (and managing differences between environments) much simpler.

# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Add a chart repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Search for charts
helm search repo redis

# Install a chart with custom values
helm install my-redis bitnami/redis \
  --namespace production \
  --set auth.password=secretpassword \
  --set replica.replicaCount=3

# Create your own chart
helm create my-app-chart

# Install your chart
helm install my-app ./my-app-chart \
  --namespace production \
  -f values-production.yaml

# Upgrade
helm upgrade my-app ./my-app-chart -f values-production.yaml

# Roll back
helm rollback my-app 1

# List releases
helm list -n production
Q33. What is a StatefulSet and how is it different from a Deployment? Advanced
Feature | Deployment | StatefulSet
Pod identity | Random names (my-app-abc123) | Stable, ordered names (my-app-0, my-app-1)
Storage | Shared or ephemeral | Each pod gets its own persistent volume
Scaling order | Random | Ordered (0 → 1 → 2)
Use case | Stateless apps (APIs, web servers) | Stateful apps (databases, Kafka, Elasticsearch)
# StatefulSet for a database cluster
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "mysql"
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:             # each pod gets its own PVC
    - metadata:
        name: mysql-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi

Section 6 – Production, Security & Best Practices

Q34. What are Kubernetes Resource Requests and Limits? Intermediate

Resource requests and limits tell Kubernetes how much CPU and memory a container needs and how much it is allowed to use:

  • Requests — the minimum guaranteed resources. The scheduler uses this to decide which node a pod goes on.
  • Limits — the maximum resources a container can use. If it exceeds memory limits, it is OOMKilled. If it exceeds CPU limits, it is throttled.
resources:
  requests:
    memory: "128Mi"   # guaranteed 128MB RAM
    cpu: "100m"       # guaranteed 0.1 CPU core (100 millicores)
  limits:
    memory: "256Mi"   # never use more than 256MB RAM
    cpu: "500m"       # never use more than 0.5 CPU cores
⚠️ Always set resource requests and limits in production. Without them, a single misbehaving pod can consume all node resources and evict other pods — the "noisy neighbor" problem.
Q35. What is a Kubernetes NetworkPolicy? Advanced

By default, all pods in a Kubernetes cluster can communicate with each other. NetworkPolicy lets you restrict which pods can talk to which other pods — implementing a zero-trust network model.

# Deny all ingress traffic by default, then allow only what's needed
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-network-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: my-api           # apply to my-api pods
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend    # only allow ingress from frontend pods
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database     # only allow egress to database pods
      ports:
        - protocol: TCP
          port: 5432
Q36. What are Kubernetes best practices for production deployments? Advanced
  • Always use specific image tags — never :latest in production. Use semantic versioned tags like :2.1.0.
  • Run containers as non-root — set runAsNonRoot: true and readOnlyRootFilesystem: true in security context
  • Set resource requests and limits — every container must have them
  • Configure liveness and readiness probes — essential for zero-downtime deployments
  • Use namespaces — separate environments and teams
  • Enable RBAC — principle of least privilege for all service accounts
  • Use secrets management — External Secrets Operator with AWS Secrets Manager or HashiCorp Vault instead of plain K8s Secrets
  • Implement Pod Disruption Budgets — prevent too many pods from being unavailable during maintenance
  • Enable audit logging — track all API server calls for security compliance
  • Use GitOps — ArgoCD or Flux for declarative, git-driven deployments
# Security context best practices
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
    - name: api
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop:
            - ALL
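The Pod Disruption Budget mentioned in the checklist can be sketched as follows — the label selector assumes the app: my-api Deployment used throughout this guide:

```yaml
# pdb.yaml — keep at least 2 my-api pods running during voluntary disruptions
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-api-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-api
```

During node drains and cluster upgrades, the eviction API refuses to take the pod count below minAvailable.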

⚡ Quick Command Cheat Sheet

Docker Commands

docker build -t app:1.0 .              # Build image
docker run -d -p 8080:80 app:1.0       # Run container
docker ps                              # Running containers
docker logs -f container-name          # Stream logs
docker exec -it container-name bash    # Open shell
docker system prune -af                # Clean everything

Kubernetes Commands

kubectl get pods -n production         # List pods
kubectl describe pod my-pod            # Pod details and events
kubectl logs my-pod -f                 # Stream pod logs
kubectl exec -it my-pod -- bash        # Shell into pod
kubectl apply -f manifest.yaml         # Apply configuration
kubectl delete -f manifest.yaml        # Delete resources
kubectl get events --sort-by=.lastTimestamp  # Cluster events
kubectl top pods                       # CPU/memory usage
kubectl rollout restart deployment/my-app    # Rolling restart
kubectl port-forward pod/my-pod 8080:80      # Local port forward

💼 Interview Tips for Docker & Kubernetes Roles

  • Know the difference between images and containers cold — this is the first question in almost every Docker interview.
  • Be ready to write a Dockerfile from scratch — multi-stage builds, correct layer ordering, and .dockerignore show you go beyond the basics.
  • Understand why pods are ephemeral and how Services abstract away changing pod IPs — this demonstrates you understand K8s' core design philosophy.
  • Know the three probe types — liveness vs readiness is a very common interview trap. Know that readiness failure removes from load balancer, liveness failure restarts the pod.
  • Mention security practices unprompted — running as non-root, not using :latest, secrets management — this signals production maturity.
  • Understand HPA vs manual scaling — being able to explain resource requests/limits and how HPA uses them shows you understand K8s scheduling.
  • Have a story about a real container problem you solved — debugging a crashlooping pod, fixing an OOMKilled container, or troubleshooting networking makes your answers concrete.

❓ Frequently Asked Questions

What is the difference between Docker Swarm and Kubernetes?

Docker Swarm is Docker's own native clustering and orchestration solution — simpler to set up and manage, but with fewer features. Kubernetes is more complex but far more powerful, feature-rich, and is the industry standard for production container orchestration. For new projects and production systems, Kubernetes is the overwhelming choice. Docker Swarm is primarily used by teams that need something simpler and already have Docker expertise.

What is the difference between a DaemonSet and a Deployment?

A Deployment runs a specified number of pod replicas anywhere in the cluster. A DaemonSet ensures that exactly one pod runs on every node (or a selected subset of nodes). DaemonSets are used for node-level infrastructure — log collectors (Fluentd), monitoring agents (Prometheus Node Exporter), or network plugins that must run on every machine.
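A minimal DaemonSet sketch for a log collector — the Fluentd image tag and the toleration are illustrative:

```yaml
# daemonset.yaml — one log-collector pod on every node, including control-plane nodes
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule       # also schedule onto tainted control-plane nodes
      containers:
        - name: fluentd
          image: fluentd:v1.16
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log         # read node-level logs from the host
```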

How do you debug a CrashLoopBackOff pod?

CrashLoopBackOff means the container starts, crashes, and Kubernetes keeps restarting it. Debug steps: (1) kubectl describe pod pod-name to see events and last exit code. (2) kubectl logs pod-name --previous to see logs from the last crashed instance. (3) Check resource limits — OOMKilled (exit code 137) means out of memory. (4) Check the container's startup command and environment variables. (5) Try running the container locally with docker run to reproduce outside K8s.
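The debug steps above as commands (my-pod and the image name are placeholders):

```shell
kubectl describe pod my-pod                            # events, last state, exit code
kubectl logs my-pod --previous                         # logs from the crashed instance
kubectl get pod my-pod -o yaml | grep -A3 lastState    # exit code 137 = OOMKilled
docker run --rm -it myrepo/api:1.0                     # reproduce locally outside K8s
```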

What is the difference between kubectl apply and kubectl create?

kubectl create creates a resource and fails if it already exists. kubectl apply creates the resource if it doesn't exist, or updates it if it does — it performs a declarative merge. In CI/CD pipelines, always use kubectl apply so the same command works whether you are creating for the first time or updating.
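A quick sketch of the difference in practice:

```shell
kubectl create -f deployment.yaml   # first run: created
kubectl create -f deployment.yaml   # second run: Error ... "already exists"
kubectl apply -f deployment.yaml    # idempotent: "created" or "configured"
```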

Is Docker required to use Kubernetes?

No — not anymore. Kubernetes uses the Container Runtime Interface (CRI) to work with any compatible runtime. The most common runtime today is containerd (which Docker itself uses under the hood). Docker as a tool is still used to build images, but Kubernetes no longer requires Docker to be installed on nodes. Kubernetes deprecated dockershim (its built-in Docker integration) in version 1.20 and removed it entirely in version 1.24.
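You can check which runtime each node uses via the CONTAINER-RUNTIME column (the version shown below is an example):

```shell
kubectl get nodes -o wide
# NAME     STATUS   ...   CONTAINER-RUNTIME
# node-1   Ready    ...   containerd://1.7.x
```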

✅ Key Takeaways

  • Containers share the host OS kernel — they are faster and lighter than VMs but provide process-level (not hardware-level) isolation
  • Docker images are immutable blueprints; containers are running instances — like classes and objects
  • Multi-stage builds are essential for production — they separate build environment from runtime, dramatically reducing image size and attack surface
  • Layer order in Dockerfiles matters — put slow, rarely-changing steps first to maximize cache hits
  • Pods are ephemeral — never hardcode pod IPs. Always use Services for stable network endpoints
  • Deployments manage ReplicaSets and add rolling update and rollback capabilities — never create ReplicaSets directly
  • Readiness probe failure removes a pod from the load balancer without restarting it. Liveness probe failure triggers a container restart
  • Always set resource requests and limits, run containers as non-root, and never use :latest in production
  • Helm makes managing complex Kubernetes applications across environments far simpler through templating and versioned releases

Found this guide useful? Share it with a developer preparing for their next DevOps interview. Have a question not covered here? Drop it in the comments below — we read and respond to every one.
