Thursday, April 2, 2026

Top 50 Docker & Kubernetes Interview Questions and Answers – Beginner to Advanced

📅 Published: April 2026  |  ⏱ Reading Time: ~22 minutes  |  🏷️ Docker · Kubernetes · DevOps · Containers · Interview

📌 TL;DR: This article covers the 50 most frequently asked Docker and Kubernetes interview questions for 2026 — from Docker basics like images, containers, and Dockerfiles, to Kubernetes concepts like pods, deployments, services, ingress, ConfigMaps, Helm charts, and production best practices. Every question includes a clear explanation, with real YAML/CLI examples and architecture diagrams where they help. Whether you are a developer, DevOps engineer, or SRE preparing for an interview, this guide covers everything you need.

Introduction

Containerization has fundamentally changed how software is built, shipped, and run. Docker and Kubernetes are now standard tools in almost every tech company's infrastructure stack — and knowledge of them is increasingly required even for developers, not just DevOps engineers.

According to the 2024 Stack Overflow Developer Survey, Docker is the #1 most used tool among developers for the third consecutive year, and Kubernetes adoption continues to grow rapidly in enterprise environments. If you are interviewing for any backend, DevOps, SRE, or cloud engineering role, expect these questions.

This guide covers 50 carefully selected questions with detailed answers, real YAML configurations, CLI commands, and architecture diagrams. Difficulty labels guide you through what level each question targets.

💡 How to use this guide: Install Docker Desktop and minikube locally and run every command yourself. Hands-on experience with these tools is what separates candidates who just read about containers from those who actually work with them.

Section 1 – Docker Fundamentals

These questions establish whether you understand what containers are, why they exist, and how Docker's core components work together. Expect at least 5–6 of these in any DevOps interview.
Q1. What is Docker and why is it used? Beginner

Docker is an open-source platform that enables developers to package applications and their dependencies into lightweight, portable units called containers. Containers run consistently across any environment — development, testing, staging, or production — regardless of the underlying host OS.

Why Docker is used:

  • Eliminates "works on my machine" problems — the container includes everything the app needs
  • Fast startup — containers start in milliseconds vs. minutes for VMs
  • Resource efficiency — containers share the host OS kernel, using far less memory than VMs
  • Consistency — same container runs identically in dev, CI/CD, and production
  • Microservices enablement — each service runs in its own container, independently deployable
Q2. What is the difference between a Container and a Virtual Machine? Beginner
VM Architecture:
┌──────────────────────────────────────────┐
│              Host Hardware               │
├──────────────────────────────────────────┤
│         Host OS (Linux/Windows)          │
├──────────────────────────────────────────┤
│     Hypervisor (VMware/VirtualBox)       │
├────────────┬─────────────┬───────────────┤
│  Guest OS  │  Guest OS   │   Guest OS    │
│  (Linux)   │  (Windows)  │   (Linux)     │
│  App + Lib │  App + Lib  │   App + Lib   │
└────────────┴─────────────┴───────────────┘

Container Architecture:
┌──────────────────────────────────────────┐
│              Host Hardware               │
├──────────────────────────────────────────┤
│         Host OS (Linux/Windows)          │
├──────────────────────────────────────────┤
│              Docker Engine               │
├────────────┬─────────────┬───────────────┤
│ Container  │  Container  │   Container   │
│ App + Libs │  App + Libs │   App + Libs  │
│ (no OS!)   │  (no OS!)   │   (no OS!)    │
└────────────┴─────────────┴───────────────┘
| Feature | Virtual Machine | Container |
|---|---|---|
| OS | Full guest OS per VM | Shares host OS kernel |
| Startup time | Minutes | Milliseconds |
| Size | GBs (includes full OS) | MBs (app + libs only) |
| Isolation | Strong (hardware-level) | Process-level (namespaces) |
| Performance | Near-native, but with hypervisor overhead | Near-native, very low overhead |
| Portability | Less portable (OS-dependent) | Highly portable |
Q3. What is the difference between a Docker Image and a Docker Container? Beginner

This is one of the most commonly asked Docker basics questions. The relationship is similar to a class and an object in OOP:

  • Docker Image — a read-only, immutable template that defines what the container will contain. It is built from a Dockerfile and stored in a registry. Think of it as a blueprint.
  • Docker Container — a running instance of an image. You can run multiple containers from the same image, each isolated from the others. Think of it as the house built from the blueprint.
# Pull an image from Docker Hub
docker pull nginx:latest

# Run a container from that image
docker run -d -p 8080:80 --name my-nginx nginx:latest

# Same image, different container
docker run -d -p 8081:80 --name my-nginx-2 nginx:latest

# List running containers
docker ps

# List all images
docker images
Q4. What is Docker Hub and what are alternatives? Beginner

Docker Hub is the default public registry for Docker images — a cloud-based repository where you can find, store, and share container images. When you run docker pull nginx, Docker fetches the image from Docker Hub by default.

Popular alternatives:

| Registry | Provider | Best For |
|---|---|---|
| Docker Hub | Docker | Public images, open source projects |
| Amazon ECR | AWS | AWS-deployed applications |
| Google Artifact Registry | GCP | GCP/GKE deployments |
| Azure Container Registry | Microsoft | Azure/AKS deployments |
| GitHub Container Registry | GitHub | Open source, GitHub Actions CI/CD |
| Harbor | CNCF (self-hosted) | On-premises, private registries |
Q5. What are Docker Volumes and why are they important? Beginner

By default, data written inside a container is ephemeral — it is lost when the container stops or is removed. Docker Volumes provide persistent storage that exists outside the container lifecycle.

Three types of storage in Docker:

| Type | Description | Best For |
|---|---|---|
| Volume | Managed by Docker, stored in Docker's storage area | Databases, persistent app data (recommended) |
| Bind Mount | Maps a host directory directly into the container | Development (live code reload) |
| tmpfs | Stored in host memory only, not on disk | Sensitive data, temporary files |
# Create a named volume
docker volume create my-db-data

# Run a MySQL container with persistent volume
docker run -d \
  --name mysql-db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=myapp \
  -v my-db-data:/var/lib/mysql \
  mysql:8.0

# Bind mount for development (live reload)
docker run -d \
  --name my-app \
  -v $(pwd)/src:/app/src \
  -p 3000:3000 \
  my-node-app

# List volumes
docker volume ls

# Inspect a volume
docker volume inspect my-db-data
💡 Always use named volumes for databases in production. Never store critical data only inside a container — it will be lost when the container is removed.
Q6. What are Docker Networks and what are the network drivers? Intermediate

Docker networks allow containers to communicate with each other and with external systems. Docker provides several network drivers:

| Driver | Description | Use Case |
|---|---|---|
| bridge | Default for standalone containers. Creates a private internal network. | Most single-host container communication |
| host | Container shares the host's network stack directly | High performance, no network isolation needed |
| none | Container has no network access | Maximum isolation, batch jobs |
| overlay | Spans multiple Docker hosts (Docker Swarm) | Multi-host distributed applications |
| macvlan | Container gets its own MAC address on the network | Legacy apps needing direct LAN access |
# Create a custom bridge network
docker network create my-app-network

# Connect containers on the same network — they can reach each other by name
docker run -d --name api-server --network my-app-network my-api
docker run -d --name db-server --network my-app-network mysql:8.0

# Inside api-server, you can reach the DB by its container name:
# mysql -h db-server -u root -p

# List networks
docker network ls

# Inspect a network
docker network inspect my-app-network
Q7. What is the difference between EXPOSE and -p in Docker? Beginner
  • EXPOSE in a Dockerfile — documentation only. It declares which port the container intends to listen on. It does NOT actually publish the port to the host. Other containers on the same network can still reach it.
  • -p host:container at docker run — actually publishes the port, mapping a host port to a container port so external traffic can reach it.
# Dockerfile — EXPOSE is documentation only
FROM node:20-alpine
EXPOSE 3000
CMD ["node", "server.js"]

# At runtime, -p actually opens the port to the host
docker run -p 8080:3000 my-app
# Now host port 8080 → container port 3000

# -P (uppercase) auto-maps all EXPOSE'd ports to random host ports
docker run -P my-app
docker ps  # shows the random port assigned
Q8. What are the most important Docker CLI commands? Beginner
# ── Images ──────────────────────────────────────────
docker pull nginx:latest          # Download image from registry
docker images                     # List local images
docker build -t my-app:1.0 .      # Build image from Dockerfile
docker tag my-app:1.0 repo/my-app:1.0  # Tag image for pushing
docker push repo/my-app:1.0       # Push to registry
docker rmi my-app:1.0             # Remove image
docker image prune                # Remove unused images

# ── Containers ──────────────────────────────────────
docker run -d -p 8080:80 --name web nginx    # Run container (detached)
docker ps                         # List running containers
docker ps -a                      # List all containers (including stopped)
docker stop web                   # Gracefully stop container
docker start web                  # Start stopped container
docker restart web                # Restart container
docker rm web                     # Remove container
docker logs web -f                # Follow container logs
docker exec -it web bash          # Open shell inside running container
docker inspect web                # Detailed container info (JSON)
docker stats                      # Live resource usage

# ── System ──────────────────────────────────────────
docker system prune               # Remove all unused resources
docker system df                  # Disk usage by Docker
Q9. What is the difference between docker stop and docker kill? Beginner
  • docker stop — sends a SIGTERM signal to the container's main process, giving it time (default 10 seconds) to gracefully shut down. If it doesn't stop in time, sends SIGKILL.
  • docker kill — immediately sends SIGKILL (or a specified signal), terminating the process immediately without a graceful shutdown.
💡 Always prefer docker stop in production. Applications should handle SIGTERM to close database connections, finish in-flight requests, and flush logs before exiting.
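The graceful-shutdown behavior that docker stop relies on can be demonstrated without Docker at all. The sketch below (plain POSIX shell — a stand-in process, not a real container) traps SIGTERM the way a well-behaved container entrypoint should:

```shell
# A stand-in for a container's main process: trap SIGTERM, clean up, exit 0.
# `docker stop` sends exactly this signal, then waits (default 10s) before SIGKILL.
sh -c 'trap "echo graceful-shutdown; exit 0" TERM; sleep 30 >/dev/null 2>&1 & wait' &
pid=$!
sleep 1            # give the "app" time to install its trap
kill -TERM "$pid"  # what `docker stop` sends first
wait "$pid"
echo "exited with $?"
```

If the process ignored SIGTERM, docker stop would wait out its timeout and fall back to SIGKILL — the abrupt termination that docker kill performs immediately.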
Q10. What is Docker Layer Caching and how does it affect build performance? Intermediate

Docker builds images in layers — each instruction in a Dockerfile creates a new layer. Docker caches these layers and reuses them in subsequent builds if the instruction and its context haven't changed. This dramatically speeds up builds.

The cache is invalidated for a layer and all layers after it whenever the instruction or its input changes. This means layer order in your Dockerfile matters enormously for build performance.

# ❌ BAD — copies all source code before installing dependencies
# Any code change invalidates the npm install layer
FROM node:20-alpine
WORKDIR /app
COPY . .                    # ← cache busted on every code change
RUN npm install             # ← runs npm install every single time!
CMD ["node", "server.js"]

# ✅ GOOD — install dependencies first (they change rarely)
# Only re-runs npm install when package.json changes
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./       # ← only copy package files first
RUN npm install             # ← cached unless package.json changes
COPY . .                    # ← cache busted on code change (cheap operation)
CMD ["node", "server.js"]
💡 Rule: Put instructions that change most frequently (your source code) at the bottom of the Dockerfile. Put slow, rarely-changing steps (dependency installation) near the top.
Q11. What is a Docker Registry and how do you push and pull images? Beginner

A Docker Registry is a storage and distribution system for Docker images. Docker Hub is the default public registry. Here is the full workflow of building, tagging, pushing, and pulling an image:

# 1. Build your image
docker build -t my-aspnet-app:1.0 .

# 2. Tag it with your registry and repository
docker tag my-aspnet-app:1.0 yourusername/my-aspnet-app:1.0

# 3. Log in to Docker Hub
docker login

# 4. Push the image
docker push yourusername/my-aspnet-app:1.0

# 5. On another machine, pull and run it
docker pull yourusername/my-aspnet-app:1.0
docker run -d -p 8080:80 yourusername/my-aspnet-app:1.0

# For a private registry (e.g., AWS ECR):
aws ecr get-login-password | docker login --username AWS \
  --password-stdin 123456789.dkr.ecr.ap-southeast-1.amazonaws.com
docker tag my-app:1.0 123456789.dkr.ecr.ap-southeast-1.amazonaws.com/my-app:1.0
docker push 123456789.dkr.ecr.ap-southeast-1.amazonaws.com/my-app:1.0
Q12. What is docker exec used for and how is it different from docker attach? Beginner
  • docker exec — runs a new command inside an already running container. This is what you use to open a shell, run a diagnostic command, or inspect files inside a container.
  • docker attach — attaches your terminal to the container's main process (PID 1). Be careful: pressing Ctrl+C sends a signal to that process, which can stop the whole container.
# Open an interactive bash shell inside a running container
docker exec -it my-container bash

# Or sh for Alpine-based containers (no bash)
docker exec -it my-container sh

# Run a one-off command
docker exec my-container cat /etc/nginx/nginx.conf

# Check environment variables
docker exec my-container env

# Inspect file contents
docker exec my-container ls -la /app
💡 Use docker exec for debugging and diagnostics in production. Use docker attach only when you specifically need to interact with the main process.

Section 2 – Dockerfile & Image Optimization

Writing efficient Dockerfiles is a skill that separates beginners from professionals. These questions cover best practices that directly impact image size, build speed, and security.
Q13. What is a Dockerfile and what are its key instructions? Beginner

A Dockerfile is a text file containing instructions to build a Docker image. Each instruction creates a layer in the image.

| Instruction | Purpose | Example |
|---|---|---|
| FROM | Base image to build from | FROM node:20-alpine |
| WORKDIR | Set working directory | WORKDIR /app |
| COPY | Copy files from host to image | COPY . . |
| ADD | Like COPY but supports URLs and tar extraction | ADD app.tar.gz /app |
| RUN | Execute a command during build | RUN npm install |
| ENV | Set environment variables | ENV NODE_ENV=production |
| ARG | Build-time variable (not in final image) | ARG BUILD_VERSION |
| EXPOSE | Document the port the container uses | EXPOSE 3000 |
| VOLUME | Create mount point for volumes | VOLUME /data |
| CMD | Default command when the container starts (overridable) | CMD ["node", "server.js"] |
| ENTRYPOINT | Fixed command that always runs (not easily overridden) | ENTRYPOINT ["dotnet", "app.dll"] |
| HEALTHCHECK | Define a health check command | HEALTHCHECK CMD curl -f http://localhost/health |
| USER | Set user for subsequent instructions | USER appuser |
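A minimal Dockerfile tying most of these instructions together (the base image, port, and file names here are illustrative — adjust them to your project):

```dockerfile
# Illustrative example combining the common instructions above
FROM node:20-alpine          # base image
WORKDIR /app                 # working directory for subsequent instructions
COPY package*.json ./        # copy dependency manifests first (better caching)
RUN npm install              # executed once, at build time
COPY . .                     # copy the rest of the source
ENV NODE_ENV=production      # runtime environment variable
EXPOSE 3000                  # documents the listening port
USER node                    # drop root privileges
CMD ["node", "server.js"]    # default startup command (overridable)
```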
Q14. What is the difference between CMD and ENTRYPOINT? Intermediate
| Feature | CMD | ENTRYPOINT |
|---|---|---|
| Overridable at runtime | Yes (docker run image <command>) | Not easily (needs --entrypoint flag) |
| Purpose | Default command or args | Fixed executable that always runs |
| Combined use | Provides default args to ENTRYPOINT | Executable; CMD provides default args |
# CMD only — full command, easily overridden
FROM node:20-alpine
CMD ["node", "server.js"]
# docker run my-image                → runs: node server.js
# docker run my-image node other.js  → runs: node other.js (override)

# ENTRYPOINT only — always runs node
FROM node:20-alpine
ENTRYPOINT ["node"]
# docker run my-image server.js  → runs: node server.js
# docker run my-image other.js   → runs: node other.js

# ENTRYPOINT + CMD — best pattern for flexible containers
FROM node:20-alpine
ENTRYPOINT ["node"]
CMD ["server.js"]             # default arg, overridable
# docker run my-image          → runs: node server.js
# docker run my-image other.js → runs: node other.js
Q15. What is Multi-Stage Build in Docker and why is it important? Intermediate

Multi-stage builds allow you to use multiple FROM instructions in a single Dockerfile. Each stage can copy artifacts from the previous one, letting you compile/build in a full environment and then copy only the output into a lean final image — drastically reducing image size.

# Multi-stage build for an ASP.NET Core application
# ── Stage 1: Build ───────────────────────────────────
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src

# Copy and restore dependencies
COPY *.csproj ./
RUN dotnet restore

# Copy source and build
COPY . .
RUN dotnet publish -c Release -o /app/publish

# ── Stage 2: Runtime ─────────────────────────────────
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS runtime
WORKDIR /app

# Only copy the published output from the build stage
COPY --from=build /app/publish .

EXPOSE 8080
ENTRYPOINT ["dotnet", "MyApp.dll"]

# Result:
# Build stage image: ~800MB (full SDK)
# Final image: ~200MB (runtime only — no SDK, no source code)
# Multi-stage for a Node.js app
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

FROM node:20-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
USER node
CMD ["node", "server.js"]
💡 Multi-stage builds are a must-know for production. They reduce attack surface, improve security (no build tools in production image), and cut image size by 60–80% in most cases.
Q16. What is a .dockerignore file? Beginner

A .dockerignore file tells Docker which files and directories to exclude when building an image — similar to .gitignore for Git. This reduces build context size, speeds up builds, and prevents sensitive files from being copied into images.

# .dockerignore for a Node.js project
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.env.*
*.test.js
coverage/
.nyc_output
dist/
.DS_Store

# .dockerignore for an ASP.NET Core project
bin/
obj/
*.user
.vs/
.vscode/
*.md
.git/
.gitignore
**/*.env
⚠️ Never forget to add .env files to .dockerignore. Secrets and API keys baked into a Docker image are a serious security vulnerability, especially if pushed to a public registry.
Q17. How do you reduce Docker image size? Intermediate

Large images slow down CI/CD pipelines, increase storage costs, and expand the attack surface. Key techniques to reduce size:

  • Use Alpine-based images — node:20-alpine is ~130MB vs node:20 at ~1.1GB
  • Use multi-stage builds — only copy production artifacts to the final stage
  • Combine RUN commands — each RUN creates a layer; chain commands to minimize layers
  • Clean up in the same RUN — removing files in a subsequent RUN doesn't actually save space (the layer already has the data)
  • Use .dockerignore — exclude unnecessary files from the build context
  • Use specific base image tags — avoid :latest, use exact versions
# ❌ BAD — each RUN creates a separate layer
RUN apt-get update
RUN apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*   # this does NOT reduce size — too late!

# ✅ GOOD — single layer, cleanup in same command
RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*   # cleanup in same layer = smaller image

# ✅ Alpine is much smaller
FROM node:20-alpine    # ~130MB
# vs
FROM node:20           # ~1.1GB
Q18. What is the difference between COPY and ADD in a Dockerfile? Beginner

Both copy files into the image, but ADD has two extra capabilities that can cause unexpected behavior:

  • ADD can accept a URL as the source and download it directly
  • ADD automatically extracts tar archives (.tar, .tar.gz, etc.)
💡 Best Practice: Always use COPY unless you specifically need ADD's tar extraction or URL feature. COPY is explicit and predictable — you always know exactly what it does.
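A short sketch of the difference (the archive and URL here are made up for illustration):

```dockerfile
# COPY — explicit and predictable: local files only, copied as-is
COPY app/ /app/

# ADD — automatically extracts local tar archives
ADD vendor.tar.gz /opt/vendor/     # contents end up extracted under /opt/vendor/

# ADD — can download from a URL (note: URL sources are NOT extracted)
ADD https://example.com/tool.tar.gz /tmp/tool.tar.gz
```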
Q19. How do you write a production-ready Dockerfile for an ASP.NET Core application? Intermediate
# Production-ready ASP.NET Core 8 Dockerfile
# ── Stage 1: Build ───────────────────────────────────
FROM mcr.microsoft.com/dotnet/sdk:8.0-alpine AS build
WORKDIR /src

# Copy project files and restore (cached layer)
COPY ["MyApp/MyApp.csproj", "MyApp/"]
RUN dotnet restore "MyApp/MyApp.csproj"

# Copy everything and build
COPY . .
WORKDIR /src/MyApp
RUN dotnet build "MyApp.csproj" -c Release -o /app/build

# ── Stage 2: Publish ─────────────────────────────────
FROM build AS publish
RUN dotnet publish "MyApp.csproj" -c Release -o /app/publish \
    --no-restore \
    /p:UseAppHost=false

# ── Stage 3: Final Runtime ───────────────────────────
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine AS final
WORKDIR /app

# Security: create and use non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

# Copy only published output
COPY --from=publish --chown=appuser:appgroup /app/publish .

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:8080/health || exit 1

EXPOSE 8080
ENV ASPNETCORE_URLS=http://+:8080
ENTRYPOINT ["dotnet", "MyApp.dll"]

Section 3 – Docker Compose & Networking

Q20. What is Docker Compose and when would you use it? Beginner

Docker Compose is a tool for defining and running multi-container applications. You describe your entire application stack (web server, database, cache, message queue) in a single docker-compose.yml file and start everything with one command.

version: "3.9"

services:
  # ASP.NET Core API
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ConnectionStrings__Default=Server=db;Database=myapp;User=sa;Password=YourPass123!
    depends_on:
      db:
        condition: service_healthy
    networks:
      - app-network
    restart: unless-stopped

  # SQL Server database
  db:
    image: mcr.microsoft.com/mssql/server:2022-latest
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=YourPass123!
      - MSSQL_DB=myapp
    volumes:
      - db-data:/var/opt/mssql
    networks:
      - app-network
    healthcheck:
      test: /opt/mssql-tools18/bin/sqlcmd -S localhost -U sa -P "YourPass123!" -C -Q "SELECT 1"
      interval: 10s
      retries: 5
      start_period: 30s

  # Redis cache
  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data
    networks:
      - app-network
    command: redis-server --appendonly yes

volumes:
  db-data:
  redis-data:

networks:
  app-network:
    driver: bridge
# Start all services
docker compose up -d

# View logs
docker compose logs -f api

# Stop all services
docker compose down

# Stop and remove volumes
docker compose down -v

# Rebuild and restart
docker compose up -d --build
Q21. What is the difference between docker compose up and docker compose start? Beginner
  • docker compose up — creates and starts containers. If they don't exist, it creates them. Pulls missing images and builds if needed.
  • docker compose start — starts already-created but stopped containers. Will not create new containers.
  • docker compose down — stops and removes containers, networks (but not volumes by default)
  • docker compose stop — stops containers but does NOT remove them

Section 4 – Kubernetes Fundamentals

Kubernetes (K8s) is the industry standard for container orchestration. These questions cover the core building blocks every Kubernetes user must know deeply.
Q22. What is Kubernetes and what problems does it solve? Beginner

Kubernetes is an open-source container orchestration platform originally developed by Google. It automates the deployment, scaling, and management of containerized applications across clusters of machines.

Problems Kubernetes solves:

  • Self-healing — automatically restarts crashed containers, replaces failed nodes
  • Auto-scaling — scales pods up/down based on CPU, memory, or custom metrics
  • Load balancing — distributes traffic across healthy pods automatically
  • Rolling updates — deploy new versions with zero downtime
  • Service discovery — pods find each other by name without hardcoded IPs
  • Secret management — stores and injects credentials securely
  • Storage orchestration — automatically mounts the storage the app needs
Q23. What is the Kubernetes architecture? Intermediate
Kubernetes Cluster Architecture:

┌─────────────────────────────────────────────────────────┐
│                      Control Plane                      │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐   │
│  │  API Server  │  │     etcd     │  │  Controller  │   │
│  │   (kubectl   │  │   (cluster   │  │   Manager    │   │
│  │   gateway)   │  │   state DB)  │  │              │   │
│  └──────────────┘  └──────────────┘  └──────────────┘   │
│  ┌──────────────┐                                       │
│  │  Scheduler   │  (assigns pods to nodes)              │
│  └──────────────┘                                       │
└─────────────────────────────────────────────────────────┘
          │                 │                 │
┌─────────▼────┐   ┌────────▼─────┐   ┌──────▼───────┐
│ Worker Node  │   │ Worker Node  │   │ Worker Node  │
│ ┌──────────┐ │   │ ┌──────────┐ │   │ ┌──────────┐ │
│ │ kubelet  │ │   │ │ kubelet  │ │   │ │ kubelet  │ │
│ ├──────────┤ │   │ ├──────────┤ │   │ ├──────────┤ │
│ │kube-proxy│ │   │ │kube-proxy│ │   │ │kube-proxy│ │
│ ├──────────┤ │   │ ├──────────┤ │   │ ├──────────┤ │
│ │Container │ │   │ │Container │ │   │ │Container │ │
│ │ Runtime  │ │   │ │ Runtime  │ │   │ │ Runtime  │ │
│ │(containerd)│   │ │(containerd)│   │ │(containerd)│
│ ├──┬───┬───┤ │   │ ├──┬───┬───┤ │   │ ├──┬───┬───┤ │
│ │P1│P2 │P3 │ │   │ │P4│P5 │   │ │   │ │P6│   │   │ │
│ └──┴───┴───┘ │   │ └──┴───┴───┘ │   │ └──┴───┴───┘ │
└──────────────┘   └──────────────┘   └──────────────┘

P = Pod

Control Plane components:

  • API Server — the gateway for all operations; all kubectl commands talk to this
  • etcd — distributed key-value store that holds the entire cluster state
  • Scheduler — assigns new pods to appropriate worker nodes based on resources and constraints
  • Controller Manager — runs controllers that maintain desired state (ReplicaSet, Deployment, etc.)

Worker Node components:

  • kubelet — agent on each node that communicates with the API server and manages pods
  • kube-proxy — maintains network rules for pod communication
  • Container Runtime — runs containers (containerd, CRI-O)
Q24. What is a Pod in Kubernetes? Beginner

A Pod is the smallest deployable unit in Kubernetes. It represents one or more containers that share the same network namespace (same IP address and port space) and storage volumes. Containers within a Pod communicate via localhost.

In practice, most pods run a single container. The multi-container pattern (sidecar, ambassador, adapter) is used for specific architectural needs.

# pod.yaml — simple single-container pod
apiVersion: v1
kind: Pod
metadata:
  name: my-api-pod
  labels:
    app: my-api
    tier: backend
spec:
  containers:
    - name: api
      image: myrepo/my-api:1.0
      ports:
        - containerPort: 8080
      env:
        - name: ASPNETCORE_ENVIRONMENT
          value: Production
      resources:
        requests:
          memory: "128Mi"
          cpu: "100m"
        limits:
          memory: "256Mi"
          cpu: "500m"
      livenessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 5
# Apply the pod
kubectl apply -f pod.yaml

# Get pod status
kubectl get pods

# Describe a pod (events, conditions)
kubectl describe pod my-api-pod

# View logs
kubectl logs my-api-pod -f

# Execute a command inside the pod
kubectl exec -it my-api-pod -- bash
Q25. What is the difference between a Deployment and a ReplicaSet? Intermediate
  • ReplicaSet — ensures a specified number of identical pod replicas are running at all times. If a pod dies, ReplicaSet creates a new one. But it cannot do rolling updates.
  • Deployment — manages ReplicaSets and adds rolling update and rollback capabilities. You almost never create a ReplicaSet directly — you create a Deployment which creates and manages the ReplicaSet for you.
# deployment.yaml — the standard way to deploy applications in K8s
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
  namespace: production
spec:
  replicas: 3                        # run 3 pods
  selector:
    matchLabels:
      app: my-api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                    # max 1 extra pod during update
      maxUnavailable: 0              # never go below 3 running pods
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: api
          image: myrepo/my-api:2.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
# Deploy
kubectl apply -f deployment.yaml

# Check rollout status
kubectl rollout status deployment/my-api

# View rollout history
kubectl rollout history deployment/my-api

# Roll back to previous version
kubectl rollout undo deployment/my-api

# Scale manually
kubectl scale deployment my-api --replicas=5
Q26. What are Kubernetes Services and what are the different types? Intermediate

A Service provides a stable network endpoint (IP + DNS name) to access a set of pods. Since pods are ephemeral and their IPs change, Services abstract that away with a stable address.

| Type | Accessible From | Use Case |
|---|---|---|
| ClusterIP | Within the cluster only | Internal service communication (default) |
| NodePort | External via NodeIP:Port | Development, simple external access |
| LoadBalancer | External via cloud load balancer | Production external access on cloud (AWS/GCP/Azure) |
| ExternalName | Maps to external DNS name | Access external services by K8s name |
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-api-service
spec:
  type: ClusterIP           # internal only
  selector:
    app: my-api             # routes to pods with this label
  ports:
    - protocol: TCP
      port: 80              # service port
      targetPort: 8080      # pod port

---
# LoadBalancer service for production external access
apiVersion: v1
kind: Service
metadata:
  name: my-api-lb
spec:
  type: LoadBalancer
  selector:
    app: my-api
  ports:
    - port: 80
      targetPort: 8080
Q27. What is a Kubernetes Ingress? Intermediate

Ingress is a Kubernetes resource that manages external HTTP/HTTPS access to services within the cluster. It provides path-based and host-based routing, TLS termination, and virtual hosting — all through a single load balancer instead of one per service.

# ingress.yaml — route traffic based on path
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.triksbuddy.com
      secretName: triksbuddy-tls
  rules:
    - host: api.triksbuddy.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: my-api-service
                port:
                  number: 80
          - path: /admin
            pathType: Prefix
            backend:
              service:
                name: admin-service
                port:
                  number: 80
ℹ️ Ingress requires an Ingress Controller to be installed in the cluster (e.g., NGINX Ingress Controller, Traefik, HAProxy). The Ingress resource alone does nothing without a controller.
Q28. What are ConfigMaps and Secrets in Kubernetes? Intermediate

Both decouple configuration from container images, but serve different purposes:

  • ConfigMap — stores non-sensitive configuration data as key-value pairs (app settings, feature flags, config files)
  • Secret — stores sensitive data (passwords, API keys, TLS certificates). Values are base64-encoded (not encrypted by default — use encryption at rest in production)
# ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_ENV: "production"
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"
  appsettings.json: |
    {
      "Logging": { "LogLevel": { "Default": "Information" } }
    }

---
# Secret (values must be base64 encoded)
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  DB_PASSWORD: c2VjcmV0MTIz        # base64 of "secret123"
  JWT_SECRET: bXlqd3RzZWNyZXQ=    # base64 of "myjwtsecret"

---
# Use them in a Pod
spec:
  containers:
    - name: api
      image: myrepo/api:1.0
      envFrom:
        - configMapRef:
            name: app-config        # inject all ConfigMap keys as env vars
        - secretRef:
            name: app-secrets       # inject all Secret keys as env vars
      volumeMounts:
        - name: config-volume
          mountPath: /app/config    # mount config file into container
  volumes:
    - name: config-volume
      configMap:
        name: app-config
        items:
          - key: appsettings.json
            path: appsettings.json

Section 5 – Kubernetes Advanced Topics

Q29. What is Horizontal Pod Autoscaler (HPA)? Intermediate

HPA automatically scales the number of pods in a Deployment or ReplicaSet based on observed CPU/memory utilization or custom metrics. It ensures your application scales out under load and scales in when load decreases, optimizing resource usage and cost.

# hpa.yaml — scale between 2 and 10 replicas based on CPU
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # scale up when avg CPU > 70%
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
# Check HPA status
kubectl get hpa
kubectl describe hpa my-api-hpa
💡 HPA requires the Metrics Server to be installed in the cluster to collect CPU and memory metrics. For custom metrics (requests per second, queue depth), you need a custom metrics adapter such as the Prometheus Adapter, or an event-driven autoscaler like KEDA.
Q30. What is the difference between Liveness, Readiness, and Startup Probes? Intermediate
Probe | Purpose | Action on Failure
Liveness | Is the container still running correctly? | Restart the container
Readiness | Is the container ready to serve traffic? | Remove the pod from Service endpoints (no restart)
Startup | Has the container finished starting up? | Restart the container (liveness and readiness checks are held off until the startup probe succeeds)
containers:
  - name: api
    image: myrepo/api:1.0

    # Startup probe — give slow-starting apps time to initialize
    startupProbe:
      httpGet:
        path: /health
        port: 8080
      failureThreshold: 30    # 30 * 10s = 5 minutes to start
      periodSeconds: 10

    # Liveness probe — restart if the app deadlocks or crashes
    livenessProbe:
      httpGet:
        path: /health/live
        port: 8080
      initialDelaySeconds: 0
      periodSeconds: 10
      failureThreshold: 3

    # Readiness probe — only send traffic when app is ready
    readinessProbe:
      httpGet:
        path: /health/ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3     # remove from load balancer after 3 failures
Q31. What are Kubernetes Namespaces and why are they used? Beginner

Namespaces provide a mechanism for isolating groups of resources within a single cluster. They are used to divide cluster resources between multiple users, teams, or environments.

# Create namespaces for different environments
kubectl create namespace development
kubectl create namespace staging
kubectl create namespace production

# Deploy to a specific namespace
kubectl apply -f deployment.yaml -n production

# View resources in a namespace
kubectl get pods -n production
kubectl get all -n production

# Set default namespace for your kubectl context
kubectl config set-context --current --namespace=production
💡 A common pattern is to run all environments (dev/staging/prod) in the same cluster but in different namespaces, with Resource Quotas to limit each namespace's resource consumption.
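A minimal ResourceQuota sketch for that pattern (the namespace name and numbers here are illustrative):

```yaml
# quota.yaml — cap total resource consumption in the development namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```

Apply it with `kubectl apply -f quota.yaml`; pods that would push the namespace past these totals are rejected at creation time.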
Q32. What is Helm and why is it used? Intermediate

Helm is the package manager for Kubernetes — often called "the apt/yum of Kubernetes." It bundles Kubernetes YAML files into reusable packages called charts, with templating support for variable substitution. This makes deploying complex applications (and managing differences between environments) much simpler.

# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Add a chart repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Search for charts
helm search repo redis

# Install a chart with custom values
helm install my-redis bitnami/redis \
  --namespace production \
  --set auth.password=secretpassword \
  --set replica.replicaCount=3

# Create your own chart
helm create my-app-chart

# Install your chart
helm install my-app ./my-app-chart \
  --namespace production \
  -f values-production.yaml

# Upgrade
helm upgrade my-app ./my-app-chart -f values-production.yaml

# Roll back
helm rollback my-app 1

# List releases
helm list -n production
Q33. What is a StatefulSet and how is it different from a Deployment? Advanced
Feature | Deployment | StatefulSet
Pod identity | Random names (my-app-abc123) | Stable, ordered names (my-app-0, my-app-1)
Storage | Shared or ephemeral | Each pod gets its own persistent volume
Scaling order | Random | Ordered (0 → 1 → 2)
Use case | Stateless apps (APIs, web servers) | Stateful apps (databases, Kafka, Elasticsearch)
# StatefulSet for a database cluster
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "mysql"
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:             # each pod gets its own PVC
    - metadata:
        name: mysql-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi

Section 6 – Production, Security & Best Practices

Q34. What are Kubernetes Resource Requests and Limits? Intermediate

Resource requests and limits tell Kubernetes how much CPU and memory a container needs and how much it is allowed to use:

  • Requests — the minimum guaranteed resources. The scheduler uses this to decide which node a pod goes on.
  • Limits — the maximum resources a container can use. If it exceeds memory limits, it is OOMKilled. If it exceeds CPU limits, it is throttled.
resources:
  requests:
    memory: "128Mi"   # guaranteed 128MB RAM
    cpu: "100m"       # guaranteed 0.1 CPU core (100 millicores)
  limits:
    memory: "256Mi"   # never use more than 256MB RAM
    cpu: "500m"       # never use more than 0.5 CPU cores
⚠️ Always set resource requests and limits in production. Without them, a single misbehaving pod can consume all node resources and evict other pods — the "noisy neighbor" problem.
Q35. What is a Kubernetes NetworkPolicy? Advanced

By default, all pods in a Kubernetes cluster can communicate with each other. NetworkPolicy lets you restrict which pods can talk to which other pods — implementing a zero-trust network model. Note that NetworkPolicies are enforced by the cluster's CNI plugin; they have no effect if the plugin does not support them (Calico and Cilium do, basic Flannel does not).

# Deny all ingress traffic by default, then allow only what's needed
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-network-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: my-api           # apply to my-api pods
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend    # only allow ingress from frontend pods
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database     # only allow egress to database pods
      ports:
        - protocol: TCP
          port: 5432
Q36. What are Kubernetes best practices for production deployments? Advanced
  • Always use specific image tags — never :latest in production. Use semantic versioned tags like :2.1.0.
  • Run containers as non-root — set runAsNonRoot: true and readOnlyRootFilesystem: true in security context
  • Set resource requests and limits — every container must have them
  • Configure liveness and readiness probes — essential for zero-downtime deployments
  • Use namespaces — separate environments and teams
  • Enable RBAC — principle of least privilege for all service accounts
  • Use secrets management — External Secrets Operator with AWS Secrets Manager or HashiCorp Vault instead of plain K8s Secrets
  • Implement Pod Disruption Budgets — prevent too many pods from being unavailable during maintenance
  • Enable audit logging — track all API server calls for security compliance
  • Use GitOps — ArgoCD or Flux for declarative, git-driven deployments
# Security context best practices
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
    - name: api
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop:
            - ALL
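
One item from the list above, Pod Disruption Budgets, in a minimal sketch (the names are illustrative):

```yaml
# pdb.yaml — keep at least 2 my-api pods running during voluntary disruptions
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-api-pdb
spec:
  minAvailable: 2          # alternatively: maxUnavailable: 1
  selector:
    matchLabels:
      app: my-api
```

With this in place, operations like `kubectl drain` during node maintenance will evict pods only as long as at least 2 replicas stay available.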

⚡ Quick Command Cheat Sheet

Docker Commands

docker build -t app:1.0 .              # Build image
docker run -d -p 8080:80 app:1.0       # Run container
docker ps                              # Running containers
docker logs -f container-name          # Stream logs
docker exec -it container-name bash    # Open shell
docker system prune -af                # Clean everything

Kubernetes Commands

kubectl get pods -n production         # List pods
kubectl describe pod my-pod            # Pod details and events
kubectl logs my-pod -f                 # Stream pod logs
kubectl exec -it my-pod -- bash        # Shell into pod
kubectl apply -f manifest.yaml         # Apply configuration
kubectl delete -f manifest.yaml        # Delete resources
kubectl get events --sort-by=.lastTimestamp  # Cluster events
kubectl top pods                       # CPU/memory usage
kubectl rollout restart deployment/my-app    # Rolling restart
kubectl port-forward pod/my-pod 8080:80      # Local port forward

💼 Interview Tips for Docker & Kubernetes Roles

  • Know the difference between images and containers cold — this is the first question in almost every Docker interview.
  • Be ready to write a Dockerfile from scratch — multi-stage builds, correct layer ordering, and .dockerignore show you go beyond the basics.
  • Understand why pods are ephemeral and how Services abstract away changing pod IPs — this demonstrates you understand K8s' core design philosophy.
  • Know the three probe types — liveness vs readiness is a very common interview trap. Know that readiness failure removes from load balancer, liveness failure restarts the pod.
  • Mention security practices unprompted — running as non-root, not using :latest, secrets management — this signals production maturity.
  • Understand HPA vs manual scaling — being able to explain resource requests/limits and how HPA uses them shows you understand K8s scheduling.
  • Have a story about a real container problem you solved — debugging a crashlooping pod, fixing an OOMKilled container, or troubleshooting networking makes your answers concrete.

❓ Frequently Asked Questions

What is the difference between Docker Swarm and Kubernetes?

Docker Swarm is Docker's own native clustering and orchestration solution — simpler to set up and manage, but with fewer features. Kubernetes is more complex but far more powerful, feature-rich, and is the industry standard for production container orchestration. For new projects and production systems, Kubernetes is the overwhelming choice. Docker Swarm is primarily used by teams that need something simpler and already have Docker expertise.

What is the difference between a DaemonSet and a Deployment?

A Deployment runs a specified number of pod replicas anywhere in the cluster. A DaemonSet ensures that exactly one pod runs on every node (or a selected subset of nodes). DaemonSets are used for node-level infrastructure — log collectors (Fluentd), monitoring agents (Prometheus Node Exporter), or network plugins that must run on every machine.

How do you debug a CrashLoopBackOff pod?

CrashLoopBackOff means the container starts, crashes, and Kubernetes keeps restarting it. Debug steps: (1) kubectl describe pod pod-name to see events and last exit code. (2) kubectl logs pod-name --previous to see logs from the last crashed instance. (3) Check resource limits — OOMKilled (exit code 137) means out of memory. (4) Check the container's startup command and environment variables. (5) Try running the container locally with docker run to reproduce outside K8s.

What is the difference between kubectl apply and kubectl create?

kubectl create creates a resource and fails if it already exists. kubectl apply creates the resource if it doesn't exist, or updates it if it does — it performs a declarative merge. In CI/CD pipelines, always use kubectl apply so the same command works whether you are creating for the first time or updating.

Is Docker required to use Kubernetes?

No — not anymore. Kubernetes uses the Container Runtime Interface (CRI) to work with any compatible runtime. The most common runtime today is containerd (which Docker itself uses under the hood). Docker as a tool is still used to build images, but Kubernetes no longer requires Docker to be installed on nodes. Kubernetes deprecated its built-in Docker support (dockershim) in version 1.20 and removed it entirely in version 1.24.

✅ Key Takeaways

  • Containers share the host OS kernel — they are faster and lighter than VMs but provide process-level (not hardware-level) isolation
  • Docker images are immutable blueprints; containers are running instances — like classes and objects
  • Multi-stage builds are essential for production — they separate build environment from runtime, dramatically reducing image size and attack surface
  • Layer order in Dockerfiles matters — put slow, rarely-changing steps first to maximize cache hits
  • Pods are ephemeral — never hardcode pod IPs. Always use Services for stable network endpoints
  • Deployments manage ReplicaSets and add rolling update and rollback capabilities — never create ReplicaSets directly
  • Readiness probe failure removes a pod from the load balancer without restarting it. Liveness probe failure triggers a container restart
  • Always set resource requests and limits, run containers as non-root, and never use :latest in production
  • Helm makes managing complex Kubernetes applications across environments far simpler through templating and versioned releases

Found this guide useful? Share it with a developer preparing for their next DevOps interview. Have a question not covered here? Drop it in the comments below — we read and respond to every one.

Updated for 2026

Top 50 Python Interview Questions & Answers

From fundamentals to advanced concepts — everything you need to ace your Python interview in 2026.

50 Questions  ·  3 Skill Levels  ·  2026 Edition  ·  ~25 Min Read
Beginner (Q1–Q17)
Intermediate (Q18–Q35)
Advanced (Q36–Q50)
🐍

Beginner Questions

Q1 – Q17
01
What is Python, and what are its key features? Beginner

Python is a high-level, general-purpose, interpreted programming language created by Guido van Rossum and first released in 1991. It emphasizes code readability and simplicity.

Key Features:

  • Easy to learn & read — clean, English-like syntax
  • Interpreted — runs line by line without prior compilation
  • Dynamically typed — no need to declare variable types
  • Multi-paradigm — supports OOP, functional, and procedural styles
  • Extensive standard library — "batteries included" philosophy
  • Cross-platform — runs on Windows, Linux, macOS
  • Large ecosystem — PyPI hosts 500,000+ packages
  • Memory management — automatic garbage collection
💡Python 3.13 (2024) introduced free-threaded mode (no-GIL build), a major step forward for concurrent Python.
02
Is Python interpreted or compiled? Explain. Beginner

Python is both compiled and interpreted. The process has two stages:

  1. Compilation to bytecode: Python source (.py) is first compiled to bytecode (.pyc files in __pycache__).
  2. Interpretation: The Python Virtual Machine (PVM) then interprets the bytecode line by line at runtime.

This is why Python is usually called interpreted — the compilation step is hidden and automatic. Compared to truly compiled languages like C++, Python's interpretation adds overhead, making it slower for CPU-bound tasks but much faster to develop with.
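
You can observe the hidden compile step directly with the standard-library `dis` module, which disassembles a function into the bytecode the PVM will interpret:

```python
import dis

def add(a, b):
    return a + b

# Print the bytecode instructions CPython compiled the function to
dis.dis(add)
```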

💡Tools like PyPy use Just-In-Time (JIT) compilation to run Python code 5–10× faster than CPython for long-running programs.
03
What is the difference between mutable and immutable objects in Python? Beginner
Mutable | Immutable
Can be changed after creation | Cannot be changed after creation
list, dict, set, bytearray | int, float, str, tuple, frozenset, bytes
Same object is modified in-place | Any "change" creates a new object
Python
# Mutable — list changes in place
lst = [1, 2, 3]
print(id(lst))   # e.g. 140234567
lst.append(4)
print(id(lst))   # same id — same object

# Immutable — string creates new object
s = "hello"
print(id(s))    # e.g. 140234999
s += " world"
print(id(s))    # different id — new object
⚠️A common gotcha: mutable default arguments in functions are shared across calls! Always use None as default and create inside the function.
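A short sketch of that gotcha (the function names here are illustrative):

```python
def bad_append(item, bucket=[]):       # default list is created once and shared
    bucket.append(item)
    return bucket

def good_append(item, bucket=None):    # use None as the default instead
    if bucket is None:
        bucket = []                    # fresh list on every call
    bucket.append(item)
    return bucket

print(bad_append(1))     # [1]
print(bad_append(2))     # [1, 2] — surprise: same shared list!
print(good_append(1))    # [1]
print(good_append(2))    # [2]
```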
04
What is the difference between a list, tuple, and set? Beginner
Feature | List | Tuple | Set
Syntax | [1,2,3] | (1,2,3) | {1,2,3}
Ordered | ✅ Yes | ✅ Yes | ❌ No
Mutable | ✅ Yes | ❌ No | ✅ Yes
Duplicates | ✅ Allowed | ✅ Allowed | ❌ Unique only
Indexable | ✅ Yes | ✅ Yes | ❌ No
Use case | General data | Fixed records | Membership tests

Sets support fast O(1) membership testing using hashing, making them ideal for deduplication and intersection/union operations.
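A quick sketch of those set operations:

```python
emails = ["a@x.com", "b@x.com", "a@x.com"]
unique = set(emails)             # deduplication
print(len(unique))               # 2

admins = {"alice", "bob"}
editors = {"bob", "carol"}
print(admins & editors)          # {'bob'} — intersection
print(admins | editors)          # union of both sets
print("alice" in admins)         # True — average O(1) membership test
```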

05
What are Python's built-in data types? Beginner
  • Numeric: int, float, complex
  • Sequence: str, list, tuple, range
  • Mapping: dict
  • Set types: set, frozenset
  • Boolean: bool (True / False)
  • Binary: bytes, bytearray, memoryview
  • None type: NoneType

Python uses duck typing: the type of an object is determined by its behavior (methods it supports), not its declared type. Use type() or isinstance() to check types at runtime.
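For example:

```python
x = 3.14
print(type(x))                        # <class 'float'>
print(isinstance(x, float))           # True
print(isinstance(x, (int, float)))    # True — accepts a tuple of types
print(isinstance(True, int))          # True — bool is a subclass of int
```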

06
What is list comprehension and when should you use it? Beginner

List comprehension provides a concise, readable way to create lists from iterables, often replacing multi-line for loops.

Syntax: [expression for item in iterable if condition]

Python
# Traditional loop
squares = []
for x in range(10):
    if x % 2 == 0:
        squares.append(x ** 2)

# List comprehension — same result
squares = [x**2 for x in range(10) if x % 2 == 0]
# [0, 4, 16, 36, 64]

# Nested comprehension (matrix flatten)
flat = [num for row in matrix for num in row]
💡List comprehensions are generally 30–50% faster than equivalent for loops, but avoid nesting more than 2 levels deep — readability suffers.
07
What are *args and **kwargs? How are they used? Beginner

*args allows a function to accept any number of positional arguments (collected as a tuple). **kwargs allows any number of keyword arguments (collected as a dict).

Python
def demo(*args, **kwargs):
    print(args)    # tuple of positional args
    print(kwargs)  # dict of keyword args

demo(1, 2, 3, name="Alice", age=30)
# (1, 2, 3)
# {'name': 'Alice', 'age': 30}

# Unpacking with * and **
nums = [1, 2, 3]
info = {"sep": "-"}
print(*nums, **info)  # 1-2-3
08
What is a lambda function? When should you use it? Beginner

A lambda is an anonymous, single-expression function defined inline. Syntax: lambda arguments: expression.

Python
# Named function vs lambda
def square(x): return x ** 2
square_l = lambda x: x ** 2

# Common use: sorting with key
people = [("Bob", 25), ("Alice", 30), ("Charlie", 20)]
people.sort(key=lambda p: p[1])
# sorted by age: [('Charlie', 20), ('Bob', 25), ('Alice', 30)]

# With map() and filter()
doubled = list(map(lambda x: x*2, [1,2,3]))  # [2,4,6]

Use lambdas for short, throwaway functions. For anything more complex, prefer a named def for readability.

09
How do you work with Python dictionaries? What are the key methods? Beginner

Dictionaries are key-value stores with O(1) average lookup time. Since Python 3.7+, they preserve insertion order.

Python
d = {"name": "Alice", "age": 30}

d.get("name")           # "Alice" (safe, no KeyError)
d.get("city", "Unknown") # default value
d.keys()                 # dict_keys(['name', 'age'])
d.values()               # dict_values(['Alice', 30])
d.items()                # dict_items([('name','Alice'),...])
d.update({"city": "NY"}) # merge another dict
d.pop("age")             # remove & return value
d.setdefault("x", 0)    # set only if key missing

# Merge (Python 3.9+)
merged = d | {"extra": 1}
10
How does exception handling work in Python? Beginner

Python uses try / except / else / finally blocks for exception handling.

Python
try:
    result = 10 / 0
except ZeroDivisionError as e:
    print(f"Error: {e}")
except (TypeError, ValueError):
    print("Type or value error")
else:
    print("No exception!")   # runs only if no exception
finally:
    print("Always runs")      # cleanup, always executes

# Raising custom exceptions
class InvalidAgeError(ValueError):
    pass

raise InvalidAgeError("Age cannot be negative")
11
What are the four pillars of OOP in Python? Beginner
  • Encapsulation — bundling data and methods inside a class; hiding internal state using private (__attr) and protected (_attr) attributes.
  • Inheritance — a child class inherits attributes and methods from a parent class. Enables code reuse.
  • Polymorphism — different classes can be used interchangeably if they share the same interface (same method name). Python achieves this via duck typing and method overriding.
  • Abstraction — hiding implementation details and exposing only essential interfaces, often via abstract base classes (ABC).
Python
class Animal:
    def __init__(self, name):
        self.name = name          # encapsulation

    def speak(self):              # polymorphism
        raise NotImplementedError

class Dog(Animal):              # inheritance
    def speak(self):
        return f"{self.name} says Woof!"
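The fourth pillar, abstraction, is usually demonstrated with an abstract base class — a minimal sketch:

```python
from abc import ABC, abstractmethod

class Shape(ABC):                  # abstraction: only the interface is exposed
    @abstractmethod
    def area(self):
        ...

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):                # concrete subclasses must implement area()
        return 3.14159 * self.radius ** 2

print(Circle(2).area())   # 12.56636
# Shape() would raise TypeError — abstract classes cannot be instantiated
```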
12
What is inheritance and what types does Python support? Beginner

Python supports: Single, Multiple, Multilevel, Hierarchical, and Hybrid inheritance.

Python
# Single
class Child(Parent): pass

# Multiple
class C(A, B): pass

# Multilevel
class GrandChild(Child): pass

# super() — call parent method
class Dog(Animal):
    def __init__(self, name, breed):
        super().__init__(name)   # calls Animal.__init__
        self.breed = breed
💡Python resolves method order in multiple inheritance using the MRO (Method Resolution Order) — specifically the C3 linearisation algorithm. Check it with ClassName.__mro__.
13
What is the difference between __init__ and __new__? Beginner
__new__ | __init__
Creates the object (allocates memory) | Initializes the object
Static method; takes cls | Instance method; takes self
Called first | Called after __new__ returns the instance
Must return an instance | Must return None

You rarely need to override __new__ except when subclassing immutable types (like int or str) or implementing singletons/metaclasses.
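A classic use of overriding __new__ is a singleton — a minimal sketch:

```python
class Singleton:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)   # allocate only once
        return cls._instance

a = Singleton()
b = Singleton()
print(a is b)   # True — both names point to the same object
```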

14
What are the most important Python string methods? Beginner
Python
s = "  Hello, World!  "

s.strip()           # "Hello, World!" (removes whitespace)
s.lower()           # "  hello, world!  "
s.upper()           # "  HELLO, WORLD!  "
s.replace("World", "Python")
s.split(",")        # ["  Hello", " World!  "]
s.find("World")    # 9 (index), -1 if not found
s.startswith("  H") # True
s.endswith("!  ")  # True
s.count("l")       # 3
" ".join(["a","b"]) # "a b"
s.zfill(20)         # zero-pad
name = "Python"
f"Hello, {name}!"   # f-string (recommended)
15
How do you handle files in Python? Beginner
Python
# Always use context manager (with) — auto-closes file
with open("file.txt", "r", encoding="utf-8") as f:
    content = f.read()       # entire file as string
    lines = f.readlines()    # list of lines

# Write
with open("out.txt", "w") as f:
    f.write("Hello\n")

# Append
with open("out.txt", "a") as f:
    f.write("More text\n")

# Modes: r, w, a, x, rb, wb, r+
16
Explain Python's range() function. Beginner

range(start, stop, step) generates an immutable sequence of integers. It is lazy — it doesn't create all numbers in memory at once, making it memory-efficient even for large ranges.

Python
range(5)          # 0, 1, 2, 3, 4
range(2, 8)       # 2, 3, 4, 5, 6, 7
range(0, 10, 2)  # 0, 2, 4, 6, 8
range(10, 0, -2) # 10, 8, 6, 4, 2

import sys
print(sys.getsizeof(range(1000000))) # 48 bytes — always!
17
What is the difference between global and local scope in Python? Beginner

Python uses the LEGB rule for name resolution: Local → Enclosing → Global → Built-in.

Python
x = "global"

def outer():
    x = "enclosing"
    def inner():
        nonlocal x        # rebind the enclosing x (use "global x" to rebind the module-level x instead)
        x = "modified"
    inner()
    print(x)              # "modified"

# Use global/nonlocal keywords to modify outer variables — a function
# may declare only one of them for a given name, never both
⚙️

Intermediate Questions

Q18 – Q35
18
What are decorators and how do they work? Intermediate

A decorator is a function that takes another function as input, adds functionality, and returns a new function — without modifying the original function's source code. They use the @ syntax sugar.

Python
import functools, time

def timer(func):
    @functools.wraps(func)   # preserves metadata
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.4f}s")
        return result
    return wrapper

@timer
def slow_func():
    time.sleep(0.1)

# Decorator with arguments
def repeat(n):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for _ in range(n):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorator

@repeat(3)
def greet(): print("Hello!")
19
What are generators and how do they differ from regular functions? Intermediate

A generator is a function that uses yield to produce values lazily, one at a time. It maintains its state between calls and does not load all values into memory.

Python
# Generator function
def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        yield a
        a, b = b, a + b

for num in fibonacci(10):
    print(num)  # 0 1 1 2 3 5 8 13 21 34

# Generator expression (lazy list comprehension)
gen = (x**2 for x in range(1000000))
next(gen)  # 0 — computed one at a time

# send() to generator
def accumulator():
    total = 0
    while True:
        value = yield total
        if value is None: break
        total += value
💡Generators are perfect for processing large files or infinite sequences. A generator expression uses ( ) while a list comprehension uses [ ].
20
What is the difference between an iterator and an iterable? Intermediate
Iterable | Iterator
Has __iter__() method | Has both __iter__() and __next__()
Can be looped over | Produces one item at a time
list, str, dict, set, range | generator, file objects, zip, map
Can restart iteration | Exhausted after one pass
Python
lst = [1, 2, 3]        # iterable
it = iter(lst)          # iterator
next(it)                # 1
next(it)                # 2
next(it)                # 3
next(it)                # raises StopIteration
21
What are context managers and the "with" statement? Intermediate

Context managers handle setup and teardown logic automatically using the with statement. They implement __enter__ and __exit__ methods (or use @contextmanager).

Python
# Class-based context manager
class DBConnection:
    def __enter__(self):
        self.conn = connect_db()
        return self.conn

    def __exit__(self, exc_type, exc_val, tb):
        self.conn.close()
        return False  # don't suppress exceptions

# Generator-based (cleaner)
from contextlib import contextmanager

@contextmanager
def db_connection():
    conn = connect_db()
    try:
        yield conn
    finally:
        conn.close()

with db_connection() as conn:
    conn.execute("SELECT * FROM users")
22
What are closures in Python? Intermediate

A closure is an inner function that "remembers" the variables from its enclosing scope, even after the outer function has finished executing.

Python
def make_multiplier(factor):
    def multiply(x):
        return x * factor   # 'factor' is closed over
    return multiply

double = make_multiplier(2)
triple = make_multiplier(3)

double(5)   # 10
triple(5)   # 15

# Check closure variables
print(double.__closure__[0].cell_contents)  # 2

Closures are the mechanism behind decorators. They're also used in factory functions and to implement data encapsulation without classes.
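A counter closure illustrates encapsulation without a class:

```python
def make_counter():
    count = 0
    def increment():
        nonlocal count      # rebind the closed-over variable
        count += 1
        return count
    return increment        # 'count' lives on inside the closure

counter = make_counter()
print(counter())   # 1
print(counter())   # 2 — state persists between calls, but is not
                   #     directly accessible from outside
```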

23
Explain map(), filter(), and reduce() with examples. Intermediate
Python
from functools import reduce

nums = [1, 2, 3, 4, 5]

# map() — apply function to every element
squares = list(map(lambda x: x**2, nums))
# [1, 4, 9, 16, 25]

# filter() — keep elements where function is True
evens = list(filter(lambda x: x % 2 == 0, nums))
# [2, 4]

# reduce() — fold list to single value
product = reduce(lambda acc, x: acc * x, nums)
# 120 (1*2*3*4*5)

# Modern equivalents (often preferred)
squares = [x**2 for x in nums]
evens = [x for x in nums if x % 2 == 0]
24
What is the difference between deep copy and shallow copy? Intermediate
Shallow Copy | Deep Copy
Copies top-level object | Recursively copies all nested objects
Nested objects share references | Nested objects are fully independent
Faster, less memory | Slower, more memory
copy.copy() or list[:] | copy.deepcopy()
Python
import copy

original = [[1, 2], [3, 4]]
shallow  = copy.copy(original)
deep     = copy.deepcopy(original)

original[0].append(99)
print(shallow[0])   # [1, 2, 99] — affected!
print(deep[0])      # [1, 2] — unaffected
25
What are dunder (magic) methods in Python? Intermediate

Dunder (double underscore) methods, also called magic or special methods, allow you to define how objects behave with Python's built-in operations.

Python
class Vector:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __repr__(self):          # repr(v)
        return f"Vector({self.x}, {self.y})"

    def __add__(self, other):    # v1 + v2
        return Vector(self.x + other.x, self.y + other.y)

    def __len__(self):           # len(v)
        return int((self.x**2 + self.y**2)**0.5)

    def __eq__(self, other):     # v1 == v2
        return self.x == other.x and self.y == other.y

    def __bool__(self):          # bool(v)
        return self.x != 0 or self.y != 0
26
What is the difference between @classmethod, @staticmethod, and instance methods? Intermediate
  • Instance method: receives self (the instance) as its first argument; can access both instance and class state; used for normal methods.
  • @classmethod: receives cls (the class) as its first argument; can access class state but no instance; commonly used for alternative constructors.
  • @staticmethod: receives no implicit first argument; can access neither instance nor class state; used for related utility functions.
Python
class Date:
    def __init__(self, y, m, d):
        self.y, self.m, self.d = y, m, d

    @classmethod
    def from_string(cls, s):      # alternative constructor
        y, m, d = map(int, s.split("-"))
        return cls(y, m, d)

    @staticmethod
    def is_valid_year(year):      # utility function
        return 1900 <= year <= 2100

d = Date.from_string("2026-04-01")
27
How does multiple inheritance work in Python? What is MRO? Intermediate

Python uses the C3 linearisation algorithm (also called C3 MRO) to determine the order in which base classes are searched when looking up a method. This avoids the "diamond problem".

Python
class A:
    def who(self): print("A")

class B(A):
    def who(self): print("B")

class C(A):
    def who(self): print("C")

class D(B, C): pass  # Diamond

D().who()              # "B" — MRO: D → B → C → A
print(D.__mro__)
# (D, B, C, A, object)
28
What is the difference between a module and a package in Python? Intermediate
  • Module: A single .py file containing Python code (functions, classes, variables).
  • Package: A directory containing multiple modules, traditionally identified by an __init__.py file (which can be empty). Since Python 3.3, namespace packages (PEP 420) may omit __init__.py. Packages enable hierarchical namespace organization.
Python
# Importing a module
import math
from math import sqrt, pi

# Importing from a package
from mypackage.utils import helper

# Package structure:
# mypackage/
#   __init__.py
#   utils.py
#   models/
#     __init__.py
#     user.py
29
How do you use regular expressions in Python? Intermediate
Python
import re

text = "Contact: alice@example.com or bob@test.org"

# Find first match
match = re.search(r'\b\w+@\w+\.\w+\b', text)

# Find all matches
emails = re.findall(r'[\w.-]+@[\w.-]+\.\w+', text)

# Substitute
cleaned = re.sub(r'\s+', ' ', "hello    world")

# Compile for reuse (faster)
pattern = re.compile(r'\d{4}-\d{2}-\d{2}')
dates = pattern.findall("2026-04-01 and 2025-12-31")

# Named groups
m = re.match(r'(?P<year>\d{4})-(?P<month>\d{2})', "2026-04")
m.group('year')  # '2026'
30
What are virtual environments and why are they important? Intermediate

Virtual environments create isolated Python environments for each project, preventing dependency conflicts between projects that require different package versions.

Bash
# Create virtual environment
python -m venv .venv

# Activate (Linux/macOS)
source .venv/bin/activate

# Activate (Windows)
.venv\Scripts\activate

# Install packages (isolated)
pip install requests pandas

# Freeze dependencies
pip freeze > requirements.txt

# Install from requirements
pip install -r requirements.txt

# Modern alternative: uv (2024+)
uv venv && uv pip install requests
31
Explain Python list slicing in detail. Intermediate

Slicing syntax: lst[start:stop:step]. All parameters are optional and can be negative.

Python
lst = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

lst[2:6]     # [2, 3, 4, 5]
lst[:4]      # [0, 1, 2, 3]
lst[6:]      # [6, 7, 8, 9]
lst[:]        # shallow copy
lst[::-1]    # [9,8,7,...,0] — reversed
lst[1:8:2]   # [1, 3, 5, 7]
lst[-3:]     # [7, 8, 9]
lst[2:-2]    # [2, 3, 4, 5, 6, 7]

# Slice assignment
lst[2:5] = [20, 30]  # replaces elements
32
How does Python manage memory? Intermediate
  • Reference Counting: Every object tracks how many references point to it. When the count hits zero, memory is freed immediately.
  • Cyclic Garbage Collector: Handles circular references (A → B → A) that reference counting can't resolve. Can be triggered with gc.collect().
  • Memory Pools: Python's memory allocator (pymalloc) manages small objects (<512 bytes) using memory pools for efficiency.
  • Interning: Small integers (−5 to 256) and short strings are cached and reused to save memory.
Python
import sys, gc

a = []
print(sys.getrefcount(a))   # reference count
print(sys.getsizeof(a))     # object size in bytes

gc.collect()                # force garbage collection
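The interning bullet can be made concrete with sys.intern, which guarantees that equal strings share a single object (small-integer caching, by contrast, is a CPython implementation detail):

```python
import sys

# sys.intern guarantees one shared object per distinct string value
a = sys.intern("a fairly long runtime-built string")
b = sys.intern("a fairly long runtime-built string")
print(a is b)   # True — both names point at the interned object

# Small integers are cached automatically by CPython
x = 100
y = 100
print(x is y)   # True in CPython (implementation detail, don't rely on it)
```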
33
What is the difference between threading and multiprocessing in Python? Intermediate
  • threading: lightweight threads sharing one memory space; Python bytecode execution is serialized by the GIL; best for I/O-bound work (network, files); low overhead.
  • multiprocessing: full processes with separate memory spaces; bypasses the GIL; best for CPU-bound computation; higher startup and communication overhead.
Python
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

# I/O bound — use threads
with ThreadPoolExecutor(max_workers=4) as ex:
    results = list(ex.map(fetch_url, urls))

# CPU bound — use processes
with ProcessPoolExecutor(max_workers=4) as ex:
    results = list(ex.map(heavy_compute, data))
34
What are dictionary and set comprehensions? Intermediate
Python
words = ["apple", "banana", "cherry", "apple"]

# Set comprehension — unique items
unique = {w for w in words}
# {'apple', 'banana', 'cherry'}

# Dict comprehension
lengths = {w: len(w) for w in words}
# {'apple': 5, 'banana': 6, 'cherry': 6}

# Invert a dict
inv = {v: k for k, v in lengths.items()}

# Conditional dict comprehension
long_words = {w: len(w) for w in words if len(w) > 5}
# {'banana': 6, 'cherry': 6}
35
What is pickling and unpickling in Python? Intermediate

Pickling is the process of serializing a Python object into a byte stream (for storage or transmission). Unpickling is the reverse — deserializing bytes back into an object.

Python
import pickle

data = {"name": "Alice", "scores": [95, 87, 92]}

# Pickle to file
with open("data.pkl", "wb") as f:
    pickle.dump(data, f)

# Unpickle from file
with open("data.pkl", "rb") as f:
    loaded = pickle.load(f)

# In-memory bytes
b = pickle.dumps(data)
original = pickle.loads(b)
⚠️Never unpickle data from untrusted sources — it can execute arbitrary code! Use json for safe data exchange.
🚀

Advanced Questions

Q36 – Q50
36
What is the Global Interpreter Lock (GIL) and what are its implications? Advanced

The GIL is a mutex in CPython that ensures only one thread executes Python bytecode at a time, even on multi-core systems. It simplifies memory management (especially reference counting) but prevents true parallelism in CPU-bound multi-threaded programs.

Implications:

  • CPU-bound code: Multiple threads don't speed things up — use multiprocessing or C extensions instead.
  • I/O-bound code: Threads ARE effective because the GIL is released during I/O waits.
  • PEP 703 (accepted 2023): proposed making the GIL optional in CPython via a free-threaded build.
  • Python 3.13 (Oct 2024): the experimental free-threaded interpreter shipped as a separate build, python3.13t, marking a historic shift.
🔬Alternative Python implementations like Jython and IronPython have no GIL and achieve true multi-thread parallelism; PyPy, however, retains its own GIL.
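The I/O-bound point is easy to see in practice: threads that sleep (a stand-in for network or disk waits) overlap, because the GIL is released during the wait. A minimal sketch:

```python
import threading
import time

def io_task():
    # time.sleep releases the GIL, just like a real network/file wait
    time.sleep(0.5)

start = time.perf_counter()
threads = [threading.Thread(target=io_task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Four 0.5 s waits overlap: total wall time is ~0.5 s, not ~2 s
print(f"elapsed: {elapsed:.2f}s")
```

Replace time.sleep with a CPU-bound loop and the speedup disappears: the threads then contend for the GIL and run effectively one at a time.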
37
What are metaclasses in Python and how do you use them? Advanced

A metaclass is the "class of a class" — it defines how classes themselves are created. In Python, type is the default metaclass. When Python sees a class statement, it calls the metaclass to build the class object.

Python
# Custom metaclass
class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class Database(metaclass=SingletonMeta):
    def __init__(self):
        self.connection = "connected"

db1 = Database()
db2 = Database()
print(db1 is db2)  # True — same instance

# Metaclass for enforcing API
class EnforceMeta(type):
    def __new__(mcs, name, bases, namespace):
        if 'process' not in namespace:
            raise TypeError(f"{name} must define process()")
        return super().__new__(mcs, name, bases, namespace)
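A short usage sketch of an API-enforcing metaclass like the one above (the metaclass is repeated so the snippet runs standalone; Worker and Broken are illustrative names). Note the check fires at class definition time, before any instance exists:

```python
class EnforceMeta(type):
    def __new__(mcs, name, bases, namespace):
        if 'process' not in namespace:
            raise TypeError(f"{name} must define process()")
        return super().__new__(mcs, name, bases, namespace)

class Worker(metaclass=EnforceMeta):      # OK — defines process()
    def process(self):
        return "done"

try:
    class Broken(metaclass=EnforceMeta):  # missing process()
        pass
except TypeError as e:
    print(e)  # Broken must define process()
```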
38
Explain async/await and asyncio in Python. Advanced

Python's asyncio implements cooperative multitasking via an event loop. async def defines a coroutine; await suspends it, allowing other tasks to run. This achieves concurrency without threads, ideal for I/O-bound workloads.

Python
import asyncio

async def fetch_data(url: str) -> str:
    await asyncio.sleep(1)  # simulate I/O
    return f"Data from {url}"

async def main():
    # Run concurrently (not sequentially)
    results = await asyncio.gather(
        fetch_data("url1"),
        fetch_data("url2"),
        fetch_data("url3"),
    )
    # All 3 complete in ~1s, not ~3s
    print(results)

asyncio.run(main())

# Async context managers (valid only inside a coroutine; needs aiohttp):
# async with aiohttp.ClientSession() as session:
#     async with session.get(url) as resp:
#         data = await resp.json()
39
What are Python descriptors and how do they work? Advanced

A descriptor is any object that defines __get__, __set__, or __delete__. When an attribute is a descriptor, Python calls these methods instead of directly accessing the object's __dict__. Properties, functions, and class/static methods are all implemented as descriptors.

Python
class Validated:
    """A descriptor that validates positive numbers."""
    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None: return self
        return obj.__dict__.get(self.name)

    def __set__(self, obj, value):
        if value < 0:
            raise ValueError(f"{self.name} must be positive")
        obj.__dict__[self.name] = value

class Circle:
    radius = Validated()  # descriptor instance

c = Circle()
c.radius = 5    # OK
c.radius = -1   # raises ValueError
40
What are Abstract Base Classes (ABCs) and when do you use them? Advanced

ABCs define a common interface that subclasses must implement. Instantiating a class with unimplemented abstract methods raises TypeError at instantiation time, so contract violations surface early rather than as AttributeErrors deep inside a call stack. Use them to build plugin systems, define APIs, and enable isinstance checks against interfaces.

Python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self) -> float: ...

    @abstractmethod
    def perimeter(self) -> float: ...

    def describe(self):            # concrete method
        return f"Area={self.area():.2f}"

class Circle(Shape):
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r**2
    def perimeter(self): return 2 * 3.14159 * self.r

Shape()  # TypeError: Can't instantiate abstract class
41
Explain Python type hints and static type checking with mypy. Advanced

Type hints (PEP 484+) add optional static type annotations to Python. They don't affect runtime behavior but enable tools like mypy, pyright, and IDE type checkers to catch bugs before execution.

Python
from __future__ import annotations
from typing import Optional, Union, TypeVar, Generic
from collections.abc import Callable, Sequence

def greet(name: str, times: int = 1) -> str:
    return (name + " ") * times

def process(items: list[int] | None) -> dict[str, int]:
    if items is None: return {}
    return {f"item_{i}": v for i, v in enumerate(items)}

T = TypeVar('T')

class Stack(Generic[T]):
    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()
42
What are Python dataclasses and how do they compare to NamedTuples? Advanced

@dataclass auto-generates __init__, __repr__, __eq__ (and optionally __lt__, __hash__, __slots__) from field annotations. Compared to NamedTuple: a NamedTuple is an immutable tuple subclass (supports indexing and unpacking, low memory), while a dataclass is mutable by default, supports frozen=True, default factories, and post-init logic, making it the more flexible choice for behavior-rich records.

Python
from dataclasses import dataclass, field, KW_ONLY

@dataclass(order=True, frozen=True)
class Point:
    x: float
    y: float
    _: KW_ONLY
    label: str = ""

    def distance(self) -> float:
        return (self.x**2 + self.y**2)**0.5

p = Point(3.0, 4.0, label="origin")
p.distance()  # 5.0

# frozen=True makes instances hashable
# order=True adds comparison operators
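For contrast, the same record as a NamedTuple: it behaves like a plain tuple with named fields (PointNT is an illustrative name):

```python
from typing import NamedTuple

class PointNT(NamedTuple):
    x: float
    y: float

p = PointNT(3.0, 4.0)
x, y = p                 # tuple unpacking works
print(p == (3.0, 4.0))   # True — compares equal to a plain tuple
print(p.x)               # 3.0 — fields also accessible by name
```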
43
What are __slots__ and when should you use them? Advanced

By default, Python stores instance attributes in a __dict__ per instance. Defining __slots__ replaces this with fixed-size slot arrays, reducing memory usage by ~40–60% and speeding up attribute access.

Python
class Point:
    __slots__ = ('x', 'y')

    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)
p.z = 3          # AttributeError — no __dict__

import sys
class Normal:
    def __init__(self): self.x = self.y = 0

# sys.getsizeof does not count the separate per-instance __dict__ —
# that dict is exactly what __slots__ eliminates (~100+ bytes per object)
print(sys.getsizeof(Normal().__dict__))  # the dict __slots__ removes
print(sys.getsizeof(Point(0, 0)))        # slotted instance — no __dict__
💡Use __slots__ when you need to create millions of small objects (e.g., nodes in a graph, pixels, events). Avoid if you need dynamic attributes or multiple inheritance gets complex.
44
What is a memoryview and when is it useful? Advanced

A memoryview exposes the internal buffer of a bytes-like object without copying it. This is critical for high-performance binary processing where copying large buffers (images, audio, network packets) would be expensive.

Python
data = bytearray(1_000_000)   # 1 MB buffer

# Without memoryview — slicing copies the bytes each time
chunk = data[1000:2000]   # new 1000-byte bytearray

# With memoryview — zero-copy slice
mv = memoryview(data)
chunk = mv[1000:2000]   # no copy, same buffer
chunk[0] = 42           # modifies original data

# Used heavily in NumPy, Pillow, and zero-copy socket reads, e.g.:
# sock.recv_into(mv[offset:], nbytes)   # fills the buffer in place
45
What are weak references and when do you need them? Advanced

A weak reference doesn't increase an object's reference count. When the object's only remaining references are weak, it can be garbage collected. Useful for caches, observer patterns, and preventing memory leaks in cyclic structures.

Python
import weakref

class Cache:
    def __init__(self):
        self._store = weakref.WeakValueDictionary()

    def set(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)  # None if GC'd

# WeakValueDictionary auto-removes entries
# when values are garbage collected
class SomeLargeObject: ...   # stand-in for an expensive object

cache = Cache()
obj = SomeLargeObject()
cache.set("key", obj)
del obj       # last strong reference gone → GC'd; entry auto-removed
print(cache.get("key"))      # None
46
How do you write C extensions for Python and when should you? Advanced

When Python performance is insufficient for CPU-bound work, you can call native code from Python via several approaches:

  • ctypes: Call shared C libraries directly with zero compilation.
  • cffi: C Foreign Function Interface — cleaner than ctypes.
  • Cython: Write Python-like code compiled to C. Popular in NumPy/SciPy.
  • Python C API: Write full extension modules in C for maximum control.
  • Numba: JIT-compile numerical Python code to LLVM — no C needed.
  • PyO3 (Rust): Write Python extensions in Rust — increasingly popular in 2026.
Python
# ctypes example
import ctypes
lib = ctypes.CDLL("./mylib.so")
lib.add.restype = ctypes.c_int
lib.add.argtypes = [ctypes.c_int, ctypes.c_int]
result = lib.add(3, 4)  # 7

# Numba example (no C needed)
from numba import jit

@jit(nopython=True)
def fast_sum(arr):
    total = 0
    for x in arr:
        total += x
    return total
47
How do you profile and optimize Python code? Advanced

Golden rule: Always measure before optimizing. Use profiling to find the actual bottleneck, not where you guess it is.

Python
# 1. timeit — micro-benchmarking
import timeit
timeit.timeit("[x**2 for x in range(100)]", number=10000)

# 2. cProfile — function-level profiling (run from the shell):
#    python -m cProfile -s cumtime my_script.py

# 3. line_profiler — line-level (@profile is injected by kernprof)
# @profile
# def slow_func(): ...
# kernprof -l -v my_script.py

# 4. memory_profiler — line-by-line memory usage
# from memory_profiler import profile
# @profile
# def mem_func(): ...

# 5. py-spy — sampling profiler (no code changes)
# py-spy record -o profile.svg -- python app.py

Common optimizations: use built-ins (written in C), prefer generators over lists for large datasets, use collections.deque for queue operations, avoid + string concatenation in loops, cache with functools.lru_cache.
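The "prefer generators over lists for large datasets" tip is easy to verify with sys.getsizeof: a generator is a tiny fixed-size object no matter how many items it will yield. A minimal sketch:

```python
import sys

n = 1_000_000

as_list = [x * x for x in range(n)]   # materializes all items up front
as_gen = (x * x for x in range(n))    # lazy — yields items on demand

print(sys.getsizeof(as_list))  # several MB
print(sys.getsizeof(as_gen))   # ~200 bytes, independent of n

# Aggregations work on either, but the generator never holds
# more than one item in memory at a time
print(sum(x * x for x in range(10)))  # 285
```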

48
What design patterns are commonly used in Python? Advanced
Python
# SINGLETON via metaclass (see Q37)

# OBSERVER pattern
class EventEmitter:
    def __init__(self):
        self._handlers: dict = {}

    def on(self, event, fn):
        self._handlers.setdefault(event, []).append(fn)

    def emit(self, event, *args):
        for fn in self._handlers.get(event, []):
            fn(*args)

# FACTORY method
class ShapeFactory:
    _registry = {}

    @classmethod
    def register(cls, name):
        def decorator(klass):
            cls._registry[name] = klass
            return klass
        return decorator

    @classmethod
    def create(cls, name, **kwargs):
        return cls._registry[name](**kwargs)

# Memoization with lru_cache
from functools import lru_cache

@lru_cache(maxsize=128)
def fib(n):
    return n if n < 2 else fib(n-1) + fib(n-2)
49
How do you write effective tests in Python using pytest? Advanced
Python
import pytest
from unittest.mock import Mock, patch

# Fixtures for setup/teardown
@pytest.fixture
def db(tmp_path):
    conn = create_db(tmp_path / "test.db")
    yield conn
    conn.close()

# Parametrize for many inputs
@pytest.mark.parametrize("n,expected", [
    (0, 0), (1, 1), (10, 55), (20, 6765)
])
def test_fib(n, expected):
    assert fib(n) == expected

# Mocking external services
def test_api_call():
    with patch('requests.get') as mock_get:
        mock_get.return_value.json.return_value = {"ok": True}
        result = fetch_users()
        assert result == {"ok": True}

# Test exceptions
def test_invalid():
    with pytest.raises(ValueError, match="positive"):
        Circle(radius=-1)
50
What are the most important new features in Python 3.12 and 3.13? Advanced

Python 3.12 (Oct 2023):

  • PEP 695: New type parameter syntax — type Alias = list[int], def func[T](x: T) -> T
  • PEP 692: TypedDict with **kwargs using Unpack
  • Better error messages: Clearer SyntaxErrors and tracebacks
  • f-strings: Nested f-strings and quotes inside f-strings
  • ~5% performance improvement over 3.11

Python 3.13 (Oct 2024):

  • PEP 703: Free-threaded CPython (no-GIL build) — experimental but available as python3.13t
  • Experimental JIT compiler: Opt-in with --enable-experimental-jit
  • Improved REPL: Multi-line editing, color highlighting
  • PEP 667: locals() now returns a proper mapping
  • Deprecations removed: Many Python 2 legacy APIs cleaned up
Python
# Python 3.12 — new generic syntax (PEP 695)
def first[T](lst: list[T]) -> T:
    return lst[0]

class Stack[T]:
    def push(self, item: T) -> None: ...

type Vector = list[float]  # type alias statement

# Python 3.12 — f-strings improved
name = "world"
print(f"{'hello'!r} {name}")  # quotes inside f-string

Top 50 Python Interview Questions & Answers · 2026 Edition

Covers Beginner · Intermediate · Advanced · Python 3.12/3.13

Good luck with your interview! Keep coding. 🚀