Containers and Orchestration (Docker & K8s)
Published on January 13, 2026
In the world of modern software development, one of the most important decisions you must make is how you deploy and run your applications. It’s not just a technical matter; it’s a decision that affects scalability, reliability, portability, and your team’s ability to develop and deploy quickly.
Docker and Kubernetes have become essential tools for developing and deploying modern applications. In this article I’ll share why they’re so important and how to apply them in practice.
The fundamental problem
Before containers, deploying an application was a painful process:
- “Works on my machine”: Different environments (development, staging, production) had slightly different configurations
- Conflicting dependencies: One application needs Python 3.8, another needs Python 3.11
- Manual configuration: Configuring servers was a manual, error-prone process
- Difficult scalability: Adding more instances required configuring servers manually
- Complicated rollbacks: Reverting a problematic deployment was difficult and risky
Containers solve these problems by packaging an application with all its dependencies in an image that runs the same way in any environment.
What are containers?
A container is a software unit that packages code and all its dependencies so the application runs quickly and reliably in any environment. Think of a container as a box that contains everything your application needs: code, runtime, libraries, environment variables, configuration files.
Containers vs. Virtual Machines
Containers are lighter than virtual machines:
- Virtual Machines: Include a full operating system (guest OS), making them heavy (GBs)
- Containers: Share the host operating system kernel, making them light (MBs)
This difference makes containers much more efficient in terms of resources and startup time.
Docker: The container standard
Docker is the most popular platform for creating and running containers. It allows you to create, deploy, and run applications using containers.
Key Docker concepts
Image: A read-only template that defines how to create a container. Like a class in object-oriented programming.
Container: An executable instance of an image. Like an object created from a class.
Dockerfile: A text file containing instructions to build an image.
Docker Compose: A tool for defining and running multi-container applications.
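To make the image/container distinction concrete, here's a minimal sketch using the public nginx:alpine image (any image works the same way): one image can back any number of running containers.
# Download an image (the read-only template)
docker pull nginx:alpine
# Start two containers (instances) from the same image
docker run -d --name web-1 nginx:alpine
docker run -d --name web-2 nginx:alpine
# One image, two containers
docker images nginx
docker ps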
Dockerfile: Creating lightweight, secure images
A well-written Dockerfile is crucial for creating efficient, secure images.
Basic example
# Use a lightweight base image
FROM node:18-alpine
# Set working directory
WORKDIR /app
# Copy dependency files first (to leverage Docker cache)
COPY package*.json ./
# Install dependencies
RUN npm ci --only=production
# Copy the rest of the code
COPY . .
# Expose port
EXPOSE 3000
# Run the application
CMD ["node", "server.js"]
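Assuming this file is saved as Dockerfile next to your package.json, building and running it looks roughly like this (the my-app tag is just an illustrative name):
# Build the image from the Dockerfile in the current directory
docker build -t my-app .
# Run a container and map container port 3000 to the host
docker run -d -p 3000:3000 --name my-app my-app
# Follow the application logs
docker logs -f my-app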
Best practices for lightweight images
1. Use small base images
# ❌ Bad: Large image
FROM ubuntu:latest
# ✅ Good: Alpine image (much smaller)
FROM node:18-alpine
Alpine Linux is a minimal distribution that significantly reduces image size.
2. Multi-stage builds
For compiled applications, use multi-stage builds to reduce final size:
# Stage 1: Build
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package*.json ./
EXPOSE 3000
CMD ["node", "dist/server.js"]
The final container only contains the files needed to run, not the build tools.
3. Instruction order to leverage cache
# ✅ Good: Dependencies first (change less frequently)
COPY package*.json ./
RUN npm ci
# Code after (changes more frequently)
COPY . .
Docker caches each layer. If package.json doesn’t change, Docker reuses the cached layer.
4. Don’t run as root
# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
adduser -S nodejs -u 1001
# Switch to non-root user
USER nodejs
Running containers as root is a security risk.
5. Use .dockerignore
Create a .dockerignore to exclude unnecessary files:
node_modules
.git
.env
*.log
dist
Docker Compose: Multi-container applications
Docker Compose allows you to define and run applications with multiple containers.
Example: Application with database
version: '3.8'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/mydb
    depends_on:
      - db
    volumes:
      - ./logs:/app/logs
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data
volumes:
  postgres_data:
With a simple docker-compose up, you have your entire application running.
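In practice the lifecycle looks like this; recent Docker versions ship Compose as a plugin (docker compose), but the standalone docker-compose binary accepts the same commands:
# Start all services in the background
docker compose up -d
# Follow the logs of the app service
docker compose logs -f app
# Stop and remove the containers (add -v to also delete the named volumes)
docker compose down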
Practical use cases
Docker completely transforms how you develop and deploy applications. Here are examples of how it’s used in different scenarios:
Node.js application with multi-stage build
For a Node.js application, you can use multi-stage builds to create optimized images:
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:18-alpine
WORKDIR /app
RUN addgroup -g 1001 -S nodejs && \
adduser -S nodejs -u 1001
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
USER nodejs
EXPOSE 3000
CMD ["node", "dist/server.js"]
Benefits:
- Local development: Each developer has the same environment
- CI/CD: Tests run in containers identical to production
- Deployment: The same image is used in staging and production
Multi-service application with Docker Compose
For applications with multiple services (API, database, cache), Docker Compose simplifies orchestration:
version: '3.8'
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://postgres:pass@db:5432/mydb
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
volumes:
  postgres_data:
  redis_data:
Go application with minimal image
For Go applications, you can create extremely lightweight images:
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /app/service ./cmd/service
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/service .
CMD ["./service"]
The result can be an image of only ~15MB, extremely lightweight and fast to deploy.
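To verify the size yourself, build and list the image; the my-go-service tag is an arbitrary name for this example:
# Build and check the final image size
docker build -t my-go-service .
docker images my-go-service
Because the binary is statically compiled (CGO_ENABLED=0), you could even experiment with a scratch base image instead of Alpine, at the cost of losing a shell and CA certificates unless you copy them in.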
Monitoring stack
For monitoring and observability systems, Docker allows you to isolate each tool:
- Monitoring: Containers for Prometheus, Grafana, AlertManager
- Logging: Elasticsearch, Logstash, Kibana (ELK stack)
- Isolation: Each tool in its own container
This allows adding or removing tools without affecting others, and each has its dependencies isolated.
General benefits
Docker standardizes the deployment process. You no longer have to worry about:
- “Does it work in production?” - If it works in the container, it works in production
- “How do I install dependencies?” - They’re in the image
- “What version of Node/Python/Go do I need?” - It’s specified in the Dockerfile
Kubernetes: Orchestration at scale
While Docker solves the problem of packaging applications, Kubernetes solves the problem of orchestrating multiple containers across multiple servers.
What is Kubernetes?
Kubernetes (K8s) is a container orchestration platform that automates the deployment, scaling, and management of containerized applications.
Key Kubernetes concepts
Pod: The smallest unit in Kubernetes. A pod can contain one or more containers that share resources.
Deployment: Defines the desired state of your application (how many replicas, what image, etc.).
Service: Exposes a set of pods as a network service.
Namespace: A “virtual cluster” within a physical cluster, useful for separating environments.
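If you have a cluster available (minikube, kind, or a managed service), these kubectl commands map one-to-one onto the concepts above; <pod-name> is a placeholder:
# List the basic building blocks
kubectl get pods
kubectl get deployments
kubectl get services
# Inspect a single pod: events, container status, probe results
kubectl describe pod <pod-name>
# The same queries scoped to a namespace
kubectl get pods -n staging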
Basic example: Deployment and Service
Deployment (deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:latest
          ports:
            - containerPort: 3000
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: url
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
Service (service.yaml):
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
With these files, Kubernetes:
- Creates 3 replicas of your application
- Distributes traffic among them
- Restarts failed pods
- Scales automatically based on load (once you add a HorizontalPodAutoscaler, covered below)
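One way to apply and verify these manifests, assuming you saved them as deployment.yaml and service.yaml:
# Create or update the resources
kubectl apply -f deployment.yaml -f service.yaml
# Watch the three replicas come up
kubectl get pods -l app=my-app
# Wait for the rollout to finish
kubectl rollout status deployment/my-app
# Find the external IP assigned to the LoadBalancer service
kubectl get service my-app-service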
Auto-scaling: When your app goes viral
One of Kubernetes’ most powerful features is the Horizontal Pod Autoscaler (HPA), which automatically scales the number of pods based on metrics.
HPA example:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
This means:
- Minimum: 2 replicas always running
- Maximum: 10 replicas if load is high
- Scale when: CPU > 70% or memory > 80%
If your application goes viral and traffic increases, Kubernetes automatically:
- Detects the load increase
- Creates new pods
- Distributes traffic among all pods
- When load decreases, reduces the number of pods
All without manual intervention.
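A minimal sketch of working with the HPA, assuming the manifest above is saved as hpa.yaml and that metrics-server is installed in the cluster (resource-based autoscaling depends on it):
# Apply the autoscaler and watch it react
kubectl apply -f hpa.yaml
kubectl get hpa my-app-hpa --watch
# Current CPU and memory usage per pod (also provided by metrics-server)
kubectl top pods -l app=my-app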
Use cases with Kubernetes
Kubernetes is especially valuable for applications that need high availability, scalability, and complex orchestration.
High-availability application
For critical applications, Kubernetes provides:
- High availability: Multiple replicas ensure the service is available even if a pod fails
- Auto-scaling: During peak hours, Kubernetes scales automatically
- Zero-downtime rollouts: Update the application without interrupting the service (see the sketch below)
Real benefits:
- Before: A down server meant downtime
- Now: If a pod fails, Kubernetes restarts it automatically, and traffic is redirected to other pods
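As a sketch of what a zero-downtime rollout (and rollback) looks like with the my-app Deployment from earlier, assuming a hypothetical v2 image tag:
# Roll out a new version; Kubernetes replaces pods gradually
kubectl set image deployment/my-app my-app=my-registry/my-app:v2
kubectl rollout status deployment/my-app
# If the new version misbehaves, revert to the previous revision
kubectl rollout undo deployment/my-app
kubectl rollout history deployment/my-app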
Microservices architecture
For applications with multiple microservices, each runs in its own deployment:
# API Gateway
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway
spec:
  replicas: 2
  # ...
---
# Authentication Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-service
spec:
  replicas: 3
  # ...
---
# Notification Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: notification-service
spec:
  replicas: 2
  # ...
Each service scales independently according to its needs.
Applications with security requirements
For applications that require security and compliance, Kubernetes provides:
- Isolation: Each component in its own namespace
- Secrets management: Credentials managed securely
- Network policies: Control which services can communicate
- Audit: Logs of all changes and access
Observability stack
For monitoring systems, Kubernetes orchestrates the entire stack:
- Prometheus: Collects metrics from all services
- Grafana: Visualizes metrics
- AlertManager: Sends alerts when something fails
- Loki: Aggregates logs from all services
All running on Kubernetes, scaling automatically based on load.
Best practices
1. Small, secure images
- Use small base images (Alpine)
- Multi-stage builds to reduce size
- Don’t run as root
- Scan images for vulnerabilities (see the sketch after this list)
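Two common scanners, shown here as examples rather than a required choice: Trivy (a standalone open-source scanner) and Docker Scout (bundled with recent Docker Desktop releases). Both take the image tag used earlier:
# Scan a local image for known vulnerabilities
trivy image my-app:latest
docker scout cves my-app:latest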
2. Resources and limits
Always define requests and limits in Kubernetes:
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
This helps Kubernetes to:
- Decide where to place pods
- Prevent a pod from consuming all resources
- Scale correctly
3. Health checks
Define liveness and readiness probes:
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 5
- Liveness: Kubernetes restarts the pod if it fails
- Readiness: Kubernetes doesn’t send traffic until the pod is ready
4. Secrets and ConfigMaps
Never hardcode credentials. Use Secrets and ConfigMaps:
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  password: <base64-encoded-password>
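The value under data must be base64-encoded (note: encoded, not encrypted). You can produce it yourself, or let kubectl do the encoding; 'my-db-password' is a placeholder:
# Encode the value for the data field
echo -n 'my-db-password' | base64
# Or create the Secret directly from a literal (kubectl encodes it for you)
kubectl create secret generic db-secret --from-literal=password='my-db-password'
# Verify (the value stays base64-encoded in the output)
kubectl get secret db-secret -o yaml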
5. Namespaces
Organize resources using namespaces:
# Development
kubectl create namespace dev
# Staging
kubectl create namespace staging
# Production
kubectl create namespace prod
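The same manifests can then be deployed per environment; here, staging is just the namespace created above:
# Deploy into a specific namespace
kubectl apply -f deployment.yaml -n staging
# Or make a namespace the default for your current kubectl context
kubectl config set-context --current --namespace=staging
kubectl get pods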
6. Monitoring and logging
Implement monitoring from the start:
- Metrics: Prometheus
- Logs: Centralized (ELK, Loki)
- Tracing: Distributed tracing (Jaeger, Zipkin)
When to use Docker vs. Kubernetes
Use Docker only when:
- Simple application: Single application, no need for complex orchestration
- Local development: Docker Compose is enough
- Small scale: Few servers, manual management is acceptable
Use Kubernetes when:
- Multiple services: You need to orchestrate many services
- High availability: You need your application to always be available
- Auto-scaling: You need to scale automatically based on load
- Multiple environments: Development, staging, production
- Large team: Multiple developers deploying frequently
The future: Docker and Kubernetes
Docker and Kubernetes continue to evolve:
- Wasm containers: WebAssembly support in containers
- eBPF: Better observability and security at the kernel level
- GitOps: Infrastructure management through Git
- Serverless on K8s: Knative, OpenFaaS
My personal perspective
After using Docker and Kubernetes in practically all my projects, I’ve reached a clear conclusion: containers and orchestration are not optional for serious applications.
I’ve seen projects without containers that struggle with:
- “Works on my machine but not in production”
- Manual deployments prone to errors
- Difficulty scaling when traffic increases
- Long recovery time when something fails
And I’ve seen projects with Docker and Kubernetes that:
- Deploy consistently in any environment
- Scale automatically when needed
- Recover automatically from failures
- Allow developers to focus on code, not infrastructure
Docker and Kubernetes are essential tools for building and operating software at scale. They’re not just technical tools; they’re enablers that transform how you develop, deploy, and operate applications.
The learning curve can be steep, especially with Kubernetes, but the investment is worth it. Once you understand the concepts, you can:
- Deploy applications with confidence
- Scale without worrying about infrastructure
- Recover automatically from failures
- Focus on building features, not managing servers
At the end of the day, what matters is that your applications are reliable, scalable, and maintainable. Docker and Kubernetes are tools that make this possible, and in my experience, they’re essential for any application you plan to operate seriously.
If your application has the potential to grow, if you need high availability, if you want to deploy with confidence, then Docker and Kubernetes aren’t optional. They’re fundamental.