Container
A container is a lightweight, isolated execution environment that packages an application with all its dependencies while sharing the host OS kernel. Containers start faster and consume far fewer resources than virtual machines.
Explanation
Containers use two Linux kernel features to create isolated environments that share the host kernel: namespaces (isolating processes, networking, and the filesystem) and cgroups (limiting CPU and memory).

Unlike virtual machines, containers don't run a separate OS; they run directly on the host kernel. This makes them much lighter: a container starts in milliseconds, adds megabytes of overhead, and a single server can run tens or hundreds of them. A VM starts in minutes and adds gigabytes of overhead. The trade-off: VMs provide stronger isolation (each VM has its own kernel, so an escape is harder), while containers share the host kernel (a kernel vulnerability could affect all containers on the host). VMs are appropriate for running untrusted code or when OS-level isolation is required; containers are appropriate for trusted application deployments.

Container runtimes: Docker is the developer-friendly interface. Under the hood, Docker uses containerd (the actual container runtime), which in turn uses runc (the low-level runtime that invokes the Linux kernel features). Kubernetes orchestrates containers at scale, scheduling them across multiple servers and managing service discovery, scaling, and health checks.

Container registries store and distribute images: Docker Hub, Amazon ECR, Google Container Registry, GitHub Container Registry. Images are referenced as registry/image:tag (e.g., node:20-alpine is the official Node.js 20 Alpine image from Docker Hub).
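To make the packaging model concrete, here is a minimal Dockerfile sketch for a Node.js service built on the node:20-alpine image mentioned above. The application details (server.js, port 3000, an npm-based install) are illustrative assumptions, not taken from the original.

```dockerfile
# Build on the official Node.js 20 Alpine image from Docker Hub
FROM node:20-alpine

WORKDIR /app

# Copy the dependency manifest first so this layer is cached
# between builds when only application code changes
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source into the image
COPY . .

# Document the port the app listens on (assumed: 3000)
EXPOSE 3000

# Start command is an assumption for this sketch
CMD ["node", "server.js"]
```

Building this with `docker build -t myapp:1.0.0 .` produces an image that runs identically wherever the tag is pulled, which is the environment-parity property containers are valued for.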
Code Example
```bash
# Container vs VM: key differences in practice

# Virtual machine:
# - Full OS in each VM (its own Windows/Linux kernel plus libraries)
# - Startup: 30 seconds to 5 minutes
# - Size: 5-20 GB per VM
# - Resource overhead: significant (a full OS per VM)
# - Isolation: strong (separate kernel)

# Container:
# - Shares the host OS kernel
# - Startup: milliseconds to seconds
# - Size: typically 10 MB to 500 MB
# - Resource overhead: minimal
# - Isolation: namespace/cgroup-based (same kernel)

# Container lifecycle (Docker)
docker pull node:20-alpine                          # download image from registry
docker build -t myapp:1.0.0 .                       # build image from Dockerfile
docker run -d --name myapp -p 3000:3000 myapp:1.0.0 # run container in background
docker ps                                           # list running containers
docker logs myapp                                   # view the container's logs
docker exec -it myapp sh                            # get a shell inside running container
docker stop myapp                                   # stop container gracefully
docker rm myapp                                     # remove stopped container

# In production: Kubernetes (K8s) orchestrates containers
# kubectl get pods                             # list running pods (container groups)
# kubectl scale deployment myapp --replicas=5  # scale to 5 instances
```
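The kubectl commands above act on a Deployment object. A minimal manifest might look like the following sketch; the app name, image reference, port, and health-check path are illustrative assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                    # "kubectl scale ... --replicas=5" rewrites this value
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          # referenced as registry/image:tag (registry host is an assumption)
          image: registry.example.com/myapp:1.0.0
          ports:
            - containerPort: 3000
          livenessProbe:         # health check: kubelet restarts the container on failure
            httpGet:
              path: /healthz     # assumed health endpoint
              port: 3000
```

Applying this with `kubectl apply -f deployment.yaml` asks Kubernetes to keep three identical containers running, reschedule them if a node fails, and restart any that fail the health check, which is the scheduling, scaling, and health-checking behavior described above.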
Why It Matters for Engineers
Containers are the unit of deployment in modern cloud infrastructure. Kubernetes, AWS ECS, Google Cloud Run, and Azure Container Apps all work with containers. Understanding containers means understanding how your code is packaged and run in production — not just on your local machine. This is the foundation of DevOps knowledge for any backend developer. The container model also solves real engineering problems: environment parity (containers run identically in dev, staging, and production), dependency management (no more 'your Node.js version is different'), and horizontal scaling (spin up 10 identical containers for peak traffic).