Containers, Init Containers & Sidecars

This section dives deep into how Kubernetes handles Containers, Init Containers, and Sidecar Containers — the core building blocks that define how your applications start, run, and interact inside a Pod.

Containers in Kubernetes

What is a Container?

A container is the smallest execution unit in Kubernetes. It’s essentially a lightweight, isolated user-space process that runs an application along with its dependencies, libraries, and runtime, all bundled into a single image. Containers isolate the environment but share the host kernel, using Linux features like namespaces, cgroups, and chroot.

In simple terms:

A Pod is a logical wrapper, but the container is where your code actually runs.

Kubernetes itself doesn’t run containers directly; that’s the job of the container runtime, which implements the Container Runtime Interface (CRI). Common runtimes include:

  • containerd (default in most distros)

  • CRI-O

  • Docker Engine (deprecated as runtime, still used for image building)

How Containers Run in a Pod

When you define a Pod, Kubernetes passes its spec to the kubelet on the assigned node. The kubelet then instructs the container runtime to:

  1. Pull the image from a registry (e.g., Docker Hub, ECR, GCR).

  2. Create a Pod sandbox: a network namespace plus a pause container (the “infra” container).

  3. Start each container in that sandbox.

  4. Attach volumes, configure environment variables, and apply resource limits.

  5. Monitor liveness and readiness probes.

Each Pod has a special container called the pause container, which owns the Pod’s network namespace. All user containers (your app, sidecars, etc.) share this namespace, meaning they share:

  • The same IP address

  • The same network stack

  • Access to the same Pod-level storage volumes (when mounted)

That’s why containers in a Pod can talk to each other via localhost.

Container Lifecycle

Every container has its own lifecycle managed by the kubelet:

  1. Image pulled (if not already present)

  2. Container created (filesystem and cgroups prepared)

  3. Container started

  4. Health probes monitored (readiness/liveness)

  5. Container stopped / restarted as per restartPolicy

  6. Logs collected (stdout/stderr)

  7. Cleanup when Pod dies

If the container crashes, kubelet restarts it unless restartPolicy: Never is specified.

YAML Example - Application Container
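A minimal sketch of a Pod with a single application container, illustrating the key fields described below. The image name, ports, and probe paths are placeholders, not a definitive manifest:

```yaml
# Illustrative Pod spec; image, ports, and probe paths are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  restartPolicy: Always            # Pod-level: Always | OnFailure | Never
  containers:
    - name: web
      image: registry.example.com/web-app:1.2.3
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 8080
      resources:
        requests:                  # used by the scheduler for placement
          cpu: "250m"
          memory: "128Mi"
        limits:                    # enforced on the node via cgroups
          cpu: "500m"
          memory: "256Mi"
      readinessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      livenessProbe:
        tcpSocket:
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 20
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 5"]   # give the LB time to drain
```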

Key fields

  • image — Path to the container image (supports private registries)

  • imagePullPolicy — When the image should be pulled (Always, IfNotPresent, Never)

  • ports — Declares which ports the container exposes

  • resources.requests — Values the scheduler uses to place the Pod

  • resources.limits — Upper bound enforced via cgroups

  • livenessProbe / readinessProbe — HTTP/TCP/Exec health checks

  • lifecycle — Hooks that run commands after start or before stop

  • restartPolicy (Pod-level, not per-container) — Defines restart behavior (Always, OnFailure, Never)

Container Networking

All containers within a Pod share the same:

  • Network namespace

  • IP address

  • loopback interface

Example: If your Pod runs two containers (web on port 8080 and metrics on port 9090), web can reach the metrics endpoint at localhost:9090, and metrics can reach web at localhost:8080, because both containers share the network namespace held by the pause container.

Storage

All containers in a Pod can mount shared volumes (e.g., emptyDir, hostPath, configMap, persistentVolumeClaim), which lets them share logs, configuration files, or temporary data.
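A minimal sketch of two containers sharing an emptyDir volume; container names and commands are illustrative:

```yaml
# Illustrative: writer creates a file that reader can see via the shared volume.
apiVersion: v1
kind: Pod
metadata:
  name: shared-data-demo
spec:
  volumes:
    - name: scratch
      emptyDir: {}
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /data
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]   # can read /data/msg written by writer
      volumeMounts:
        - name: scratch
          mountPath: /data
```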

Init Containers

What is an Init Container?

An Init Container is a special-purpose container that runs before any main application container starts in a Pod. It performs initialization logic or one-time setup tasks, ensuring the environment is ready.

They differ from app containers in two key ways:

  1. They always run sequentially (each must finish before the next starts).

  2. They must succeed (exit code 0) for the Pod to progress.

Once all init containers finish successfully, they never run again.

Why They Exist?

In complex systems, you often need pre-start operations such as:

  • Waiting for dependent services (like DB or API).

  • Injecting or generating configuration files.

  • Setting permissions on volumes.

  • Performing database migrations.

  • Downloading assets or binaries.

  • Validating environment variables or secrets.

Instead of baking these steps into your main app container, you isolate them into separate init containers, keeping the main image clean and focused.

Technical Behavior

  1. Init containers run in the same network namespace as the main containers and can mount the same Pod volumes.

  2. They execute in strict order, as defined in YAML.

  3. They block Pod readiness until all succeed.

  4. If any fails, kubelet restarts it based on restartPolicy.

  5. Once finished, their file system layers are discarded (ephemeral).

YAML Example - Init Container
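A sketch matching the setup described in this section; the myapp image and the db hostname are illustrative assumptions:

```yaml
# Illustrative: two init containers run in order before main-app starts.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  volumes:
    - name: data
      emptyDir: {}
  initContainers:
    - name: init-check-db
      image: busybox:1.36
      # Block until the MySQL service answers at db:3306.
      command: ["sh", "-c", "until nc -z db 3306; do echo waiting for db; sleep 2; done"]
    - name: init-permissions
      image: busybox:1.36
      # Adjust permissions on the shared volume for later use.
      command: ["sh", "-c", "chmod 0777 /data"]
      volumeMounts:
        - name: data
          mountPath: /data
  containers:
    - name: main-app
      image: myapp:1.0           # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
```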

Explanation

  • init-check-db: waits until the MySQL service is reachable at db:3306.

  • init-permissions: adjusts directory permissions for later use.

  • Both run before the main-app starts.

  • Shared emptyDir volume ensures both containers see the same /data path.

Important Technical Details

  • Init containers can run with elevated privileges if needed (via their own securityContext), independent of the main containers’ restrictions.

  • They can use different base images (like busybox, curl, or alpine).

  • If a Pod has multiple init containers, they run serially, not in parallel.

  • The kubelet tracks progress in .status.initContainerStatuses[].

You can check status with kubectl by describing the Pod or inspecting .status.initContainerStatuses[]; the Events section shows each init container being pulled, started, and completing in order.
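Assuming the Pod is named app-with-init (a hypothetical name), the init container state can be inspected like this:

```shell
# Per-init-container state (Waiting / Running / Terminated):
kubectl get pod app-with-init \
  -o jsonpath='{.status.initContainerStatuses[*].state}'

# Human-readable summary plus events for each init step:
kubectl describe pod app-with-init
```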

Sidecar Containers

What is a Sidecar Container?

A Sidecar Container is a secondary container that runs alongside your main container within the same Pod. Unlike init containers, sidecars run concurrently from Pod startup to shutdown.

They extend or enhance the main application’s behavior, not replace it.

The “sidecar pattern” is borrowed from software architecture: a helper component attached to the main application to provide supporting capabilities.

Common Sidecar Use Cases

  • Logging — Ship logs to Fluentd/Elasticsearch

  • Proxying — Reverse proxy (Envoy, Istio) for service mesh

  • Metrics — Expose Prometheus metrics for scraping

  • Syncing — Sync data to external storage

  • Security — Token refreshers, credential injectors

  • Monitoring — Collect traces and telemetry

How They Work

  • Run in parallel with the main app.

  • Share the same Pod IP, network, and storage volumes.

  • Communicate via localhost or shared files.

  • Terminate when the Pod terminates.

  • If one crashes, the kubelet restarts it individually.

Because of the shared volume, your sidecar can read logs, configs, or data written by the main app.

YAML Example - Sidecar Container
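A sketch of the logging sidecar described in this section. Image versions are assumptions, and the forwarder here simply tails the log to stdout; a real setup would ship it to a collector such as Fluentd:

```yaml
# Illustrative: NGINX writes logs that the sidecar reads via a shared volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logging
spec:
  volumes:
    - name: logs
      emptyDir: {}
  containers:
    - name: web-server
      image: nginx:1.25
      ports:
        - containerPort: 80
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-forwarder
      image: busybox:1.36
      # Continuously tail the access log written by web-server.
      command: ["sh", "-c", "tail -n+1 -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
```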

Explanation

  • web-server runs NGINX, writing access logs to /var/log/nginx/access.log.

  • log-forwarder (sidecar) continuously tails the log and streams it to a remote log collector.

  • Both share the logs volume, so data is instantly available between them.

This pattern decouples your app logic (serving traffic) from infrastructure concerns (log collection).

Real-World Sidecar Examples

  • Istio/Envoy Proxy: intercepts all inbound/outbound Pod traffic for observability and control.

  • Fluent Bit / Fluentd: collects and forwards logs from local files.

  • Vault Agent Injector: fetches secrets from HashiCorp Vault and injects them into shared volumes.

  • Prometheus Node Exporter: runs alongside apps to expose metrics.

Design Considerations

  1. Synchronization: ensure the main container doesn’t depend on sidecar readiness unless explicitly required.

  2. Termination: when a Pod shuts down, Kubernetes sends SIGTERM to all containers; the sidecar should handle it gracefully.

  3. Resource isolation: assign explicit CPU/memory limits to sidecars to avoid resource contention.

  4. Security: drop unnecessary Linux capabilities, and avoid privileged mode unless required.
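Points 2–4 above can be sketched as a Pod spec fragment; the fluent-bit image and all values are illustrative:

```yaml
# Fragment of a Pod spec (not a complete manifest); values are illustrative.
terminationGracePeriodSeconds: 30     # time allowed to exit after SIGTERM
containers:
  - name: log-forwarder
    image: fluent/fluent-bit:2.2
    resources:                        # explicit limits avoid contention with the app
      requests: { cpu: "50m", memory: "64Mi" }
      limits:   { cpu: "100m", memory: "128Mi" }
    securityContext:                  # least privilege for the sidecar
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
```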

Kubernetes’ strength comes from this composability: separating setup, execution, and auxiliary logic into dedicated containers. Once you understand this trio deeply, you can architect Pods that do far more than just “run an image” — you can build intelligent, self-healing, observable workloads.

Last updated