DaemonSet

This section explains DaemonSets, a powerful Kubernetes controller used to ensure that specific Pods run on every node in the cluster.

What is a DaemonSet?

A DaemonSet is a special type of Kubernetes controller that ensures a specific Pod runs on every (or selected) Node in your cluster.

While Deployments or ReplicaSets create Pods based on a desired count (e.g., 3 replicas), a DaemonSet’s purpose is placement-based, not count-based. It guarantees that each eligible node runs exactly one copy of a specific Pod.

The Core Idea

In Kubernetes, not all workloads are user-facing applications; some are infrastructure-level agents that must run on every node to provide cluster-wide functionality.

For example:

  • Log collectors like Fluentd or Fluent Bit (to gather node logs)

  • Monitoring agents like Prometheus Node Exporter

  • CNI (Container Network Interface) plugins for networking

  • Security or compliance agents like Falco or Trivy

  • Storage daemons like Ceph or OpenEBS

Running these as Deployments would mean manually scaling and rebalancing Pods whenever nodes join or leave the cluster. A DaemonSet automates all of that.

In Simple Terms

A DaemonSet ensures that each Node runs one instance of a particular Pod, automatically handling node additions or removals.

When new nodes join the cluster, the DaemonSet automatically schedules Pods on them. When nodes are deleted, their DaemonSet Pods are cleaned up as well; no manual intervention is required.

How a DaemonSet Works Internally

  1. The DaemonSet Controller, part of the controller-manager in the control plane, continuously watches all nodes and the DaemonSet’s spec.

  2. When it detects:

    • A node that doesn’t have the DaemonSet’s Pod → it creates one.

    • A node that has the Pod but shouldn’t (e.g., due to label selector change) → it deletes it.

  3. The controller ensures convergence: the desired state always matches the actual state.

DaemonSets use Pod templates just like Deployments do, but the key difference is scheduling. The DaemonSet controller, not a replica count, decides which nodes need a Pod, based on labels and other criteria.

In older Kubernetes versions the controller bound Pods directly to nodes, bypassing the regular scheduler. Since v1.12, the controller instead injects node affinity that pins each Pod to its target node, and the default scheduler places it, unless a different scheduler is explicitly configured.
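For illustration, on modern clusters (v1.12+) the node affinity the controller injects into each Pod looks roughly like this (the node name here is hypothetical):

```yaml
# Injected per Pod by the DaemonSet controller (illustrative sketch)
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchFields:
            - key: metadata.name   # pins the Pod to exactly one node
              operator: In
              values:
                - worker-node-1    # hypothetical node name
```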

DaemonSet - YAML Example

Here’s a simple but realistic example of a log collector DaemonSet running Fluent Bit on every node.
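A minimal sketch of such a manifest; the name, image tag, and ServiceAccount are illustrative, and a real deployment would also mount the Fluent Bit configuration:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: kube-system
  labels:
    app: fluent-bit
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      serviceAccountName: fluent-bit   # assumes a ServiceAccount with the needed RBAC exists
      tolerations:
        # Also run on tainted control plane nodes
        # (older clusters use the node-role.kubernetes.io/master taint)
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:2.2.0   # pin a specific version in real use
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log   # the node's log directory, mounted into the Pod
```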


Explanation (Field by Field)

  • apiVersion: apps/v1 → DaemonSets belong to the apps API group.

  • kind: DaemonSet → The resource type.

  • metadata.name → A unique name for the DaemonSet.

  • namespace → Infra agents commonly run in kube-system.

  • spec.selector → Selects the Pods that belong to this DaemonSet.

  • template → The Pod definition template, same as in Deployments.

  • tolerations → Allow scheduling Pods on tainted nodes (e.g., control plane nodes).

  • hostPath volumes → Mount host directories into Pods for system-level access.

  • resources → CPU/memory guarantees per node.

  • serviceAccountName → Provides the necessary RBAC permissions.

Behavior

When you create this DaemonSet:

  • Kubernetes spawns one Pod per node (including control plane nodes if the tolerations allow it).

  • If a new node joins, the controller automatically deploys the same Pod there.

  • If a node is drained or removed, its DaemonSet Pod is deleted automatically.

Check the DaemonSet

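Assuming the DaemonSet is named fluent-bit and deployed in kube-system (both illustrative names):

```
kubectl get daemonset fluent-bit -n kube-system
```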
Output:

  • DESIRED → Number of nodes that should be running the Pod.

  • CURRENT → Number of Pods currently running.

  • READY → Pods ready to serve.

  • UP-TO-DATE → Updated Pods after a spec change.

To see where they’re running:
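The -o wide flag adds a NODE column showing which node each Pod landed on; the app=fluent-bit label is illustrative:

```
kubectl get pods -n kube-system -l app=fluent-bit -o wide
```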

Scaling Behavior

DaemonSets don’t scale by replica count; they scale with the number of nodes. When you add or remove nodes, the DaemonSet automatically adjusts the Pod count.

If you want to control which nodes the DaemonSet runs on, use:

  • NodeSelector

  • Node Affinity

  • Taints and Tolerations

Example: run only on nodes labeled env=prod:
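Within the DaemonSet spec, this is a nodeSelector in the Pod template:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        env: prod   # Pods are scheduled only on nodes carrying this label
```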

Update Strategies

DaemonSets support controlled rollouts, similar to Deployments.

  • RollingUpdate → The default; updates Pods on nodes one by one when the template changes.

  • OnDelete → New Pods are created only when you manually delete the old ones.

For large clusters, use RollingUpdate to minimize downtime during upgrades.
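The strategy lives under spec.updateStrategy; for example, a RollingUpdate that replaces at most one node’s Pod at a time:

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # update Pods on at most one node at a time
```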

Real-World Examples of DaemonSets

  • Logging → Fluentd, Fluent Bit, Vector

  • Monitoring → Prometheus Node Exporter, Datadog Agent

  • Security → Falco, Sysdig, Trivy node agent

  • Networking → Calico, Cilium, Flannel

  • Storage → Ceph, OpenEBS, Longhorn daemons

Each of these needs a presence on every node to monitor, collect, or manage resources at the node level.

Security Considerations

Since DaemonSets often access host-level directories (/var/log, /var/lib/docker, /proc), you must:

  • Run them with the least privileges necessary.

  • Avoid privileged: true unless it is truly required.

  • Use Pod Security Standards (or the deprecated PodSecurityPolicy) to restrict access.

  • Mount hostPath volumes readOnly wherever possible.
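As a sketch, a hardened container spec for a log agent might look like this (the container name and mounts are illustrative):

```yaml
containers:
  - name: log-agent
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]   # drop all Linux capabilities the agent doesn't need
    volumeMounts:
      - name: varlog
        mountPath: /var/log
        readOnly: true   # read-only hostPath mount
volumes:
  - name: varlog
    hostPath:
      path: /var/log
```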

DaemonSets are powerful but have cluster-wide reach, which makes a misconfigured one a favorite target for attackers attempting privilege escalation.

Under the Hood (Advanced)

  • DaemonSet Pods are created by the DaemonSet controller but, on modern clusters, placed by the default scheduler via injected node affinity, unless .spec.template.spec.schedulerName is overridden.

  • When a node is marked Unschedulable, DaemonSet Pods are still allowed (they bypass normal scheduling rules).

  • They can run even on control plane nodes with proper tolerations.

  • Events related to DaemonSets are logged in the controller-manager logs.

A DaemonSet is the backbone of cluster-wide operations: it ensures consistency and visibility across every node. From log collectors to monitoring daemons, every serious Kubernetes deployment relies on DaemonSets under the hood.

They represent Kubernetes’ philosophy perfectly:

“Don’t manage nodes manually; define the desired state, and let the system enforce it automatically.”
