Cluster, Node & Namespaces
What is a Kubernetes Cluster?
A Kubernetes Cluster is the complete system that runs and manages your containerized applications. It’s the sum of everything: control plane components, worker nodes, networking, storage, and all the workloads running on top of them.
You can think of a cluster as a data center abstraction layer. Instead of worrying about individual servers, operating systems, or where a container runs, you just tell Kubernetes what you want (for example, “run three replicas of my web app”), and the cluster figures out where and how to make that happen.
Technical Breakdown
A Kubernetes cluster consists of two planes:
Control Plane (Master Components)
This is the brain of Kubernetes.
It makes all decisions about the cluster: scheduling, scaling, and monitoring.
It runs components like:
kube-apiserver: The API gateway and entry point.
etcd: Key-value database storing cluster state.
kube-scheduler: Decides which node a Pod runs on.
controller-manager: Watches for changes and maintains desired state.
cloud-controller-manager: Integrates with cloud provider APIs (AWS, GCP, etc.).
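On clusters where the control plane itself runs as Pods (kubeadm-based setups, minikube, kind), you can see these components listed in the kube-system namespace; managed services such as EKS or GKE may hide them from you:

```bash
# Control plane and system components, where they are exposed as Pods
kubectl get pods -n kube-system
```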
Worker Nodes (Execution Layer)
These are the machines (VMs or physical servers) where your containers actually run.
Each node runs:
kubelet: Node agent that communicates with the control plane.
kube-proxy: Handles Service networking and routing.
Container Runtime: Runs containers (e.g., containerd, CRI-O).
Together, the control plane and worker nodes form the cluster: a self-healing, distributed control system.
Real-World Analogy
Think of a Kubernetes cluster like an airport:
The Control Tower = Control Plane → decides when planes take off or land (scheduling Pods).
The Runways & Terminals = Worker Nodes → actual physical infrastructure where planes (containers) operate.
The Pilots and Planes = Containers → executing the actual tasks (applications).
The tower doesn’t fly planes itself; it just coordinates everything. That’s exactly how the Kubernetes control plane interacts with nodes in a cluster.
Practical Cluster Example
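To list the nodes that make up your cluster (the node names and versions in the sample output below are illustrative):

```bash
kubectl get nodes
```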
Typical output:
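```
NAME            STATUS   ROLES           AGE   VERSION
control-plane   Ready    control-plane   12d   v1.29.2
worker-1        Ready    <none>          12d   v1.29.2
worker-2        Ready    <none>          12d   v1.29.2
```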
To get details:
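```bash
# Cluster-level view of the API server and core services
kubectl cluster-info

# Adds internal/external IPs, OS image, and container runtime per node
kubectl get nodes -o wide
```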
What is a Node in Kubernetes?
A Node is a worker machine in the Kubernetes cluster: the place where your Pods actually run.
You can think of it as one logical unit of compute power within the cluster. A node can be a virtual machine (on cloud providers like AWS or GCP) or a physical server (on-premise).
Each node runs essential components that make it part of the cluster:
kubelet - talks to the control plane and ensures containers are healthy and running.
kube-proxy - manages Pod networking and Service communication.
Container Runtime - runs containers (containerd, CRI-O, etc.).
Technical Role of a Node
When the scheduler places a Pod, it evaluates node resources like CPU, memory, taints, and affinity rules, and picks the most suitable one. Once assigned:
The kubelet receives the PodSpec and creates the containers locally.
The runtime pulls container images and starts them.
The kube-proxy sets up networking rules for traffic.
The node reports Pod status back to the control plane.
If a node fails, the control plane automatically reschedules the Pods on another available node (if possible).
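A quick way to see this scheduling in action is the wide Pod listing, which adds a NODE column showing where each Pod landed (run it against whatever workloads you already have):

```bash
# The NODE column shows which node the scheduler picked for each Pod
kubectl get pods -o wide
```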
Each node advertises its capacity (CPU, memory, storage) and allocatable resources. You can check this by:
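```bash
# <node-name> is a placeholder; pick one from "kubectl get nodes"
kubectl describe node <node-name>
```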
You’ll see details like:
Node labels (kubernetes.io/hostname, topology.kubernetes.io/zone)
Resource usage
Allocatable CPU/memory
Running Pods
Taints and conditions (e.g., Ready, DiskPressure, MemoryPressure)
Real-World Analogy
Think of nodes as servers in a data center. Each one runs part of your application: maybe one node runs your API Pods, another runs your background workers. If one server goes offline, Kubernetes simply shifts the load elsewhere.
What is a Namespace in Kubernetes?
A Namespace in Kubernetes is a virtual cluster within a physical cluster: a logical way to divide and isolate resources. It provides multi-tenancy, access control, and organizational boundaries.
If you imagine a Kubernetes cluster as a city, then namespaces are like neighborhoods: each one can have its own houses (Pods), schools (Services), and rules (RBAC policies), but they all share the same infrastructure underneath.
Why Namespaces Exist
Kubernetes can scale to thousands of objects (Pods, Deployments, ConfigMaps, etc.). Namespaces help manage this complexity by providing logical separation.
Namespaces are useful for:
Multi-team environments (dev, staging, prod)
Access control isolation (RBAC policies applied per namespace)
Resource quotas (limit CPU/memory per team; see the sketch below)
Scoped visibility (teams only see their workloads)
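As a minimal sketch of the resource-quota point above (the namespace name and limits are purely illustrative), a ResourceQuota object caps what Pods in one namespace can request:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota          # illustrative name
  namespace: dev-team-a     # illustrative namespace
spec:
  hard:
    requests.cpu: "4"       # total CPU requests allowed in the namespace
    requests.memory: 8Gi    # total memory requests allowed
    limits.cpu: "8"
    limits.memory: 16Gi
```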
Predefined Namespaces
When you create a cluster, you get a few by default:
default - Where resources go if no namespace is specified.
kube-system - Internal Kubernetes components (scheduler, controller-manager, CoreDNS, etc.).
kube-public - Publicly readable objects (cluster info, bootstrap data).
kube-node-lease - Tracks node heartbeats for faster failure detection.
You can create your own namespaces for separation.
YAML Example - Creating a Namespace
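A minimal manifest might look like this (the name dev-team-a is just an example; save it as namespace.yaml):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev-team-a
  labels:
    team: a    # optional label, useful for selectors and policies
```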
Apply it:
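```bash
kubectl apply -f namespace.yaml
```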
Then use it:
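```bash
# Point the current kubectl context at the new namespace
kubectl config set-context --current --namespace=dev-team-a
```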
All subsequent operations will happen inside this namespace context.
Namespace Commands
kubectl get ns - List all namespaces
kubectl get all -n <namespace> - Get all resources within a namespace
kubectl create namespace <name> - Create a namespace manually
kubectl delete namespace <name> - Delete a namespace (and everything inside it)
Real-World Example
In a large organization:
Team A deploys to the dev-team-a namespace.
Team B deploys to the prod-team-b namespace.
Each team can have unique RBAC roles, quotas, and network policies.
Even though both share the same cluster, their workloads are isolated, just like tenants sharing an apartment building but having separate rooms and locks.
Resource Scoping
Most Kubernetes objects (Pods, Services, Deployments, ConfigMaps, Secrets) are namespace-scoped, while others (like Nodes, PersistentVolumes, and ClusterRoles) are cluster-scoped.
This separation allows cluster admins to enforce governance and boundaries effectively.
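You can ask the API server itself which resource types fall on each side of that boundary:

```bash
kubectl api-resources --namespaced=true    # Pods, Deployments, ConfigMaps, Secrets, ...
kubectl api-resources --namespaced=false   # Nodes, PersistentVolumes, ClusterRoles, ...
```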
Key Takeaway
Namespaces bring logical multi-tenancy to Kubernetes. They help organize workloads, manage access, and prevent conflicts, letting multiple teams or environments share one cluster without stepping on each other.