ClusterIP, NodePort, LoadBalancer

This section breaks down the ClusterIP, NodePort, and LoadBalancer Service types: what they are, when and why you should use each of them, and how they actually handle traffic inside a Kubernetes cluster.

Networking in Kubernetes looks simple until you realize that the cluster is constantly changing: Pods die and get rescheduled, IPs rotate, and workloads scale up and down on demand. You cannot depend on Pod IPs; they are not stable.

That is why Services exist. A Service gives your application a reliable identity inside a world where Pods are temporary and unpredictable.

There are three fundamental types you'll use most often:

  • ClusterIP → internal communication

  • NodePort → node-level external access

  • LoadBalancer → cloud-managed public access

Let’s break each one down properly: what it is, when to use it, and how it actually works under the hood.

ClusterIP

What is ClusterIP?

ClusterIP is the default Service type in Kubernetes. It provides a virtual IP address (VIP) that works only inside the cluster. This VIP is stable and does not change even if backend Pods are deleted or rescheduled.

ClusterIP acts like an internal load balancer for Pods.

Example: Your frontend Pod needs to talk to a backend API, and you don’t want to track backend Pod IPs. ClusterIP solves this elegantly.

When / Why should you use ClusterIP?

Use ClusterIP when:

  • You want Pods to communicate within the cluster

  • You want stable DNS names instead of Pod IPs

  • You want internal load balancing between multiple Pods

  • You are building microservice architectures

  • You don’t need external access

This is what 90% of microservices use.

Why it's important:

  • Pod IPs change constantly → you need something stable.

  • Kubernetes networking must be predictable → ClusterIP provides that.

  • It simplifies service discovery → CoreDNS resolves it automatically.

ClusterIP is the foundation; all other Service types build on top of it.

How does ClusterIP work?

  1. When a ClusterIP Service is created, Kubernetes allocates an IP (e.g., 10.96.0.5).

  2. CoreDNS creates a DNS record for it: <service-name>.<namespace>.svc.cluster.local

  3. kube-proxy programs iptables/IPVS rules on every node that map the VIP to the current set of backend Pod IPs.

  4. Traffic sent to the ClusterIP is load-balanced across backend Pods.

There is no central load balancer; every node independently knows how to route the traffic.
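
You can see each Service’s stable VIP in the CLUSTER-IP column:

```sh
kubectl get services
```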

How to use ClusterIP?
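
A minimal manifest matching the explanation below (the name backend-service is an assumption; the app=backend selector comes from the example):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service   # forms the DNS entry backend-service.<namespace>.svc.cluster.local
spec:
  type: ClusterIP          # internal-only virtual IP (also the default type)
  selector:
    app: backend           # routes traffic to Pods labeled app=backend
  ports:
    - port: 80             # the port exposed by the Service
      targetPort: 8080     # the port the container actually listens on
```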

Explanation

  • apiVersion / kind → We’re creating a Service

  • metadata.name → Service name; forms DNS entry

  • type: ClusterIP → Internal-only virtual IP

  • selector → Which Pods this service routes traffic to (app=backend)

  • port: 80 → The port exposed by the Service

  • targetPort: 8080 → The port on the Pods where the actual container listens

Pods communicate with it using the Service’s DNS name. For example (assuming the backend-service name above, in the default namespace):
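
```sh
# Resolved by CoreDNS to the stable ClusterIP
curl http://backend-service.default.svc.cluster.local
# From a Pod in the same namespace, the short name also works:
curl http://backend-service
```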

NodePort

What is NodePort?

NodePort exposes a Service on every Node’s IP at a specific port from the range 30000–32767.

For example, a Service assigned NodePort 30200 is reachable at <NodeIP>:30200 on every node in the cluster.

NodePort still creates a ClusterIP internally, but also opens a port directly on the node.

When / Why should you use NodePort?

Use NodePort when:

  • You want external access but do not have a cloud load balancer

  • You are running Kubernetes on bare metal

  • You need to test or debug services locally

  • You are operating a minimal lab or on-premises cluster

NodePort is basically the poor man’s load balancer: simple but limited.

Why NodePort can be problematic in production:

  • Exposes every node to traffic → potential security risk

  • Static port range is limited

  • No health checks or smart routing

  • Traffic might hit nodes that have no running Pods → kube-proxy forwards anyway

How does NodePort work?

  1. Kubernetes allocates a NodePort (e.g., 30200)

  2. kube-proxy opens that port on every node

  3. Any traffic to any node’s <NodeIP>:<NodePort> is forwarded to the Service’s ClusterIP

  4. ClusterIP rules forward traffic to backend Pods

  5. Load balancing happens at the kube-proxy layer

Traffic flow:
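
Client → <NodeIP>:<nodePort> → kube-proxy → ClusterIP:<port> → Pod:<targetPort>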

How to use NodePort?
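
A minimal sketch (the name is an assumption; the ports and nodePort match the explanation below):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-nodeport   # assumed name
spec:
  type: NodePort           # expose the Service externally on every node
  selector:
    app: backend           # assumed label, matching the ClusterIP example
  ports:
    - port: 80             # internal Service (ClusterIP) port
      targetPort: 8080     # container port
      nodePort: 32080      # external port opened on every node
```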

Explanation

  • type: NodePort → expose the service externally on each node

  • port: 80 → the internal Service port (ClusterIP)

  • targetPort: 8080 → container port

  • nodePort: 32080 → external port exposed on every node

Access from outside the cluster:
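
```sh
# Use any node's IP; 32080 is the nodePort from the manifest above
curl http://<NodeIP>:32080
```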

LoadBalancer

What is a LoadBalancer Service?

A LoadBalancer Service exposes your application to the internet using the cloud provider’s load balancer. This is the standard way to publicly expose applications on EKS, GKE, AKS, DigitalOcean, Linode, etc.

When you create it, Kubernetes automatically provisions:

  • A cloud load balancer (AWS ELB/NLB, GCP GLB, Azure LB)

  • A public IP address

  • Firewall rules

  • Health checks

LoadBalancer = ClusterIP + NodePort + Cloud Load Balancer

When / Why should you use LoadBalancer?

Use a LoadBalancer when:

  • You want internet-facing endpoints

  • You need production-grade external access

  • You rely on cloud-managed infrastructure

  • You want automatic failover

  • You need SSL termination, DDoS protection, security groups, etc.

Why it matters:

  • It offloads complexity to the cloud provider

  • It integrates seamlessly with autoscaling

  • It is stable, automated, and highly available

LoadBalancer is the recommended production pattern for external services.

How does LoadBalancer work?

Internally:

  1. Kubernetes creates a ClusterIP

  2. Kubernetes opens a NodePort

  3. The Cloud Controller Manager provisions a cloud load balancer

  4. The cloud load balancer health-checks nodes

  5. Requests flow through:
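
Client → Cloud Load Balancer → <NodeIP>:<NodePort> → ClusterIP → Pod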

Important Detail — Source IP

  • externalTrafficPolicy: Local → preserves the real client IP

  • externalTrafficPolicy: Cluster (the default) → rewrites the source IP

This affects:

  • Logging

  • Security rules

  • Rate limiting

How to use LoadBalancer?
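
A minimal sketch (the name and selector are assumptions carried over from the earlier examples):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-lb               # assumed name
spec:
  type: LoadBalancer             # asks the cloud provider to create an external LB
  externalTrafficPolicy: Local   # preserve the real client IP
  selector:
    app: backend                 # assumed label
  ports:
    - port: 80                   # public port exposed via the LB
      targetPort: 8080           # container port
```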

Explanation

  • type: LoadBalancer → tells cloud provider to create an external LB

  • port: 80 → public port exposed via LB

  • targetPort: 8080 → container port

  • externalTrafficPolicy: Local → preserve the real client IP

Behind the scenes, the cluster and cloud provider assign:

  • A public IP

  • A DNS name

  • NodePorts under the hood

Once you understand what each Service type actually does, why it exists, and how traffic flows, Kubernetes networking suddenly makes a lot more sense.
