Kubernetes

Kubernetes Endpoints Explained: A Simple Guide

By Jan on 02/02/2025

Learn what a Kubernetes endpoint is, how it connects services to pods, and why it's essential for your containerized applications.

Introduction

In Kubernetes, an Endpoint acts as a bridge between Services and Pods. Think of a Service as a fixed address for your application, and Pods as the actual instances running your code. Endpoints maintain a list of IP addresses and ports corresponding to healthy Pods that belong to a Service. When a Service needs to send traffic to a Pod, it consults its associated Endpoints to determine the correct destinations. This dynamic mapping ensures that even if Pods are created or destroyed (e.g., during scaling or updates), the Service remains accessible. For larger deployments, Kubernetes introduced EndpointSlices, which divide Endpoint information into smaller chunks. This improves scalability and performance by allowing for more efficient updates and distribution of Endpoint data across the cluster.

Step-by-Step Guide

In Kubernetes, an Endpoint acts as a bridge between Services and Pods.

Think of a Service as a fixed address for your application, and Pods as the actual instances running your code.

kubectl get svc
kubectl get pods

Endpoints maintain a list of IP addresses and ports corresponding to healthy Pods that belong to a Service.

kubectl describe svc <your-service-name>

When a Service needs to send traffic to a Pod, it consults its associated Endpoints to determine the correct destinations.

This dynamic mapping ensures that even if Pods are created or destroyed (e.g., during scaling or updates), the Service remains accessible.

kubectl get endpoints
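
For reference, here is a rough sketch of the Endpoints object Kubernetes maintains for a Service. The name my-app-service and the Pod IPs are illustrative placeholders, not values from a real cluster:

apiVersion: v1
kind: Endpoints
metadata:
  name: my-app-service      # an Endpoints object shares its Service's name
subsets:
  - addresses:
      - ip: 10.244.1.12     # IPs of ready Pods (illustrative values)
      - ip: 10.244.2.7
      - ip: 10.244.3.15
    ports:
      - port: 8080          # the Service's targetPort
        protocol: TCP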

For larger deployments, Kubernetes introduced EndpointSlices, which divide Endpoint information into smaller chunks.

This improves scalability and performance by allowing for more efficient updates and distribution of Endpoint data across the cluster.

kubectl get endpointslices
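
An EndpointSlice carries the same kind of information, but a large Service is split across multiple slices (by default, roughly 100 endpoints per slice). A simplified sketch, again with placeholder names and IPs:

apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-app-service-7k2xb                      # generated name (illustrative)
  labels:
    kubernetes.io/service-name: my-app-service    # ties the slice to its Service
addressType: IPv4
endpoints:
  - addresses:
      - 10.244.1.12                               # Pod IP (illustrative)
    conditions:
      ready: true                                 # only ready endpoints receive traffic
ports:
  - port: 8080
    protocol: TCP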

Code Example

The YAML below defines a Kubernetes Deployment and Service. The Deployment creates three Pods running a specified application, each labeled app: my-app. The Service exposes these Pods externally through a load balancer, mapping port 80 on the Service to port 8080 on the Pods. Kubernetes automatically manages the Endpoints, which store the IP addresses and ports of the healthy Pods, so traffic directed to the Service is routed appropriately. This setup keeps the application accessible even as Pods come and go.

The following example YAML configurations illustrate how Services, Pods, and Endpoints work together:

1. Deployment (Creating Pods):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 3 # Create 3 replica Pods
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: <your-app-image>
        ports:
        - containerPort: 8080 # The port your app listens on

2. Service (Exposing the App):

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app # Target Pods with this label
  ports:
    - protocol: TCP
      port: 80 # Service port (external traffic)
      targetPort: 8080 # Pod port (internal traffic)
  type: LoadBalancer # Expose using a Load Balancer (other options: ClusterIP, NodePort)

Explanation:

  • Deployment: This YAML defines a Deployment that creates 3 Pods running your application. Each Pod will have a label app: my-app.
  • Service: This YAML defines a Service named my-app-service.
    • selector: It uses the label app: my-app to target the Pods created by the Deployment.
    • ports: It maps port 80 on the Service to port 8080 on the Pods.
    • type: LoadBalancer: This will provision a Load Balancer (if your cloud provider supports it) to expose the Service externally.

How it Works:

  1. Deployment creates Pods: When you apply the Deployment, Kubernetes schedules and runs 3 Pods of your application.
  2. Service is created: The Service is created and automatically discovers the Pods with the matching label (app: my-app).
  3. Endpoints are populated: Kubernetes creates Endpoints objects that store the IP addresses and ports of the healthy Pods associated with the Service.
  4. Traffic routing: When traffic is sent to the Service's IP address (provided by the Load Balancer), the Service uses the Endpoints information to route the traffic to one of the healthy Pods.

Endpoints and EndpointSlices:

  • You can use kubectl describe svc my-app-service to see the Endpoints associated with the Service.
  • For larger deployments, Kubernetes might use EndpointSlices instead of a single Endpoints object to improve performance. You can view them with kubectl get endpointslices.

Important:

  • Replace <your-app-image> with the actual Docker image of your application.
  • The type: LoadBalancer might require specific configurations based on your cloud provider.

This example demonstrates the basic relationship between Services, Pods, and Endpoints. Kubernetes handles the dynamic mapping and updates behind the scenes, ensuring that your application remains accessible even with Pod changes.

Additional Notes

Core Functionality:

  • Automatic Discovery and Updates: Kubernetes automatically manages Endpoints. You don't usually create or modify them directly. The system watches for changes in Pods (like creation, deletion, health status) and updates the corresponding Endpoints accordingly.
  • Abstraction for Service Discovery: Endpoints provide a level of abstraction. Services don't need to know the exact IP addresses of Pods, making your application more resilient to changes in the underlying infrastructure.
  • Health Checks: Only Pods that pass their health checks (defined in the Pod's specification) are included in the Endpoints list. This ensures that traffic is only directed to healthy instances of your application.
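
To illustrate the health-check point above: a readiness probe in the Deployment's Pod template controls whether a Pod's address appears in the Endpoints list. A minimal sketch that extends the container from the earlier Deployment; the /healthz path is an assumption about what your application exposes:

      containers:
      - name: my-app-container
        image: <your-app-image>
        ports:
        - containerPort: 8080
        readinessProbe:              # the Pod is listed in Endpoints only while this probe succeeds
          httpGet:
            path: /healthz           # assumed health endpoint of your application
            port: 8080
          initialDelaySeconds: 5     # wait before the first check
          periodSeconds: 10          # re-check every 10 seconds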

Beyond the Basics:

  • Session Affinity: While Services typically load balance traffic across Pods, you can configure "session affinity" (sticky sessions) to direct requests from the same client to the same Pod. This can be useful for applications that maintain state in the client's session. A configuration sketch for this and for headless Services follows this list.
  • Headless Services: A special type of Service called a "Headless Service" does not create a load-balancing IP address. Instead, it allows you to directly access the individual Pods backing the Service using their DNS names. This is useful for scenarios where you need more control over service discovery.
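
Both options above are plain Service fields. A sketch using the same app: my-app labels as the earlier example; the Service names are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: my-app-sticky          # hypothetical name
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  sessionAffinity: ClientIP    # requests from the same client IP keep reaching the same Pod
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-headless        # hypothetical name
spec:
  clusterIP: None              # headless: no load-balancing IP; DNS returns the Pod IPs directly
  selector:
    app: my-app
  ports:
    - port: 8080
      targetPort: 8080

With the headless variant, kubectl get endpoints my-app-headless still lists the Pod IPs, but clients resolve them through DNS rather than through a single cluster IP.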

EndpointSlices for Scalability:

  • Addressing Performance Bottlenecks: In large clusters with many Services and Pods, the single Endpoints object could become a performance bottleneck. EndpointSlices address this by dividing the information into smaller, more manageable chunks.
  • Efficient Updates: EndpointSlices allow for more granular updates. When a Pod changes state, only the relevant EndpointSlice needs to be updated, reducing the amount of data that needs to be distributed across the cluster.

Troubleshooting:

  • kubectl describe svc <service-name>: This command is your primary tool for inspecting a Service and its Endpoints. It shows the Service's selector, ports, and the IP:port pairs of the ready Pods currently backing it; kubectl get endpoints <service-name> shows the same address list directly.
  • Check Pod Labels and Selectors: Ensure that your Service's selectors correctly match the labels on your Pods. Mismatches can prevent Endpoints from being populated correctly.
  • Verify Pod Health: If a Pod is not appearing in the Endpoints list, check its health status. It might be failing its health checks or experiencing other issues.

In summary: Endpoints and EndpointSlices are essential components of Kubernetes networking. They provide a dynamic and scalable way to connect Services to Pods, ensuring that your applications remain available and resilient.

Summary

  • Service: A fixed address for your application, providing a stable entry point for accessing Pods. Inspect with kubectl get svc.
  • Pod: An instance of your application running in a container. Inspect with kubectl get pods.
  • Endpoint: A bridge between Services and Pods, maintaining a list of healthy Pod IP addresses and ports. Inspect with kubectl describe svc <your-service-name> or kubectl get endpoints.
  • EndpointSlice: A scalable mechanism for managing Endpoint information in larger deployments, dividing data into smaller chunks for efficiency. Inspect with kubectl get endpointslices.

Kubernetes Endpoints and EndpointSlices play a crucial role in service discovery and load balancing. They dynamically map Services to healthy Pods, ensuring that applications remain accessible even during scaling events or Pod updates. While Endpoints provide the core functionality, EndpointSlices offer improved scalability and performance for large-scale deployments.

Conclusion

In conclusion, Kubernetes Endpoints and EndpointSlices form a critical, yet often unseen, layer in the platform's networking model. They provide the essential link between Services, which offer a stable point of access to applications, and the dynamic world of Pods, where application instances run and change over time. Endpoints maintain the mapping of IP addresses and ports for healthy Pods associated with a Service, ensuring traffic is directed appropriately. For larger deployments, EndpointSlices offer a more scalable and performant way to manage this mapping, dividing the information into smaller chunks for efficient updates and distribution across the cluster. Understanding how Endpoints and EndpointSlices function is key to building robust and scalable applications on Kubernetes.
