Using Ingress Controllers to Serve Traffic Outside a Kubernetes Cluster

If you’re crazy like me and are running a Kubernetes cluster in your home, then chances are you also have other computers running other services. In the past, I would have my storage server run Nginx for any internal (meaning LAN-only) services and then use an ingress controller like Traefik or Nginx to serve traffic from inside my cluster to the internet. Recently, I thought it might be better to also run my internal reverse proxy on something highly available should (God forbid) my storage server go down. So off to the internet I went looking. Turns out it’s actually surprisingly easy to accomplish this.

Setting Up Separated Ingress Controllers for Internal & External Traffic

First, update your existing chart/manifests to make clear that your current ingress controller will handle external traffic only: change its ingressClassName, controller name, and anything else you feel necessary. Second, go through your existing Ingresses and make sure they have the correct ingressClassName or kubernetes.io/ingress.class annotation (which is deprecated at the time of writing) so that they all still work correctly.
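
For example, an Ingress that should stay on the external controller would point at that class explicitly. A minimal sketch follows; the nginx-external class name, hostname, and backend service are placeholders, not my actual setup:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: blog
      namespace: blog
    spec:
      ingressClassName: nginx-external   # matches the IngressClass of the external controller
      rules:
        - host: blog.thrailkill.cloud
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: blog
                    port:
                      number: 80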

Setting Up Internal Traffic inside Kubernetes

Now we have to create another instance of an ingress controller for the internal services I want to run on my cluster. Very simple: duplicate your Helm chart or manifests, make the same changes as in the previous step, adjust values like ingressClass, controller.name, etc. so traffic routes to the new internal ingress controller, then add the new LoadBalancer IP to your local DNS. I suggest making a subdomain for all your internal traffic. For example, all my internal services are on local.thrailkill.cloud instead of the base domain. Totally up to you.
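
If you happen to be using the ingress-nginx Helm chart, the values for the second, internal release might look roughly like this. It's only a sketch: the nginx-internal class name, controller value, and election ID are placeholders, and other controllers (Traefik, HAProxy, etc.) have their own equivalent settings:

    # values for a second ingress-nginx release, e.g. "ingress-nginx-internal"
    controller:
      ingressClass: nginx-internal
      ingressClassResource:
        name: nginx-internal
        controllerValue: "k8s.io/ingress-nginx-internal"
      electionID: ingress-nginx-internal-leader   # keep leader election separate from the external instance
      service:
        type: LoadBalancer   # gets its own LAN IP, which goes into your local DNS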

Setting Up Internal Traffic outside Kubernetes

Finally – the reason for the article! The secret here is a Kubernetes resource called Endpoints and how it relates to a Service. A Service does not link straight to Pods. When a Service's selector matches a Pod's labels, Kubernetes automatically creates an Endpoints object with the same name as the Service, which stores the Pod's IP address and port. We can create our own Endpoints object, in combination with a headless service, to forward traffic outside our cluster. So for example:

    apiVersion: v1
    kind: Endpoints
    metadata:
      name: my-service        # must match the Service name below
      namespace: my-namespace
    subsets:
      - addresses:
          - ip: 10.0.0.100    # IP of the machine outside the cluster
        ports:
          - port: 8080        # port the external service listens on

Then, to create a headless service, set clusterIP to None. Leave out the selector so Kubernetes uses the Endpoints object we created by hand instead of generating its own.

NOTE: The endpoint name and the service name must be the same.

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service        # must match the Endpoints name above
      namespace: my-namespace
    spec:
      clusterIP: None         # headless: DNS resolves straight to the endpoint IP
      # no selector: Kubernetes leaves our hand-made Endpoints alone
      ports:
      - port: 8080
        protocol: TCP
        targetPort: 8080
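
The last piece is an Ingress on the internal class that routes a hostname to that headless service. Here's a minimal sketch, assuming an internal class called nginx-internal and a made-up hostname on my internal subdomain:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-service
      namespace: my-namespace
    spec:
      ingressClassName: nginx-internal
      rules:
        - host: my-service.local.thrailkill.cloud
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-service   # the headless service defined above
                    port:
                      number: 8080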

That should be it! Update your local DNS to point a URL to your internal ingress controller and it should work now!

That was too easy…

Yeah, unfortunately nothing is ever easy. Some things like Proxmox or GitLab want to handle SSL termination on their end instead of serving HTTP traffic and letting a reverse proxy terminate SSL for them. In that case, you will have to allow SSL passthrough, and that should get you where you want to be.
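
With ingress-nginx, for example, passthrough has to be switched on for the controller and then requested per Ingress with an annotation. This is only a sketch (the names, hostname, and port are placeholders), and other controllers like Traefik handle passthrough differently:

    # 1) In the internal controller's Helm values (separate file): turn the feature on
    controller:
      extraArgs:
        enable-ssl-passthrough: "true"

    # 2) On the Ingress for the service that does its own TLS (e.g. Proxmox)
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: proxmox
      namespace: proxmox
      annotations:
        nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    spec:
      ingressClassName: nginx-internal
      rules:
        - host: proxmox.local.thrailkill.cloud
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: proxmox
                    port:
                      number: 8006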

Also, Endpoints are in the process of being replaced by EndpointSlices. I haven’t quite figured out yet how to use that resource, but when I do I’ll update this guide.

Let me know if you have any questions or want to share your setup!

This post is licensed under CC BY 4.0 by the author.