NGINX as Ingress controller

After we install applications into the cluster, we usually want some of them to be reachable from outside of it. For that purpose, Kubernetes provides the concept of a Service and gives us control over how a Service is published through different Service types. In a previous post, I showed what it takes to expose a Service using a load balancer.


A load balancer requires a pool of IP addresses it can bind to. Internally, a cluster may have multiple IPs assigned, but things get complicated in a home environment where you have only a single external IP. The following diagram depicts a possible home deployment of a cluster with multiple internal IPs assigned to the load balancer.

A home network deployment

From the client's perspective, only a single external IP is seen. Based on policies defined in the router, a request is redirected to either the first or the second internal IP. Let's assume for a moment that the router has the IP address 87.106.200.118 and the load balancer has a pool of two IP addresses, 192.168.111.1 and 192.168.111.2, which are bound to Services. We can define routing rules so that requests for 87.106.200.118 on port 80 are forwarded to 192.168.111.1 on port 80, and requests for 87.106.200.118 on port 8080 are forwarded to 192.168.111.2 on port 80. Such an approach is cumbersome, as it requires the client to know the port upfront. What if we would like to host two services on the same port, e.g. 80? This is something Ingress can help us with.
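For illustration, this is roughly what one entry in such a pre-Ingress setup looks like: each application gets its own LoadBalancer Service that claims an IP from the pool (the name, selector, and ports below are hypothetical).

```yaml
# Hypothetical Service exposed through the load balancer pool.
# With a bare-metal load balancer such as MetalLB, each Service of
# type LoadBalancer claims its own external IP (e.g. 192.168.111.1).
apiVersion: v1
kind: Service
metadata:
  name: first-app
spec:
  type: LoadBalancer
  selector:
    app: first-app
  ports:
  - port: 80          # port exposed on the external IP
    targetPort: 8080  # port the Pods listen on
```

A second application would need a second Service of the same shape, consuming another IP from the pool.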

What is Ingress?

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.

The following diagram depicts how Ingress can be used for routing.

Routing using Ingress

From the client's perspective, nothing changes: a single IP address is seen. The router has a single rule that forwards all traffic on ports 80 and 443 to the load balancer. The load balancer has a single internal IP address in its pool, and that address is used by Ingress. Now, based on the Ingress routing rules, requests for foo.example.com are routed to the first service, and requests for bar.example.com are routed to the other one. So instead of juggling IPs and ports, we can route based on hostname.
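A minimal sketch of such host-based routing could look like this (the hostnames match the diagram; the Service names are hypothetical):

```yaml
# Sketch of an Ingress routing two hostnames to two backend Services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-routing
spec:
  ingressClassName: nginx
  rules:
  - host: foo.example.com       # requests for this host...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo-service   # ...go to the first Service
            port:
              number: 80
  - host: bar.example.com       # requests for this host...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: bar-service   # ...go to the second Service
            port:
              number: 80
```

Both hosts resolve to the same IP; NGINX inspects the Host header and picks the backend accordingly.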

But creating just an Ingress resource has no effect: you must have an Ingress controller deployed in the cluster!

NGINX

For the Ingress resource to work, the cluster must have an ingress controller running. Unlike other types of controllers which run as part of the kube-controller-manager binary, Ingress controllers are not started automatically with a cluster. Kubernetes as a project supports and maintains AWS, GCE, and ingress-nginx ingress controllers.

We will use ingress-nginx as the Ingress controller for Kubernetes. It uses NGINX as a reverse proxy and a load balancer.


The most convenient way of installing ingress-nginx into the Kubernetes cluster is to use its Helm chart.

❯ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
"ingress-nginx" has been added to your repositories

Once the repository is added, we are ready to deploy NGINX. We will set it as the default Ingress controller.

❯ helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.setAsDefaultIngress="true"

After deploying the Helm chart, a new Pod will be initialized in the ingress-nginx namespace.

❯ kubectl get pods --namespace ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-7444c75fcf-jw5xj   1/1     Running   0          51d

Now we can take a look at the Services deployed in the ingress-nginx namespace.

❯ kubectl get service --namespace ingress-nginx
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.102.87.119    192.168.111.11   80:30384/TCP,443:32623/TCP   87d
ingress-nginx-controller-admission   ClusterIP      10.100.117.151   <none>           443/TCP                      87d

It is worth noticing that the ingress-nginx-controller Service is of type LoadBalancer and uses the IP address 192.168.111.11. In my case, the router has a policy to forward all traffic on ports 80 and 443 to this particular IP address.

Defining an Ingress rule

Having a working controller, we are ready to define Ingress resources for our applications.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/permanent-redirect: https://blog.slys.dev
  name: ghost-slys-dev
  namespace: ghost
spec:
  ingressClassName: nginx
  rules:
  - host: slys.dev
    http:
      paths:
      - backend:
          service:
            name: ghost
            port:
              name: https
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - slys.dev
    secretName: slys.dev-tls

Ingress with redirection 

Here I have defined a rule that redirects requests coming to slys.dev to blog.slys.dev. The ghost Service is used as the backend. Additionally, I use cert-manager and Let's Encrypt to terminate TLS. The Ingress for the blog endpoint is pretty similar, but it is managed by Helm.
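For completeness, the letsencrypt-prod ClusterIssuer referenced by the cert-manager.io/cluster-issuer annotation could be defined roughly like this (a sketch; the email address is a placeholder and your actual issuer may differ):

```yaml
# Sketch of a cert-manager ClusterIssuer using Let's Encrypt's
# production ACME endpoint with the HTTP-01 challenge.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com        # placeholder; use your own address
    privateKeySecretRef:
      name: letsencrypt-prod        # Secret storing the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx              # solve challenges through ingress-nginx
```

With this in place, the kubernetes.io/tls-acme annotation lets cert-manager request a certificate and store it in the Secret named by the Ingress's tls section.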

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/tls-acme: "true"
    meta.helm.sh/release-name: ghost
    meta.helm.sh/release-namespace: ghost
  labels:
    app.kubernetes.io/component: ghost
    app.kubernetes.io/instance: ghost
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ghost
    helm.sh/chart: ghost-19.1.79
  name: ghost
  namespace: ghost
spec:
  ingressClassName: nginx
  rules:
  - host: blog.slys.dev
    http:
      paths:
      - backend:
          service:
            name: ghost
            port:
              name: https
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - blog.slys.dev
    secretName: blog.slys.dev-tls

Blog ingress

The Ingress does two things: it points blog.slys.dev to the ghost Service and sets up TLS termination.

Conclusion

Ingress works very smoothly when it comes to routing based on virtual hosts. Defining an Ingress resource alone doesn't solve the whole problem, as an Ingress controller has to be present in the cluster. NGINX provides reverse proxy and load balancer functionality and can be used as that controller; the Kubernetes project maintains it as ingress-nginx.