On the Importance of Logging: Setting up Loki

Why Loki?

So the other day I had an application that wasn’t running its daily jobs correctly. Unfortunately, because of the Docker setup on that particular node, I didn’t have the logs to go back and see what had happened, which left me waiting until the next day to watch the console in real time and diagnose the failure. In an enterprise setting this just isn’t something I’ve ever had to worry about, because logging is so important that it’s almost a reflex anytime I write code. So why would I treat my private cloud any differently?

First off, this is not a comparison of the various logging products out there; plenty of well-written articles already cover that, and I found The Chief I/O article particularly insightful and helpful. For me, it came down to finding a solution with a unified interface for searching logs across Kubernetes as well as my other machines, like my NAS and Raspberry Pis. The kube-prometheus-stack Helm chart also kept giving me Out Of Memory errors that I couldn’t solve, so I needed to find something else.

In the end it came down to Graylog or Loki. I was already familiar with Grafana, Prometheus, Alertmanager, and the like. Graylog seems like a really good product with a lot of backing and adoption; however, I could not find an official way to bring Kubernetes logs into Graylog (there are third-party solutions out there, for what it’s worth).

The Work

I decided to run this on my Kubernetes cluster as a Helm chart. Here are the various ways to run it in case that isn’t for you.
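
The install in step 3 below pulls the loki-stack chart from Grafana’s chart repository, so if your Helm client doesn’t already have that repo configured you’ll likely need something along these lines first:

     helm repo add grafana https://grafana.github.io/helm-charts
     helm repo update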

  1. Create a new Loki namespace (always separate workloads by namespace)
  2. Create your chart values file. Here is what mine looks like:
    
     loki:
       enabled: true
       persistence:
         enabled: true
         size: 5Gi
        
     promtail:
       enabled: true
        
     grafana:
       enabled: true
       sidecar:
         datasources:
           enabled: true
       image:
         tag: 7.5.0
        
     prometheus:
       enabled: true
       alertmanager:
         persistentVolume:
           enabled: true
       server:
         persistentVolume:
           enabled: false
    
  3. Run the helm install command:
    
    helm install loki grafana/loki-stack --values values.yaml -n <YOUR NAMESPACE>
    
  4. That’s it! If everything worked correctly you should have logging via Loki and monitoring through Prometheus (a quick sanity check is sketched just after this list). To log in to Grafana you can get the admin user’s password by running: kubectl get secret --namespace loki loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

  5. To see Grafana, either:
  • Run kubectl edit svc loki-grafana -n <YOUR NAMESPACE> and change the service type from ClusterIP to NodePort or LoadBalancer, then access Grafana at the resulting IP address.
  • Or temporarily connect by forwarding the port to your localhost: kubectl port-forward --namespace loki service/loki-grafana 3000:80, then browse to http://localhost:3000.
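
As a quick sanity check on step 4 (assuming you installed into the loki namespace, as in the commands above), you can confirm everything came up before opening Grafana:

     kubectl get pods -n loki
     kubectl get svc -n loki

You should see pods for Loki, Grafana, Prometheus, and one Promtail pod per node, since Promtail runs as a DaemonSet.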

Congrats! Now make sure you familiarize yourself with LogQL, the query language Loki uses. It’s a bit of a learning curve, but I’ve found it to be intuitive. I might write up a review if I feel strongly one way or another.
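
To give a flavor of LogQL, here are two basic queries you could try in Grafana’s Explore view; the namespace label is just an example, and which labels you actually have depends on how your Promtail configuration tags the streams:

     {namespace="loki"} |= "error"
     rate({namespace="loki"} |= "error" [5m])

The first returns every log line from the loki namespace containing the string "error"; the second turns those matches into a per-second rate you can graph or alert on.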

This post is licensed under CC BY 4.0 by the author.