Minimal High Availability Kubernetes Install
Why?
So for my self-hosted setup, my workloads run on Kubernetes using k3s across 3 machines. For the sake of keeping it stupid simple, I'm going to provide some light context and documentation for where to go if you want to really understand what's happening.
If you want high availability (also commonly referred to as HA), you have 2 options: an embedded database or an external database. The embedded DB uses etcd, and the external DB can be any major DB (see the list here). With an external DB you need a minimum of 2 machines; with the embedded DB you need a minimum of 3, and the count has to stay odd so etcd can keep a quorum (a majority of nodes must agree before the cluster accepts writes, so with 3 nodes you can lose 1 and keep running). See the full list of requirements for external and embedded. For this guide, I used the embedded DB since I have exactly 3 machines and didn't want to maintain another DB. Don't worry, even if you use the embedded DB there's still a way to back it up (stay tuned for another guide on how to set that up).
One thing I am going to change for my installation of k3s is the bundled service load-balancer. It's called Klipper, and while I'm sure it can work, I'd rather use MetalLB, which I'm more comfortable with. Since I'm not using AWS or GCP for this deployment and therefore can't use the load-balancers they offer, MetalLB lets me create and use a bare-metal load-balancer, which helps make my workloads HA across my different machines. More on that later.
So to keep this as stupid simple as possible, this is what you’re going to get by following this guide:
- A load-balancer for your machines (so if one machine goes down, the other machines can still talk to each other)
- High Availability Kubernetes Cluster (the dream)
- A load-balancer for your workloads (so your workloads can use static IPs which makes configuring workloads WAY easier)
- Sanity
Steps:
- Make sure your machines have static IPs. Kubernetes REALLY doesn't like it when machines start moving around, even with a load-balancer in front of them.
- Open up the correct ports on your machines. See here for a list of what each is for.
The TCP ports are:
- 80
- 179
- 443
- 2376
- 2379
- 2380
- 6443
- 9099
- 9796
- 10250
- 10254
The UDP ports are:
- 22
- 80
- 443
- 2376
- 4789
- 6783
- 6784
- 7946
- 8472
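Before moving on, it helps to confirm these ports are actually reachable between machines. Here's a rough sketch using bash's built-in `/dev/tcp` (the `server0`–`server2` hostnames and the port subset are placeholders for your own setup; plain `nc -vz host port` works just as well):

```shell
#!/bin/bash
# Returns success if a TCP connection to host:port can be opened.
check_port() {
  timeout 2 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

# Replace with your actual hostnames/IPs and the ports you care about.
for host in server0 server1 server2; do
  for port in 2379 2380 6443; do
    if check_port "$host" "$port"; then
      echo "$host:$port open"
    else
      echo "$host:$port CLOSED"
    fi
  done
done
```

Run this from each machine so you catch firewall rules that only block one direction.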
- Set up your machine load-balancer. You need to set up a load-balancer that will act as the single point of reference for your cluster. Then that load-balancer will distribute the requests to the machines as it sees fit. I used NGINX. Simple, light, and powerful. Feel free to use whatever you want. This can be run on docker or bare-metal. Literally thousands of ways and guides on how. Here is the config for NGINX:
```nginx
#uncomment this next line if you are NOT running nginx in docker
#load_module /usr/lib/nginx/modules/ngx_stream_module.so;

events {}

stream {
  upstream k3s_servers {
    server server0:6443;
    server server1:6443;
    server server2:6443;
  }

  server {
    listen 6443;
    proxy_pass k3s_servers;
    ssl_verify_client optional;
  }
}
```
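If you go the Docker route, here's a minimal sketch for running NGINX with that config (the config file path, container name, and image tag are assumptions; adjust to taste):

```shell
# Run NGINX as a TCP load-balancer for the k3s API.
# Assumes the config above is saved as ./nginx.conf.
docker run -d \
  --name k3s-lb \
  --restart unless-stopped \
  -p 6443:6443 \
  -v "$(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro" \
  nginx:stable
```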
- Make sure your machines can talk to each other. A simple

  ```shell
  nc -vz host port
  ```

  will tell you if your machines can talk to each other over these ports (note that `nc` takes the host and port as separate arguments).
- Choose which computer is going to be the master machine, which are going to be servers, and which are going to be worker nodes. If you don't know what this means, don't worry. Click here to understand. Got it? Great. We're going to install a small tool called k3sup to help us easily install and manage k3s. Run the installer script like so:
```shell
curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/
```
- Now that you've got it installed, here is a quick little bash script I made to make this super easy for you. All you have to do is add your hostnames or IPs and the location of the SSH key for these machines:
```shell
#!/bin/bash

main_server=server0
servers='server1 server2'
agents='agent0 agent1 agent2'
ssh_key='~/.ssh/ssh.pem'

echo 'Installing K3s on main server'
k3sup install \
  --ip $main_server \
  --user sean \
  --cluster \
  --k3s-extra-args '--no-deploy traefik --no-deploy servicelb' \
  --ssh-key $ssh_key

echo ' '
echo 'Joining other servers'
for s in $servers; do
  echo ' '
  echo 'Joining ' $s
  k3sup join \
    --ip $s \
    --user sean \
    --server-user sean \
    --server-ip $main_server \
    --server \
    --ssh-key $ssh_key
done

echo ' '
echo 'Joining other agents'
for s in $agents; do
  echo ' '
  echo 'Joining ' $s
  k3sup join \
    --ip $s \
    --user sean \
    --server-user sean \
    --server-ip $main_server \
    --ssh-key $ssh_key
done

echo 'K3s Install Completed'
```
- Last step, you need to be able to control this cluster. So take the contents of the `kubeconfig` file that was generated for you and put that in the `~/.kube/config` file on whatever machine you're going to run `kubectl` from.
- Now on said machine, test to see if everything is up and running by doing:
```shell
kubectl get nodes
```
and make sure you see all your machines (from now on, they’re called “nodes”).
What Now?
Now you've got the dream! Congratulations! Hold up though, before you start applying `kubectl` commands all over the place, we wanted to use MetalLB, remember? So let's do that real quick.
- Open up a terminal on your master node. We're going to run some `kubectl` commands.
- Run

  ```shell
  kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
  ```

- Run

  ```shell
  kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
  ```

- Run

  ```shell
  kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
  ```
So now we have MetalLB installed but not configured. To do that we need to specify what range of IP addresses MetalLB can assign from. So create a file like this:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - ip.address.range.start-ip.address.range.end
```
Hopefully it's obvious, but replace `ip.address.range.start` and `ip.address.range.end` with the starting and ending IP address of the pool MetalLB can assign from. Something like `192.168.0.2-192.168.0.100`.
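If you want a quick sanity check on how many addresses a range actually gives MetalLB, here's a small bash sketch (pure arithmetic, nothing Kubernetes-specific; the range is just the example above):

```shell
#!/bin/bash
# Convert a dotted-quad IPv4 address to an integer.
ip2int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

range="192.168.0.2-192.168.0.100"
start=${range%-*}
end=${range#*-}
# Inclusive count of addresses in the pool.
echo $(( $(ip2int "$end") - $(ip2int "$start") + 1 ))   # prints 99
```

Each `LoadBalancer` Service consumes one IP from this pool by default, so size it with some headroom.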
- Final Step, apply that file you just created:
```shell
kubectl apply -f fileName.yaml
```
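Once MetalLB is configured, any Service of type `LoadBalancer` automatically gets an IP from that pool. As a hypothetical example (the app name and ports here are placeholders, not part of this guide's setup):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

After applying something like this, `kubectl get svc` should show an `EXTERNAL-IP` drawn from your MetalLB range instead of staying `<pending>`.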
Congratulations!
You're done. For now. Next step is to deploy some workloads or something. Next post will be about getting Rancher up and running to make deploying workloads way easier for someone not familiar with, or still learning, Kubernetes. Peace.
Edited 06/15/21 with updated commands to install K3s using k3sup