
@cdot65
Created January 3, 2022 23:07
Install a cluster of servers using Rancher and their terrible docs

Create an HA cluster of k3s servers with Rancher

If you've ever tried to follow Rancher's documentation for standing up a k3s cluster, you'll understand why this document exists.

official docs

Tasks

  1. Install and setup database
  2. Install and setup load balancer
  3. Create DNS records
  4. Install first k3s master
  5. Install additional masters
  6. Install and join agents
  7. Update role of worker nodes

Install and setup database

HA k3s requires an external database; we'll use PostgreSQL 12 here

All commands are to be executed on the postgres server you're setting up within your lab

Create the file repository configuration

sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'

Import the repository signing key

wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

Update the package lists and install postgresql 12

sudo apt-get update
sudo apt-get -y install postgresql-12

update the /etc/postgresql/12/main/pg_hba.conf file to accept remote connections

# IPv4 local connections:
host    all             all             0.0.0.0/0               md5
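Opening the database to 0.0.0.0/0 is fine for a lab; if you'd rather scope it down, the entry can be restricted to the lab subnet instead (a sketch, assuming the 192.168.108.0/24 addressing used throughout this doc):

```
# IPv4 lab-subnet connections only:
host    all             all             192.168.108.0/24        md5
```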

update the /etc/postgresql/12/main/postgresql.conf file to listen on all interfaces

listen_addresses = '*'

restart postgres service

sudo systemctl restart postgresql

access psql

sudo -u postgres psql

create the database user, set up permissions, and quit

postgres=# CREATE USER rancher WITH PASSWORD 'Juniper!1';
postgres=# ALTER USER rancher WITH SUPERUSER;
postgres=# \q

Install and setup load balancer

A load balancer is needed to spread our kubectl API calls across the masters; we'll be using nginx here

All commands are to be executed on the load balancer server you're setting up within your lab

nginx setup

add the nginx repository entries to /etc/apt/sources.list

deb https://nginx.org/packages/ubuntu/ focal nginx
deb-src https://nginx.org/packages/ubuntu/ focal nginx

update system's list of remote packages and install nginx

sudo apt update
sudo apt install nginx

replace the nginx config /etc/nginx/nginx.conf with the following

worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

stream {
    upstream rancher_servers_https_api {
        least_conn;
        server 192.168.108.11:6443 max_fails=3 fail_timeout=5s;
        server 192.168.108.12:6443 max_fails=3 fail_timeout=5s;
        server 192.168.108.13:6443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen     6443;
        proxy_pass rancher_servers_https_api;
    }
}
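If the set of masters ever changes, the upstream entries can be regenerated from a list of IPs instead of edited by hand — a minimal sketch using the lab addresses above:

```shell
# Print an nginx upstream entry for each master IP (lab addresses from this doc)
UPSTREAM=$(for ip in 192.168.108.11 192.168.108.12 192.168.108.13; do
  printf 'server %s:6443 max_fails=3 fail_timeout=5s;\n' "$ip"
done)
echo "$UPSTREAM"
```

Paste the output into the `upstream rancher_servers_https_api` block above.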

start and/or restart service

sudo systemctl restart nginx

Create DNS records

DNS will be your friend or mortal enemy. I will be using BIND for my DNS service internally.

All commands listed below are to be entered on your DNS server (not a part of this k3s cluster)

update reverse records

my file was at /etc/bind/zones/db.192.168

;#######################################################################
; k8s hosts
;#######################################################################
11.108                  IN                      PTR                     k8s-master1.dmz.home.
12.108                  IN                      PTR                     k8s-master2.dmz.home.
13.108                  IN                      PTR                     k8s-master3.dmz.home.
21.108                  IN                      PTR                     k8s-worker1.dmz.home.
22.108                  IN                      PTR                     k8s-worker2.dmz.home.
23.108                  IN                      PTR                     k8s-worker3.dmz.home.
31.108                  IN                      PTR                     k8s-db1.dmz.home.
41.108                  IN                      PTR                     k8s-ntp1.dmz.home.
51.108                  IN                      PTR                     k8s-lb1.dmz.home.
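The owner names above look backwards because this is an in-addr.arpa zone: for an address 192.168.x.y in a zone covering 192.168, the record name is y.x. A pure-shell illustration of the mapping:

```shell
# Derive the reverse-zone owner name for a lab address (illustration only)
ip=192.168.108.11
IFS=. read -r o1 o2 o3 o4 <<EOF
$ip
EOF
owner="${o4}.${o3}"
echo "$owner"   # -> 11.108, the record name used in db.192.168
```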

update forward records

my file was at /etc/bind/zones/db.dmz.home

;#######################################################################
; k8s hosts
;#######################################################################
k8s-master1.dmz.home.                           IN              A       192.168.108.11
k8s-master2.dmz.home.                           IN              A       192.168.108.12
k8s-master3.dmz.home.                           IN              A       192.168.108.13
k8s-worker1.dmz.home.                           IN              A       192.168.108.21
k8s-worker2.dmz.home.                           IN              A       192.168.108.22
k8s-worker3.dmz.home.                           IN              A       192.168.108.23
k8s-db1.dmz.home.                               IN              A       192.168.108.31
k8s-ntp1.dmz.home.                              IN              A       192.168.108.41
k8s-lb1.dmz.home.                               IN              A       192.168.108.51
k3s                                             IN              CNAME   k8s-lb1.dmz.home.

Install first k3s master

install the first k3s master, setting an extra TLS SAN so we don't get kubectl cert errors against a load-balanced cluster

enter this command on the first server only; subsequent master servers will need the token created in this step

curl -sfL https://get.k3s.io | sh -s - server --tls-san k8s-master1 --datastore-endpoint="postgres://rancher:Juniper!1@k8s-db1:5432/k3s"
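The --datastore-endpoint URI packs together the user, password, host, and database name from the earlier steps; composing it from variables makes quoting mistakes easier to spot (a sketch using this lab's values):

```shell
# Compose the k3s datastore endpoint from its parts (lab values from this doc)
DB_USER=rancher
DB_PASS='Juniper!1'   # single-quoted so an interactive shell leaves '!' alone
DB_HOST=k8s-db1
DB_NAME=k3s
DATASTORE="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:5432/${DB_NAME}"
echo "$DATASTORE"
```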

validate

sudo k3s kubectl get node

Install additional masters

Jump on your additional servers and join them to the cluster. You'll need to pass the token created on the first master, found in its /var/lib/rancher/k3s/server/node-token file.

curl -sfL https://get.k3s.io | sh -s - server --tls-san k8s-master2 --datastore-endpoint="postgres://rancher:Juniper!1@k8s-db1:5432/k3s" --token="MY_TOKEN_WAS_HERE"
curl -sfL https://get.k3s.io | sh -s - server --tls-san k8s-master3 --datastore-endpoint="postgres://rancher:Juniper!1@k8s-db1:5432/k3s" --token="MY_TOKEN_WAS_HERE"
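The token read from node-token normally has the shape K10&lt;hash&gt;::server:&lt;secret&gt;; a quick sanity check before pasting it into the join command (the sample value here is made up):

```shell
# Sanity-check the shape of a k3s server token (sample token is fabricated)
TOKEN='K10abc123def::server:deadbeefcafe'
case "$TOKEN" in
  K10*::server:*) MSG="token format looks OK" ;;
  *)              MSG="unexpected token format" ;;
esac
echo "$MSG"
```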

Install and join agents

All commands below are to be executed on the worker agent nodes.

recent k3s releases are published on the project's GitHub releases page

download binary, update permissions, place in path

wget https://github.com/k3s-io/k3s/releases/download/v1.22.5%2Bk3s1/k3s
sudo chmod a+x ./k3s
sudo mv k3s /usr/local/bin/

download and unzip the package containing the install script

wget https://github.com/k3s-io/k3s/archive/refs/tags/v1.22.5+k3s1.tar.gz
tar zxvf v1.22.5+k3s1.tar.gz

install the worker agent with the script, connecting through the load-balanced k3s CNAME

cd k3s-1.22.5-k3s1/
sudo INSTALL_K3S_SKIP_DOWNLOAD=true K3S_URL=https://k3s:6443 K3S_TOKEN=MY_TOKEN_WAS_HERE ./install.sh
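K3S_URL here points at the k3s CNAME created in the DNS step, so an agent's connection survives any single master failing. Composing it from variables (a sketch; the hostname is this lab's):

```shell
# Build the agent join URL from the load-balanced DNS alias (lab value)
LB_HOST=k3s      # CNAME -> k8s-lb1.dmz.home from the DNS step
API_PORT=6443
K3S_URL="https://${LB_HOST}:${API_PORT}"
echo "$K3S_URL"
```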

Update role of worker nodes

All commands below are to be executed on any of the cluster's master servers

change the role of our worker nodes to agent

sudo k3s kubectl label nodes k8s-worker1 kubernetes.io/role=agent
sudo k3s kubectl label nodes k8s-worker2 kubernetes.io/role=agent
sudo k3s kubectl label nodes k8s-worker3 kubernetes.io/role=agent