k0s is a new Kubernetes distribution from Mirantis that is similar to K3s, but it ships with only the bare minimum of extensions. This gives users the flexibility to customize it to their needs by defining their own ingress, storage, and other controllers in the manifest that configures the cluster during bootstrap.
In the example below, I'll guide you through getting a functioning Kubernetes cluster by:
- Installing k0s on a clean Linux VM
- Configuring Traefik and MetalLB as extensions
- Starting k0s
- Deploying the Traefik Dashboard IngressRoute and an example service
Before we start, you should do this on a clean install of Linux, preferably in a VM. You will be running k0s as a combined server/worker, and the worker installs components into /var/lib as root, so root access is a requirement here. My understanding is that there are plans to allow non-root workers in the future; hopefully the k0s binary will also keep installations in a central location. It's also worth noting that cleanly shutting down and wiping the cluster is not yet a feature of the k0s binary. For now, rebooting the system and wiping /var/lib/k0s is the easiest option.
Once you have a clean Linux VM (I'm using Ubuntu 20.04.1), you'll probably want to install the Helm and kubectl binaries:
```shell
curl -O https://get.helm.sh/helm-v3.4.1-linux-amd64.tar.gz
tar xvzf helm-v3.4.1-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin
```
Once those are installed, install the k0s binary, create the working directory for k0s, and create a default config. Note: both the installer and running k0s require root.
```shell
# make sure you're running as root
curl -sSLf get.k0s.sh | sh

# create the working directory and set the permissions
mkdir -p /var/lib/k0s && chmod 755 /var/lib/k0s

# create the default config
k0s default-config > /var/lib/k0s/k0s.yaml
```
In this step, you'll configure Traefik and MetalLB as extensions that will be installed during the cluster's bootstrap. Traefik will function as an ingress controller, and MetalLB will let you access services through a logical IP address deployed as a LoadBalancer service. You will want a small range of IP addresses that are addressable on your network, preferably outside the range of your DHCP server.
Modify the newly created k0s.yaml file at /var/lib/k0s/k0s.yaml:
```yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
...
extensions:
  helm:
    repositories:
      - name: traefik
        url: https://helm.traefik.io/traefik
      - name: bitnami
        url: https://charts.bitnami.com/bitnami
    charts:
      - name: traefik
        chartname: traefik/traefik
        version: "9.11.0"
        namespace: default
      - name: metallb
        chartname: bitnami/metallb
        version: "1.0.1"
        namespace: default
        values: |2
          configInline:
            address-pools:
              - name: generic-cluster-pool
                protocol: layer2
                addresses:
                  - 172.16.100.215-172.16.100.220
```
Again, be sure to provide a range of IPs for MetalLB that are addressable on your network if you want to access the LoadBalancer and Ingress services from outside this machine.
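As a quick sanity check before committing to a pool, you can expand the range with Python's standard ipaddress module. This is just a minimal sketch using the example range from the config above:

```python
import ipaddress

# The MetalLB address pool from the example config above.
start = ipaddress.ip_address("172.16.100.215")
end = ipaddress.ip_address("172.16.100.220")

# Expand the range into individual addresses so you can see exactly
# which IPs MetalLB may hand out to LoadBalancer services.
pool = [ipaddress.ip_address(i) for i in range(int(start), int(end) + 1)]
print(len(pool))           # 6 addresses available
print(pool[0], pool[-1])   # 172.16.100.215 172.16.100.220
```

Six addresses is plenty for a lab cluster; just make sure none of them overlap with your DHCP server's lease range.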
```shell
cd /var/lib/k0s
k0s server --enable-worker
```
After a minute or two, you should be able to access the cluster using the admin kubeconfig generated by k0s, located at /var/lib/k0s/pki/admin.conf, and see that MetalLB was deployed along with the Traefik ingress controller.
```shell
root@k0s-host:/home/k0s# export KUBECONFIG=/var/lib/k0s/pki/admin.conf
root@k0s-host:/home/k0s# kubectl get all
NAME                                                 READY   STATUS    RESTARTS   AGE
pod/metallb-1607085578-controller-864c9757f6-bpx6r   1/1     Running   0          81s
pod/metallb-1607085578-speaker-245c2                 1/1     Running   0          60s
pod/traefik-1607085579-77bbc57699-b2f2t              1/1     Running   0          81s

NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
service/kubernetes           ClusterIP      10.96.0.1        <none>           443/TCP                      96s
service/traefik-1607085579   LoadBalancer   10.105.119.102   172.16.100.215   80:32153/TCP,443:30791/TCP   84s

NAME                                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/metallb-1607085578-speaker    1         1         1       1            1           kubernetes.io/os=linux   87s

NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/metallb-1607085578-controller   1/1     1            1           87s
deployment.apps/traefik-1607085579              1/1     1            1           84s

NAME                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/metallb-1607085578-controller-864c9757f6   1         1         1       81s
replicaset.apps/traefik-1607085579-77bbc57699              1         1         1       81s
```
Take note of the IP address assigned to the Traefik load balancer here:

```shell
service/traefik-1607085579   LoadBalancer   10.105.119.102   172.16.100.215   80:32153/TCP,443:30791/TCP   84s
```

You will need the EXTERNAL-IP (in this case, 172.16.100.215) later when accessing Ingress resources on your cluster.
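If you'd rather grab that address in a script, the external IP lives at .status.loadBalancer.ingress[0].ip in the service object. Here's a small sketch of pulling it out of `kubectl get svc ... -o json` output; the JSON below is an abbreviated, hypothetical sample of what kubectl returns:

```python
import json

# Abbreviated, hypothetical sample of `kubectl get svc traefik-1607085579 -o json`.
svc_json = """
{
  "status": {
    "loadBalancer": {
      "ingress": [{"ip": "172.16.100.215"}]
    }
  }
}
"""

svc = json.loads(svc_json)
# The EXTERNAL-IP column comes from .status.loadBalancer.ingress[0].ip
external_ip = svc["status"]["loadBalancer"]["ingress"][0]["ip"]
print(external_ip)  # 172.16.100.215
```

The shell equivalent is `kubectl get svc traefik-1607085579 -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`.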
- Deploy the Traefik Dashboard
- Deploy a sample "whoami" service
Now that you have a functional and addressable load balancer on your cluster, you can easily deploy the Traefik dashboard and access it from anywhere on your local network (provided that you configured MetalLB with an addressable range).
Create the Traefik Dashboard IngressRoute in a YAML file (here, traefik-dashboard.yaml):
```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
```
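To make the routing rule concrete: the `match` expression sends any request whose path begins with /dashboard or /api to Traefik's internal API service. A tiny Python sketch of that behavior (an illustration only, not Traefik's actual matcher):

```python
# Hypothetical illustration of the PathPrefix(`/dashboard`) || PathPrefix(`/api`)
# rule above; this is not Traefik's real matcher, just a sketch of its behavior.
def matches_dashboard_route(path: str) -> bool:
    return path.startswith("/dashboard") or path.startswith("/api")

print(matches_dashboard_route("/dashboard/"))    # True
print(matches_dashboard_route("/api/overview"))  # True
print(matches_dashboard_route("/whoami"))        # False
```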
And deploy it:
```shell
root@k0s-host:~# kubectl apply -f traefik-dashboard.yaml
ingressroute.traefik.containo.us/dashboard created
```
You can now access it from your browser by visiting http://172.16.100.215/dashboard/:
Great, now let's deploy a simple "whoami" service.
Create the whoami Deployment, Service, and IngressRoute manifest (here, whoami.yaml):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami-container
          image: containous/whoami
---
apiVersion: v1
kind: Service
metadata:
  name: whoami-service
spec:
  ports:
    - name: http
      targetPort: 80
      port: 80
  selector:
    app: whoami
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami-insecure
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Path(`/whoami`)
      kind: Rule
      services:
        - name: whoami-service
          port: 80
```
And now, deploy and test it:
```shell
root@k0s-host:~# kubectl apply -f whoami.yaml
deployment.apps/whoami-deployment created
service/whoami-service created
ingressroute.traefik.containo.us/whoami-insecure created

# test the route
root@k0s-host:~# curl http://172.16.100.215/whoami
Hostname: whoami-deployment-85bfbd48f-7l77c
IP: 127.0.0.1
IP: ::1
IP: 10.244.214.198
IP: fe80::b049:f8ff:fe77:3e64
RemoteAddr: 10.244.214.196:34858
GET /whoami HTTP/1.1
Host: 172.16.100.215
User-Agent: curl/7.68.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 172.16.100.77
X-Forwarded-Host: 172.16.100.215
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-1607085579-77bbc57699-b2f2t
X-Real-Ip: 172.16.100.77
```
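The whoami response echoes back the headers Traefik injected when proxying the request, which is handy for debugging ingress behavior. A small sketch of parsing out the X-Forwarded-* values (the header lines are copied from the curl output above):

```python
# Parse the "Header: value" lines from the whoami response above
# to inspect what Traefik injected into the proxied request.
response = """\
X-Forwarded-For: 172.16.100.77
X-Forwarded-Host: 172.16.100.215
X-Forwarded-Port: 80
X-Forwarded-Proto: http
"""

headers = dict(line.split(": ", 1) for line in response.strip().splitlines())
print(headers["X-Forwarded-For"])   # the client that made the request
print(headers["X-Forwarded-Host"])  # the LoadBalancer IP the client hit
```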
k0s's single-binary installation and modular customizability make it a unique offering in the Kubernetes community. While it's still relatively new to the scene, I hope this post gives you an idea of what it's capable of and how you can get started experimenting with your own customized Kubernetes setup.
In this post, we covered installing k0s and setting up a fully functional load balancer and ingress controller for use in your local environment. From here, you could use a tool such as ngrok to expose your load balancer to the world, and set up Let's Encrypt to provision your own SSL certificates. Thanks for taking the time to read this post!