You can use root to perform the following steps until a regular user is indicated.
If one of these packages is not present, kubeadm will report warnings or errors.
pacman -S docker ebtables ethtool socat
pacman -S curl wget unzip
This part is based on the Kubernetes the Hard Way repo.
curl -o cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssl cfssljson
mv cfssl cfssljson /usr/local/bin/
systemctl enable docker && systemctl start docker
export CNI_VERSION="v0.6.0"
mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-amd64-${CNI_VERSION}.tgz" | tar -C /opt/cni/bin -xz
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
mkdir -p /opt/bin
cd /opt/bin
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
chmod +x {kubeadm,kubelet,kubectl}
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service
mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl enable kubelet && systemctl start kubelet
The --pod-network-cidr=10.244.0.0/16 flag is needed because we will be using Flannel:
kubeadm init --pod-network-cidr=10.244.0.0/16
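For reference, the Flannel manifest applied later configures its overlay network from a ConfigMap whose net-conf.json must match the CIDR passed to kubeadm init. A sketch of what that config looks like (exact contents may vary across Flannel versions):

```yaml
# Excerpt of the kube-flannel ConfigMap (illustrative, not the full manifest)
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```

If you need a different pod CIDR, change it in both places: the kubeadm init flag and this ConfigMap.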
Now add a non-root user with useradd -m foo (you can name the user whatever you want).
Still as the root user, copy the admin kubeconfig into the non-root user's home directory:
mkdir /home/foo/.kube
cp -i /etc/kubernetes/admin.conf /home/foo/.kube/config
chown foo:foo /home/foo/.kube/config
At this point you should be able to use kubectl
su - foo # switch to the non-root user and run kubectl
kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
stephen-arch-linux   Ready    master   31m   v1.12.1
We will be using Flannel, but there are a few others you can choose from; see the Kubernetes documentation.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
Once Flannel is up, your cluster is up and running.
For a single-node cluster, remove the master taint so pods can be scheduled on the master node:
kubectl taint nodes --all node-role.kubernetes.io/master-
Install an Ingress controller, for instance the NGINX Ingress Controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
Don't forget the Kubernetes Service (NodePort):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
Using the Ingress object you will be able to access your services.
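A minimal Ingress might look like the following. The hostname and Service name here are hypothetical placeholders; point them at a Service that actually exists in your cluster (the API group shown is the extensions/v1beta1 form used in the v1.12 era):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress        # hypothetical name
spec:
  rules:
    - host: example.local      # hypothetical hostname; must resolve to a node IP
      http:
        paths:
          - path: /
            backend:
              serviceName: example-service   # hypothetical Service in the same namespace
              servicePort: 80
```

Traffic reaching the NodePort of the NGINX Ingress Controller with Host: example.local would then be routed to example-service.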
Create a storageClass (this object is not namespaced):
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Make it the default one:
kubectl annotate storageclass local-storage storageclass.kubernetes.io/is-default-class=true
For each PersistentVolumeClaim, you will need to manually create a PersistentVolume:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /etc/kubernetes/local
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: beta.kubernetes.io/os
              operator: In
              values:
                - linux
Be sure that the spec.local.path exists on the host.
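A matching PersistentVolumeClaim is a sketch like the one below (the claim name is hypothetical). It requests the same storage class and access mode as the PV above; with WaitForFirstConsumer binding, it stays Pending until a pod actually uses it:

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc             # hypothetical name; reference it from a pod's volumes
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Once a pod mounts this claim, the scheduler binds it to local-pv and places the pod on the node that satisfies the PV's nodeAffinity.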