Let's play around with persistent volumes on GKE.
$ gcloud init
<output_omitted>
$ export CLUSTER_NAME=gke-east-exp
$ export REGION=us-east1
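If the cluster doesn't exist yet, it can be created first; a minimal sketch of a regional cluster, leaving machine type and node count at their defaults:
$ gcloud container clusters create $CLUSTER_NAME --region $REGION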
$ gcloud container clusters get-credentials $CLUSTER_NAME --region $REGION
<output_omitted>
$ kubectl run my-nginx --image=nginx
<output_omitted>
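Note: this walkthrough relies on the old kubectl run behavior of creating a Deployment. On kubectl 1.18+, kubectl run creates a bare Pod instead, so on a newer client use:
$ kubectl create deployment my-nginx --image=nginx # labels the pods app=my-nginx rather than run=my-nginx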
$ kubectl get po
<output_omitted>
Get the name of the pod from the output of the command above.
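Or capture it in a variable; a small sketch assuming the run=my-nginx label that the old kubectl run applied:
$ POD=$(kubectl get po -l run=my-nginx -o jsonpath='{.items[0].metadata.name}')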
Let's check the file system before applying any changes:
$ kubectl exec my-nginx-554c674d4c-r9f2p -- df -h
Filesystem Size Used Avail Use% Mounted on
overlay 95G 2.5G 92G 3% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 95G 2.5G 92G 3% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 1.9G 12K 1.9G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 1.9G 0 1.9G 0% /sys/firmware
$ kubectl exec my-nginx-554c674d4c-r9f2p -- lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
|-sda1 8:1 0 95.9G 0 part /etc/hosts
|-sda2 8:2 0 16M 0 part
|-sda3 8:3 0 2G 0 part
|-sda4 8:4 0 16M 0 part
|-sda5 8:5 0 2G 0 part
|-sda6 8:6 0 512B 0 part
|-sda7 8:7 0 512B 0 part
|-sda8 8:8 0 16M 0 part
|-sda9 8:9 0 512B 0 part
|-sda10 8:10 0 512B 0 part
|-sda11 8:11 0 8M 0 part
|-sda12 8:12 0 32M 0 part
$ kubectl get deployment my-nginx -o yaml > my-nginx.yaml
Edit the YAML file to remove creationTimestamp, selfLink, uid, resourceVersion, and all the status information.
It would be something like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  generation: 1
  labels:
    run: my-nginx
  name: my-nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: my-nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: my-nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: my-nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
Remove the deployment so we can deploy some changes:
$ kubectl delete deploy my-nginx
<output_omitted>
PersistentVolumes can be dynamically provisioned; the user does not have to manually create and delete the backing storage.
PersistentVolumes are cluster resources that exist independently of Pods. This means that the disk and data represented by a PersistentVolume continue to exist as the cluster changes and as Pods are deleted and recreated. PersistentVolume resources can be provisioned dynamically through PersistentVolumeClaims, or they can be explicitly created by a cluster administrator.
Create a file nginx-pvc.yaml with the following content:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-disk
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
$ kubectl apply -f nginx-pvc.yaml
<output_omitted>
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv/pvc-7541d937-7fcf-11e8-925c-42010a8e003c 30Gi RWO Delete Bound default/nginx-disk standard 1m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc/nginx-disk Bound pvc-7541d937-7fcf-11e8-925c-42010a8e003c 30Gi RWO standard 1m
When you create this PersistentVolumeClaim Kubernetes dynamically creates a corresponding PersistentVolume object. Assuming that you haven't replaced the GKE default storage class, this PersistentVolume is backed by a new, empty Compute Engine persistent disk. You use this disk in a Pod by using the claim as a volume.
When you delete this claim, the corresponding PersistentVolume object as well as the provisioned Compute Engine persistent disk are also deleted. If you want to prevent deletion of dynamically provisioned persistent disks, set the reclaim policy of the PersistentVolume resource, or its StorageClass resource, to Retain. In this case, you are charged for the persistent disk for as long as it exists even if there is no PersistentVolumeClaim consuming it.
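For example, to switch the dynamically provisioned PersistentVolume above to Retain, using the PV name from the kubectl get pv output:
$ kubectl patch pv pvc-7541d937-7fcf-11e8-925c-42010a8e003c -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'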
$ gcloud compute disks list
NAME ZONE SIZE_GB TYPE STATUS
gke-gke-east-exp-default-pool-c2c56854-05cf us-east1-c 100 pd-standard READY
gke-gke-east-exp-default-pool-004f7b27-zr20 us-east1-b 100 pd-standard READY
gke-gke-east-exp-c59bf-pvc-7541d937-7fcf-11e8-925c-42010a8e003c us-east1-d 30 pd-standard READY
gke-gke-east-exp-default-pool-ac6789ae-qpg3 us-east1-d 100 pd-standard READY
Edit the my-nginx.yaml file and add the claim as a volume:
...
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: my-nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: "/var/www/html"
          name: www
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: nginx-disk
...
$ kubectl create -f my-nginx.yaml
<output_omitted>
$ kubectl get po,deploy
<output_omitted>
$ kubectl exec my-nginx-5f4c9f9994-fwdtc -- df -h
Filesystem Size Used Avail Use% Mounted on
overlay 95G 2.6G 92G 3% /
tmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 95G 2.6G 92G 3% /etc/hosts
shm 64M 0 64M 0% /dev/shm
/dev/sdb 30G 45M 28G 1% /var/www/html
tmpfs 1.9G 12K 1.9G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 1.9G 0 1.9G 0% /sys/firmware
$ kubectl exec my-nginx-5f4c9f9994-fwdtc -- lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
|-sda1 8:1 0 95.9G 0 part /etc/hosts
|-sda2 8:2 0 16M 0 part
|-sda3 8:3 0 2G 0 part
|-sda4 8:4 0 16M 0 part
|-sda5 8:5 0 2G 0 part
|-sda6 8:6 0 512B 0 part
|-sda7 8:7 0 512B 0 part
|-sda8 8:8 0 16M 0 part
|-sda9 8:9 0 512B 0 part
|-sda10 8:10 0 512B 0 part
|-sda11 8:11 0 8M 0 part
|-sda12 8:12 0 32M 0 part
sdb 8:16 0 30G 0 disk /var/www/html
$ kubectl exec my-nginx-5f4c9f9994-fwdtc -- touch /var/www/html/foo /var/www/html/bar /var/www/html/baz
$ kubectl exec my-nginx-5f4c9f9994-fwdtc -- ls /var/www/html
bar
baz
foo
lost+found
$ kubectl delete deploy my-nginx
<output_omitted>
$ kubectl get po,deploy
No resources found.
$ kubectl create -f my-nginx.yaml
<output_omitted>
$ kubectl get po,deploy
<output_omitted>
$ kubectl exec my-nginx-5f4c9f9994-vhh8p -- ls /var/www/html
bar
baz
foo
lost+found
Now, what if we want to resize this disk?
The allowVolumeExpansion property seems to be ignored. With the configuration below we should be able to dynamically resize the persistent volume by editing the PVC as described here, but it didn't work.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-sc
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
allowVolumeExpansion: true
Note that in the config above we are defining pd-ssd instead of the pd-standard used by the default StorageClass. The claim then references the new StorageClass explicitly:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-ssd
spec:
  storageClassName: custom-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
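Editing the claim would then, in theory, be a one-liner like the following (the 50Gi target is illustrative); as noted above, it had no effect for me on GKE:
$ kubectl patch pvc nginx-ssd -p '{"spec":{"resources":{"requests":{"storage":"50Gi"}}}}'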
Create a new disk using the GCE console or the gcloud tool (see the sketch below), and create a file existing-disk-pvc.yaml with the following content (nginx-disk-www is the name of the disk created).
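A minimal disk-creation sketch with gcloud; the zone is an assumption (pick one where your nodes run) and the size matches the 150Gi PV below:
$ gcloud compute disks create nginx-disk-www --size=150GB --zone=us-east1-d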
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-existing-disk
spec:
  storageClassName: ""
  capacity:
    storage: 150Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: nginx-disk-www
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-existing-disk
spec:
  storageClassName: ""
  volumeName: pv-existing-disk
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 150Gi
Then update the name of the claim in the my-nginx.yaml file.
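The relevant part of my-nginx.yaml would end up like this, using the claim name from the manifest above:
...
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: pvc-existing-disk
...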
There is a more direct way to attach a persistent disk in the deployment configuration: use gcePersistentDisk when defining the volume, like this:
...
        volumeMounts:
        - mountPath: "/var/www/html"
          name: www
      volumes:
      - name: www
        gcePersistentDisk:
          fsType: "ext4"
          pdName: nginx-disk-2
...
Notice that if you have nodes in different zones you can run into problems, since VMs need to be in the same GCE project and zone as the persistent disk. In that case, adding the following config will help:
...
    spec:
      nodeSelector:
        failure-domain.beta.kubernetes.io/zone: us-east1-b # replace with your disk zone
      containers:
...
Currently, I couldn't get dynamic resizing of a persistent volume working on GKE. It seems that the ExpandPersistentVolumes feature gate is not enabled. If you want to resize a persistent volume, you should first resize the disk following the instructions here. Basically, that means you have to go to a node instance and run:
$ sudo resize2fs /dev/[DEVICE_ID] # would be something like /dev/sdb
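For reference, growing the disk itself can be done with gcloud before running resize2fs; the name, size, and zone here are illustrative:
$ gcloud compute disks resize nginx-disk-www --size=200GB --zone=us-east1-d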