This guide explains how to create a PersistentVolume backed by an existing vSphere (vsphereVolume) disk that is already populated with data, and how to use that PersistentVolume in a Pod. We'll first provision a disk dynamically, move it in the datastore, and then recreate the PersistentVolume statically to point at the disk's new location.
Create a StorageClass. Note that StorageClasses are cluster-scoped, so it does not live in the nginx namespace. Here we're using the deprecated in-tree VCP provisioner:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vcp-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
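Assuming the manifest above is saved as vcp-storage-class.yaml (a filename chosen here for illustration), it can be applied and checked like this:

```shell
# Apply the StorageClass; it is cluster-scoped, so no namespace flag is needed
kubectl apply -f vcp-storage-class.yaml

# Confirm it exists and is annotated as the default class
kubectl get storageclass vcp-storage
```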
Create a PersistentVolumeClaim in the nginx namespace:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
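A sketch of applying the claim, assuming the manifest is saved as nginx-claim.yaml (hypothetical filename). Since the manifest doesn't set a namespace in its metadata, we pass -n nginx on the command line:

```shell
# Create the nginx namespace idempotently if it doesn't exist yet
kubectl create namespace nginx --dry-run=client -o yaml | kubectl apply -f -

# Apply the claim; the default StorageClass provisions the disk dynamically
kubectl apply -f nginx-claim.yaml -n nginx

# Check that the claim reaches the Bound status
kubectl get pvc nginx-claim -n nginx
```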
Create an nginx Pod in the nginx namespace that mounts the volume at /usr/share/nginx/html:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pv-pod
spec:
  volumes:
    - name: nginx-pv-storage
      persistentVolumeClaim:
        claimName: nginx-claim
  containers:
    - name: nginx-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nginx-pv-storage
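Assuming the pod manifest is saved as nginx-pv-pod.yaml (again, an illustrative filename), apply it and wait for the container to come up before exec'ing into it:

```shell
# Apply the pod in the nginx namespace
kubectl apply -f nginx-pv-pod.yaml -n nginx

# Block until the pod reports Ready (volume attach can take a minute)
kubectl wait --for=condition=Ready pod/nginx-pv-pod -n nginx --timeout=120s
```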
Exec into the container and create an index.html with some custom content:
kubectl exec -it nginx-pv-pod -n nginx -- /bin/bash -c "echo 'Hello from Kubernetes storage' > /usr/share/nginx/html/index.html"
Validate that we see the new content by forwarding the pod's port (in a separate terminal, since port-forward runs in the foreground) and curling the local port:
kubectl port-forward nginx-pv-pod -n nginx 8999:80
curl http://localhost:8999
List the PVC, noting the VOLUME name in particular:
kubectl get pvc -n nginx
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-claim Bound pvc-9543dc75-859e-4b87-85e1-a8d77e047d7d 2Gi RWO vcp-storage 38m
Delete the pod so that the disk becomes unlocked in vSphere, allowing us to move it. We should also delete the PVC and PV so we can recreate them pointing at the disk's new location. Grab the PV name from the PVC VOLUME output above.
kubectl delete pod nginx-pv-pod -n nginx
kubectl delete pvc nginx-claim -n nginx
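Depending on the reclaim policy, the dynamically provisioned PV may be removed automatically along with the claim. A quick hedge to make sure it's gone either way (substitute your own VOLUME name from the PVC output):

```shell
# Delete the PV if it still exists; --ignore-not-found makes this a no-op
# when the reclaim policy has already cleaned it up
kubectl delete pv pvc-9543dc75-859e-4b87-85e1-a8d77e047d7d --ignore-not-found
```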
Find the disk in the vSphere datastore browser using the VOLUME name. The disk will be in the kubevols folder and be named something like pvc-cluster-dynamic-pvc-9543dc75-859e-4b87-85e1-a8d77e047d7d.vmdk.
Let's create a new folder named kubevols-moved in the vSphere datastore browser and move the PV's disk into it. Once moved, note the vmdk's new path; it will be something like [vsanDatastore] 3d085c64-a0e0-7e18-c0c8-bc97e1d34160/pvc-cluster-dynamic-pvc-9543dc75-859e-4b87-85e1-a8d77e047d7d.vmdk.
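If you prefer a CLI over the datastore browser, a sketch of the same move with govc (assuming govc is installed and GOVC_URL plus credentials are exported; the datastore and folder names simply mirror the example above). The datastore browser knows how to move multi-file vmdks, so after a CLI move it's worth verifying in the browser that the disk arrived intact:

```shell
# Create the target folder on the datastore
govc datastore.mkdir -ds vsanDatastore kubevols-moved

# Move the vmdk from kubevols into the new folder
govc datastore.mv -ds vsanDatastore \
  kubevols/pvc-cluster-dynamic-pvc-9543dc75-859e-4b87-85e1-a8d77e047d7d.vmdk \
  kubevols-moved/pvc-cluster-dynamic-pvc-9543dc75-859e-4b87-85e1-a8d77e047d7d.vmdk
```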
Let's use that new path and create a new static PV from it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-9543dc75-859e-4b87-85e1-a8d77e047d7d
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 2Gi
  claimRef:
    kind: PersistentVolumeClaim
    name: nginx-claim
    namespace: nginx
  persistentVolumeReclaimPolicy: Delete
  storageClassName: vcp-storage
  volumeMode: Filesystem
  vsphereVolume:
    fsType: ext4
    volumePath: '[vsanDatastore] 3d085c64-a0e0-7e18-c0c8-bc97e1d34160/pvc-cluster-dynamic-pvc-9543dc75-859e-4b87-85e1-a8d77e047d7d.vmdk'
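Assuming the PV manifest is saved as static-pv.yaml (illustrative filename), apply it and check its status. PVs are cluster-scoped, so no namespace flag is needed:

```shell
# Create the static PV from the manifest above
kubectl apply -f static-pv.yaml

# Inspect the PV; the claimRef reserves it for nginx-claim in the nginx namespace
kubectl get pv pvc-9543dc75-859e-4b87-85e1-a8d77e047d7d
```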
Replace the volumePath in the above template with your new path from vSphere, then create the PV. Next, create the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-claim
  namespace: nginx
spec:
  storageClassName: vcp-storage
  volumeName: pvc-9543dc75-859e-4b87-85e1-a8d77e047d7d
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
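Assuming this claim is saved as nginx-claim-static.yaml (another illustrative filename), apply it and confirm it binds to the static PV. This manifest does set its namespace in metadata, so -n nginx is optional:

```shell
# Recreate the claim; volumeName pins it to the static PV
kubectl apply -f nginx-claim-static.yaml

# STATUS should show Bound, with the VOLUME column matching the static PV name
kubectl get pvc nginx-claim -n nginx
```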
Recreate the nginx-pv-pod using the same YAML as before. Finally, validate that we see the same content, "Hello from Kubernetes storage", by forwarding the pod port and curling the local port:
kubectl port-forward nginx-pv-pod -n nginx 8999:80
curl http://localhost:8999