Following https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv.
For now, we can only choose the following Azure storage redundancy options for skuName:
- Standard_LRS - standard locally redundant storage (LRS)
- Standard_GRS - standard geo-redundant storage (GRS)
- Standard_RAGRS - standard read-access geo-redundant storage (RA-GRS)
Azure Files currently works only with Standard storage. If you use Premium storage, the volume fails to provision.
From Storage > Storage Classes:
- input Name
- choose Azure File as Provisioner
- input Standard_LRS as Sku Name
- optional: set Location (where your nodes are located) and Storage Account
- add Mount Options:
  - dir_mode=0777
  - file_mode=0777
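The steps above are equivalent to the following StorageClass manifest (the metadata name is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile          # illustrative name
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
  # Optional - uncomment to pin location and storage account:
  # location: eastus
  # storageAccount: mystorageaccount
mountOptions:
  - dir_mode=0777
  - file_mode=0777
```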
Launch kubectl from the Cluster page:
> kubectl get sa persistent-volume-binder -n kube-system
> kubectl get clusterrole system:azure-cloud-provider
> kubectl get clusterrolebinding system:azure-cloud-provider
Should we create these RBAC resources when choosing Azure cloud provider?
If not existing, Import YAML from Workloads page of System project:
- choose Cluster as Import Mode
- import as below:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:azure-cloud-provider
rules:
- apiGroups: ['']
  resources: ['secrets']
  verbs: ['get', 'create']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:azure-cloud-provider
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: system:azure-cloud-provider
subjects:
- kind: ServiceAccount
  name: persistent-volume-binder
  namespace: kube-system
From Workloads page of Default project, select Volumes tab, click Add Volumes:
- optional - input Name
- choose Namespace for deploying workloads
- select Storage Class as Source
- select the Storage Class created previously
- input Capacity
- expand Customize and select only the Many Nodes Read-Write option
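A sketch of the equivalent PersistentVolumeClaim manifest, assuming the StorageClass was named azurefile and a 5Gi capacity (both values are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile-pvc      # illustrative name
  namespace: default
spec:
  accessModes:
    - ReadWriteMany        # "Many Nodes Read-Write"
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi
```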
From Workloads page of Default project, Deploy a new workload:
- input Name
- input ubuntu:xenial as Docker Image
- choose Run all pods for this workload on Linux nodes only
- expand Volumes and choose Use an existing persistent volume (claim) (choosing Add a new persistent volume (claim) is equivalent: it adds a PVC based on the above StorageClass):
- input Volume Name
- choose the Persistent Volume Claim created previously
- input /mnt/azure as Mount Point
- Execute Shell on this workload to check the mount point:
# mount | grep /mnt/azure
# echo $(hostname) > /mnt/azure/hosts
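For reference, the Linux workload above corresponds roughly to the following Deployment manifest; the name, label, claimName, and the sleep command (to keep the container running) are all assumptions for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-linux       # illustrative name
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu-linux
  template:
    metadata:
      labels:
        app: ubuntu-linux
    spec:
      nodeSelector:
        kubernetes.io/os: linux   # run on Linux nodes only
      containers:
      - name: ubuntu
        image: ubuntu:xenial
        command: ['sleep', 'infinity']   # keep the pod alive for Execute Shell
        volumeMounts:
        - name: azure
          mountPath: /mnt/azure
      volumes:
      - name: azure
        persistentVolumeClaim:
          claimName: azurefile-pvc       # the PVC created previously
```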
- input Name
- input mcr.microsoft.com/powershell:nanoserver-1809 as Docker Image
- choose Run all pods for this workload on Windows nodes only
- expand Volumes to Use an existing persistent volume (claim):
- input Volume Name
- choose the Persistent Volume Claim bound to the Linux workload
- input /data as Mount Point
- Execute Shell on this workload to check the mount point:
C:\> pwsh.exe
C:\> cat c:/data/hosts
C:\> echo $(hostname) >> c:/data/hosts
- go back to the Linux workload to check:
# cat /mnt/azure/hosts
From the Azure portal, choose Storage accounts, then Add:
- input the same Resource group as nodes
- input Storage account name
- choose Location where the nodes located
- select Standard as Performance (Azure Files currently works only with Standard storage; Premium storage volumes fail to provision)
- select StorageV2 (general purpose v2) as Account kind
- select Locally-redundant storage (LRS) as Replication
- click Next through the remaining tabs, then Create
- go into this storage resource
- find Access keys from Settings of this resource
- view the value of Storage account name (use for next step) and one Key (use for next step)
- find Files from File service of this resource
- add File share:
- input Name (use for next step)
- input Quota
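The portal steps above can also be scripted with the Azure CLI; this is a sketch (requires az login, and all resource names here are illustrative placeholders):

```shell
# Create the storage account in the same resource group and location as the nodes
az storage account create \
  --name mystorageaccount \
  --resource-group my-node-resource-group \
  --location eastus \
  --sku Standard_LRS \
  --kind StorageV2

# Retrieve one of the access keys (the "Key" used in the next step)
STORAGE_KEY=$(az storage account keys list \
  --account-name mystorageaccount \
  --resource-group my-node-resource-group \
  --query '[0].value' -o tsv)

# Create the file share (the "Name" used in the next step)
az storage share create \
  --name myfileshare \
  --account-name mystorageaccount \
  --account-key "$STORAGE_KEY"
```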
From Resources > Secrets page of Default project, Add Secret:
- input Name (use for next step)
- select Available to a single namespace as Scope
- add azurestorageaccountname = the Storage account name value from above
- add azurestorageaccountkey = the Key value from above
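A sketch of the equivalent Secret manifest; the name is illustrative, and using stringData lets you paste the values without base64-encoding them first:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: azure-secret       # illustrative name
  namespace: default
type: Opaque
stringData:
  azurestorageaccountname: mystorageaccount      # your Storage account name
  azurestorageaccountkey: <storage-account-key>  # your Key value
```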
From Workloads page of Default project, Deploy a new workload:
- input Name
- input ubuntu:xenial as Docker Image
- choose Run all pods for this workload on Linux nodes only
- expand Volumes to Add an ephemeral volume:
- select Azure Filesystem as Source
- expand Source Configuration:
- input Share Name = the Name of the File share created above
- input the Secret Name created previously
- select No as Read Only
- input /mnt/azure as Mount Point
- Execute Shell on this workload to check the mount point:
# mount | grep /mnt/azure
# echo $(hostname) > /mnt/azure/hosts
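The ephemeral volume configured above maps to the in-tree azureFile volume source in the pod spec. A fragment, assuming the Secret and File share names used earlier:

```yaml
# Pod spec fragment (names are the illustrative values from the steps above)
spec:
  containers:
  - name: ubuntu
    image: ubuntu:xenial
    volumeMounts:
    - name: azure
      mountPath: /mnt/azure
  volumes:
  - name: azure
    azureFile:
      secretName: azure-secret   # Secret created previously
      shareName: myfileshare     # File share created above
      readOnly: false            # "No" as Read Only
```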
- input Name
- input mcr.microsoft.com/powershell:nanoserver-1809 as Docker Image
- choose Run all pods for this workload on Windows nodes only
- expand Volumes to Add an ephemeral volume
- select Azure Filesystem as Source
- expand Source Configuration:
- input Share Name = the Name of the File share created above
- input the Secret Name created previously
- select No as Read Only
- input /data as Mount Point
- Execute Shell on this workload to check the mount point:
C:\> pwsh.exe
C:\> cat c:/data/hosts
C:\> echo $(hostname) >> c:/data/hosts
- go back to the Linux workload to check:
# cat /mnt/azure/hosts