SFS is a network-attached storage (NAS) service that provides shared, scalable, high-performance file storage. It is suitable for large-capacity expansion and cost-sensitive services. This section describes how to use an existing SFS file system to statically create PVs and PVCs for data persistence and sharing in workloads.
Parameter | Description
---|---
PVC Type | In this example, select SFS.
PVC Name | Enter the PVC name, which must be unique in a namespace.
Creation Method | In this example, select Create new to create both a PV and a PVC on the console.
PV (a) | Select an existing PV in the cluster. For details about how to create a PV, see "Creating a storage volume" in Related Operations. You do not need to specify this parameter in this example.
SFS (b) | Click Select SFS. On the displayed page, select the SFS file system that meets your requirements and click OK. NOTE: Currently, only SFS 3.0 Capacity-Oriented is supported.
PV Name (b) | Enter the PV name, which must be unique in the same cluster.
Access Mode (b) | SFS volumes support only ReadWriteMany, indicating that a storage volume can be mounted to multiple nodes in read/write mode. For details, see Volume Access Modes.
Reclaim Policy (b) | You can select Delete or Retain to specify the reclaim policy of the underlying storage when the PVC is deleted. For details, see PV Reclaim Policy. NOTE: If multiple PVs use the same underlying storage volume, select Retain to prevent the underlying volume from being deleted with a PV.
Mount Options (b) | Enter mount parameter key-value pairs. For details, see Configuring SFS Volume Mount Options.

(a): The parameter is available when Creation Method is set to Use existing.
(b): The parameter is available when Creation Method is set to Create new.
You can choose Storage in the navigation pane and view the created PVC and PV on the PVCs and PVs tab pages, respectively.
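If kubectl is configured for the cluster, you can run an equivalent check from the CLI. This is a minimal sketch; it assumes the PVC was created in the default namespace:

```bash
# Both resources should show a Bound status once the PVC and PV are linked.
kubectl get pvc -n default
kubectl get pv
```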
Parameter | Description
---|---
PVC | Select an existing SFS volume.
Mount Path | Enter a mount path, for example, /tmp. This parameter specifies the container path to which the data volume is mounted. Do not mount the volume to a system directory such as / or /var/run; otherwise, the container will malfunction. Mount the volume to an empty directory. If the directory is not empty, ensure that it contains no files that affect container startup; otherwise, the files will be replaced, causing container startup failures or workload creation failures. NOTICE: If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container; otherwise, high-risk files on the host may be damaged.
Subpath | Enter the subpath of the storage volume to mount a path within the storage volume to the container. In this way, different folders of the same storage volume can be used in a single pod. For example, tmp indicates that data in the container mount path is stored in the tmp folder of the storage volume. If this parameter is left blank, the root path is used by default. (See the YAML sketch after this table.)
Permission | 
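As referenced in the Subpath row above, this console parameter corresponds to the standard Kubernetes subPath field in a container's volumeMounts. The snippet below is an illustrative sketch only; the volume name pvc-sfs-volume and the tmp folder are assumptions for this example:

```yaml
# Illustrative sketch: mount the "tmp" folder of the storage volume at /data.
volumeMounts:
  - name: pvc-sfs-volume   # assumed volume name declared in the pod's volumes list
    mountPath: /data       # path inside the container
    subPath: tmp           # folder within the storage volume; the root path is used if omitted
```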
In this example, the volume is mounted to the /data path of the container. The container data generated in this path is stored in the SFS file system.
After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to Verifying Data Persistence and Sharing.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: everest-csi-provisioner
    everest.io/reclaim-policy: retain-volume-only  # (Optional) The underlying volume is retained when the PV is deleted.
  name: pv-sfs                       # PV name
spec:
  accessModes:
    - ReadWriteMany                  # Access mode. The value must be ReadWriteMany for SFS.
  capacity:
    storage: 1Gi                     # SFS volume capacity
  csi:
    driver: nas.csi.everest.io       # Storage driver that the mount depends on
    fsType: nfs
    volumeHandle: <your_volume_id>   # SFS Capacity-Oriented volume ID
    volumeAttributes:
      everest.io/share-export-location: <your_location>  # Shared path of the SFS volume
      storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner
  persistentVolumeReclaimPolicy: Retain  # Reclaim policy
  storageClassName: csi-nas          # StorageClass name. csi-nas indicates that SFS Capacity-Oriented is used.
  mountOptions: []                   # Mount options
```
Parameter | Mandatory | Description
---|---|---
everest.io/reclaim-policy | No | Only retain-volume-only is supported. This parameter is valid only when the Everest version is 1.2.9 or later and the reclaim policy is Delete. When a PVC is deleted, if the reclaim policy is Delete and this parameter is set to retain-volume-only, the associated PV is deleted while the underlying storage volume is retained.
volumeHandle | Yes | ID of the SFS Capacity-Oriented volume.
everest.io/share-export-location | Yes | Shared path of the file system.
mountOptions | Yes | Mount options. If not specified, the defaults vers=3, timeo=600, nolock, and hard are used. For details, see Configuring SFS Volume Mount Options.
persistentVolumeReclaimPolicy | Yes | Reclaim policies are supported when the cluster version is 1.19.10 or later and the Everest version is 1.2.9 or later. Delete and Retain are supported. For details, see PV Reclaim Policy. If multiple PVs use the same SFS volume, use Retain to prevent the underlying volume from being deleted with a PV. Retain: When a PVC is deleted, both the PV and the underlying storage resources are retained; you need to delete them manually. The PV enters the Released state and cannot be bound to a PVC again. Delete: When a PVC is deleted, its PV is also deleted. (A patch example for changing the policy on an existing PV follows this table.)
storage | Yes | Requested capacity in the PVC, in Gi. For SFS, this field is used only for verification (it cannot be empty or 0). Its value is fixed at 1; any value you set does not take effect for SFS file systems.
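If you need to change the reclaim policy of a PV that already exists (for example, switching to Retain before deleting a PVC that shares its underlying volume), a standard Kubernetes approach is to patch the PV. This is a sketch using the pv-sfs name from this example:

```bash
# Switch the reclaim policy of the example PV to Retain.
kubectl patch pv pv-sfs -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```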
kubectl apply -f pv-sfs.yaml
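You can then confirm that the PV was created; before it is bound to a PVC, its status is typically Available:

```bash
kubectl get pv pv-sfs
```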
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sfs
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner
spec:
  accessModes:
    - ReadWriteMany            # The value must be ReadWriteMany for SFS.
  resources:
    requests:
      storage: 1Gi             # SFS volume capacity
  storageClassName: csi-nas    # StorageClass name, which must be the same as that of the PV
  volumeName: pv-sfs           # PV name
```
Parameter | Mandatory | Description
---|---|---
storage | Yes | Requested capacity in the PVC, in Gi. The value must be the same as the storage size of the existing PV.
volumeName | Yes | PV name, which must be the same as the PV name in step 1.
kubectl apply -f pvc-sfs.yaml
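Confirm that the PVC has been bound to the PV; the STATUS column should read Bound and the VOLUME column should show pv-sfs:

```bash
kubectl get pvc pvc-sfs -n default
```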
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
        - name: container-1
          image: nginx:latest
          volumeMounts:
            - name: pvc-sfs-volume   # Volume name, which must be the same as the volume name in the volumes field
              mountPath: /data       # Location where the storage volume is mounted
      imagePullSecrets:
        - name: default-secret
      volumes:
        - name: pvc-sfs-volume       # Volume name, which can be customized
          persistentVolumeClaim:
            claimName: pvc-sfs       # Name of the created PVC
```
kubectl apply -f web-demo.yaml
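Optionally, wait for the Deployment rollout to finish before checking the pods:

```bash
kubectl rollout status deployment/web-demo -n default
```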
After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to Verifying Data Persistence and Sharing.
kubectl get pod | grep web-demo
```
web-demo-846b489584-mjhm9   1/1     Running   0     46s
web-demo-846b489584-wvv5s   1/1     Running   0     46s
```
kubectl exec web-demo-846b489584-mjhm9 -- ls /data
kubectl exec web-demo-846b489584-wvv5s -- ls /data
If neither command returns any output, no file exists in the /data path.
kubectl exec web-demo-846b489584-mjhm9 -- touch /data/static
kubectl exec web-demo-846b489584-mjhm9 -- ls /data
Expected output:
static
kubectl delete pod web-demo-846b489584-mjhm9
Expected output:
pod "web-demo-846b489584-mjhm9" deleted
After the deletion, the Deployment controller automatically creates a new replica.
kubectl get pod | grep web-demo
```
web-demo-846b489584-d4d4j   1/1     Running   0     110s
web-demo-846b489584-wvv5s   1/1     Running   0     7m50s
```
kubectl exec web-demo-846b489584-d4d4j -- ls /data
Expected output:
static
The static file still exists, indicating that the data in the file system is stored persistently.
kubectl get pod | grep web-demo
```
web-demo-846b489584-d4d4j   1/1     Running   0     7m
web-demo-846b489584-wvv5s   1/1     Running   0     13m
```
kubectl exec web-demo-846b489584-d4d4j -- touch /data/share
kubectl exec web-demo-846b489584-d4d4j -- ls /data
Expected output:
share static
kubectl exec web-demo-846b489584-wvv5s -- ls /data
Expected output:
share static
The file created in the /data path of one pod also appears in the /data path of the other pod, indicating that the two pods share the same volume.
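As an optional extra check, you can confirm that /data is backed by the shared NFS mount; this assumes the mount utility is available in the container image:

```bash
# The output line should show the SFS share export location and the nfs type.
kubectl exec web-demo-846b489584-wvv5s -- mount | grep /data
```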
Operation | Description | Procedure
---|---|---
Creating a storage volume (PV) | Create a PV on the CCE console. | 
Viewing events | View the event names, event types, number of occurrences, Kubernetes events, first occurrence time, and last occurrence time of the PVC or PV. (See the CLI alternative after this table.) | 
Viewing a YAML file | View, copy, or download the YAML file of a PVC or PV. | 
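For the event and YAML operations above, a kubectl alternative is shown below, using the pvc-sfs and pv-sfs names from this example:

```bash
kubectl describe pvc pvc-sfs -n default   # events are listed at the end of the output
kubectl get pv pv-sfs -o yaml             # view the full YAML of the PV
```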