From 291c194b26da319f3a818e81c1f8d72335582a4f Mon Sep 17 00:00:00 2001 From: OpenTelekomCloud Proposal Bot Date: Mon, 31 Oct 2022 11:23:46 +0000 Subject: [PATCH] Update content --- ...edns_system_resource_add-on,_mandatory.rst | 1 - umn/source/add-ons/gpu-beta.rst | 3 - .../node_affinity.rst | 1 - .../workload_affinity.rst | 1 - .../workload_anti-affinity.rst | 1 - .../scheduling_policy_overview.rst | 2 - .../node_scaling_mechanisms.rst | 1 - .../workload_scaling_mechanisms.rst | 4 +- .../querying_cts_logs.rst | 2 - umn/source/clusters/cluster_overview.rst | 2 - ..._of_pods_that_can_be_created_on_a_node.rst | 2 +- .../controlling_cluster_permissions.rst | 1 - .../clusters/creating_a_cce_turbo_cluster.rst | 1 - .../managing_a_cluster/deleting_a_cluster.rst | 1 - .../obtaining_a_cluster_certificate.rst | 1 - .../clusters/setting_cluster_auto_scaling.rst | 2 +- .../clusters/upgrading_a_cluster/overview.rst | 1 - ...rming_in-place_upgrade_v1.15_and_later.rst | 8 -- .../connecting_to_a_cluster_using_kubectl.rst | 2 - umn/source/instruction.rst | 1 - .../migrating_clusters.rst | 3 +- .../migrating_images.rst | 3 - .../monitoring_and_logs/container_logs.rst | 1 - .../monitoring_overview.rst | 6 +- ...uring_a_namespace-level_network_policy.rst | 1 - umn/source/namespaces/managing_namespaces.rst | 2 - .../cloud_native_network_2.0.rst | 4 - .../container_tunnel_network.rst | 4 +- .../container_network_models/vpc_network.rst | 3 - umn/source/networking/ingress/overview.rst | 2 - .../using_elb_ingresses_on_the_console.rst | 1 - umn/source/networking/network_policies.rst | 4 - umn/source/networking/overview.rst | 3 - .../networking/services/eni_loadbalancer.rst | 3 +- .../intra-cluster_access_clusterip.rst | 5 +- .../networking/services/loadbalancer.rst | 5 +- umn/source/networking/services/nodeport.rst | 5 +- umn/source/networking/services/overview.rst | 2 - umn/source/node_pools/node_pool_overview.rst | 1 - ..._a_linux_lvm_disk_partition_for_docker.rst | 1 - umn/source/nodes/creating_a_node.rst | 4 +- ...ating_the_reserved_resources_of_a_node.rst | 74 +++++++++---------- .../performing_rolling_upgrade_for_nodes.rst | 1 - umn/source/nodes/resetting_a_node.rst | 1 - umn/source/nodes/stopping_a_node.rst | 2 - umn/source/nodes/synchronizing_node_data.rst | 1 - .../cluster_permissions_iam-based.rst | 1 - ...pace_permissions_kubernetes_rbac-based.rst | 2 - .../permissions_overview.rst | 1 - ...when_the_cluster_status_is_unavailable.rst | 3 - ...insufficient_eips_when_a_node_is_added.rst | 3 - ...planning_cidr_blocks_for_a_cce_cluster.rst | 8 -- ...k_model_when_creating_a_cluster_on_cce.rst | 2 - ...ble_but_the_node_status_is_unavailable.rst | 3 - ...ip_between_clusters,_vpcs,_and_subnets.rst | 3 +- .../failed_to_pull_an_image.rst | 2 +- .../failed_to_restart_a_container.rst | 2 - .../obs_volumes/using_obs_volumes.rst | 1 - umn/source/storage_csi/overview.rst | 2 - .../persistentvolumeclaims_pvcs.rst | 8 +- .../storage_csi/persistentvolumes_pvs.rst | 8 +- ...reating_a_pv_from_an_existing_evs_disk.rst | 18 ++--- .../overview.rst | 1 - ...ating_a_pv_from_an_existing_obs_bucket.rst | 14 ++-- .../overview.rst | 1 - .../using_obs_volumes.rst | 1 - ..._a_pv_from_an_existing_sfs_file_system.rst | 14 ++-- .../overview.rst | 1 - .../overview.rst | 1 - .../setting_an_environment_variable.rst | 2 - .../setting_container_specifications.rst | 8 +- .../setting_container_startup_commands.rst | 4 - .../setting_health_check_for_a_container.rst | 6 +- umn/source/workloads/creating_a_daemonset.rst | 3 +- 
.../workloads/creating_a_deployment.rst | 5 +- .../workloads/creating_a_statefulset.rst | 3 +- .../workloads/managing_workloads_and_jobs.rst | 1 - umn/source/workloads/overview.rst | 3 - 78 files changed, 98 insertions(+), 217 deletions(-) diff --git a/umn/source/add-ons/coredns_system_resource_add-on,_mandatory.rst b/umn/source/add-ons/coredns_system_resource_add-on,_mandatory.rst index 493a642..1b1fc16 100644 --- a/umn/source/add-ons/coredns_system_resource_add-on,_mandatory.rst +++ b/umn/source/add-ons/coredns_system_resource_add-on,_mandatory.rst @@ -184,7 +184,6 @@ DNS policies can be set on a per-pod basis. Currently, Kubernetes supports four .. figure:: /_static/images/en-us_image_0186273271.png :alt: **Figure 1** Routing - **Figure 1** Routing Upgrading the Add-on diff --git a/umn/source/add-ons/gpu-beta.rst b/umn/source/add-ons/gpu-beta.rst index e9a452e..d4b92a1 100644 --- a/umn/source/add-ons/gpu-beta.rst +++ b/umn/source/add-ons/gpu-beta.rst @@ -77,7 +77,6 @@ Obtaining the Driver Link from Public Network .. figure:: /_static/images/en-us_image_0000001280466745.png :alt: **Figure 1** Setting parameters - **Figure 1** Setting parameters 5. After confirming the driver information, click **SEARCH**. A page is displayed, showing the driver information, as shown in :ref:`Figure 2 `. Click **DOWNLOAD**. @@ -87,7 +86,6 @@ Obtaining the Driver Link from Public Network .. figure:: /_static/images/en-us_image_0181616313.png :alt: **Figure 2** Driver information - **Figure 2** Driver information 6. Obtain the driver link in either of the following ways: @@ -101,7 +99,6 @@ Obtaining the Driver Link from Public Network .. figure:: /_static/images/en-us_image_0181616314.png :alt: **Figure 3** Obtaining the link - **Figure 3** Obtaining the link Uninstalling the Add-on diff --git a/umn/source/affinity_and_anti-affinity_scheduling/custom_scheduling_policies/node_affinity.rst b/umn/source/affinity_and_anti-affinity_scheduling/custom_scheduling_policies/node_affinity.rst index 1710905..c122712 100644 --- a/umn/source/affinity_and_anti-affinity_scheduling/custom_scheduling_policies/node_affinity.rst +++ b/umn/source/affinity_and_anti-affinity_scheduling/custom_scheduling_policies/node_affinity.rst @@ -51,7 +51,6 @@ Using the CCE Console .. figure:: /_static/images/en-us_image_0000001190658439.png :alt: **Figure 1** Node affinity scheduling policy - **Figure 1** Node affinity scheduling policy Using kubectl diff --git a/umn/source/affinity_and_anti-affinity_scheduling/custom_scheduling_policies/workload_affinity.rst b/umn/source/affinity_and_anti-affinity_scheduling/custom_scheduling_policies/workload_affinity.rst index 01eba5f..3b5b5b6 100644 --- a/umn/source/affinity_and_anti-affinity_scheduling/custom_scheduling_policies/workload_affinity.rst +++ b/umn/source/affinity_and_anti-affinity_scheduling/custom_scheduling_policies/workload_affinity.rst @@ -59,7 +59,6 @@ Workload affinity determines the pods as which the target workload will be deplo .. 
figure:: /_static/images/en-us_image_0000001144578756.png :alt: **Figure 1** Pod affinity scheduling policy - **Figure 1** Pod affinity scheduling policy Using kubectl diff --git a/umn/source/affinity_and_anti-affinity_scheduling/custom_scheduling_policies/workload_anti-affinity.rst b/umn/source/affinity_and_anti-affinity_scheduling/custom_scheduling_policies/workload_anti-affinity.rst index 90ae0f9..5593edf 100644 --- a/umn/source/affinity_and_anti-affinity_scheduling/custom_scheduling_policies/workload_anti-affinity.rst +++ b/umn/source/affinity_and_anti-affinity_scheduling/custom_scheduling_policies/workload_anti-affinity.rst @@ -59,7 +59,6 @@ Workload anti-affinity determines the pods from which the target workload will b .. figure:: /_static/images/en-us_image_0000001144738550.png :alt: **Figure 1** Pod anti-affinity scheduling policy - **Figure 1** Pod anti-affinity scheduling policy Using kubectl diff --git a/umn/source/affinity_and_anti-affinity_scheduling/scheduling_policy_overview.rst b/umn/source/affinity_and_anti-affinity_scheduling/scheduling_policy_overview.rst index c767572..f514284 100644 --- a/umn/source/affinity_and_anti-affinity_scheduling/scheduling_policy_overview.rst +++ b/umn/source/affinity_and_anti-affinity_scheduling/scheduling_policy_overview.rst @@ -44,7 +44,6 @@ A simple scheduling policy allows you to configure affinity between workloads an .. figure:: /_static/images/en-us_image_0165899095.png :alt: **Figure 1** Affinity between workloads - **Figure 1** Affinity between workloads - **Anti-affinity between workloads**: For details, see :ref:`Workload-Workload Anti-Affinity `. Constraining multiple instances of the same workload from being deployed on the same node reduces the impact of system breakdowns. Anti-affinity deployment is also recommended for workloads that may interfere with each other. @@ -56,7 +55,6 @@ A simple scheduling policy allows you to configure affinity between workloads an .. figure:: /_static/images/en-us_image_0165899282.png :alt: **Figure 2** Anti-affinity between workloads - **Figure 2** Anti-affinity between workloads .. important:: diff --git a/umn/source/auto_scaling/scaling_a_cluster_node/node_scaling_mechanisms.rst b/umn/source/auto_scaling/scaling_a_cluster_node/node_scaling_mechanisms.rst index cb93e08..eb6cf01 100644 --- a/umn/source/auto_scaling/scaling_a_cluster_node/node_scaling_mechanisms.rst +++ b/umn/source/auto_scaling/scaling_a_cluster_node/node_scaling_mechanisms.rst @@ -42,7 +42,6 @@ autoscaler Architecture .. figure:: /_static/images/en-us_image_0000001199848585.png :alt: **Figure 1** autoscaler architecture - **Figure 1** autoscaler architecture **Description** diff --git a/umn/source/auto_scaling/scaling_a_workload/workload_scaling_mechanisms.rst b/umn/source/auto_scaling/scaling_a_workload/workload_scaling_mechanisms.rst index e2f0875..29fe78c 100644 --- a/umn/source/auto_scaling/scaling_a_workload/workload_scaling_mechanisms.rst +++ b/umn/source/auto_scaling/scaling_a_workload/workload_scaling_mechanisms.rst @@ -44,9 +44,9 @@ HPA can work with Metrics Server to implement auto scaling based on the CPU and Use the formula: ratio = currentMetricValue/desiredMetricValue - When \|ratio – 1.0\| ≤ tolerance, scaling will not be performed. + When \|ratio - 1.0\| <= tolerance, scaling will not be performed. - When \|ratio – 1.0\| > tolerance, the desired value is calculated using the formula mentioned above. + When \|ratio - 1.0\| > tolerance, the desired value is calculated using the formula mentioned above. 
The default value is 0.1 in the current community version. diff --git a/umn/source/cloud_trace_service_cts/querying_cts_logs.rst b/umn/source/cloud_trace_service_cts/querying_cts_logs.rst index a9da04e..fdc622b 100644 --- a/umn/source/cloud_trace_service_cts/querying_cts_logs.rst +++ b/umn/source/cloud_trace_service_cts/querying_cts_logs.rst @@ -45,7 +45,6 @@ Procedure .. figure:: /_static/images/en-us_image_0000001144779790.png :alt: **Figure 1** Expanding trace details - **Figure 1** Expanding trace details #. Click **View Trace** in the **Operation** column. The trace details are displayed. @@ -54,7 +53,6 @@ Procedure .. figure:: /_static/images/en-us_image_0000001144620002.png :alt: **Figure 2** Viewing event details - **Figure 2** Viewing event details .. |image1| image:: /_static/images/en-us_image_0144054048.gif diff --git a/umn/source/clusters/cluster_overview.rst b/umn/source/clusters/cluster_overview.rst index 83a9b8e..741cba9 100644 --- a/umn/source/clusters/cluster_overview.rst +++ b/umn/source/clusters/cluster_overview.rst @@ -22,7 +22,6 @@ The following figure shows the architecture of a Kubernetes cluster. .. figure:: /_static/images/en-us_image_0267028603.png :alt: **Figure 1** Kubernetes cluster architecture - **Figure 1** Kubernetes cluster architecture **Master node** @@ -127,5 +126,4 @@ Cluster Lifecycle .. figure:: /_static/images/en-us_image_0000001160731158.png :alt: **Figure 2** Cluster status transition - **Figure 2** Cluster status transition diff --git a/umn/source/clusters/cluster_parameters/maximum_number_of_pods_that_can_be_created_on_a_node.rst b/umn/source/clusters/cluster_parameters/maximum_number_of_pods_that_can_be_created_on_a_node.rst index b0af384..1d56b72 100644 --- a/umn/source/clusters/cluster_parameters/maximum_number_of_pods_that_can_be_created_on_a_node.rst +++ b/umn/source/clusters/cluster_parameters/maximum_number_of_pods_that_can_be_created_on_a_node.rst @@ -36,7 +36,7 @@ This parameter affects the maximum number of pods that can be created on a node. |image1| -By default, a node occupies three container IP addresses (network address, gateway address, and broadcast address). Therefore, the number of container IP addresses that can be allocated to a node equals the number of selected container IP addresses minus 3. For example, in the preceding figure, **the number of container IP addresses that can be allocated to a node is 125 (128 – 3)**. +By default, a node occupies three container IP addresses (network address, gateway address, and broadcast address). Therefore, the number of container IP addresses that can be allocated to a node equals the number of selected container IP addresses minus 3. For example, in the preceding figure, **the number of container IP addresses that can be allocated to a node is 125 (128 - 3)**. .. _cce_01_0348__section16296174054019: diff --git a/umn/source/clusters/controlling_cluster_permissions.rst b/umn/source/clusters/controlling_cluster_permissions.rst index 299a756..036e0e0 100644 --- a/umn/source/clusters/controlling_cluster_permissions.rst +++ b/umn/source/clusters/controlling_cluster_permissions.rst @@ -85,7 +85,6 @@ Procedure .. figure:: /_static/images/en-us_image_0000001144208440.png :alt: **Figure 1** Obtaining the access address - **Figure 1** Obtaining the access address In addition, the **X-Remote-Group** header field, that is, the user group name, is supported. During role binding, a role can be bound to a group and carry user group information when you access the cluster. 
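As a minimal sketch of the group binding described above (the group name **cce-viewers**, the namespace, and the built-in **view** ClusterRole are assumed for illustration), a RoleBinding that grants read-only access to every user whose request carries that group in the **X-Remote-Group** header can look as follows:

.. code-block:: yaml

   apiVersion: rbac.authorization.k8s.io/v1
   kind: RoleBinding
   metadata:
     name: view-binding
     namespace: default
   subjects:
   - kind: Group
     name: cce-viewers                    # assumed group name, matched against X-Remote-Group
     apiGroup: rbac.authorization.k8s.io
   roleRef:
     kind: ClusterRole
     name: view                           # built-in read-only ClusterRole
     apiGroup: rbac.authorization.k8s.io

With this binding in place, all users in the group obtain the permissions defined in **view** within the **default** namespace.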
diff --git a/umn/source/clusters/creating_a_cce_turbo_cluster.rst b/umn/source/clusters/creating_a_cce_turbo_cluster.rst index e6aa32d..d73aeaf 100644 --- a/umn/source/clusters/creating_a_cce_turbo_cluster.rst +++ b/umn/source/clusters/creating_a_cce_turbo_cluster.rst @@ -27,7 +27,6 @@ Procedure .. figure:: /_static/images/en-us_image_0000001150420952.png :alt: **Figure 1** Creating a CCE Turbo cluster - **Figure 1** Creating a CCE Turbo cluster #. On the page displayed, set the following parameters: diff --git a/umn/source/clusters/managing_a_cluster/deleting_a_cluster.rst b/umn/source/clusters/managing_a_cluster/deleting_a_cluster.rst index 80f0a0a..95c76f2 100644 --- a/umn/source/clusters/managing_a_cluster/deleting_a_cluster.rst +++ b/umn/source/clusters/managing_a_cluster/deleting_a_cluster.rst @@ -38,7 +38,6 @@ Procedure .. figure:: /_static/images/en-us_image_0000001190168507.png :alt: **Figure 1** Deleting a cluster - **Figure 1** Deleting a cluster #. Click **Yes** to start deleting the cluster. diff --git a/umn/source/clusters/obtaining_a_cluster_certificate.rst b/umn/source/clusters/obtaining_a_cluster_certificate.rst index 6c45593..f792f7f 100644 --- a/umn/source/clusters/obtaining_a_cluster_certificate.rst +++ b/umn/source/clusters/obtaining_a_cluster_certificate.rst @@ -23,7 +23,6 @@ Procedure .. figure:: /_static/images/en-us_image_0000001190859184.png :alt: **Figure 1** Downloading a certificate - **Figure 1** Downloading a certificate .. important:: diff --git a/umn/source/clusters/setting_cluster_auto_scaling.rst b/umn/source/clusters/setting_cluster_auto_scaling.rst index 8c3d588..5de8c5a 100644 --- a/umn/source/clusters/setting_cluster_auto_scaling.rst +++ b/umn/source/clusters/setting_cluster_auto_scaling.rst @@ -35,7 +35,7 @@ Automatic Cluster Scale-out +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Maximum Nodes | Maximum number of nodes to which the cluster can scale out. | | | | - | | 1 ≤ Maximum Nodes < cluster node quota | + | | 1 <= Maximum Nodes < cluster node quota | | | | | | .. note:: | | | | diff --git a/umn/source/clusters/upgrading_a_cluster/overview.rst b/umn/source/clusters/upgrading_a_cluster/overview.rst index edb1f40..6217422 100644 --- a/umn/source/clusters/upgrading_a_cluster/overview.rst +++ b/umn/source/clusters/upgrading_a_cluster/overview.rst @@ -21,7 +21,6 @@ Choose **Resource Management** > **Clusters** and check whether there is an upgr .. figure:: /_static/images/en-us_image_0000001190048341.png :alt: **Figure 1** Cluster with the upgrade flag - **Figure 1** Cluster with the upgrade flag Cluster Upgrade diff --git a/umn/source/clusters/upgrading_a_cluster/performing_in-place_upgrade_v1.15_and_later.rst b/umn/source/clusters/upgrading_a_cluster/performing_in-place_upgrade_v1.15_and_later.rst index b21b644..cd9c070 100644 --- a/umn/source/clusters/upgrading_a_cluster/performing_in-place_upgrade_v1.15_and_later.rst +++ b/umn/source/clusters/upgrading_a_cluster/performing_in-place_upgrade_v1.15_and_later.rst @@ -40,7 +40,6 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other .. figure:: /_static/images/en-us_image_0000001229793402.png :alt: **Figure 1** Upgrading a cluster - **Figure 1** Upgrading a cluster .. 
note:: @@ -56,7 +55,6 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other .. figure:: /_static/images/en-us_image_0000001280171657.png :alt: **Figure 2** Determining whether to back up the entire master node - **Figure 2** Determining whether to back up the entire master node #. Check the version information, last update/upgrade time, available upgrade version, and upgrade history of the current cluster. @@ -67,7 +65,6 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other .. figure:: /_static/images/en-us_image_0000001274316069.png :alt: **Figure 3** Cluster upgrade page - **Figure 3** Cluster upgrade page #. Click **Upgrade** on the right. Set the upgrade parameters. @@ -89,7 +86,6 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other .. figure:: /_static/images/en-us_image_0000001229794946.png :alt: **Figure 4** Configuring upgrade parameters - **Figure 4** Configuring upgrade parameters #. Read the upgrade instructions carefully, and select **I have read the upgrade instructions**. Click **Upgrade**. @@ -98,7 +94,6 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other .. figure:: /_static/images/en-us_image_0000001280421317.png :alt: **Figure 5** Final step before upgrade - **Figure 5** Final step before upgrade #. After you click **Upgrade**, the cluster upgrade starts. You can view the upgrade process in the lower part of the page. @@ -109,7 +104,6 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other .. figure:: /_static/images/en-us_image_0000001280181541.png :alt: **Figure 6** Cluster upgrade in process - **Figure 6** Cluster upgrade in process #. When the upgrade progress reaches 100%, the cluster is upgraded. The version information will be properly displayed, and no upgrade is required. @@ -118,7 +112,6 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other .. figure:: /_static/images/en-us_image_0000001236582394.png :alt: **Figure 7** Upgrade completed - **Figure 7** Upgrade completed #. After the upgrade is complete, verify the cluster Kubernetes version on the **Clusters** page. @@ -127,7 +120,6 @@ This section describes how to upgrade a CCE cluster of v1.15 or later. For other .. figure:: /_static/images/en-us_image_0000001236263298.png :alt: **Figure 8** Verifying the upgrade success - **Figure 8** Verifying the upgrade success .. |image1| image:: /_static/images/en-us_image_0000001159118361.png diff --git a/umn/source/clusters/using_kubectl_to_run_a_cluster/connecting_to_a_cluster_using_kubectl.rst b/umn/source/clusters/using_kubectl_to_run_a_cluster/connecting_to_a_cluster_using_kubectl.rst index f3d501d..b792257 100644 --- a/umn/source/clusters/using_kubectl_to_run_a_cluster/connecting_to_a_cluster_using_kubectl.rst +++ b/umn/source/clusters/using_kubectl_to_run_a_cluster/connecting_to_a_cluster_using_kubectl.rst @@ -45,7 +45,6 @@ On the `Kubernetes release `. + Allocatable node resources (CPU or memory) = Total amount - Reserved amount - Eviction thresholds. For details, see :ref:`Formula for Calculating the Reserved Resources of a Node `. On the cluster monitoring page, you can also view monitoring data of nodes, workloads, and pods. You can click |image3| to view the detailed data. 
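As a worked example of this formula (the node size, pod limit, and eviction threshold are assumed values, applied to the reservation rules in :ref:`Formula for Calculating the Reserved Resources of a Node `): for a node with 16 GB of memory and a maximum of 32 pods:

::

   Reserved for system components = (16 GB - 8 GB) x 1024 x 10%  = 819.2 MB
   Reserved for kubelet           = [700 + (32 - 16) x 18.75] MB = 1000 MB
   Allocatable memory             = 16 x 1024 MB - 819.2 MB - 1000 MB - eviction threshold
                                  = 14564.8 MB - eviction threshold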
@@ -82,8 +82,8 @@ The node list page also displays the data about the allocable resources of the n The calculation formulas are as follows: -- Allocatable CPU = Total CPU – Requested CPU of all pods – Reserved CPU for other resources -- Allocatable memory = Total memory – Requested memory of all pods – Reserved memory for other resources +- Allocatable CPU = Total CPU - Requested CPU of all pods - Reserved CPU for other resources +- Allocatable memory = Total memory - Requested memory of all pods - Reserved memory for other resources Viewing Workload Monitoring Data -------------------------------- diff --git a/umn/source/namespaces/configuring_a_namespace-level_network_policy.rst b/umn/source/namespaces/configuring_a_namespace-level_network_policy.rst index a251c11..56455f1 100644 --- a/umn/source/namespaces/configuring_a_namespace-level_network_policy.rst +++ b/umn/source/namespaces/configuring_a_namespace-level_network_policy.rst @@ -36,7 +36,6 @@ Procedure .. figure:: /_static/images/en-us_image_0000001144779784.png :alt: **Figure 1** Namespace-level network policy - **Figure 1** Namespace-level network policy Network Isolation Description diff --git a/umn/source/namespaces/managing_namespaces.rst b/umn/source/namespaces/managing_namespaces.rst index 28727ba..e3c624d 100644 --- a/umn/source/namespaces/managing_namespaces.rst +++ b/umn/source/namespaces/managing_namespaces.rst @@ -32,7 +32,6 @@ Isolating Namespaces .. figure:: /_static/images/en-us_image_0000001098645539.png :alt: **Figure 1** One namespace for one environment - **Figure 1** One namespace for one environment - **Isolating namespaces by application** @@ -43,7 +42,6 @@ Isolating Namespaces .. figure:: /_static/images/en-us_image_0000001098403383.png :alt: **Figure 2** Grouping workloads into different namespaces - **Figure 2** Grouping workloads into different namespaces Deleting a Namespace diff --git a/umn/source/networking/container_network_models/cloud_native_network_2.0.rst b/umn/source/networking/container_network_models/cloud_native_network_2.0.rst index 31fea24..7ae00d8 100644 --- a/umn/source/networking/container_network_models/cloud_native_network_2.0.rst +++ b/umn/source/networking/container_network_models/cloud_native_network_2.0.rst @@ -14,7 +14,6 @@ Developed by CCE, Cloud Native Network 2.0 deeply integrates Elastic Network Int .. figure:: /_static/images/en-us_image_0000001231949185.png :alt: **Figure 1** Cloud Native Network 2.0 - **Figure 1** Cloud Native Network 2.0 **Pod-to-pod communication** @@ -55,7 +54,6 @@ In the Cloud Native Network 2.0 model, BMS nodes use ENIs and ECS nodes use sub- .. figure:: /_static/images/en-us_image_0000001172076961.png :alt: **Figure 2** IP address management in Cloud Native Network 2.0 - **Figure 2** IP address management in Cloud Native Network 2.0 - Pod IP addresses are allocated from **Pod Subnet** you configure from the VPC. @@ -86,7 +84,6 @@ In addition, a subnet can be added to the container CIDR block after a cluster i .. figure:: /_static/images/en-us_image_0000001159831938.png :alt: **Figure 3** Configuring CIDR blocks - **Figure 3** Configuring CIDR blocks Example of Cloud Native Network 2.0 Access @@ -98,7 +95,6 @@ Create a CCE Turbo cluster, which contains three ECS nodes. .. figure:: /_static/images/en-us_image_0000001198867835.png :alt: **Figure 4** Cluster network - **Figure 4** Cluster network Access the details page of one node. You can see that the node has one primary NIC and one extended NIC, and both of them are ENIs. 
The extended NIC belongs to the container CIDR block and is used to mount a sub-ENI to the pod. diff --git a/umn/source/networking/container_network_models/container_tunnel_network.rst b/umn/source/networking/container_network_models/container_tunnel_network.rst index b26ac72..2707e5f 100644 --- a/umn/source/networking/container_network_models/container_tunnel_network.rst +++ b/umn/source/networking/container_network_models/container_tunnel_network.rst @@ -14,7 +14,6 @@ The container tunnel network is constructed on but independent of the node netwo .. figure:: /_static/images/en-us_image_0000001145535931.png :alt: **Figure 1** Container tunnel network - **Figure 1** Container tunnel network **Pod-to-pod communication** @@ -61,12 +60,11 @@ The container tunnel network allocates container IP addresses according to the f .. figure:: /_static/images/en-us_image_0000001198861255.png :alt: **Figure 2** IP address allocation of the container tunnel network - **Figure 2** IP address allocation of the container tunnel network Maximum number of nodes that can be created in the cluster using the container tunnel network = Number of IP addresses in the container CIDR block / Size of the IP CIDR block allocated to the node by the container CIDR block at a time (16 by default) -For example, if the container CIDR block is 172.16.0.0/16, the number of IP addresses is 65536. If 16 IP addresses are allocated to a node at a time, a maximum of 4096 (65536/16) nodes can be created in the cluster. This is an extreme case. If 4096 nodes are created, a maximum of 16 pods can be created for each node because only 16 IP CIDR block\s are allocated to each node. In addition, the number of nodes that can be created in a cluster also depends on the node network and cluster scale. +For example, if the container CIDR block is 172.16.0.0/16, the number of IP addresses is 65536. If 16 IP addresses are allocated to a node at a time, a maximum of 4096 (65536/16) nodes can be created in the cluster. This is an extreme case. If 4096 nodes are created, a maximum of 16 pods can be created for each node because only 16 IP CIDR block\\s are allocated to each node. In addition, the number of nodes that can be created in a cluster also depends on the node network and cluster scale. Recommendation for CIDR Block Planning -------------------------------------- diff --git a/umn/source/networking/container_network_models/vpc_network.rst b/umn/source/networking/container_network_models/vpc_network.rst index d0aa72a..847a78d 100644 --- a/umn/source/networking/container_network_models/vpc_network.rst +++ b/umn/source/networking/container_network_models/vpc_network.rst @@ -14,7 +14,6 @@ The VPC network uses VPC routing to integrate with the underlying network. This .. figure:: /_static/images/en-us_image_0000001116237931.png :alt: **Figure 1** VPC network model - **Figure 1** VPC network model **Pod-to-pod communication** @@ -58,7 +57,6 @@ The VPC network allocates container IP addresses according to the following rule .. figure:: /_static/images/en-us_image_0000001153101092.png :alt: **Figure 2** IP address management of the VPC network - **Figure 2** IP address management of the VPC network Maximum number of nodes that can be created in the cluster using the VPC network = Number of IP addresses in the container CIDR block /Number of IP addresses in the CIDR block allocated to the node by the container CIDR block @@ -91,7 +89,6 @@ Create a cluster using the VPC network model. .. 
figure:: /_static/images/en-us_image_0000001198980979.png :alt: **Figure 3** Cluster network - **Figure 3** Cluster network The cluster contains one node. diff --git a/umn/source/networking/ingress/overview.rst b/umn/source/networking/ingress/overview.rst index eedcbd6..8bdcdd1 100644 --- a/umn/source/networking/ingress/overview.rst +++ b/umn/source/networking/ingress/overview.rst @@ -17,7 +17,6 @@ An ingress is an independent resource in the Kubernetes cluster and defines rule .. figure:: /_static/images/en-us_image_0000001238003081.png :alt: **Figure 1** Ingress diagram - **Figure 1** Ingress diagram The following describes the ingress-related definitions: @@ -41,5 +40,4 @@ ELB Ingress Controller is deployed on the master node and bound to the load bala .. figure:: /_static/images/en-us_image_0000001192723190.png :alt: **Figure 2** Working principle of ELB Ingress Controller - **Figure 2** Working principle of ELB Ingress Controller diff --git a/umn/source/networking/ingress/using_elb_ingresses_on_the_console.rst b/umn/source/networking/ingress/using_elb_ingresses_on_the_console.rst index 157bbe4..8ca88a5 100644 --- a/umn/source/networking/ingress/using_elb_ingresses_on_the_console.rst +++ b/umn/source/networking/ingress/using_elb_ingresses_on_the_console.rst @@ -135,7 +135,6 @@ This section uses an Nginx workload as an example to describe how to add an ELB .. figure:: /_static/images/en-us_image_0000001192723194.png :alt: **Figure 1** Accessing the /healthz interface of defaultbackend - **Figure 1** Accessing the /healthz interface of defaultbackend Updating an Ingress diff --git a/umn/source/networking/network_policies.rst b/umn/source/networking/network_policies.rst index f3ae0fa..73885f7 100644 --- a/umn/source/networking/network_policies.rst +++ b/umn/source/networking/network_policies.rst @@ -64,7 +64,6 @@ Using Ingress Rules .. figure:: /_static/images/en-us_image_0259557735.png :alt: **Figure 1** podSelector - **Figure 1** podSelector - **Using namespaceSelector to specify the access scope** @@ -95,7 +94,6 @@ Using Ingress Rules .. figure:: /_static/images/en-us_image_0259558489.png :alt: **Figure 2** namespaceSelector - **Figure 2** namespaceSelector Using Egress Rules @@ -133,7 +131,6 @@ Diagram: .. figure:: /_static/images/en-us_image_0000001340138373.png :alt: **Figure 3** ipBlock - **Figure 3** ipBlock You can define ingress and egress in the same rule. @@ -172,7 +169,6 @@ Diagram: .. figure:: /_static/images/en-us_image_0000001287883210.png :alt: **Figure 4** Using both ingress and egress - **Figure 4** Using both ingress and egress Adding a Network Policy on the Console diff --git a/umn/source/networking/overview.rst b/umn/source/networking/overview.rst index 3e57607..f25bd35 100644 --- a/umn/source/networking/overview.rst +++ b/umn/source/networking/overview.rst @@ -50,7 +50,6 @@ A Service is used for pod access. With a fixed IP address, a Service forwards ac .. figure:: /_static/images/en-us_image_0258889981.png :alt: **Figure 1** Accessing pods through a Service - **Figure 1** Accessing pods through a Service You can configure the following types of Services: @@ -73,7 +72,6 @@ Services forward requests using layer-4 TCP and UDP protocols. Ingresses forward .. figure:: /_static/images/en-us_image_0258961458.png :alt: **Figure 2** Ingress and Service - **Figure 2** Ingress and Service For details about the ingress, see :ref:`Overview `. @@ -100,7 +98,6 @@ Workload access scenarios can be categorized as follows: .. 
figure:: /_static/images/en-us_image_0000001160748146.png :alt: **Figure 3** Network access diagram - **Figure 3** Network access diagram .. |image1| image:: /_static/images/en-us_image_0000001159292060.png diff --git a/umn/source/networking/services/eni_loadbalancer.rst b/umn/source/networking/services/eni_loadbalancer.rst index a42c37f..a3f469b 100644 --- a/umn/source/networking/services/eni_loadbalancer.rst +++ b/umn/source/networking/services/eni_loadbalancer.rst @@ -77,7 +77,7 @@ You can set the Service when creating a workload on the CCE console. An Nginx wo - **Protocol**: protocol used by the Service. - **Container Port**: port defined in the container image and on which the workload listens. The Nginx application listens on port 80. - - **Access Port**: port mapped to the container port at the load balancer's IP address. The workload can be accessed at <*Load balancer's IP address*>:<*Access port*>. The port number range is 1–65535. + - **Access Port**: port mapped to the container port at the load balancer's IP address. The workload can be accessed at <*Load balancer's IP address*>:<*Access port*>. The port number range is 1-65535. #. After the configuration is complete, click **OK**. @@ -173,7 +173,6 @@ After an ENI LoadBalancer Service is created, you can view the listener forwardi .. figure:: /_static/images/en-us_image_0000001204449561.png :alt: **Figure 1** ELB forwarding - **Figure 1** ELB forwarding You can find that a listener is created for the load balancer. The backend server address is the IP address of the pod, and the service port is the container port. This is because the pod uses an ENI or sub-ENI. When traffic passes through the load balancer, it directly forwards the traffic to the pod. This is the same as that described in :ref:`Scenario `. diff --git a/umn/source/networking/services/intra-cluster_access_clusterip.rst b/umn/source/networking/services/intra-cluster_access_clusterip.rst index 0502d5b..db56a0a 100644 --- a/umn/source/networking/services/intra-cluster_access_clusterip.rst +++ b/umn/source/networking/services/intra-cluster_access_clusterip.rst @@ -19,7 +19,6 @@ The cluster-internal domain name format is **.\ *:. The port number range is 1–65535. + - **Access Port**: a port mapped to the container port at the cluster-internal IP address. The workload can be accessed at :. The port number range is 1-65535. #. After the configuration, click **OK** and then **Next: Configure Advanced Settings**. On the page displayed, click **Create**. #. Click **View Deployment Details** or **View StatefulSet Details**. On the **Services** tab page, obtain the access address, for example, 10.247.74.100:8080. @@ -58,7 +57,7 @@ You can set the Service after creating a workload. This has no impact on the wor - **Protocol**: protocol used by the Service. - **Container Port**: port on which the workload listens. The Nginx application listens on port 80. - - **Access Port**: port mapped to the container port at the cluster-internal IP address. The workload can be accessed at :. The port number range is 1–65535. + - **Access Port**: port mapped to the container port at the cluster-internal IP address. The workload can be accessed at :. The port number range is 1-65535. #. Click **Create**. The ClusterIP Service will be added for the workload. 
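The console settings above correspond to a Service manifest like the following minimal sketch (the workload label **app: nginx** and the two port values are assumed):

.. code-block:: yaml

   apiVersion: v1
   kind: Service
   metadata:
     name: nginx
     namespace: default
   spec:
     type: ClusterIP              # default type; reachable only from inside the cluster
     selector:
       app: nginx                 # assumed label of the target workload's pods
     ports:
     - protocol: TCP
       port: 8080                 # access port at the cluster-internal IP address
       targetPort: 80             # container port on which Nginx listens

Other pods in the cluster can then reach the workload at **nginx.default.svc.cluster.local:8080** or at <*Cluster IP address*>:8080.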
diff --git a/umn/source/networking/services/loadbalancer.rst b/umn/source/networking/services/loadbalancer.rst index 4956eec..4bd38c4 100644 --- a/umn/source/networking/services/loadbalancer.rst +++ b/umn/source/networking/services/loadbalancer.rst @@ -18,7 +18,6 @@ In this access mode, requests are transmitted through an ELB load balancer to a .. figure:: /_static/images/en-us_image_0000001163928763.png :alt: **Figure 1** LoadBalancer - **Figure 1** LoadBalancer Notes and Constraints @@ -98,7 +97,7 @@ You can set the Service when creating a workload on the CCE console. An Nginx wo - **Protocol**: protocol used by the Service. - **Container Port**: port defined in the container image and on which the workload listens. The Nginx application listens on port 80. - - **Access Port**: port mapped to the container port at the load balancer's IP address. The workload can be accessed at <*Load balancer's IP address*>:<*Access port*>. The port number range is 1–65535. + - **Access Port**: port mapped to the container port at the load balancer's IP address. The workload can be accessed at <*Load balancer's IP address*>:<*Access port*>. The port number range is 1-65535. #. After the configuration is complete, click **OK**. @@ -335,7 +334,6 @@ You can set the access type when creating a workload using kubectl. This section .. figure:: /_static/images/en-us_image_0276664171.png :alt: **Figure 2** Accessing Nginx through the LoadBalancer Service - **Figure 2** Accessing Nginx through the LoadBalancer Service Using kubectl to Create a Service (Automatically Creating a Load Balancer) @@ -642,7 +640,6 @@ You can add a Service when creating a workload using kubectl. This section uses .. figure:: /_static/images/en-us_image_0000001093275701.png :alt: **Figure 3** Accessing Nginx through the LoadBalancer Service - **Figure 3** Accessing Nginx through the LoadBalancer Service .. _cce_01_0014__section52631714117: diff --git a/umn/source/networking/services/nodeport.rst b/umn/source/networking/services/nodeport.rst index 68f30a2..195c045 100644 --- a/umn/source/networking/services/nodeport.rst +++ b/umn/source/networking/services/nodeport.rst @@ -14,7 +14,6 @@ A Service is exposed on each node's IP address at a static port (NodePort). A Cl .. figure:: /_static/images/en-us_image_0000001163847995.png :alt: **Figure 1** NodePort access - **Figure 1** NodePort access Notes and Constraints @@ -54,7 +53,7 @@ You can set the access type when creating a workload on the CCE console. An Ngin - **Access Port**: node port (with a private IP address) to which the container port will be mapped. You are advised to select **Automatically generated**. - **Automatically generated**: The system automatically assigns a port number. - - **Specified port**: You have to manually specify a fixed node port number in the range of 30000–32767. Ensure that the port is unique in a cluster. + - **Specified port**: You have to manually specify a fixed node port number in the range of 30000-32767. Ensure that the port is unique in a cluster. #. After the configuration is complete, click **OK**. #. Click **Next: Configure Advanced Settings**. On the page displayed, click **Create**. @@ -96,7 +95,7 @@ You can set the Service after creating a workload. This has no impact on the wor - **Access Port**: node port (with a private IP address) to which the container port will be mapped. You are advised to select **Automatically generated**. - **Automatically generated**: The system automatically assigns a port number. 
- - **Specified port**: You have to manually specify a fixed node port number in the range of 30000–32767. Ensure that the port is unique in a cluster. + - **Specified port**: You have to manually specify a fixed node port number in the range of 30000-32767. Ensure that the port is unique in a cluster. #. Click **Create**. A NodePort Service will be added for the workload. diff --git a/umn/source/networking/services/overview.rst b/umn/source/networking/services/overview.rst index 5923044..27850a6 100644 --- a/umn/source/networking/services/overview.rst +++ b/umn/source/networking/services/overview.rst @@ -21,7 +21,6 @@ For example, an application uses Deployments to create the frontend and backend. .. figure:: /_static/images/en-us_image_0258894622.png :alt: **Figure 1** Inter-pod access - **Figure 1** Inter-pod access Using Services for Pod Access @@ -36,7 +35,6 @@ In the preceding example, a Service is added for the frontend pod to access the .. figure:: /_static/images/en-us_image_0258889981.png :alt: **Figure 2** Accessing pods through a Service - **Figure 2** Accessing pods through a Service Service Types diff --git a/umn/source/node_pools/node_pool_overview.rst b/umn/source/node_pools/node_pool_overview.rst index ea1ec51..3b57594 100644 --- a/umn/source/node_pools/node_pool_overview.rst +++ b/umn/source/node_pools/node_pool_overview.rst @@ -26,7 +26,6 @@ Node Pool Architecture .. figure:: /_static/images/en-us_image_0269288708.png :alt: **Figure 1** Overall architecture of a node pool - **Figure 1** Overall architecture of a node pool Generally, all nodes in a node pool have the following same attributes: diff --git a/umn/source/nodes/creating_a_linux_lvm_disk_partition_for_docker.rst b/umn/source/nodes/creating_a_linux_lvm_disk_partition_for_docker.rst index c31e4b2..e664e87 100644 --- a/umn/source/nodes/creating_a_linux_lvm_disk_partition_for_docker.rst +++ b/umn/source/nodes/creating_a_linux_lvm_disk_partition_for_docker.rst @@ -106,7 +106,6 @@ Procedure .. figure:: /_static/images/en-us_image_0144042759.png :alt: **Figure 1** Creating a partition - **Figure 1** Creating a partition c. Configure the start and last sectors as follows for example: diff --git a/umn/source/nodes/creating_a_node.rst b/umn/source/nodes/creating_a_node.rst index 24f7be6..a958c3d 100644 --- a/umn/source/nodes/creating_a_node.rst +++ b/umn/source/nodes/creating_a_node.rst @@ -215,8 +215,8 @@ Procedure The calculation formula is as follows: - - Allocatable CPUs = Total CPUs – Requested CPUs of all pods – Reserved CPUs for other resources - - Allocatable memory = Total memory – Requested memory of all pods – Reserved memory for other resources + - Allocatable CPUs = Total CPUs - Requested CPUs of all pods - Reserved CPUs for other resources + - Allocatable memory = Total memory - Requested memory of all pods - Reserved memory for other resources .. |image1| image:: /_static/images/en-us_image_0273156799.png .. |image2| image:: /_static/images/en-us_image_0220702939.png diff --git a/umn/source/nodes/formula_for_calculating_the_reserved_resources_of_a_node.rst b/umn/source/nodes/formula_for_calculating_the_reserved_resources_of_a_node.rst index 041d112..5952e0e 100644 --- a/umn/source/nodes/formula_for_calculating_the_reserved_resources_of_a_node.rst +++ b/umn/source/nodes/formula_for_calculating_the_reserved_resources_of_a_node.rst @@ -22,35 +22,35 @@ Total reserved amount = Reserved memory for system components + Reserved memory .. 
table:: **Table 1** Reservation rules for system components - +---------------------+-------------------------------------------------------------------------+ - | Total Memory (TM) | Reserved Memory for System Components | - +=====================+=========================================================================+ - | TM ≤ 8 GB | 0 MB | - +---------------------+-------------------------------------------------------------------------+ - | 8 GB < TM ≤ 16 GB | [(TM – 8 GB) x 1024 x 10%] MB | - +---------------------+-------------------------------------------------------------------------+ - | 16 GB < TM ≤ 128 GB | [8 GB x 1024 x 10% + (TM – 16 GB) x 1024 x 6%] MB | - +---------------------+-------------------------------------------------------------------------+ - | TM > 128 GB | (8 GB x 1024 x 10% + 112 GB x 1024 x 6% + (TM – 128 GB) x 1024 x 2%) MB | - +---------------------+-------------------------------------------------------------------------+ + +----------------------+-------------------------------------------------------------------------+ + | Total Memory (TM) | Reserved Memory for System Components | + +======================+=========================================================================+ + | TM <= 8 GB | 0 MB | + +----------------------+-------------------------------------------------------------------------+ + | 8 GB < TM <= 16 GB | [(TM - 8 GB) x 1024 x 10%] MB | + +----------------------+-------------------------------------------------------------------------+ + | 16 GB < TM <= 128 GB | [8 GB x 1024 x 10% + (TM - 16 GB) x 1024 x 6%] MB | + +----------------------+-------------------------------------------------------------------------+ + | TM > 128 GB | (8 GB x 1024 x 10% + 112 GB x 1024 x 6% + (TM - 128 GB) x 1024 x 2%) MB | + +----------------------+-------------------------------------------------------------------------+ .. table:: **Table 2** Reservation rules for kubelet - +-------------------+--------------------------------+-------------------------------------------------+ - | Total Memory (TM) | Number of Pods | Reserved Memory for kubelet | - +===================+================================+=================================================+ - | TM ≤ 2 GB | - | TM x 25% | - +-------------------+--------------------------------+-------------------------------------------------+ - | TM > 2 GB | 0 < Max. pods on a node ≤ 16 | 700 MB | - +-------------------+--------------------------------+-------------------------------------------------+ - | | 16 < Max. pods on a node ≤ 32 | [700 + (Max. pods on a node – 16) x 18.75] MB | - +-------------------+--------------------------------+-------------------------------------------------+ - | | 32 < Max. pods on a node ≤ 64 | [1024 + (Max. pods on a node – 32) x 6.25] MB | - +-------------------+--------------------------------+-------------------------------------------------+ - | | 64 < Max. pods on a node ≤ 128 | [1230 + (Max. pods on a node – 64) x 7.80] MB | - +-------------------+--------------------------------+-------------------------------------------------+ - | | Max. pods on a node > 128 | [1740 + (Max. 
pods on a node – 128) x 11.20] MB | - +-------------------+--------------------------------+-------------------------------------------------+ + +-------------------+---------------------------------+-------------------------------------------------+ + | Total Memory (TM) | Number of Pods | Reserved Memory for kubelet | + +===================+=================================+=================================================+ + | TM <= 2 GB | - | TM x 25% | + +-------------------+---------------------------------+-------------------------------------------------+ + | TM > 2 GB | 0 < Max. pods on a node <= 16 | 700 MB | + +-------------------+---------------------------------+-------------------------------------------------+ + | | 16 < Max. pods on a node <= 32 | [700 + (Max. pods on a node - 16) x 18.75] MB | + +-------------------+---------------------------------+-------------------------------------------------+ + | | 32 < Max. pods on a node <= 64 | [1024 + (Max. pods on a node - 32) x 6.25] MB | + +-------------------+---------------------------------+-------------------------------------------------+ + | | 64 < Max. pods on a node <= 128 | [1230 + (Max. pods on a node - 64) x 7.80] MB | + +-------------------+---------------------------------+-------------------------------------------------+ + | | Max. pods on a node > 128 | [1740 + (Max. pods on a node - 128) x 11.20] MB | + +-------------------+---------------------------------+-------------------------------------------------+ .. important:: @@ -61,17 +61,17 @@ Rules for Reserving Node CPU .. table:: **Table 3** Node CPU reservation rules - +---------------------------+------------------------------------------------------------------------+ - | Total CPU Cores (Total) | Reserved CPU Cores | - +===========================+========================================================================+ - | Total ≤ 1 core | Total x 6% | - +---------------------------+------------------------------------------------------------------------+ - | 1 core < Total ≤ 2 cores | 1 core x 6% + (Total – 1 core) x 1% | - +---------------------------+------------------------------------------------------------------------+ - | 2 cores < Total ≤ 4 cores | 1 core x 6% + 1 core x 1% + (Total – 2 cores) x 0.5% | - +---------------------------+------------------------------------------------------------------------+ - | Total > 4 cores | 1 core x 6% + 1 core x 1% + 2 cores x 0.5% + (Total – 4 cores) x 0.25% | - +---------------------------+------------------------------------------------------------------------+ + +----------------------------+------------------------------------------------------------------------+ + | Total CPU Cores (Total) | Reserved CPU Cores | + +============================+========================================================================+ + | Total <= 1 core | Total x 6% | + +----------------------------+------------------------------------------------------------------------+ + | 1 core < Total <= 2 cores | 1 core x 6% + (Total - 1 core) x 1% | + +----------------------------+------------------------------------------------------------------------+ + | 2 cores < Total <= 4 cores | 1 core x 6% + 1 core x 1% + (Total - 2 cores) x 0.5% | + +----------------------------+------------------------------------------------------------------------+ + | Total > 4 cores | 1 core x 6% + 1 core x 1% + 2 cores x 0.5% + (Total - 4 cores) x 0.25% | + 
+----------------------------+------------------------------------------------------------------------+ .. important:: diff --git a/umn/source/nodes/performing_rolling_upgrade_for_nodes.rst b/umn/source/nodes/performing_rolling_upgrade_for_nodes.rst index 95e2ace..51a063d 100644 --- a/umn/source/nodes/performing_rolling_upgrade_for_nodes.rst +++ b/umn/source/nodes/performing_rolling_upgrade_for_nodes.rst @@ -15,7 +15,6 @@ In a rolling upgrade, a new node is created, existing workloads are migrated to .. figure:: /_static/images/en-us_image_0295359661.png :alt: **Figure 1** Workload migration - **Figure 1** Workload migration Notes and Constraints diff --git a/umn/source/nodes/resetting_a_node.rst b/umn/source/nodes/resetting_a_node.rst index 684a063..ddda560 100644 --- a/umn/source/nodes/resetting_a_node.rst +++ b/umn/source/nodes/resetting_a_node.rst @@ -39,7 +39,6 @@ Procedure .. figure:: /_static/images/en-us_image_0000001190302085.png :alt: **Figure 1** Resetting the selected node - **Figure 1** Resetting the selected node #. Click **Yes** and wait until the node is reset. diff --git a/umn/source/nodes/stopping_a_node.rst b/umn/source/nodes/stopping_a_node.rst index b2f7ee4..b3d27c5 100644 --- a/umn/source/nodes/stopping_a_node.rst +++ b/umn/source/nodes/stopping_a_node.rst @@ -31,7 +31,6 @@ Procedure .. figure:: /_static/images/en-us_image_0000001190302087.png :alt: **Figure 1** Nodes details page - **Figure 1** Nodes details page #. In the upper right corner of the ECS details page, click **Stop**. In the **Stop ECS** dialog box, click **Yes**. @@ -40,5 +39,4 @@ Procedure .. figure:: /_static/images/en-us_image_0000001144342232.png :alt: **Figure 2** ECS details page - **Figure 2** ECS details page diff --git a/umn/source/nodes/synchronizing_node_data.rst b/umn/source/nodes/synchronizing_node_data.rst index 8cc0093..e53f7ef 100644 --- a/umn/source/nodes/synchronizing_node_data.rst +++ b/umn/source/nodes/synchronizing_node_data.rst @@ -27,7 +27,6 @@ Procedure .. figure:: /_static/images/en-us_image_0000001144502022.png :alt: **Figure 1** Synchronizing node data - **Figure 1** Synchronizing node data After the synchronization is complete, the "Sync success" message is displayed in the upper right corner. diff --git a/umn/source/permissions_management/cluster_permissions_iam-based.rst b/umn/source/permissions_management/cluster_permissions_iam-based.rst index d407180..a10763c 100644 --- a/umn/source/permissions_management/cluster_permissions_iam-based.rst +++ b/umn/source/permissions_management/cluster_permissions_iam-based.rst @@ -28,7 +28,6 @@ Process Flow .. figure:: /_static/images/en-us_image_0000001120226646.png :alt: **Figure 1** Process of assigning CCE permissions - **Figure 1** Process of assigning CCE permissions #. .. _cce_01_0188__li10176121316284: diff --git a/umn/source/permissions_management/namespace_permissions_kubernetes_rbac-based.rst b/umn/source/permissions_management/namespace_permissions_kubernetes_rbac-based.rst index 475b138..3e33bb8 100644 --- a/umn/source/permissions_management/namespace_permissions_kubernetes_rbac-based.rst +++ b/umn/source/permissions_management/namespace_permissions_kubernetes_rbac-based.rst @@ -22,7 +22,6 @@ Role and ClusterRole specify actions that can be performed on specific resources .. figure:: /_static/images/en-us_image_0000001142984374.png :alt: **Figure 1** Role binding - **Figure 1** Role binding On the CCE console, you can assign permissions to a user or user group to access resources in one or multiple namespaces. 
By default, the CCE console provides the following ClusterRoles: @@ -139,7 +138,6 @@ The **subjects** section binds a Role with an IAM user so that the IAM user can .. figure:: /_static/images/en-us_image_0262051194.png :alt: **Figure 2** A RoleBinding binds the Role to the user. - **Figure 2** A RoleBinding binds the Role to the user. You can also specify a user group in the **subjects** section. In this case, all users in the user group obtain the permissions defined in the Role. diff --git a/umn/source/permissions_management/permissions_overview.rst b/umn/source/permissions_management/permissions_overview.rst index 7a7f44d..8053ca5 100644 --- a/umn/source/permissions_management/permissions_overview.rst +++ b/umn/source/permissions_management/permissions_overview.rst @@ -32,7 +32,6 @@ In general, you configure CCE permissions in two scenarios. The first is creatin .. figure:: /_static/images/en-us_image_0000001168537057.png :alt: **Figure 1** Illustration on CCE permissions - **Figure 1** Illustration on CCE permissions These permissions allow you to manage resource users at a finer granularity. diff --git a/umn/source/reference/how_do_i_rectify_the_fault_when_the_cluster_status_is_unavailable.rst b/umn/source/reference/how_do_i_rectify_the_fault_when_the_cluster_status_is_unavailable.rst index 1df5c95..a6c470e 100644 --- a/umn/source/reference/how_do_i_rectify_the_fault_when_the_cluster_status_is_unavailable.rst +++ b/umn/source/reference/how_do_i_rectify_the_fault_when_the_cluster_status_is_unavailable.rst @@ -28,7 +28,6 @@ Check Item 1: Whether the Security Group Is Modified .. figure:: /_static/images/en-us_image_0000001223473841.png :alt: **Figure 1** Viewing inbound rules of the security group - **Figure 1** Viewing inbound rules of the security group Inbound rule parameter description: @@ -42,7 +41,6 @@ Check Item 1: Whether the Security Group Is Modified .. figure:: /_static/images/en-us_image_0000001178192662.png :alt: **Figure 2** Viewing outbound rules of the security group - **Figure 2** Viewing outbound rules of the security group .. _cce_faq_00039__section11822101617614: @@ -64,5 +62,4 @@ Check Item 2: Whether the DHCP Function of the Subnet Is Disabled .. figure:: /_static/images/en-us_image_0000001223473843.png :alt: **Figure 3** DHCP description in the VPC API Reference - **Figure 3** DHCP description in the VPC API Reference diff --git a/umn/source/reference/how_do_i_troubleshoot_insufficient_eips_when_a_node_is_added.rst b/umn/source/reference/how_do_i_troubleshoot_insufficient_eips_when_a_node_is_added.rst index 6cacb1e..f55c6cf 100644 --- a/umn/source/reference/how_do_i_troubleshoot_insufficient_eips_when_a_node_is_added.rst +++ b/umn/source/reference/how_do_i_troubleshoot_insufficient_eips_when_a_node_is_added.rst @@ -14,7 +14,6 @@ When a node is added, **EIP** is set to **Automatically assign**. The node canno .. figure:: /_static/images/en-us_image_0000001223393901.png :alt: **Figure 1** Purchasing an EIP - **Figure 1** Purchasing an EIP Solution @@ -36,7 +35,6 @@ Two methods are available to solve the problem. .. figure:: /_static/images/en-us_image_0000001223152423.png :alt: **Figure 2** Unbinding an EIP - **Figure 2** Unbinding an EIP #. Return to the **Create Node** page on the CCE console and click **Use existing** to add an EIP. @@ -45,7 +43,6 @@ Two methods are available to solve the problem. .. 
figure:: /_static/images/en-us_image_0000001223272345.png :alt: **Figure 3** Using an unbound EIP - **Figure 3** Using an unbound EIP - Method 2: Increase the EIP quota. diff --git a/umn/source/reference/planning_cidr_blocks_for_a_cce_cluster.rst b/umn/source/reference/planning_cidr_blocks_for_a_cce_cluster.rst index 6e54cd7..157fcbd 100644 --- a/umn/source/reference/planning_cidr_blocks_for_a_cce_cluster.rst +++ b/umn/source/reference/planning_cidr_blocks_for_a_cce_cluster.rst @@ -24,7 +24,6 @@ A subnet is a network that manages ECS network planes. It supports IP address ma .. figure:: /_static/images/en-us_image_0000001223152421.png :alt: **Figure 1** VPC CIDR block architecture - **Figure 1** VPC CIDR block architecture By default, ECSs in all subnets of the same VPC can communicate with one another, while ECSs in different VPCs cannot communicate with each other. @@ -55,7 +54,6 @@ These are the simplest scenarios. The VPC CIDR block is determined when the VPC .. figure:: /_static/images/en-us_image_0000001223152417.png :alt: **Figure 2** CIDR block in the single-VPC single-cluster scenario - **Figure 2** CIDR block in the single-VPC single-cluster scenario **Single-VPC Multi-Cluster Scenarios** @@ -74,7 +72,6 @@ Pay attention to the following: .. figure:: /_static/images/en-us_image_0000001178034110.png :alt: **Figure 3** VPC network - multi-cluster scenario - **Figure 3** VPC network - multi-cluster scenario In the tunnel network model, the container network is an overlay network plane deployed over the VPC network. Though at some cost of performance, the tunnel encapsulation enables higher interoperability and compatibility with advanced features (such as network policy-based isolation), meeting the requirements of most applications. @@ -83,7 +80,6 @@ In the tunnel network model, the container network is an overlay network plane d .. figure:: /_static/images/en-us_image_0000001178192670.png :alt: **Figure 4** Tunnel network - multi-cluster scenario - **Figure 4** Tunnel network - multi-cluster scenario Pay attention to the following: @@ -102,7 +98,6 @@ In the VPC network model, after creating a peering connection, you need to add r .. figure:: /_static/images/en-us_image_0000001223393899.png :alt: **Figure 5** VPC Network - VPC interconnection scenario - **Figure 5** VPC Network - VPC interconnection scenario To interconnect cluster containers across VPCs, you need to create VPC peering connections. @@ -119,7 +114,6 @@ Pay attention to the following: .. figure:: /_static/images/en-us_image_0000001178034114.png :alt: **Figure 6** Adding the peer container CIDR block to the local route on the VPC console - **Figure 6** Adding the peer container CIDR block to the local route on the VPC console In the tunnel network model, after creating a peering connection, you need to add routes for the peering connection to enable communication between the two VPCs. @@ -128,7 +122,6 @@ In the tunnel network model, after creating a peering connection, you need to ad .. figure:: /_static/images/en-us_image_0000001223473845.png :alt: **Figure 7** Tunnel network - VPC interconnection scenario - **Figure 7** Tunnel network - VPC interconnection scenario Pay attention to the following: @@ -143,7 +136,6 @@ Pay attention to the following: .. 
figure:: /_static/images/en-us_image_0000001178034116.png :alt: **Figure 8** Adding the subnet CIDR block of the peer cluster node to the local route on the VPC console - **Figure 8** Adding the subnet CIDR block of the peer cluster node to the local route on the VPC console **VPC-IDC Scenarios** diff --git a/umn/source/reference/selecting_a_network_model_when_creating_a_cluster_on_cce.rst b/umn/source/reference/selecting_a_network_model_when_creating_a_cluster_on_cce.rst index 1ef009c..a7cf537 100644 --- a/umn/source/reference/selecting_a_network_model_when_creating_a_cluster_on_cce.rst +++ b/umn/source/reference/selecting_a_network_model_when_creating_a_cluster_on_cce.rst @@ -17,7 +17,6 @@ CCE uses high-performance container networking add-ons, which support the tunnel .. figure:: /_static/images/en-us_image_0000001223393893.png :alt: **Figure 1** Container tunnel network - **Figure 1** Container tunnel network - **VPC network**: The container network uses VPC routing to integrate with the underlying network. This network model is applicable to performance-intensive scenarios. The maximum number of nodes allowed in a cluster depends on the route quota in a VPC network. Each node is assigned a CIDR block of a fixed size. VPC networks are free from tunnel encapsulation overhead and outperform container tunnel networks. In addition, as VPC routing includes routes to node IP addresses and the container CIDR block, container pods in the cluster can be directly accessed from outside the cluster. @@ -26,7 +25,6 @@ CCE uses high-performance container networking add-ons, which support the tunnel .. figure:: /_static/images/en-us_image_0000001178034108.png :alt: **Figure 2** VPC network - **Figure 2** VPC network The following table lists the differences between the network models. diff --git a/umn/source/reference/what_can_i_do_if_my_cluster_status_is_available_but_the_node_status_is_unavailable.rst b/umn/source/reference/what_can_i_do_if_my_cluster_status_is_available_but_the_node_status_is_unavailable.rst index 309fa2e..6b71aff 100644 --- a/umn/source/reference/what_can_i_do_if_my_cluster_status_is_available_but_the_node_status_is_unavailable.rst +++ b/umn/source/reference/what_can_i_do_if_my_cluster_status_is_available_but_the_node_status_is_unavailable.rst @@ -75,7 +75,6 @@ Check Item 3: Whether You Can Log In to the ECS .. figure:: /_static/images/en-us_image_0000001178034104.png :alt: **Figure 1** Check the node name on the VM and whether the node can be logged in to - **Figure 1** Check the node name on the VM and whether the node can be logged in to If the node names are inconsistent and the password and key cannot be used to log in to the node, Cloud-Init problems occurred when an ECS was created. In this case, restart the node and submit a service ticket to the ECS personnel to locate the root cause. @@ -95,7 +94,6 @@ Check Item 4: Whether the Security Group Is Modified .. figure:: /_static/images/en-us_image_0000001223393891.png :alt: **Figure 2** Viewing inbound rules of the security group - **Figure 2** Viewing inbound rules of the security group Inbound rule parameter description: @@ -109,7 +107,6 @@ Check Item 4: Whether the Security Group Is Modified .. figure:: /_static/images/en-us_image_0000001223393887.png :alt: **Figure 3** Viewing outbound rules of the security group - **Figure 3** Viewing outbound rules of the security group .. 
_cce_faq_00120__section165209286116: diff --git a/umn/source/reference/what_is_the_relationship_between_clusters,_vpcs,_and_subnets.rst b/umn/source/reference/what_is_the_relationship_between_clusters,_vpcs,_and_subnets.rst index 3db0b35..f738568 100644 --- a/umn/source/reference/what_is_the_relationship_between_clusters,_vpcs,_and_subnets.rst +++ b/umn/source/reference/what_is_the_relationship_between_clusters,_vpcs,_and_subnets.rst @@ -5,7 +5,7 @@ What Is the Relationship Between Clusters, VPCs, and Subnets? ============================================================= -A VPC is similar to a private local area network (LAN) managed by a home gateway whose IP address is 192.168.0.0/16. A VPC is a private network built on the cloud and provides basic network environment for running elastic cloud servers (ECSs), elastic load balancers (ELBs), and middleware. Networks of different scales can be configured based on service requirements. Generally, you can set the CIDR block to 10.0.0.0/8–24, 172.16.0.0/12–24, or 192.168.0.0/16–24. The largest CIDR block is 10.0.0.0/8, which corresponds to a class A network. +A VPC is similar to a private local area network (LAN) managed by a home gateway and using the 192.168.0.0/16 address range. A VPC is a private network built on the cloud and provides a basic network environment for running elastic cloud servers (ECSs), elastic load balancers (ELBs), and middleware. Networks of different scales can be configured based on service requirements. Generally, you can set the CIDR block to 10.0.0.0/8-24, 172.16.0.0/12-24, or 192.168.0.0/16-24. The largest CIDR block is 10.0.0.0/8, which corresponds to a class A network. A VPC can be divided into multiple subnets. Security groups are configured to determine whether these subnets can communicate with each other. This ensures that subnets can be isolated from each other, so that you can deploy different services on different subnets.
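For example, a VPC using 192.168.0.0/16 could be divided as follows. This is a purely illustrative sketch of the VPC-subnet relationship: the names and ranges are invented, and it is not a real VPC or CCE API object.

.. code-block:: yaml

   # Hypothetical network plan, for illustration only (not an API resource)
   vpc:
     cidr: 192.168.0.0/16          # the whole private network
     subnets:
       - name: subnet-frontend     # one service per subnet
         cidr: 192.168.0.0/24
       - name: subnet-backend      # isolated from subnet-frontend
         cidr: 192.168.1.0/24      # unless a security group rule allows access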
@@ -22,5 +22,4 @@ As shown in :ref:`Figure 1 ` | - +-----------------------------+--------------------------------+-----------------------------------------------------+ - | 1.11 ≤ K8s version < 1.11.7 | Clusters from v1.11 to v1.11.7 | :ref:`Example YAML ` | - +-----------------------------+--------------------------------+-----------------------------------------------------+ - | K8s version = 1.9 | Clusters of v1.9 | :ref:`Example YAML ` | - +-----------------------------+--------------------------------+-----------------------------------------------------+ + +-------------------------------+--------------------------------+-----------------------------------------------------+ + | Kubernetes Version | Description | YAML Example | + +===============================+================================+=====================================================+ + | 1.11.7 <= K8s version <= 1.13 | Clusters from v1.11.7 to v1.13 | :ref:`Example YAML ` | + +-------------------------------+--------------------------------+-----------------------------------------------------+ + | 1.11 <= K8s version < 1.11.7 | Clusters from v1.11 to v1.11.7 | :ref:`Example YAML ` | + +-------------------------------+--------------------------------+-----------------------------------------------------+ + | K8s version = 1.9 | Clusters of v1.9 | :ref:`Example YAML ` | + +-------------------------------+--------------------------------+-----------------------------------------------------+ **Clusters from v1.11.7 to v1.13** diff --git a/umn/source/storage_flexvolume/using_evs_disks_as_storage_volumes/overview.rst b/umn/source/storage_flexvolume/using_evs_disks_as_storage_volumes/overview.rst index 04d374c..f7bb8c9 100644 --- a/umn/source/storage_flexvolume/using_evs_disks_as_storage_volumes/overview.rst +++ b/umn/source/storage_flexvolume/using_evs_disks_as_storage_volumes/overview.rst @@ -11,7 +11,6 @@ To achieve persistent storage, CCE allows you to mount the storage volumes creat .. 
figure:: /_static/images/en-us_image_0276664178.png :alt: **Figure 1** Mounting EVS volumes to CCE - **Figure 1** Mounting EVS volumes to CCE Description diff --git a/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_obs_bucket.rst b/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_obs_bucket.rst index 0a8f863..3609cbd 100644 --- a/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_obs_bucket.rst +++ b/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_obs_bucket.rst @@ -32,13 +32,13 @@ Procedure **touch pv-obs-example.yaml** **pvc-obs-example.yaml** - +---------------------------+------------------------------+-----------------------------------------------------+ - | Kubernetes Version | Description | YAML Example | - +===========================+==============================+=====================================================+ - | 1.11 ≤ K8s version ≤ 1.13 | Clusters from v1.11 to v1.13 | :ref:`Example YAML ` | - +---------------------------+------------------------------+-----------------------------------------------------+ - | K8s version = 1.9 | Clusters of v1.9 | :ref:`Example YAML ` | - +---------------------------+------------------------------+-----------------------------------------------------+ + +-----------------------------+------------------------------+-----------------------------------------------------+ + | Kubernetes Version | Description | YAML Example | + +=============================+==============================+=====================================================+ + | 1.11 <= K8s version <= 1.13 | Clusters from v1.11 to v1.13 | :ref:`Example YAML ` | + +-----------------------------+------------------------------+-----------------------------------------------------+ + | K8s version = 1.9 | Clusters of v1.9 | :ref:`Example YAML ` | + +-----------------------------+------------------------------+-----------------------------------------------------+ **Clusters from v1.11 to v1.13** diff --git a/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/overview.rst b/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/overview.rst index fb3f5b3..03014dc 100644 --- a/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/overview.rst +++ b/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/overview.rst @@ -11,7 +11,6 @@ CCE allows you to mount a volume created from an Object Storage Service (OBS) bu .. figure:: /_static/images/en-us_image_0276664570.png :alt: **Figure 1** Mounting OBS volumes to CCE - **Figure 1** Mounting OBS volumes to CCE Storage Class diff --git a/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/using_obs_volumes.rst b/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/using_obs_volumes.rst index 9986c80..debd06d 100644 --- a/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/using_obs_volumes.rst +++ b/umn/source/storage_flexvolume/using_obs_buckets_as_storage_volumes/using_obs_volumes.rst @@ -34,7 +34,6 @@ The procedure for configuring the AK/SK is as follows: .. figure:: /_static/images/en-us_image_0000001190538605.png :alt: **Figure 1** Configuring the AK/SK - **Figure 1** Configuring the AK/SK #. Click |image1|, select a key file, and click **Upload** to upload the key file. 
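The storage sections above all follow the same static-provisioning pattern: an existing volume (EVS disk, OBS bucket, or SFS file system) is wrapped in a PersistentVolume and then claimed by a PersistentVolumeClaim. The skeleton below shows only that PV-PVC binding; the flexVolume driver name and options are placeholders rather than the exact fields CCE expects, so use the linked Example YAML files for the real volume spec.

.. code-block:: yaml

   apiVersion: v1
   kind: PersistentVolume
   metadata:
     name: pv-obs-example
   spec:
     accessModes:
       - ReadWriteMany
     capacity:
       storage: 1Gi
     flexVolume:
       driver: example.com/obs      # placeholder driver name; see the Example YAML
       options:
         volumeID: my-obs-bucket    # placeholder for the existing OBS bucket
   ---
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: pvc-obs-example
   spec:
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 1Gi
     volumeName: pv-obs-example     # bind this claim to the PV above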
diff --git a/umn/source/storage_flexvolume/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_sfs_file_system.rst b/umn/source/storage_flexvolume/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_sfs_file_system.rst index 87ca71e..d5434b1 100644 --- a/umn/source/storage_flexvolume/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_sfs_file_system.rst +++ b/umn/source/storage_flexvolume/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_sfs_file_system.rst @@ -31,13 +31,13 @@ Procedure **touch pv-sfs-example.yaml** **pvc-sfs-example.yaml** - +---------------------------+------------------------------+-----------------------------------------------------+ - | Kubernetes Version | Description | YAML Example | - +===========================+==============================+=====================================================+ - | 1.11 ≤ K8s version ≤ 1.13 | Clusters from v1.11 to v1.13 | :ref:`Example YAML ` | - +---------------------------+------------------------------+-----------------------------------------------------+ - | K8s version = 1.9 | Clusters of v1.9 | :ref:`Example YAML ` | - +---------------------------+------------------------------+-----------------------------------------------------+ + +-----------------------------+------------------------------+-----------------------------------------------------+ + | Kubernetes Version | Description | YAML Example | + +=============================+==============================+=====================================================+ + | 1.11 <= K8s version <= 1.13 | Clusters from v1.11 to v1.13 | :ref:`Example YAML ` | + +-----------------------------+------------------------------+-----------------------------------------------------+ + | K8s version = 1.9 | Clusters of v1.9 | :ref:`Example YAML ` | + +-----------------------------+------------------------------+-----------------------------------------------------+ **Clusters from v1.11 to v1.13** diff --git a/umn/source/storage_flexvolume/using_sfs_file_systems_as_storage_volumes/overview.rst b/umn/source/storage_flexvolume/using_sfs_file_systems_as_storage_volumes/overview.rst index 58c1a61..ec847a0 100644 --- a/umn/source/storage_flexvolume/using_sfs_file_systems_as_storage_volumes/overview.rst +++ b/umn/source/storage_flexvolume/using_sfs_file_systems_as_storage_volumes/overview.rst @@ -11,7 +11,6 @@ CCE allows you to mount a volume created from a Scalable File Service (SFS) file .. figure:: /_static/images/en-us_image_0276664213.png :alt: **Figure 1** Mounting SFS volumes to CCE - **Figure 1** Mounting SFS volumes to CCE Description diff --git a/umn/source/storage_flexvolume/using_sfs_turbo_file_systems_as_storage_volumes/overview.rst b/umn/source/storage_flexvolume/using_sfs_turbo_file_systems_as_storage_volumes/overview.rst index 2dcee32..45f6032 100644 --- a/umn/source/storage_flexvolume/using_sfs_turbo_file_systems_as_storage_volumes/overview.rst +++ b/umn/source/storage_flexvolume/using_sfs_turbo_file_systems_as_storage_volumes/overview.rst @@ -11,7 +11,6 @@ CCE allows you to mount a volume created from an SFS Turbo file system to a cont .. 
figure:: /_static/images/en-us_image_0276664792.png :alt: **Figure 1** Mounting SFS Turbo volumes to CCE - **Figure 1** Mounting SFS Turbo volumes to CCE Description diff --git a/umn/source/workloads/configuring_a_container/setting_an_environment_variable.rst b/umn/source/workloads/configuring_a_container/setting_an_environment_variable.rst index baa0d7a..892a491 100644 --- a/umn/source/workloads/configuring_a_container/setting_an_environment_variable.rst +++ b/umn/source/workloads/configuring_a_container/setting_an_environment_variable.rst @@ -35,7 +35,6 @@ Manually Adding Environment Variables .. figure:: /_static/images/en-us_image_0000001190302095.png :alt: **Figure 1** Manually adding environment variables - **Figure 1** Manually adding environment variables Importing Environment Variables from a Secret @@ -55,7 +54,6 @@ Importing Environment Variables from a Secret .. figure:: /_static/images/en-us_image_0000001190302097.png :alt: **Figure 2** Importing environment variables from a secret - **Figure 2** Importing environment variables from a secret Importing Environment Variables from a ConfigMap diff --git a/umn/source/workloads/configuring_a_container/setting_container_specifications.rst b/umn/source/workloads/configuring_a_container/setting_container_specifications.rst index a7441a7..86d6e14 100644 --- a/umn/source/workloads/configuring_a_container/setting_container_specifications.rst +++ b/umn/source/workloads/configuring_a_container/setting_container_specifications.rst @@ -46,7 +46,7 @@ Configuration Description **Recommended configuration** - Actual available CPU of a node ≥ Sum of CPU limits of all containers on the current node ≥ Sum of CPU requests of all containers on the current node. You can view the actual available CPUs of a node on the CCE console (**Resource Management** > **Nodes** > **Allocatable**). + Actual available CPU of a node >= Sum of CPU limits of all containers on the current node >= Sum of CPU requests of all containers on the current node. You can view the actual available CPUs of a node on the CCE console (**Resource Management** > **Nodes** > **Allocatable**). - Memory quotas: @@ -62,14 +62,14 @@ Configuration Description **Recommended configuration** - Actual available memory of a node ≥ Sum of memory limits of all containers on the current node ≥ Sum of memory requests of all containers on the current node. You can view the actual available memory of a node on the CCE console (**Resource Management** > **Nodes** > **Allocatable**). + Actual available memory of a node >= Sum of memory limits of all containers on the current node >= Sum of memory requests of all containers on the current node. You can view the actual available memory of a node on the CCE console (**Resource Management** > **Nodes** > **Allocatable**). .. note:: The allocatable resources are calculated based on the resource request value (**Request**), which indicates the upper limit of resources that can be requested by pods on this node, but does not indicate the actual available resources of the node. 
The calculation formula is as follows: - - Allocatable CPU = Total CPU – Requested CPU of all pods – Reserved CPU for other resources - - Allocatable memory = Total memory – Requested memory of all pods – Reserved memory for other resources + - Allocatable CPU = Total CPU - Requested CPU of all pods - Reserved CPU for other resources + - Allocatable memory = Total memory - Requested memory of all pods - Reserved memory for other resources Example ------- diff --git a/umn/source/workloads/configuring_a_container/setting_container_startup_commands.rst b/umn/source/workloads/configuring_a_container/setting_container_startup_commands.rst index e39a1d7..6a4cf5a 100644 --- a/umn/source/workloads/configuring_a_container/setting_container_startup_commands.rst +++ b/umn/source/workloads/configuring_a_container/setting_container_startup_commands.rst @@ -93,7 +93,6 @@ Setting the Startup Command .. figure:: /_static/images/en-us_image_0000001190302089.png :alt: **Figure 1** Setting the startup command and parameters - **Figure 1** Setting the startup command and parameters Example YAML file: @@ -112,7 +111,6 @@ Setting the Startup Command .. figure:: /_static/images/en-us_image_0000001144342236.png :alt: **Figure 2** Setting the startup command - **Figure 2** Setting the startup command .. note:: @@ -133,7 +131,6 @@ Setting the Startup Command .. figure:: /_static/images/en-us_image_0000001190302091.png :alt: **Figure 3** Setting startup arguments - **Figure 3** Setting startup arguments .. note:: @@ -158,7 +155,6 @@ Setting the Startup Command .. figure:: /_static/images/en-us_image_0000001144342238.png :alt: **Figure 4** Checking or editing a YAML file - **Figure 4** Checking or editing a YAML file - After the workload is created, go to the workload list. In the same row as the workload, choose **More** > **Edit YAML**. diff --git a/umn/source/workloads/configuring_a_container/setting_health_check_for_a_container.rst b/umn/source/workloads/configuring_a_container/setting_health_check_for_a_container.rst index 83a2c90..fb2414d 100644 --- a/umn/source/workloads/configuring_a_container/setting_health_check_for_a_container.rst +++ b/umn/source/workloads/configuring_a_container/setting_health_check_for_a_container.rst @@ -20,7 +20,7 @@ Health Check Methods - **HTTP request** - This health check mode is applicable to containers that provide HTTP/HTTPS services. The cluster periodically initiates an HTTP/HTTPS GET request to such containers. If the return code of the HTTP/HTTPS response is within 200–399, the probe is successful. Otherwise, the probe fails. In this health check mode, you must specify a container listening port and an HTTP/HTTPS request path. + This health check mode is applicable to containers that provide HTTP/HTTPS services. The cluster periodically initiates an HTTP/HTTPS GET request to such containers. If the return code of the HTTP/HTTPS response is within 200-399, the probe is successful. Otherwise, the probe fails. In this health check mode, you must specify a container listening port and an HTTP/HTTPS request path. For example, for a container that provides HTTP services, the HTTP check path is **/health-check**, the port is 80, and the host address is optional (which defaults to the container IP address). Here, 172.16.0.186 is used as an example, and we can get such a request: GET http://172.16.0.186:80/health-check. The cluster periodically initiates this request to the container. 
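Expressed as a pod manifest, the HTTP check just described might look like the following sketch. The probe itself is a standard Kubernetes livenessProbe; the image name and timing values are arbitrary examples, not CCE defaults.

.. code-block:: yaml

   apiVersion: v1
   kind: Pod
   metadata:
     name: health-check-example
   spec:
     containers:
       - name: app
         image: nginx:alpine          # arbitrary example image
         livenessProbe:
           httpGet:
             path: /health-check      # request path from the example above
             port: 80                 # container listening port
           initialDelaySeconds: 10    # arbitrary example timings
           periodSeconds: 10
         # A response code within 200-399 means the probe succeeds;
         # anything else (or no response) counts as a failure.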
@@ -36,13 +36,13 @@ Health Check Methods The CLI mode can be used to replace the HTTP request-based and TCP port-based health check. - - For a TCP port, you can write a program script to connect to a container port. If the connection is successful, the script returns **0**. Otherwise, the script returns **–1**. + - For a TCP port, you can write a program script to connect to a container port. If the connection is successful, the script returns **0**. Otherwise, the script returns **-1**. - For an HTTP request, you can write a program script to run the **wget** command for a container. **wget http://127.0.0.1:80/health-check** - Check the return code of the response. If the return code is within 200–399, the script returns **0**. Otherwise, the script returns **–1**. + Check the return code of the response. If the return code is within 200-399, the script returns **0**. Otherwise, the script returns **-1**. .. important:: diff --git a/umn/source/workloads/creating_a_daemonset.rst b/umn/source/workloads/creating_a_daemonset.rst index d4b8faa..a8d2052 100644 --- a/umn/source/workloads/creating_a_daemonset.rst +++ b/umn/source/workloads/creating_a_daemonset.rst @@ -155,7 +155,7 @@ Procedure - **Upgrade Policy**: - **Upgrade Mode**: Only **Rolling upgrade** is supported. During a rolling upgrade, old pods are gradually replaced with new ones. During the upgrade, service traffic is evenly distributed to both pods to ensure service continuity. - - **Maximum Number of Unavailable Pods**: Maximum number of unavailable pods allowed in a rolling upgrade. If the number is equal to the total number of pods, services may be interrupted. Minimum number of alive pods = Total pods – Maximum number of unavailable pods + - **Maximum Number of Unavailable Pods**: Maximum number of unavailable pods allowed in a rolling upgrade. If the number is equal to the total number of pods, services may be interrupted. Minimum number of alive pods = Total pods - Maximum number of unavailable pods - **Graceful Deletion**: @@ -171,7 +171,6 @@ Procedure .. figure:: /_static/images/en-us_image_0220765374.png :alt: **Figure 1** Advanced pod settings - **Figure 1** Advanced pod settings - **Client DNS Configuration**: A CCE cluster has a built-in DNS add-on (CoreDNS) to provide domain name resolution for workloads in the cluster. diff --git a/umn/source/workloads/creating_a_deployment.rst b/umn/source/workloads/creating_a_deployment.rst index e3a3c63..9dd8818 100644 --- a/umn/source/workloads/creating_a_deployment.rst +++ b/umn/source/workloads/creating_a_deployment.rst @@ -176,13 +176,13 @@ CCE provides multiple methods for creating a workload. You can use any of the fo - **Rolling upgrade**: Old pods are gradually replaced with new ones. During the upgrade, service traffic is evenly distributed to both pods to ensure service continuity. - - **Maximum Number of Unavailable Pods**: maximum number of unavailable pods allowed in a rolling upgrade. If the number is equal to the total number of pods, services may be interrupted. Minimum number of alive pods = Total pods – Maximum number of unavailable pods + - **Maximum Number of Unavailable Pods**: maximum number of unavailable pods allowed in a rolling upgrade. If the number is equal to the total number of pods, services may be interrupted. Minimum number of alive pods = Total pods - Maximum number of unavailable pods - **In-place upgrade**: Old pods are deleted before new pods are created. Services will be interrupted during an in-place upgrade. 
- **Graceful Deletion**: A time window can be set for workload deletion and reserved for executing commands in the pre-stop phase in the lifecycle. If workload processes are not terminated after the time window elapses, the workload will be forcibly deleted. - - **Graceful Time Window (s)**: Set a time window (0–9999s) for pre-stop commands to finish execution before a workload is deleted. The default value is 30s. + - **Graceful Time Window (s)**: Set a time window (0-9999s) for pre-stop commands to finish execution before a workload is deleted. The default value is 30s. - **Scale Order**: Choose **Prioritize new pods** or **Prioritize old pods** based on service requirements. **Prioritize new pods** indicates that new pods will be first deleted when a scale-in is triggered. - **Migration Policy**: When the node where a workload's pods are located is unavailable for the specified amount of time, the pods will be rescheduled to other available nodes. @@ -199,7 +199,6 @@ CCE provides multiple methods for creating a workload. You can use any of the fo .. figure:: /_static/images/en-us_image_0220765374.png :alt: **Figure 1** Advanced pod settings - **Figure 1** Advanced pod settings - **Client DNS Configuration**: A CCE cluster has a built-in DNS add-on (CoreDNS) to provide domain name resolution for workloads in the cluster. diff --git a/umn/source/workloads/creating_a_statefulset.rst b/umn/source/workloads/creating_a_statefulset.rst index 14c07d0..a1d1982 100644 --- a/umn/source/workloads/creating_a_statefulset.rst +++ b/umn/source/workloads/creating_a_statefulset.rst @@ -197,7 +197,7 @@ CCE provides multiple methods for creating a workload. You can use any of the fo - **Graceful Deletion**: A time window can be set for workload deletion and reserved for executing commands in the pre-stop phase in the lifecycle. If workload processes are not terminated after the time window elapses, the workload will be forcibly deleted. - - **Graceful Time Window (s)**: Set a time window (0–9999s) for pre-stop commands to finish execution before a workload is deleted. The default value is 30s. + - **Graceful Time Window (s)**: Set a time window (0-9999s) for pre-stop commands to finish execution before a workload is deleted. The default value is 30s. - **Scale Order**: Choose **Prioritize new pods** or **Prioritize old pods** based on service requirements. **Prioritize new pods** indicates that new pods will be first deleted when a scale-in is triggered. - **Scheduling Policies**: You can combine static global scheduling policies or dynamic runtime scheduling policies as required. For details, see :ref:`Scheduling Policy Overview `. @@ -210,7 +210,6 @@ CCE provides multiple methods for creating a workload. You can use any of the fo .. figure:: /_static/images/en-us_image_0220765374.png :alt: **Figure 1** Advanced pod settings - **Figure 1** Advanced pod settings - **Client DNS Configuration**: A CCE cluster has a built-in DNS add-on (CoreDNS) to provide domain name resolution for workloads in the cluster. diff --git a/umn/source/workloads/managing_workloads_and_jobs.rst b/umn/source/workloads/managing_workloads_and_jobs.rst index 53d542d..0c54978 100644 --- a/umn/source/workloads/managing_workloads_and_jobs.rst +++ b/umn/source/workloads/managing_workloads_and_jobs.rst @@ -237,7 +237,6 @@ If you set **key** to **role** and **value** to **frontend** when using workload .. figure:: /_static/images/en-us_image_0165888686.png :alt: **Figure 1** Label example - **Figure 1** Label example #. 
Log in to the CCE console. In the navigation pane, choose **Workloads** > **Deployments**. diff --git a/umn/source/workloads/overview.rst b/umn/source/workloads/overview.rst index 4227f20..e264cb0 100644 --- a/umn/source/workloads/overview.rst +++ b/umn/source/workloads/overview.rst @@ -23,7 +23,6 @@ Pods can be used in either of the following ways: .. figure:: /_static/images/en-us_image_0258392378.png :alt: **Figure 1** Pod - **Figure 1** Pod In Kubernetes, pods are rarely created directly. Instead, controllers such as Deployments and jobs, are used to manage pods. Controllers can create and manage multiple pods, and provide replica management, rolling upgrade, and self-healing capabilities. A controller generally uses a pod template to create corresponding pods. @@ -37,7 +36,6 @@ A pod is the smallest and simplest unit that you create or deploy in Kubernetes. .. figure:: /_static/images/en-us_image_0258095884.png :alt: **Figure 2** Relationship between a Deployment and pods - **Figure 2** Relationship between a Deployment and pods A Deployment can contain one or more pods. These pods have the same role. Therefore, the system automatically distributes requests to multiple pods of a Deployment. @@ -77,7 +75,6 @@ DaemonSets are closely related to nodes. If a node becomes faulty, the DaemonSet .. figure:: /_static/images/en-us_image_0258871213.png :alt: **Figure 3** DaemonSet - **Figure 3** DaemonSet Job and Cron Job
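Tying the workload concepts above together, the following minimal Deployment sketch shows where the console settings discussed earlier land in a manifest: the pod template the controller stamps out, the rolling-upgrade **Maximum Number of Unavailable Pods** (``maxUnavailable``), and the graceful-deletion time window (``terminationGracePeriodSeconds``, default 30s). The image and replica count are arbitrary examples.

.. code-block:: yaml

   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: nginx-example
   spec:
     replicas: 2
     selector:
       matchLabels:
         app: nginx-example
     strategy:
       type: RollingUpdate
       rollingUpdate:
         maxUnavailable: 1                   # Maximum Number of Unavailable Pods
     template:                               # pod template used to create the pods
       metadata:
         labels:
           app: nginx-example
       spec:
         terminationGracePeriodSeconds: 30   # Graceful Time Window (s), default 30s
         containers:
           - name: container-0
             image: nginx:alpine             # arbitrary example image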