
:original_name: cce_10_0012.html

Creating a Node Pool

Scenario

This section describes how to create a node pool and perform operations on the node pool. For details about how a node pool works, see Node Pool Overview <cce_10_0081>.

Notes and Constraints

  • The autoscaler add-on needs to be installed for node auto scaling. For details about the add-on installation and parameter configuration, see autoscaler <cce_10_0154>.

Procedure

  1. Log in to the CCE console.

  2. Click the cluster name to access the cluster console. Choose Nodes in the navigation pane and click the Node Pools tab on the right.

  3. In the upper right corner of the page, click Create Node Pool.

    Basic Settings

    Table 1 Basic settings

    Node Pool Name

    Name of a node pool. By default, the name is in the format of Cluster name-nodepool-Random number. If you do not want to use the default name format, you can customize the name.

    Nodes

    Number of nodes to be created in this node pool.
    Auto Scaling

    By default, auto scaling is disabled.

    Install the autoscaler add-on <cce_10_0154> to enable auto scaling.

    After you enable auto scaling by turning on the toggle, nodes in the node pool will be automatically created or deleted based on cluster loads.

    • Maximum Nodes and Minimum Nodes: You can set the maximum and minimum number of nodes to ensure that the number of nodes to be scaled is within a proper range.

    • Priority: Set this parameter based on service requirements. A larger value indicates a higher priority. For example, if this parameter is set to 1 and 4 respectively for node pools A and B, B has a higher priority than A. If the priorities of multiple node pools are set to the same value, for example, 2, the node pools are not prioritized and the system performs scaling based on the minimum resource waste principle.

      Note

      CCE selects a node pool for auto scaling based on the following policies:

      1. CCE uses algorithms to determine whether a node pool meets the conditions for scheduling a pod in the Pending state, including whether the node resources are greater than those requested by the pod and whether the nodeSelector, nodeAffinity, and taint settings are satisfied (see the sample pod after this table). In addition, node pools that failed to scale out (due to insufficient resources or other reasons) and are still within the 15-minute cooldown interval are filtered out.
      2. If multiple node pools meet the scaling requirements, the system checks the priority of each node pool and selects the node pool with the highest priority for scaling. The value ranges from 0 to 100 and the default priority is 0. The value 100 indicates the highest priority, and the value 0 indicates the lowest priority.
      3. If multiple node pools have the same priority or no priority is configured for them, the system selects the node pool that will consume the least resources based on the configured VM specification.
      4. If the VM specifications of multiple node pools are the same but the node pools are deployed in different AZs, the system randomly selects a node pool to trigger scaling.
    • Cooldown Period: Required. The unit is minutes. This parameter indicates the period during which nodes added to the current node pool cannot be scaled in.

      Scale-in cooling intervals can be configured in the node pool settings and the autoscaler add-on <cce_10_0154> settings.

      Scale-in cooling interval configured in a node pool

      This interval indicates the period during which nodes added to the current node pool after a scale-out operation cannot be deleted. This interval takes effect at the node pool level.

      Scale-in cooling interval configured in the autoscaler add-on

      The interval after a scale-out indicates the period during which the entire cluster cannot be scaled in after the autoscaler add-on triggers a scale-out (due to unschedulable pods, metrics, or scaling policies). This interval takes effect at the cluster level.

      The interval after a node is deleted indicates the period during which the cluster cannot be scaled in after the autoscaler add-on triggers scale-in. This interval takes effect at the cluster level.

      The interval after a failed scale-in indicates the period during which the cluster cannot be scaled in after a scale-in attempt by the autoscaler add-on fails. This interval takes effect at the cluster level.

    Note

    You are advised not to store important data on nodes in a node pool, because nodes may be deleted during auto scaling and the data on them cannot be restored.
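    The following is a minimal sketch of a pending pod that could trigger a scale-out, assuming a hypothetical node pool whose nodes are labeled pool=frontend (the label key and value are illustrative assumptions, not CCE defaults). The autoscaler checks whether the pool's node flavor can satisfy the resource requests and whether the nodeSelector matches:

      # Hypothetical example: a pod the autoscaler evaluates when selecting
      # a node pool to scale out.
      apiVersion: v1
      kind: Pod
      metadata:
        name: demo-pending-pod
      spec:
        nodeSelector:
          pool: frontend        # must match a label on the node pool's nodes
        containers:
        - name: app
          image: nginx:stable
          resources:
            requests:
              cpu: "2"          # must fit within the pool's node flavor
              memory: 4Gi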

    Compute Settings

    You can configure the specifications and OS of a cloud server, on which your containerized applications run.

    Table 2 Configuration parameters
    AZ

    AZ where the node is located. Nodes in a cluster can be created in different AZs for higher reliability. The value cannot be changed after the node is created.

    You are advised to select Random to deploy your node in a random AZ based on the selected node flavor.

    An AZ is a physical location that uses independent power supplies and networks. AZs are physically isolated but interconnected through an internal network. To enhance workload availability, create nodes in different AZs.

    Node Type

    CCE cluster:

    • ECS (VM): Containers run on ECSs.

    CCE Turbo cluster:

    • ECS (VM): Containers run on ECSs. Only Trunkport ECSs (models that can be bound with multiple elastic network interfaces (ENIs)) are supported.
    Container Engine

    CCE clusters support Docker and containerd in some scenarios.

    • VPC network clusters of v1.23 and later versions support containerd. Container tunnel network clusters of v1.23.2-r0 and later versions support containerd.
    • For a CCE Turbo cluster, both Docker and containerd are supported. For details, see Mapping between Node OSs and Container Engines <cce_10_0462__section159298451879>.
    Specifications

    Select a node specification based on service requirements. The available node specifications vary depending on regions or AZs. For details, see the CCE console.
    OS

    Select an OS type. Different types of nodes support different OSs. For details, see Supported Node Specifications <cce_10_0461__section1667513391595>.

    Public image: Select an OS for the node.

    Private image: You can use private images.

    Login Mode
    • Key Pair

      Select the key pair used to log in to the node. You can select a shared key.

      A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click Create Key Pair.

    Storage Settings

    Configure storage resources on a node for the containers running on it. Set the disk size according to site requirements.

    Table 3 Parameters for storage settings
    System Disk

    System disk used by the node OS. The value ranges from 40 GB to 1,024 GB. The default value is 50 GB.

    Encryption: System disk encryption safeguards your data. Snapshots generated from encrypted disks and disks created using these snapshots automatically inherit the encryption function. This function is available only in certain regions.

    • Encryption is not selected by default.
    • After you select Encryption, you can select an existing key in the displayed dialog box. If no key is available, click View Key List to create a key. After the key is created, click the refresh icon.
    Data Disk

    At least one data disk is required for the container runtime and kubelet. This data disk cannot be deleted or detached. Otherwise, the node will be unavailable.

    • First data disk: used for container runtime and kubelet components. The value ranges from 20 GB to 32,768 GB. The default value is 100 GB.
    • Other data disks: You can set the data disk size to a value ranging from 10 GB to 32,768 GB. The default value is 100 GB.

    Advanced Settings

    Click Expand to set the following parameters:

    • Allocate Disk Space: Select this option to define the disk space occupied by the container runtime to store the working directories, container image data, and image metadata. For details about how to allocate data disk space, see Data Disk Space Allocation <cce_10_0341>.
    • Encryption: Data disk encryption safeguards your data. Snapshots generated from encrypted disks and disks created using these snapshots automatically inherit the encryption function. This function is available only in certain regions.
      • Encryption is not selected by default.
      • After you select Encryption, you can select an existing key in the displayed dialog box. If no key is available, click View Key List to create a key. After the key is created, click the refresh icon.

    Adding Multiple Data Disks

    A maximum of four data disks can be added. By default, raw disks are created without any processing. You can also click Expand and select any of the following options:

    • Default: By default, a raw disk is created without any processing.
    • Mount Disk: The data disk is attached to a specified directory.

    Local Disk Description

    If the node flavor is disk-intensive or ultra-high I/O, one data disk can be a local disk.

    Local disks may fail and do not ensure data reliability. You are advised to store service data in EVS disks, which are more reliable than local disks.

    Network Settings

    Configure networking resources to allow node and containerized application access.

    Table 4 Configuration parameters

    Node Subnet

    The node subnet selected during cluster creation is used by default. You can choose another subnet instead.

    Node IP Address

    Random allocation is supported.
    Associate Security Group

    Security group used by the nodes created in the node pool. A maximum of 5 security groups can be selected.

    When a cluster is created, a node security group named {Cluster name}-cce-node-{Random ID} is created and used by default.

    Traffic needs to pass through certain ports in the node security group to ensure node communications. Ensure that you have enabled these ports if you select another security group.

    Advanced Settings

    Configure advanced node capabilities such as labels, taints, and startup command.

    Table 5 Advanced configuration parameters
    Kubernetes Label

    Click Add to set the key-value pair attached to the Kubernetes objects (such as pods). A maximum of 20 labels can be added.

    Labels can be used to distinguish nodes. With workload affinity settings, container pods can be scheduled to a specified node. For more information, see Labels and Selectors.
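    For example, the following minimal sketch (the label key pool and value frontend are illustrative assumptions, not CCE defaults) schedules a workload only to nodes in a node pool that carries that label:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: frontend
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: frontend
        template:
          metadata:
            labels:
              app: frontend
          spec:
            nodeSelector:
              pool: frontend    # hypothetical Kubernetes label set on the node pool
            containers:
            - name: web
              image: nginx:stable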

    Resource Tag

    You can add resource tags to classify resources.

    You can create predefined tags in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use these tags to improve tagging and resource migration efficiency.

    CCE will automatically create the "CCE-Dynamic-Provisioning-Node=node id" tag.

    Taint

    This parameter is left blank by default. You can add taints to set anti-affinity for the node. A maximum of 10 taints are allowed for each node. Each taint contains the following parameters:

    • Key: A key must contain 1 to 63 characters starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key.
    • Value: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.).
    • Effect: Available options are NoSchedule, PreferNoSchedule, and NoExecute.

    For details, see Managing Node Taints <cce_10_0352>.

    Note

    For a cluster earlier than v1.19, a workload may be scheduled to a node before the taint is added. To avoid such a situation, use a cluster of v1.19 or later.
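    For example, assuming a node pool taint of dedicated=gpu:NoSchedule (the key and value are illustrative assumptions), only pods that carry a matching toleration, such as the following sketch, can be scheduled to the tainted nodes:

      apiVersion: v1
      kind: Pod
      metadata:
        name: gpu-workload
      spec:
        tolerations:
        - key: dedicated        # must match the taint key
          operator: Equal
          value: gpu            # must match the taint value
          effect: NoSchedule    # must match the taint effect
        containers:
        - name: app
          image: nginx:stable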

    Max. Pods

    Maximum number of pods that can run on the node, including the default system pods.

    This limit prevents the node from being overloaded with pods.

    The actual maximum is also determined by other factors. For details, see Maximum Number of Pods That Can Be Created on a Node <cce_10_0348>.

    ECS Group

    An ECS group logically groups ECSs. The ECSs in the same ECS group comply with the same policy associated with the ECS group.

    Anti-affinity: ECSs in an ECS group are deployed on different physical hosts to improve service reliability.

    Select an existing ECS group, or click Add ECS Group to create one. After the ECS group is created, click the refresh button.

    Pre-installation Command

    Enter commands. A maximum of 1,000 characters is allowed.

    The script will be executed before Kubernetes software is installed. Note that if the script is incorrect, Kubernetes software may fail to be installed.

    Post-installation Command

    Enter commands. A maximum of 1,000 characters is allowed.

    The script will be executed after Kubernetes software is installed and will not affect the installation.

    Agency

    An agency is created by the account administrator on the IAM console. By creating an agency, you can share your cloud server resources with another account, or entrust a more professional person or team to manage your resources.

    If no agency is available, click Create Agency on the right to create one.

  4. Click Next: Confirm.

  5. Click Submit.