doc-exports/docs/cce/umn/cce_bestpractice_0317.html
Dong, Qiu Jian 3d9cca138b CCE UMN: Added the support of the OS for features and cluster versions.
Reviewed-by: Eotvos, Oliver <oliver.eotvos@t-systems.com>
Co-authored-by: Dong, Qiu Jian <qiujiandong1@huawei.com>
Co-committed-by: Dong, Qiu Jian <qiujiandong1@huawei.com>
2023-06-20 14:52:27 +00:00


<a name="cce_bestpractice_0317"></a>
<h1 class="topictitle1">Cluster Security</h1>
<div id="body8662426"><p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p1117095515514">For security purposes, you are advised to configure a cluster as follows.</p>
<div class="section" id="cce_bestpractice_0317__en-us_topic_0000001226756283_section2556163712524"><h4 class="sectiontitle">Using the CCE Cluster of the Latest Version</h4><p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p334392925917">Kubernetes releases a major version about every four months, and CCE follows the same release cadence. A new CCE version is released about three months after the corresponding Kubernetes version is released in the community. For example, Kubernetes v1.19 was released in September 2020 and CCE v1.19 was released in March 2021.</p>
<p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p12343229105914">The latest cluster version has known vulnerabilities fixed and provides a more comprehensive security protection mechanism. You are advised to select the latest cluster version when creating a cluster. Before a cluster version is deprecated and removed, upgrade your cluster to a supported version.</p>
</div>
<div class="section" id="cce_bestpractice_0317__en-us_topic_0000001226756283_section188477111536"><h4 class="sectiontitle">Disabling the Automatic Token Mounting Function of the Default Service Account</h4><p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p186241431173414">By default, Kubernetes associates the default service account with every pod. That is, the token is mounted to the containers, which can use it to authenticate with the kube-apiserver and kubelet components. In a cluster with RBAC disabled, the service account that owns the token has control permissions over the entire cluster. In a cluster with RBAC enabled, the permissions of the service account that owns the token depend on the roles the administrator has bound to it. A service account token is generally needed only by workloads that access kube-apiserver, such as coredns, autoscaler, and prometheus. For workloads that do not need to access kube-apiserver, you are advised to disable the automatic mounting of the service account token.</p>
<p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p9625163114348">Two methods are available:</p>
<ul id="cce_bestpractice_0317__en-us_topic_0000001226756283_ul1695212371341"><li id="cce_bestpractice_0317__en-us_topic_0000001226756283_li18952103743417">Method 1: Set the <strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b9857194511518">automountServiceAccountToken</strong> field of the service account to <strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b5857184518511">false</strong>. After the configuration is complete, newly created workloads will not be associated with the default service account by default. Set this field for each namespace as required.<pre class="screen" id="cce_bestpractice_0317__en-us_topic_0000001226756283_screen169251958163416">apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
automountServiceAccountToken: false
...</pre>
<p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p169601538341">When a workload needs to be associated with a service account, explicitly set the <strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b1368885018512">automountServiceAccountToken</strong> field to <strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b168995018515">true</strong> in the YAML file of the workload.</p>
<pre class="screen" id="cce_bestpractice_0317__en-us_topic_0000001226756283_screen295517306350">...
spec:
  template:
    spec:
      serviceAccountName: default
      automountServiceAccountToken: true
...</pre>
<p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p9434327103516"></p>
</li><li id="cce_bestpractice_0317__en-us_topic_0000001226756283_li2018250143416">Method 2: Explicitly disable the function of automatically associating with service accounts for workloads.<pre class="screen" id="cce_bestpractice_0317__en-us_topic_0000001226756283_screen545123953619">...
spec:
  template:
    spec:
      automountServiceAccountToken: false
...</pre>
</li></ul>
</div>
<div class="section" id="cce_bestpractice_0317__en-us_topic_0000001226756283_section16521330135312"><h4 class="sectiontitle">Configuring Proper Cluster Access Permissions for Users</h4><p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p1421322620111">CCE allows you to create multiple IAM users. Your account can create different user groups, assign different access permissions to each group, and add IAM users to the groups with the corresponding permissions when creating them. In this way, you can restrict users to specific regions or grant them only read-only permissions. Your account can also assign namespace-level permissions to users or user groups. To ensure security, assign each user the minimum access permissions required.</p>
<p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p815419540271">If you need to create multiple IAM users, configure the permissions of the IAM users and namespaces properly.</p>
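<p>Namespace-level permissions are based on Kubernetes RBAC. As a minimal sketch, a read-only Role and RoleBinding for a single namespace might look as follows (the namespace <em>dev</em>, the Role name <em>pod-reader</em>, and the user name <em>dev-user</em> are hypothetical placeholders):</p>
<pre class="screen">kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dev           # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only verbs
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: dev-user           # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io</pre>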
</div>
<div class="section" id="cce_bestpractice_0317__en-us_topic_0000001226756283_section5542036155511"><h4 class="sectiontitle">Configuring Resource Quotas for Cluster Namespaces</h4><p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p207925320495">CCE provides resource quota management, which allows users to limit the total amount of resources that can be allocated to each namespace. These resources include CPU, memory, storage volumes, pods, Services, Deployments, and StatefulSets. Proper configuration can prevent excessive resources created in a namespace from affecting the stability of the entire cluster.</p>
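<p>Resource quotas are defined with the Kubernetes ResourceQuota object. The following is a minimal sketch (the quota name, namespace, and values are illustrative only, not recommendations):</p>
<pre class="screen">apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota      # hypothetical name
  namespace: default
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
    persistentvolumeclaims: "10"</pre>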
</div>
<div class="section" id="cce_bestpractice_0317__en-us_topic_0000001226756283_section1396719295520"><h4 class="sectiontitle">Configuring LimitRange for Containers in a Namespace</h4><p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p1984219418382">With resource quotas, cluster administrators can restrict the use and creation of resources by namespace. In a namespace, a pod or container can use the maximum CPU and memory resources defined by the resource quota of the namespace. In this case, a pod or container may monopolize all available resources in the namespace. You are advised to configure LimitRange to restrict resource allocation within the namespace. The LimitRange parameter has the following restrictions:</p>
<ul id="cce_bestpractice_0317__en-us_topic_0000001226756283_ul835235018386"><li id="cce_bestpractice_0317__en-us_topic_0000001226756283_li135215501386">Limits the minimum and maximum resource usage of each pod or container in a namespace.<p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p123651874391"><a name="cce_bestpractice_0317__en-us_topic_0000001226756283_li135215501386"></a><a name="en-us_topic_0000001226756283_li135215501386"></a>For example, create the maximum and minimum CPU usage limits for a pod in a namespace as follows:</p>
<p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p19800112144713">cpu-constraints.yaml</p>
<pre class="screen" id="cce_bestpractice_0317__en-us_topic_0000001226756283_screen942610051819">apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-min-max-demo-lr
spec:
  limits:
  - max:
      cpu: "800m"
    min:
      cpu: "200m"
    type: Container</pre>
<p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p7110129194514">Then, run <strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b1662075711820">kubectl -n </strong><em id="cce_bestpractice_0317__en-us_topic_0000001226756283_i762205711812">&lt;namespace&gt;</em><strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b97744217197"> create -f </strong><em id="cce_bestpractice_0317__en-us_topic_0000001226756283_i14307835195">cpu-constraints.yaml</em> to complete the creation. If the default CPU usage is not specified for containers, the platform automatically configures it. That is, the default configuration is automatically added to the LimitRange after it is created.</p>
<pre class="screen" id="cce_bestpractice_0317__en-us_topic_0000001226756283_screen1185143774413">...
spec:
  limits:
  - <strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b169955814618">default:</strong>
<strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b1470045894618">      cpu: 800m</strong>
<strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b5701145813462">    defaultRequest:</strong>
<strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b107032058174620">      cpu: 800m</strong>
    max:
      cpu: 800m
    min:
      cpu: 200m
    type: Container</pre>
</li><li id="cce_bestpractice_0317__en-us_topic_0000001226756283_li135216503386">Limits the maximum and minimum storage space that each PersistentVolumeClaim can apply for in a namespace.<p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p1168613177117"><a name="cce_bestpractice_0317__en-us_topic_0000001226756283_li135216503386"></a><a name="en-us_topic_0000001226756283_li135216503386"></a>storagelimit.yaml</p>
<pre class="screen" id="cce_bestpractice_0317__en-us_topic_0000001226756283_screen876920585499">apiVersion: v1
kind: LimitRange
metadata:
  name: storagelimit
spec:
  limits:
  - type: PersistentVolumeClaim
    max:
      storage: 2Gi
    min:
      storage: 1Gi</pre>
<p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p145851619110">Then, run <strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b166053081912">kubectl -n </strong><em id="cce_bestpractice_0317__en-us_topic_0000001226756283_i1067311303199">&lt;namespace&gt; </em><strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b107351236141911">create -f </strong><em id="cce_bestpractice_0317__en-us_topic_0000001226756283_i3737123615199">storagelimit.yaml</em> to complete the creation.</p>
</li></ul>
</div>
<div class="section" id="cce_bestpractice_0317__en-us_topic_0000001226756283_section13967110185719"><h4 class="sectiontitle">Configuring Network Isolation in a Cluster</h4><ul id="cce_bestpractice_0317__en-us_topic_0000001226756283_ul1786982355710"><li id="cce_bestpractice_0317__en-us_topic_0000001226756283_li14869123125713">Container tunnel network<p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p923219216146"><a name="cce_bestpractice_0317__en-us_topic_0000001226756283_li14869123125713"></a><a name="en-us_topic_0000001226756283_li14869123125713"></a>If networks need to be isolated between namespaces in a cluster or between workloads in the same namespace, you can configure network policies to isolate the networks. </p>
</li><li id="cce_bestpractice_0317__en-us_topic_0000001226756283_li17869142335714">Cloud Native Network 2.0<p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p1691384325712"><a name="cce_bestpractice_0317__en-us_topic_0000001226756283_li17869142335714"></a><a name="en-us_topic_0000001226756283_li17869142335714"></a>In the Cloud Native Network 2.0 model, you can configure security groups to isolate networks between pods. For details, see <a href="https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_10_0288.html" target="_blank" rel="noopener noreferrer">SecurityGroup</a>.</p>
</li><li id="cce_bestpractice_0317__en-us_topic_0000001226756283_li17895010563">VPC network<p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p811841416567"><a name="cce_bestpractice_0317__en-us_topic_0000001226756283_li17895010563"></a><a name="en-us_topic_0000001226756283_li17895010563"></a>Network isolation is not supported.</p>
</li></ul>
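<p>In the container tunnel network model, for example, the following NetworkPolicy restricts ingress traffic of all pods in a namespace to traffic from pods in the same namespace (a minimal sketch; the policy name is a placeholder):</p>
<pre class="screen">apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace   # hypothetical name
spec:
  podSelector: {}              # applies to all pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}          # allow traffic only from pods in the same namespace</pre>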
</div>
<div class="section" id="cce_bestpractice_0317__en-us_topic_0000001226756283_section5480716175815"><h4 class="sectiontitle">Enabling the Webhook Authentication Mode with kubelet</h4><div class="notice" id="cce_bestpractice_0317__en-us_topic_0000001226756283_note983131514410"><span class="noticetitle"><img src="public_sys-resources/notice_3.0-en-us.png"> </span><div class="noticebody"><p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p683171514417">This configuration applies only to CCE clusters of v1.15.6-r1 or earlier; it is not required for versions later than v1.15.6-r1.</p>
<p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p17731357155">Upgrade the CCE cluster to v1.13 or v1.15 and enable the RBAC capability for the cluster. If the cluster is already v1.13 or later, no upgrade is required.</p>
</div></div>
<p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p14882451758">When creating a node, you can enable the kubelet Webhook authentication mode by injecting the <strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b2040431134717">postinstall</strong> file (by setting the kubelet startup parameter <strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b199401430193211">--authorization-mode=Webhook</strong>).</p>
<ol id="cce_bestpractice_0317__en-us_topic_0000001226756283_ol016854617318"><li id="cce_bestpractice_0317__en-us_topic_0000001226756283_li141439336611"><span>Run the following command to create clusterrolebinding:</span><p><p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p394410334614"><strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b1482265083710">kubectl create clusterrolebinding kube-apiserver-kubelet-admin --clusterrole=system:kubelet-api-admin --user=system:kube-apiserver</strong></p>
</p></li><li id="cce_bestpractice_0317__en-us_topic_0000001226756283_li19541153419419"><span>For an existing node, log in to the node, change <strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b1237994173311">authorization mode</strong> in <strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b127021045163310">/var/paas/kubernetes/kubelet/kubelet_config.yaml</strong> on the node to <strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b10117204843318">Webhook</strong>, and restart kubelet.</span><p><p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p7262194418410"><strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b1780817573373">sed -i s/AlwaysAllow/Webhook/g /var/paas/kubernetes/kubelet/kubelet_config.yaml; systemctl restart kubelet</strong></p>
</p></li><li id="cce_bestpractice_0317__en-us_topic_0000001226756283_li1430516535111"><span>For a new node, add the following command to the post-installation script to change the kubelet permission mode:</span><p><p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p918592114518"><strong id="cce_bestpractice_0317__en-us_topic_0000001226756283_b667171143815">sed -i s/AlwaysAllow/Webhook/g /var/paas/kubernetes/kubelet/kubelet_config.yaml; systemctl restart kubelet</strong></p>
<p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p17968471271"></p>
</p></li></ol>
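The sed command above only replaces the authorization mode string in the kubelet configuration file. A minimal sketch of what the substitution does, run against a throwaway copy (the fragment shown is an assumption for illustration, not the full kubelet_config.yaml):

```shell
# Create a throwaway file mimicking the relevant kubelet config fragment
# (hypothetical content; the real file holds the full kubelet configuration).
cat > /tmp/kubelet_config_demo.yaml <<'EOF'
authorization:
  mode: AlwaysAllow
EOF

# Same substitution as the documented command, applied to the demo copy.
sed -i 's/AlwaysAllow/Webhook/g' /tmp/kubelet_config_demo.yaml

cat /tmp/kubelet_config_demo.yaml
```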
</div>
<div class="section" id="cce_bestpractice_0317__en-us_topic_0000001226756283_section1730432053118"><h4 class="sectiontitle">Uninstalling web-terminal After Use</h4><p id="cce_bestpractice_0317__en-us_topic_0000001226756283_p18400172813110">The web-terminal add-on can be used to manage CCE clusters. Keep the login password secure and uninstall the add-on when it is no longer needed.</p>
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="cce_bestpractice_0315.html">Security</a></div>
</div>
</div>