diff --git a/umn/source/_static/images/en-us_image_0000001460905374.png b/umn/source/_static/images/en-us_image_0000001460905374.png deleted file mode 100644 index 50d2b8f..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001460905374.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001461224886.png b/umn/source/_static/images/en-us_image_0000001461224886.png deleted file mode 100644 index fd35a0f..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001461224886.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001517743380.png b/umn/source/_static/images/en-us_image_0000001517743380.png deleted file mode 100644 index d98fcd6..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001517743380.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001517743496.png b/umn/source/_static/images/en-us_image_0000001517743496.png deleted file mode 100644 index 8d6c69b..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001517743496.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001517743540.png b/umn/source/_static/images/en-us_image_0000001517743540.png deleted file mode 100644 index 6b9a2a2..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001517743540.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001517903020.png b/umn/source/_static/images/en-us_image_0000001517903020.png deleted file mode 100644 index 6887683..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001517903020.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001517903036.png b/umn/source/_static/images/en-us_image_0000001517903036.png deleted file mode 100644 index 0bb483b..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001517903036.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001517903056.png b/umn/source/_static/images/en-us_image_0000001517903056.png deleted file mode 100644 index c301af9..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001517903056.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001517903060.png b/umn/source/_static/images/en-us_image_0000001517903060.png deleted file mode 100644 index 9421b22..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001517903060.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001517903088.png b/umn/source/_static/images/en-us_image_0000001517903088.png deleted file mode 100644 index 2252614..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001517903088.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001517903124.png b/umn/source/_static/images/en-us_image_0000001517903124.png deleted file mode 100644 index 87446d8..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001517903124.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001517903252.png b/umn/source/_static/images/en-us_image_0000001517903252.png deleted file mode 100644 index 2da6ba3..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001517903252.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001518062524.png b/umn/source/_static/images/en-us_image_0000001518062524.png deleted file mode 100644 index 65e1184..0000000 Binary files 
a/umn/source/_static/images/en-us_image_0000001518062524.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001518062540.png b/umn/source/_static/images/en-us_image_0000001518062540.png deleted file mode 100644 index 4cff8a7..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001518062540.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001518062636.png b/umn/source/_static/images/en-us_image_0000001518062636.png deleted file mode 100644 index 3306881..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001518062636.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001518062756.png b/umn/source/_static/images/en-us_image_0000001518062756.png deleted file mode 100644 index 5bfbc6c..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001518062756.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001518222492.png b/umn/source/_static/images/en-us_image_0000001518222492.png deleted file mode 100644 index d09e52e..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001518222492.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001518222592.png b/umn/source/_static/images/en-us_image_0000001518222592.png deleted file mode 100644 index f9d93b4..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001518222592.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001568822709.png b/umn/source/_static/images/en-us_image_0000001568822709.png deleted file mode 100644 index 2ba7758..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001568822709.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001568822869.png b/umn/source/_static/images/en-us_image_0000001568822869.png deleted file mode 100644 index 0d3c91e..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001568822869.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001568822925.png b/umn/source/_static/images/en-us_image_0000001568822925.png deleted file mode 100644 index 11cc5e2..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001568822925.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001568822965.png b/umn/source/_static/images/en-us_image_0000001568822965.png deleted file mode 100644 index 184d45e..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001568822965.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001568902533.png b/umn/source/_static/images/en-us_image_0000001568902533.png deleted file mode 100644 index b52a62d..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001568902533.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001568902557.png b/umn/source/_static/images/en-us_image_0000001568902557.png deleted file mode 100644 index 954c474..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001568902557.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001568902649.png b/umn/source/_static/images/en-us_image_0000001568902649.png deleted file mode 100644 index ebc6976..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001568902649.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001568902669.png b/umn/source/_static/images/en-us_image_0000001568902669.png deleted file mode 100644 
index 2ba7758..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001568902669.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001569022837.png b/umn/source/_static/images/en-us_image_0000001569022837.png deleted file mode 100644 index 2512244..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001569022837.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001569022901.png b/umn/source/_static/images/en-us_image_0000001569022901.png deleted file mode 100644 index 827836e..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001569022901.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001569022929.png b/umn/source/_static/images/en-us_image_0000001569022929.png deleted file mode 100644 index ece3f19..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001569022929.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001569022957.png b/umn/source/_static/images/en-us_image_0000001569022957.png deleted file mode 100644 index 55f1ee8..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001569022957.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001569022961.png b/umn/source/_static/images/en-us_image_0000001569022961.png deleted file mode 100644 index b7ea66b..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001569022961.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001569023013.png b/umn/source/_static/images/en-us_image_0000001569023013.png deleted file mode 100644 index ebc6976..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001569023013.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001569023025.png b/umn/source/_static/images/en-us_image_0000001569023025.png deleted file mode 100644 index ebc6976..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001569023025.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001569182569.jpg b/umn/source/_static/images/en-us_image_0000001569182569.jpg deleted file mode 100644 index 7747408..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001569182569.jpg and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001569182625.png b/umn/source/_static/images/en-us_image_0000001569182625.png deleted file mode 100644 index ebc6976..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001569182625.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001579008782.png b/umn/source/_static/images/en-us_image_0000001579008782.png deleted file mode 100644 index bbd7fce..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001579008782.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001629186693.png b/umn/source/_static/images/en-us_image_0000001629186693.png deleted file mode 100644 index 59c35f2..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001629186693.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001568822637.png b/umn/source/_static/images/en-us_image_0000001647417220.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568822637.png rename to umn/source/_static/images/en-us_image_0000001647417220.png diff --git a/umn/source/_static/images/en-us_image_0000001569182513.png 
b/umn/source/_static/images/en-us_image_0000001647417256.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569182513.png rename to umn/source/_static/images/en-us_image_0000001647417256.png diff --git a/umn/source/_static/images/en-us_image_0000001517743364.png b/umn/source/_static/images/en-us_image_0000001647417268.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743364.png rename to umn/source/_static/images/en-us_image_0000001647417268.png diff --git a/umn/source/_static/images/en-us_image_0000001569182497.gif b/umn/source/_static/images/en-us_image_0000001647417272.gif similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569182497.gif rename to umn/source/_static/images/en-us_image_0000001647417272.gif diff --git a/umn/source/_static/images/en-us_image_0000001647417292.png b/umn/source/_static/images/en-us_image_0000001647417292.png new file mode 100644 index 0000000..3ce3b37 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001647417292.png differ diff --git a/umn/source/_static/images/en-us_image_0000001647417300.png b/umn/source/_static/images/en-us_image_0000001647417300.png new file mode 100644 index 0000000..d9e8cde Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001647417300.png differ diff --git a/umn/source/_static/images/en-us_image_0000001647417328.png b/umn/source/_static/images/en-us_image_0000001647417328.png new file mode 100644 index 0000000..8073047 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001647417328.png differ diff --git a/umn/source/_static/images/en-us_image_0000001517903016.png b/umn/source/_static/images/en-us_image_0000001647417440.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517903016.png rename to umn/source/_static/images/en-us_image_0000001647417440.png diff --git a/umn/source/_static/images/en-us_image_0000001647417448.png b/umn/source/_static/images/en-us_image_0000001647417448.png new file mode 100644 index 0000000..8f3c9f0 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001647417448.png differ diff --git a/umn/source/_static/images/en-us_image_0000001647417468.png b/umn/source/_static/images/en-us_image_0000001647417468.png new file mode 100644 index 0000000..0ee4bb1 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001647417468.png differ diff --git a/umn/source/_static/images/en-us_image_0000001517903028.png b/umn/source/_static/images/en-us_image_0000001647417504.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517903028.png rename to umn/source/_static/images/en-us_image_0000001647417504.png diff --git a/umn/source/_static/images/en-us_image_0000001518062664.png b/umn/source/_static/images/en-us_image_0000001647417520.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518062664.png rename to umn/source/_static/images/en-us_image_0000001647417520.png diff --git a/umn/source/_static/images/en-us_image_0000001517743600.png b/umn/source/_static/images/en-us_image_0000001647417524.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743600.png rename to umn/source/_static/images/en-us_image_0000001647417524.png diff --git a/umn/source/_static/images/en-us_image_0000001568822773.png b/umn/source/_static/images/en-us_image_0000001647417536.png similarity index 100% rename from 
umn/source/_static/images/en-us_image_0000001568822773.png rename to umn/source/_static/images/en-us_image_0000001647417536.png diff --git a/umn/source/_static/images/en-us_image_0000001568822825.png b/umn/source/_static/images/en-us_image_0000001647417544.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568822825.png rename to umn/source/_static/images/en-us_image_0000001647417544.png diff --git a/umn/source/_static/images/en-us_image_0000001569022905.png b/umn/source/_static/images/en-us_image_0000001647417588.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569022905.png rename to umn/source/_static/images/en-us_image_0000001647417588.png diff --git a/umn/source/_static/images/en-us_image_0000001517903064.png b/umn/source/_static/images/en-us_image_0000001647417596.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517903064.png rename to umn/source/_static/images/en-us_image_0000001647417596.png diff --git a/umn/source/_static/images/en-us_image_0000001517903068.png b/umn/source/_static/images/en-us_image_0000001647417600.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517903068.png rename to umn/source/_static/images/en-us_image_0000001647417600.png diff --git a/umn/source/_static/images/en-us_image_0000001517743544.png b/umn/source/_static/images/en-us_image_0000001647417636.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743544.png rename to umn/source/_static/images/en-us_image_0000001647417636.png diff --git a/umn/source/_static/images/en-us_image_0000001518062704.png b/umn/source/_static/images/en-us_image_0000001647417648.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518062704.png rename to umn/source/_static/images/en-us_image_0000001647417648.png diff --git a/umn/source/_static/images/en-us_image_0000001647417744.png b/umn/source/_static/images/en-us_image_0000001647417744.png new file mode 100644 index 0000000..3e6e245 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001647417744.png differ diff --git a/umn/source/_static/images/en-us_image_0000001569182741.png b/umn/source/_static/images/en-us_image_0000001647417772.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569182741.png rename to umn/source/_static/images/en-us_image_0000001647417772.png diff --git a/umn/source/_static/images/en-us_image_0000001518222716.png b/umn/source/_static/images/en-us_image_0000001647417776.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518222716.png rename to umn/source/_static/images/en-us_image_0000001647417776.png diff --git a/umn/source/_static/images/en-us_image_0000001517743628.png b/umn/source/_static/images/en-us_image_0000001647417792.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743628.png rename to umn/source/_static/images/en-us_image_0000001647417792.png diff --git a/umn/source/_static/images/en-us_image_0000001568902601.png b/umn/source/_static/images/en-us_image_0000001647417808.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568902601.png rename to umn/source/_static/images/en-us_image_0000001647417808.png diff --git a/umn/source/_static/images/en-us_image_0000001518062812.png b/umn/source/_static/images/en-us_image_0000001647417812.png similarity index 100% rename from 
umn/source/_static/images/en-us_image_0000001518062812.png rename to umn/source/_static/images/en-us_image_0000001647417812.png diff --git a/umn/source/_static/images/en-us_image_0000001569023045.png b/umn/source/_static/images/en-us_image_0000001647417816.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569023045.png rename to umn/source/_static/images/en-us_image_0000001647417816.png diff --git a/umn/source/_static/images/en-us_image_0000001647417828.png b/umn/source/_static/images/en-us_image_0000001647417828.png new file mode 100644 index 0000000..198a6e0 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001647417828.png differ diff --git a/umn/source/_static/images/en-us_image_0000001568902653.png b/umn/source/_static/images/en-us_image_0000001647417836.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568902653.png rename to umn/source/_static/images/en-us_image_0000001647417836.png diff --git a/umn/source/_static/images/en-us_image_0000001517743624.png b/umn/source/_static/images/en-us_image_0000001647417852.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743624.png rename to umn/source/_static/images/en-us_image_0000001647417852.png diff --git a/umn/source/_static/images/en-us_image_0000001518062796.png b/umn/source/_static/images/en-us_image_0000001647417932.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518062796.png rename to umn/source/_static/images/en-us_image_0000001647417932.png diff --git a/umn/source/_static/images/en-us_image_0000001517743652.png b/umn/source/_static/images/en-us_image_0000001647417936.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743652.png rename to umn/source/_static/images/en-us_image_0000001647417936.png diff --git a/umn/source/_static/images/en-us_image_0000001647576484.png b/umn/source/_static/images/en-us_image_0000001647576484.png new file mode 100644 index 0000000..1d8c937 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001647576484.png differ diff --git a/umn/source/_static/images/en-us_image_0000001517902940.png b/umn/source/_static/images/en-us_image_0000001647576500.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517902940.png rename to umn/source/_static/images/en-us_image_0000001647576500.png diff --git a/umn/source/_static/images/en-us_image_0000001647576552.png b/umn/source/_static/images/en-us_image_0000001647576552.png new file mode 100644 index 0000000..f81efee Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001647576552.png differ diff --git a/umn/source/_static/images/en-us_image_0000001517743552.png b/umn/source/_static/images/en-us_image_0000001647576596.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743552.png rename to umn/source/_static/images/en-us_image_0000001647576596.png diff --git a/umn/source/_static/images/en-us_image_0000001647576692.png b/umn/source/_static/images/en-us_image_0000001647576692.png new file mode 100644 index 0000000..d4ef410 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001647576692.png differ diff --git a/umn/source/_static/images/en-us_image_0000001518062612.png b/umn/source/_static/images/en-us_image_0000001647576696.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518062612.png rename to 
umn/source/_static/images/en-us_image_0000001647576696.png diff --git a/umn/source/_static/images/en-us_image_0000001518222536.png b/umn/source/_static/images/en-us_image_0000001647576700.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518222536.png rename to umn/source/_static/images/en-us_image_0000001647576700.png diff --git a/umn/source/_static/images/en-us_image_0000001517743452.png b/umn/source/_static/images/en-us_image_0000001647576704.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743452.png rename to umn/source/_static/images/en-us_image_0000001647576704.png diff --git a/umn/source/_static/images/en-us_image_0000001568822741.png b/umn/source/_static/images/en-us_image_0000001647576708.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568822741.png rename to umn/source/_static/images/en-us_image_0000001647576708.png diff --git a/umn/source/_static/images/en-us_image_0000001568902489.png b/umn/source/_static/images/en-us_image_0000001647576720.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568902489.png rename to umn/source/_static/images/en-us_image_0000001647576720.png diff --git a/umn/source/_static/images/en-us_image_0000001647576724.png b/umn/source/_static/images/en-us_image_0000001647576724.png new file mode 100644 index 0000000..5a0e760 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001647576724.png differ diff --git a/umn/source/_static/images/en-us_image_0000001517903168.png b/umn/source/_static/images/en-us_image_0000001647576792.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517903168.png rename to umn/source/_static/images/en-us_image_0000001647576792.png diff --git a/umn/source/_static/images/en-us_image_0000001518222604.png b/umn/source/_static/images/en-us_image_0000001647576848.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518222604.png rename to umn/source/_static/images/en-us_image_0000001647576848.png diff --git a/umn/source/_static/images/en-us_image_0000001518222636.png b/umn/source/_static/images/en-us_image_0000001647576860.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518222636.png rename to umn/source/_static/images/en-us_image_0000001647576860.png diff --git a/umn/source/_static/images/en-us_image_0000001647576864.png b/umn/source/_static/images/en-us_image_0000001647576864.png new file mode 100644 index 0000000..14ef7c0 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001647576864.png differ diff --git a/umn/source/_static/images/en-us_image_0000001569182621.png b/umn/source/_static/images/en-us_image_0000001647576892.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569182621.png rename to umn/source/_static/images/en-us_image_0000001647576892.png diff --git a/umn/source/_static/images/en-us_image_0000001517903128.png b/umn/source/_static/images/en-us_image_0000001647576916.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517903128.png rename to umn/source/_static/images/en-us_image_0000001647576916.png diff --git a/umn/source/_static/images/en-us_image_0000001568902577.png b/umn/source/_static/images/en-us_image_0000001647576960.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568902577.png rename to umn/source/_static/images/en-us_image_0000001647576960.png 
diff --git a/umn/source/_static/images/en-us_image_0000001518222700.png b/umn/source/_static/images/en-us_image_0000001647577020.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518222700.png rename to umn/source/_static/images/en-us_image_0000001647577020.png diff --git a/umn/source/_static/images/en-us_image_0000001568902661.png b/umn/source/_static/images/en-us_image_0000001647577036.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568902661.png rename to umn/source/_static/images/en-us_image_0000001647577036.png diff --git a/umn/source/_static/images/en-us_image_0000001518062772.png b/umn/source/_static/images/en-us_image_0000001647577048.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518062772.png rename to umn/source/_static/images/en-us_image_0000001647577048.png diff --git a/umn/source/_static/images/en-us_image_0000001517743660.png b/umn/source/_static/images/en-us_image_0000001647577072.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743660.png rename to umn/source/_static/images/en-us_image_0000001647577072.png diff --git a/umn/source/_static/images/en-us_image_0000001517903240.png b/umn/source/_static/images/en-us_image_0000001647577080.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517903240.png rename to umn/source/_static/images/en-us_image_0000001647577080.png diff --git a/umn/source/_static/images/en-us_image_0000001518222732.png b/umn/source/_static/images/en-us_image_0000001647577100.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518222732.png rename to umn/source/_static/images/en-us_image_0000001647577100.png diff --git a/umn/source/_static/images/en-us_image_0000001517743636.png b/umn/source/_static/images/en-us_image_0000001647577104.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743636.png rename to umn/source/_static/images/en-us_image_0000001647577104.png diff --git a/umn/source/_static/images/en-us_image_0000001569182773.png b/umn/source/_static/images/en-us_image_0000001647577116.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569182773.png rename to umn/source/_static/images/en-us_image_0000001647577116.png diff --git a/umn/source/_static/images/en-us_image_0000001518062816.png b/umn/source/_static/images/en-us_image_0000001647577164.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518062816.png rename to umn/source/_static/images/en-us_image_0000001647577164.png diff --git a/umn/source/_static/images/en-us_image_0000001518062644.png b/umn/source/_static/images/en-us_image_0000001647577176.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518062644.png rename to umn/source/_static/images/en-us_image_0000001647577176.png diff --git a/umn/source/_static/images/en-us_image_0000001647577184.png b/umn/source/_static/images/en-us_image_0000001647577184.png new file mode 100644 index 0000000..76738ba Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001647577184.png differ diff --git a/umn/source/_static/images/en-us_image_0000001568822961.png b/umn/source/_static/images/en-us_image_0000001647577200.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568822961.png rename to umn/source/_static/images/en-us_image_0000001647577200.png diff --git 
a/umn/source/_static/images/en-us_image_0000001629926113.png b/umn/source/_static/images/en-us_image_0000001654936892.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001629926113.png rename to umn/source/_static/images/en-us_image_0000001654936892.png diff --git a/umn/source/_static/images/en-us_image_0000001667910920.png b/umn/source/_static/images/en-us_image_0000001667910920.png new file mode 100644 index 0000000..2db0198 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001667910920.png differ diff --git a/umn/source/_static/images/en-us_image_0000001690672798.png b/umn/source/_static/images/en-us_image_0000001690672798.png new file mode 100644 index 0000000..2043add Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001690672798.png differ diff --git a/umn/source/_static/images/en-us_image_0000001691644354.png b/umn/source/_static/images/en-us_image_0000001691644354.png new file mode 100644 index 0000000..f1336b6 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001691644354.png differ diff --git a/umn/source/_static/images/en-us_image_0000001518062492.png b/umn/source/_static/images/en-us_image_0000001695736889.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518062492.png rename to umn/source/_static/images/en-us_image_0000001695736889.png diff --git a/umn/source/_static/images/en-us_image_0000001695736909.png b/umn/source/_static/images/en-us_image_0000001695736909.png new file mode 100644 index 0000000..342a508 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001695736909.png differ diff --git a/umn/source/_static/images/en-us_image_0000001517743372.png b/umn/source/_static/images/en-us_image_0000001695736933.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743372.png rename to umn/source/_static/images/en-us_image_0000001695736933.png diff --git a/umn/source/_static/images/en-us_image_0000001695736965.png b/umn/source/_static/images/en-us_image_0000001695736965.png new file mode 100644 index 0000000..4d11dcf Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001695736965.png differ diff --git a/umn/source/_static/images/en-us_image_0000001517743384.png b/umn/source/_static/images/en-us_image_0000001695736981.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743384.png rename to umn/source/_static/images/en-us_image_0000001695736981.png diff --git a/umn/source/_static/images/en-us_image_0000001695736989.png b/umn/source/_static/images/en-us_image_0000001695736989.png new file mode 100644 index 0000000..45fcf25 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001695736989.png differ diff --git a/umn/source/_static/images/en-us_image_0000001569182677.png b/umn/source/_static/images/en-us_image_0000001695736993.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569182677.png rename to umn/source/_static/images/en-us_image_0000001695736993.png diff --git a/umn/source/_static/images/en-us_image_0000001569182553.png b/umn/source/_static/images/en-us_image_0000001695737013.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569182553.png rename to umn/source/_static/images/en-us_image_0000001695737013.png diff --git a/umn/source/_static/images/en-us_image_0000001568822717.png b/umn/source/_static/images/en-us_image_0000001695737033.png similarity index 100% rename 
from umn/source/_static/images/en-us_image_0000001568822717.png rename to umn/source/_static/images/en-us_image_0000001695737033.png diff --git a/umn/source/_static/images/en-us_image_0000001569182549.png b/umn/source/_static/images/en-us_image_0000001695737041.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569182549.png rename to umn/source/_static/images/en-us_image_0000001695737041.png diff --git a/umn/source/_static/images/en-us_image_0000001568822733.png b/umn/source/_static/images/en-us_image_0000001695737085.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568822733.png rename to umn/source/_static/images/en-us_image_0000001695737085.png diff --git a/umn/source/_static/images/en-us_image_0000001695737101.png b/umn/source/_static/images/en-us_image_0000001695737101.png new file mode 100644 index 0000000..52cb81b Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001695737101.png differ diff --git a/umn/source/_static/images/en-us_image_0000001695737145.jpg b/umn/source/_static/images/en-us_image_0000001695737145.jpg new file mode 100644 index 0000000..3614fc4 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001695737145.jpg differ diff --git a/umn/source/_static/images/en-us_image_0000001569182589.png b/umn/source/_static/images/en-us_image_0000001695737165.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569182589.png rename to umn/source/_static/images/en-us_image_0000001695737165.png diff --git a/umn/source/_static/images/en-us_image_0000001568902509.png b/umn/source/_static/images/en-us_image_0000001695737169.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568902509.png rename to umn/source/_static/images/en-us_image_0000001695737169.png diff --git a/umn/source/_static/images/en-us_image_0000001517743464.png b/umn/source/_static/images/en-us_image_0000001695737185.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743464.png rename to umn/source/_static/images/en-us_image_0000001695737185.png diff --git a/umn/source/_static/images/en-us_image_0000001569022889.png b/umn/source/_static/images/en-us_image_0000001695737193.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569022889.png rename to umn/source/_static/images/en-us_image_0000001695737193.png diff --git a/umn/source/_static/images/en-us_image_0000001518062672.png b/umn/source/_static/images/en-us_image_0000001695737201.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518062672.png rename to umn/source/_static/images/en-us_image_0000001695737201.png diff --git a/umn/source/_static/images/en-us_image_0000001568822793.png b/umn/source/_static/images/en-us_image_0000001695737253.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568822793.png rename to umn/source/_static/images/en-us_image_0000001695737253.png diff --git a/umn/source/_static/images/en-us_image_0000001695737257.png b/umn/source/_static/images/en-us_image_0000001695737257.png new file mode 100644 index 0000000..c70f748 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001695737257.png differ diff --git a/umn/source/_static/images/en-us_image_0000001568902541.png b/umn/source/_static/images/en-us_image_0000001695737281.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568902541.png rename to 
umn/source/_static/images/en-us_image_0000001695737281.png diff --git a/umn/source/_static/images/en-us_image_0000001517743520.png b/umn/source/_static/images/en-us_image_0000001695737349.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743520.png rename to umn/source/_static/images/en-us_image_0000001695737349.png diff --git a/umn/source/_static/images/en-us_image_0000001569022933.png b/umn/source/_static/images/en-us_image_0000001695737357.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569022933.png rename to umn/source/_static/images/en-us_image_0000001695737357.png diff --git a/umn/source/_static/images/en-us_image_0000001569182673.png b/umn/source/_static/images/en-us_image_0000001695737369.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569182673.png rename to umn/source/_static/images/en-us_image_0000001695737369.png diff --git a/umn/source/_static/images/en-us_image_0000001695737417.png b/umn/source/_static/images/en-us_image_0000001695737417.png new file mode 100644 index 0000000..cd3b00a Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001695737417.png differ diff --git a/umn/source/_static/images/en-us_image_0000001518222708.png b/umn/source/_static/images/en-us_image_0000001695737421.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518222708.png rename to umn/source/_static/images/en-us_image_0000001695737421.png diff --git a/umn/source/_static/images/en-us_image_0000001569023029.png b/umn/source/_static/images/en-us_image_0000001695737425.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569023029.png rename to umn/source/_static/images/en-us_image_0000001695737425.png diff --git a/umn/source/_static/images/en-us_image_0000001517743672.png b/umn/source/_static/images/en-us_image_0000001695737489.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743672.png rename to umn/source/_static/images/en-us_image_0000001695737489.png diff --git a/umn/source/_static/images/en-us_image_0000001569022977.png b/umn/source/_static/images/en-us_image_0000001695737505.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569022977.png rename to umn/source/_static/images/en-us_image_0000001695737505.png diff --git a/umn/source/_static/images/en-us_image_0000001518222740.png b/umn/source/_static/images/en-us_image_0000001695737509.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518222740.png rename to umn/source/_static/images/en-us_image_0000001695737509.png diff --git a/umn/source/_static/images/en-us_image_0000001517743644.png b/umn/source/_static/images/en-us_image_0000001695737529.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743644.png rename to umn/source/_static/images/en-us_image_0000001695737529.png diff --git a/umn/source/_static/images/en-us_image_0000001569023069.png b/umn/source/_static/images/en-us_image_0000001695737589.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569023069.png rename to umn/source/_static/images/en-us_image_0000001695737589.png diff --git a/umn/source/_static/images/en-us_image_0000001568822957.png b/umn/source/_static/images/en-us_image_0000001695737593.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568822957.png rename to 
umn/source/_static/images/en-us_image_0000001695737593.png diff --git a/umn/source/_static/images/en-us_image_0000001568902689.png b/umn/source/_static/images/en-us_image_0000001695737597.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568902689.png rename to umn/source/_static/images/en-us_image_0000001695737597.png diff --git a/umn/source/_static/images/en-us_image_0000001569022797.png b/umn/source/_static/images/en-us_image_0000001695896197.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569022797.png rename to umn/source/_static/images/en-us_image_0000001695896197.png diff --git a/umn/source/_static/images/en-us_image_0000001569022781.png b/umn/source/_static/images/en-us_image_0000001695896201.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569022781.png rename to umn/source/_static/images/en-us_image_0000001695896201.png diff --git a/umn/source/_static/images/en-us_image_0000001569182505.png b/umn/source/_static/images/en-us_image_0000001695896213.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569182505.png rename to umn/source/_static/images/en-us_image_0000001695896213.png diff --git a/umn/source/_static/images/en-us_image_0000001695896249.png b/umn/source/_static/images/en-us_image_0000001695896249.png new file mode 100644 index 0000000..603c946 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001695896249.png differ diff --git a/umn/source/_static/images/en-us_image_0000001568822693.png b/umn/source/_static/images/en-us_image_0000001695896253.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568822693.png rename to umn/source/_static/images/en-us_image_0000001695896253.png diff --git a/umn/source/_static/images/en-us_image_0000001569022881.png b/umn/source/_static/images/en-us_image_0000001695896365.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569022881.png rename to umn/source/_static/images/en-us_image_0000001695896365.png diff --git a/umn/source/_static/images/en-us_image_0000001517743432.png b/umn/source/_static/images/en-us_image_0000001695896373.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743432.png rename to umn/source/_static/images/en-us_image_0000001695896373.png diff --git a/umn/source/_static/images/en-us_image_0000001517903048.png b/umn/source/_static/images/en-us_image_0000001695896409.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517903048.png rename to umn/source/_static/images/en-us_image_0000001695896409.png diff --git a/umn/source/_static/images/en-us_image_0000001695896445.png b/umn/source/_static/images/en-us_image_0000001695896445.png new file mode 100644 index 0000000..5db165c Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001695896445.png differ diff --git a/umn/source/_static/images/en-us_image_0000001517743460.png b/umn/source/_static/images/en-us_image_0000001695896449.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517743460.png rename to umn/source/_static/images/en-us_image_0000001695896449.png diff --git a/umn/source/_static/images/en-us_image_0000001518062624.png b/umn/source/_static/images/en-us_image_0000001695896453.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518062624.png rename to umn/source/_static/images/en-us_image_0000001695896453.png 
diff --git a/umn/source/_static/images/en-us_image_0000001568902521.png b/umn/source/_static/images/en-us_image_0000001695896485.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568902521.png rename to umn/source/_static/images/en-us_image_0000001695896485.png diff --git a/umn/source/_static/images/en-us_image_0000001695896529.png b/umn/source/_static/images/en-us_image_0000001695896529.png new file mode 100644 index 0000000..b6535de Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001695896529.png differ diff --git a/umn/source/_static/images/en-us_image_0000001695896533.png b/umn/source/_static/images/en-us_image_0000001695896533.png new file mode 100644 index 0000000..39cc62a Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001695896533.png differ diff --git a/umn/source/_static/images/en-us_image_0000001518062684.png b/umn/source/_static/images/en-us_image_0000001695896569.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518062684.png rename to umn/source/_static/images/en-us_image_0000001695896569.png diff --git a/umn/source/_static/images/en-us_image_0000001569022913.png b/umn/source/_static/images/en-us_image_0000001695896581.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569022913.png rename to umn/source/_static/images/en-us_image_0000001695896581.png diff --git a/umn/source/_static/images/en-us_image_0000001628843805.png b/umn/source/_static/images/en-us_image_0000001695896617.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001628843805.png rename to umn/source/_static/images/en-us_image_0000001695896617.png diff --git a/umn/source/_static/images/en-us_image_0000001695896633.png b/umn/source/_static/images/en-us_image_0000001695896633.png new file mode 100644 index 0000000..92e2615 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001695896633.png differ diff --git a/umn/source/_static/images/en-us_image_0000001518222608.png b/umn/source/_static/images/en-us_image_0000001695896709.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518222608.png rename to umn/source/_static/images/en-us_image_0000001695896709.png diff --git a/umn/source/_static/images/en-us_image_0000001568822905.png b/umn/source/_static/images/en-us_image_0000001695896713.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568822905.png rename to umn/source/_static/images/en-us_image_0000001695896713.png diff --git a/umn/source/_static/images/en-us_image_0000001569023033.png b/umn/source/_static/images/en-us_image_0000001695896721.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569023033.png rename to umn/source/_static/images/en-us_image_0000001695896721.png diff --git a/umn/source/_static/images/en-us_image_0000001695896725.png b/umn/source/_static/images/en-us_image_0000001695896725.png new file mode 100644 index 0000000..f0282b3 Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001695896725.png differ diff --git a/umn/source/_static/images/en-us_image_0000001518062808.png b/umn/source/_static/images/en-us_image_0000001695896741.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001518062808.png rename to umn/source/_static/images/en-us_image_0000001695896741.png diff --git a/umn/source/_static/images/en-us_image_0000001569023085.png 
b/umn/source/_static/images/en-us_image_0000001695896837.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569023085.png rename to umn/source/_static/images/en-us_image_0000001695896837.png diff --git a/umn/source/_static/images/en-us_image_0000001568822917.png b/umn/source/_static/images/en-us_image_0000001695896849.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568822917.png rename to umn/source/_static/images/en-us_image_0000001695896849.png diff --git a/umn/source/_static/images/en-us_image_0000001568902637.png b/umn/source/_static/images/en-us_image_0000001695896853.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001568902637.png rename to umn/source/_static/images/en-us_image_0000001695896853.png diff --git a/umn/source/_static/images/en-us_image_0000001517903200.png b/umn/source/_static/images/en-us_image_0000001695896861.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001517903200.png rename to umn/source/_static/images/en-us_image_0000001695896861.png diff --git a/umn/source/_static/images/en-us_image_0000001569182781.png b/umn/source/_static/images/en-us_image_0000001695896869.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001569182781.png rename to umn/source/_static/images/en-us_image_0000001695896869.png diff --git a/umn/source/_static/images/en-us_image_0000001701704285.png b/umn/source/_static/images/en-us_image_0000001701704285.png deleted file mode 100644 index b0fe69b..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001701704285.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001715625689.png b/umn/source/_static/images/en-us_image_0000001715625689.png new file mode 100644 index 0000000..b7ea91b Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001715625689.png differ diff --git a/umn/source/_static/images/en-us_image_0000001715987941.png b/umn/source/_static/images/en-us_image_0000001715987941.png deleted file mode 100644 index 3425c13..0000000 Binary files a/umn/source/_static/images/en-us_image_0000001715987941.png and /dev/null differ diff --git a/umn/source/_static/images/en-us_image_0000001668036886.png b/umn/source/_static/images/en-us_image_0000001716141253.png similarity index 100% rename from umn/source/_static/images/en-us_image_0000001668036886.png rename to umn/source/_static/images/en-us_image_0000001716141253.png diff --git a/umn/source/_static/images/en-us_image_0000001726718109.png b/umn/source/_static/images/en-us_image_0000001726718109.png new file mode 100644 index 0000000..ca8265d Binary files /dev/null and b/umn/source/_static/images/en-us_image_0000001726718109.png differ diff --git a/umn/source/add-ons/autoscaler.rst b/umn/source/add-ons/autoscaler.rst index a522c9a..f2355c5 100644 --- a/umn/source/add-ons/autoscaler.rst +++ b/umn/source/add-ons/autoscaler.rst @@ -30,7 +30,7 @@ autoscaler controls auto scale-out and scale-in. Auto scale-out will be performed when: - Node resources are insufficient. - - No node affinity policy is set in the pod scheduling configuration. That is, if a node has been configured as an affinity node for pods, no node will not be automatically added when pods cannot be scheduled. For details about how to configure the node affinity policy, see :ref:`Scheduling Policy (Affinity/Anti-affinity) `. + - No node affinity policy is set in the pod scheduling configuration. 
If a node has been configured as an affinity node for pods, no node will be automatically added when pods cannot be scheduled. For details about how to configure the node affinity policy, see :ref:`Scheduling Policy (Affinity/Anti-affinity) `.

 - When the cluster meets the node scaling policy, cluster scale-out is also triggered. For details, see :ref:`Creating a Node Scaling Policy `.

@@ -42,24 +42,31 @@ autoscaler controls auto scale-out and scale-in.

 When a cluster node is idle for a period of time (10 minutes by default), cluster scale-in is triggered, and the node is automatically deleted. However, a node cannot be deleted from a cluster if the following pods exist:

- - Pods that do not meet specific requirements set in PodDisruptionBudget
+ - Pods that do not meet specific requirements set in Pod Disruption Budgets (`PodDisruptionBudget `__)
 - Pods that cannot be scheduled to other nodes due to constraints such as affinity and anti-affinity policies
 - Pods that have the **cluster-autoscaler.kubernetes.io/safe-to-evict: 'false'** annotation
- - Pods (except those created by kube-system DaemonSet) that exist in the kube-system namespace on the node
+ - Pods (except those created by DaemonSets in the kube-system namespace) that exist in the kube-system namespace on the node
 - Pods that are not created by the controller (Deployment/ReplicaSet/job/StatefulSet)

-Notes and Constraints
----------------------
+ .. note::
+
+    When a node meets the scale-in conditions, autoscaler adds the **DeletionCandidateOfClusterAutoscaler** taint to the node in advance to prevent pods from being scheduled to the node. After the autoscaler add-on is uninstalled, if the taint still exists on the node, manually delete it.
+
+Constraints
+-----------

-- Only clusters of v1.9.10-r2 and later support autoscaler.
 - Ensure that there are sufficient resources for installing the add-on.
 - The default node pool does not support auto scaling. For details, see :ref:`Description of DefaultPool `.
+- When autoscaler is used, some taints or annotations may affect auto scaling. Therefore, do not use the following taints or annotations in clusters:
+
+  - **ignore-taint.cluster-autoscaler.kubernetes.io**: The taint works on nodes. Kubernetes-native autoscaler supports protection against abnormal scale-outs and periodically evaluates the proportion of available nodes in the cluster. When the proportion of non-ready nodes exceeds 45%, protection will be triggered. In this case, all nodes with the **ignore-taint.cluster-autoscaler.kubernetes.io** taint in the cluster are filtered out from the autoscaler template and recorded as non-ready nodes, which affects cluster scaling.
+  - **cluster-autoscaler.kubernetes.io/enable-ds-eviction**: The annotation works on pods and determines whether DaemonSet pods can be evicted by autoscaler. For details, see `Well-Known Labels, Annotations and Taints `__.

 Installing the Add-on
 ---------------------

-#. Log in to the CCE console, click the cluster name, and access the cluster console. Choose **Add-ons** in the navigation pane, locate **autoscaler** on the right, and click **Install**.
-#. Configure add-on installation parameters.
+#. Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane, locate **autoscaler** on the right, and click **Install**.
+#. On the **Install Add-on** page, configure the specifications.

 ..
table:: **Table 1** Specifications configuration @@ -74,14 +81,24 @@ Installing the Add-on | | | | | - **Single**: The add-on is deployed with only one pod. | | | - **HA50**: The add-on is deployed with two pods, serving a cluster with 50 nodes and ensuring high availability. | - | | - **HA200**: The add-on is deployed with two pods, serving a cluster with 50 nodes and ensuring high availability. Each pod uses more resources than those of the **HA50** specification. | + | | - **HA200**: The add-on is deployed with two pods, serving a cluster with 200 nodes and ensuring high availability. Each pod uses more resources than those of the **HA50** specification. | | | - **Custom**: You can customize the number of pods and specifications as required. | +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Multi AZ | - **Preferred**: Deployment pods of the add-on are preferentially scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, the pods are scheduled to a single AZ. | - | | - **Required**: Deployment pods of the add-on are forcibly scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, not all pods can run. | + | Pods | Number of pods that will be created to match the selected add-on specifications. | + | | | + | | If you select **Custom**, you can adjust the number of pods as required. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Multi-AZ | - **Preferred**: Deployment pods of the add-on are preferentially scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, the pods are scheduled to a single AZ. | + | | - **Required**: Deployment pods of the add-on will be forcibly scheduled to nodes in different AZs. If there are fewer AZs than pods, the extra pods will fail to run. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Containers | CPU and memory quotas of the container allowed for the selected add-on specifications. | + | | | + | | If you select **Custom**, you can adjust the container specifications as required. | +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - .. 
table:: **Table 2** Parameter configuration +#. Configure the add-on parameters. + + .. table:: **Table 2** Parameters +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Parameter | Description | @@ -106,11 +123,11 @@ Installing the Add-on | | | | | .. note:: | | | | - | | If both auto scale-out and scale-in exist in a cluster, you are advised to set **How long after a scale-out that a scale-in evaluation resumes** to 0 minutes. This can prevent the node scale-in from being blocked due to continuous scale-out of some node pools or retries upon a scale-out failure, resulting in unexpected waste of node resources. | + | | If both auto scale-out and scale-in exist in a cluster, set **How long after a scale-out that a scale-in evaluation resumes** to 0 minutes. This can prevent the node scale-in from being blocked due to continuous scale-out of some node pools or retries upon a scale-out failure, resulting in unexpected waste of node resources. | | | | | | How long after the node deletion that a scale-in evaluation resumes. Default value: 10 minutes. | | | | - | | How long after a scale-in failure that a scale-in evaluation resumes. Default value: 3 minutes. For details about the impact and relationship between the scale-in cooling intervals configured in the node pool and autoscaler, see :ref:`Description of the Scale-In Cool-Down Period `. | + | | How long after a scale-in failure that a scale-in evaluation resumes. Default value: 3 minutes. For details about the impact and relationship between the scale-in cooling intervals configured in the node pool and autoscaler, see :ref:`Scale-In Cool-Down Period `. | | | | | | - **Max. Nodes for Batch Deletion**: Maximum number of empty nodes that can be deleted at the same time. Default value: 10. | | | | @@ -131,10 +148,21 @@ Installing the Add-on #. After the configuration is complete, click **Install**. +Components +---------- + +.. table:: **Table 3** autoscaler + + =================== ==================================== ============= + Container Component Description Resource Type + =================== ==================================== ============= + autoscaler Auto scaling for Kubernetes clusters Deployment + =================== ==================================== ============= + .. _cce_10_0154__section59676731017: -Description of the Scale-In Cool-Down Period --------------------------------------------- +Scale-In Cool-Down Period +------------------------- Scale-in cooling intervals can be configured in the node pool settings and the autoscaler add-on settings. diff --git a/umn/source/add-ons/coredns_system_resource_add-on_mandatory.rst b/umn/source/add-ons/coredns_system_resource_add-on_mandatory.rst index 3fd0f80..8e27e3f 100644 --- a/umn/source/add-ons/coredns_system_resource_add-on_mandatory.rst +++ b/umn/source/add-ons/coredns_system_resource_add-on_mandatory.rst @@ -2,19 +2,19 @@ .. _cce_10_0129: -coredns (System Resource Add-On, Mandatory) +CoreDNS (System Resource Add-On, Mandatory) =========================================== Introduction ------------ -The coredns add-on is a DNS server that provides domain name resolution services for Kubernetes clusters. 
coredns chains plug-ins to provide additional features. +CoreDNS is a DNS server that provides domain name resolution services for Kubernetes clusters. CoreDNS chains plug-ins to provide additional features. -coredns is an open-source software and has been a part of CNCF. It provides a means for cloud services to discover each other in cloud-native deployments. Each of the plug-ins chained by coredns provides a particular DNS function. You can integrate coredns with only the plug-ins you need to make it fast, efficient, and flexible. When used in a Kubernetes cluster, coredns can automatically discover services in the cluster and provide domain name resolution for these services. By working with DNS server, coredns can resolve external domain names for workloads in a cluster. +CoreDNS is open-source software and a CNCF project. It provides a means for cloud services to discover each other in cloud-native deployments. Each of the plug-ins chained by CoreDNS provides a particular DNS function. You can integrate CoreDNS with only the plug-ins you need to make it fast, efficient, and flexible. When used in a Kubernetes cluster, CoreDNS can automatically discover services in the cluster and provide domain name resolution for these services. By working with an upstream DNS server, CoreDNS can resolve external domain names for workloads in a cluster. -**coredns is a system resource add-on. It is installed by default when a cluster of Kubernetes v1.11 or later is created.** +**This add-on is installed by default during cluster creation.** -Kubernetes v1.11 and later back CoreDNS as the official default DNS for all clusters going forward. +Kubernetes backs CoreDNS as the official default DNS server for all clusters. CoreDNS official website: https://coredns.io/ @@ -24,148 +24,181 @@ Open source community: https://github.com/coredns/coredns For details, see :ref:`DNS `. -Notes and Constraints ---------------------- +Constraints +----------- -When coredns is running properly or being upgraded, ensure that the number of available nodes is greater than or equal to the number of coredns instances and all coredns instances are running. Otherwise, the upgrade will fail. +When CoreDNS is running properly or being upgraded, ensure that the number of available nodes is greater than or equal to the number of CoreDNS instances and all CoreDNS instances are running. Otherwise, the upgrade will fail. Installing the Add-on --------------------- This add-on has been installed by default. If it is uninstalled due to some reasons, you can reinstall it by performing the following steps: -#. Log in to the CCE console, click the cluster name, and access the cluster console. Choose **Add-ons** in the navigation pane, locate **coredns** on the right, and click **Install**. +#. Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane, locate **coredns** on the right, and click **Install**. -#. On the **Install Add-on** page, select the add-on specifications and set related parameters. +#. On the **Install Add-on** page, configure the specifications. - .. table:: **Table 1** coredns add-on parameters + ..
table:: **Table 1** CoreDNS parameters - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ - | Add-on Specifications | Concurrent domain name resolution ability. Select add-on specifications that best fit your needs. | - | | | - | | If you select **Custom qps**, the domain name resolution QPS provided by CoreDNS is positively correlated with the CPU consumption. Adjust the number of pods and container CPU/memory quotas as required. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Pods | Number of pods that will be created to match the selected add-on specifications. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Multi AZ | - **Preferred**: Deployment pods of the add-on are preferentially scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, the pods are scheduled to a single AZ. 
| - | | - **Required**: Deployment pods of the add-on are forcibly scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, not all pods can run. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Containers | CPU and memory quotas of the container allowed for the selected add-on specifications. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameters | - **parameterSyncStrategy**: indicates whether to configure consistency check when an add-on is upgraded. | - | | | - | | - **ensureConsistent**: indicates that the configuration consistency check is enabled. If the configuration recorded in the cluster is inconsistent with the actual configuration, the add-on cannot be upgraded. | - | | - **force**: indicates that the configuration consistency check is ignored during an upgrade. Ensure that the current effective configuration is the same as the original configuration. After the add-on is upgraded, restore the value of **parameterSyncStrategy** to **ensureConsistent** and enable the configuration consistency check again. | - | | | - | | - **stub_domains**: A domain name server for a user-defined domain name. The format is a key-value pair. The key is a suffix of DNS domain name, and the value is one or more DNS IP addresses. | - | | | - | | - **upstream_nameservers**: IP address of the upstream DNS server. | - | | | - | | - **servers**: The servers configuration is available since coredns 1.23.1. You can customize the servers configuration. For details, see `dns-custom-nameservers `__. **plugins** indicates the configuration of each component in coredns (https://coredns.io/manual/plugins/). You are advised to retain the default configurations in common scenarios to prevent CoreDNS from being unavailable due to configuration errors. Each plugin component contains **name**, **parameters** (optional), and **configBlock** (optional). The format of the generated Corefile is as follows: | - | | | - | | $name $parameters { | - | | | - | | $configBlock | - | | | - | | } | - | | | - | | :ref:`Table 2 ` describes common plugins. | - | | | - | | Example: | - | | | - | | .. 
code-block:: | - | | | - | | { | - | | "servers": [ | - | | { | - | | "plugins": [ | - | | { | - | | "name": "bind", | - | | "parameters": "{$POD_IP}" | - | | }, | - | | { | - | | "name": "cache", | - | | "parameters": 30 | - | | }, | - | | { | - | | "name": "errors" | - | | }, | - | | { | - | | "name": "health", | - | | "parameters": "{$POD_IP}:8080" | - | | }, | - | | { | - | | "configBlock": "pods insecure\nfallthrough in-addr.arpa ip6.arpa", | - | | "name": "kubernetes", | - | | "parameters": "cluster.local in-addr.arpa ip6.arpa" | - | | }, | - | | { | - | | "name": "loadbalance", | - | | "parameters": "round_robin" | - | | }, | - | | { | - | | "name": "prometheus", | - | | "parameters": "{$POD_IP}:9153" | - | | }, | - | | { | - | | "configBlock": "policy random", | - | | "name": "forward", | - | | "parameters": ". /etc/resolv.conf" | - | | }, | - | | { | - | | "name": "reload" | - | | }, | - | | { | - | | "name": "log" | - | | } | - | | ], | - | | "port": 5353, | - | | "zones": [ | - | | { | - | | "zone": "." | - | | } | - | | ] | - | | } | - | | ], | - | | "stub_domains": { | - | | "acme.local": [ | - | | "1.2.3.4", | - | | "6.7.8.9" | - | | ] | - | | }, | - | | "upstream_nameservers": ["8.8.8.8", "8.8.4.4"] | - | | } | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=================================================================================================================================================================================================================+ + | Add-on Specifications | Concurrent domain name resolution ability. Select add-on specifications that best fit your needs. | + | | | + | | If you select **Custom qps**, the domain name resolution QPS provided by CoreDNS is positively correlated with the CPU consumption. Adjust the number of pods and container CPU/memory quotas as required. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Pods | Number of pods that will be created to match the selected add-on specifications. | + | | | + | | If you select **Custom qps**, you can adjust the number of pods as required. 
| + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Multi-AZ | - **Preferred**: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ. | + | | - **Required**: Deployment pods of the add-on will be forcibly scheduled to nodes in different AZs. If there are fewer AZs than pods, the extra pods will fail to run. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Containers | CPU and memory quotas of the container allowed for the selected add-on specifications. | + | | | + | | If you select **Custom qps**, you can adjust the container specifications as required. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - .. _cce_10_0129__table1420814384015: +#. Configure the add-on parameters. - .. table:: **Table 2** Default plugin configuration of the active zone of coredns + .. table:: **Table 2** CoreDNS add-on parameters - +-------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | plugin Name | Description | - +=============+======================================================================================================================================================================================+ - | bind | Host IP address listened by coredns. You are advised to retain the default value **{$POD_IP}**. | - +-------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | cache | DNS cache is enabled. | - +-------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | errors | Errors are logged to stdout. | - +-------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | health | Health check configuration. The current listening IP address is {$POD_IP}:8080. Retain the default value. Otherwise, the coredns health check fails and coredns restarts repeatedly. | - +-------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes | CoreDNS Kubernetes plug-in, which provides the service parsing capability in a cluster. 
| - +-------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | loadbalance | Round-robin DNS load balancer that randomizes the order of A, AAAA, and MX records in the answer. | - +-------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | prometheus | Port for obtaining coredns metrics. The default zone listening IP address is {$\ *POD_IP*}:9153. Retain the default value. Otherwise, CloudScope cannot collect coredns metrics. | - +-------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | forward | Any queries that are not within the cluster domain of Kubernetes will be forwarded to predefined resolvers (/etc/resolv.conf). | - +-------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | reload | The changed Corefile can be automatically reloaded. After editing the ConfigMap, wait for two minutes for the modification to take effect. | - +-------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=========================================================================================================================================================================================================================================================================================================================================================+ + | Stub domain settings | A domain name server for a custom domain name. The format is a key-value pair. The key is a domain name suffix, and the value is one or more DNS IP addresses, for example, **acme.local -- 1.2.3.4,6.7.8.9**. | + | | | + | | For details, see :ref:`Configuring the Stub Domain for CoreDNS `. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Advanced settings | - **parameterSyncStrategy**: indicates whether to configure consistency check when an add-on is upgraded. | + | | | + | | - **ensureConsistent**: indicates that the configuration consistency check is enabled. If the configuration recorded in the cluster is inconsistent with the actual configuration, the add-on cannot be upgraded. 
| + | | - **force**: indicates that the configuration consistency check is ignored during an upgrade. Ensure that the current effective configuration is the same as the original configuration. After the add-on is upgraded, restore the value of **parameterSyncStrategy** to **ensureConsistent** and enable the configuration consistency check again. | + | | - **inherit**: indicates that differentiated configurations are automatically inherited during an upgrade. After the add-on is upgraded, restore the value of **parameterSyncStrategy** to **ensureConsistent** and enable the configuration consistency check again. | + | | | + | | - **stub_domains**: A domain name server for a user-defined domain name. The format is a key-value pair. The key is a suffix of DNS domain name, and the value is one or more DNS IP addresses. | + | | | + | | - **upstream_nameservers**: IP address of the upstream DNS server. | + | | | + | | - servers:The servers configuration has been available since CoreDNS 1.23.1. You can customize the servers configuration. For details, see `dns-custom-nameservers `__. | + | | | + | | **plugins** indicates the configuration of each component in CoreDNS. Retain the default settings typically to prevent CoreDNS from being unavailable due to configuration errors. Each plugin component contains **name**, **parameters** (optional), and **configBlock** (optional). The format of the generated Corefile is as follows: | + | | | + | | .. code-block:: | + | | | + | | $name $parameters { | + | | $configBlock | + | | } | + | | | + | | :ref:`Table 3 ` describes common plugins. For details, see `Plugins `__. | + | | | + | | Example: | + | | | + | | .. code-block:: | + | | | + | | { | + | | "servers": [ | + | | { | + | | "plugins": [ | + | | { | + | | "name": "bind", | + | | "parameters": "{$POD_IP}" | + | | }, | + | | { | + | | "name": "cache", | + | | "parameters": 30 | + | | }, | + | | { | + | | "name": "errors" | + | | }, | + | | { | + | | "name": "health", | + | | "parameters": "{$POD_IP}:8080" | + | | }, | + | | { | + | | "name": "ready", | + | | "{$POD_IP}:8081" | + | | }, | + | | { | + | | "configBlock": "pods insecure\nfallthrough in-addr.arpa ip6.arpa", | + | | "name": "kubernetes", | + | | "parameters": "cluster.local in-addr.arpa ip6.arpa" | + | | }, | + | | { | + | | "name": "loadbalance", | + | | "parameters": "round_robin" | + | | }, | + | | { | + | | "name": "prometheus", | + | | "parameters": "{$POD_IP}:9153" | + | | }, | + | | { | + | | "configBlock": "policy random", | + | | "name": "forward", | + | | "parameters": ". /etc/resolv.conf" | + | | }, | + | | { | + | | "name": "reload" | + | | } | + | | ], | + | | "port": 5353, | + | | "zones": [ | + | | { | + | | "zone": "." | + | | } | + | | ] | + | | } | + | | ], | + | | "stub_domains": { | + | | "acme.local": [ | + | | "1.2.3.4", | + | | "6.7.8.9" | + | | ] | + | | }, | + | | "upstream_nameservers": ["8.8.8.8", "8.8.4.4"] | + | | } | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -#. After the preceding configurations are complete, click **Install**. + .. _cce_10_0129__table0209443564: + + .. 
table:: **Table 3** Default plugin configuration of the active zone of CoreDNS + + +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | plugin Name | Description | + +=============+==============================================================================================================================================================================================================================================================================+ + | bind | Host IP address listened by CoreDNS. You are advised to retain the default value **{$POD_IP}**. For details, see `bind `__. | + +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | cache | DNS cache is enabled. For details, see `cache `__. | + +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | errors | Errors are logged to stdout. For details, see `errors `__. | + +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | health | Health check configuration. The current listening IP address is *{$POD_IP}*\ **:8080**. Retain the default setting. Otherwise, the CoreDNS health check fails and CoreDNS restarts repeatedly. For details, see `health `__. | + +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | ready | Whether the backend server is ready to receive traffic. The current listening port is {$POD_IP}:8081. If the backend server is not ready, CoreDNS suspends DNS resolution until the backend server is ready. For details, see `ready `__. | + +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kubernetes | CoreDNS Kubernetes plug-in, which provides the service parsing capability in a cluster. For details, see `kubernetes `__. | + +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | loadbalance | Round-robin DNS load balancer that randomizes the order of A, AAAA, and MX records in the answer. For details, see `loadbalance `__. 
| + +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | prometheus | Port for obtaining CoreDNS metrics. The default zone listening IP address is *{$POD_IP}*\ **:9153**. Retain the default setting. Otherwise, prometheus cannot collect CoreDNS metrics. For details about, see `prometheus `__. | + +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | forward | Any queries that are not within the cluster domain of Kubernetes will be forwarded to predefined resolvers (**/etc/resolv.conf**). For details, see `forward `__. | + +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | reload | The changed Corefile can be automatically reloaded. After editing the ConfigMap, wait for 2 minutes for the modification to take effect. For details, see `reload `__. | + +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. Click **Install**. + +Components +---------- + +.. table:: **Table 4** CoreDNS components + + =================== ======================= ============= + Container Component Description Resource Type + =================== ======================= ============= + CoreDNS DNS server for clusters Deployment + =================== ======================= ============= How Does Domain Name Resolution Work in Kubernetes? --------------------------------------------------- @@ -188,16 +221,16 @@ DNS policies can be set on a per-pod basis. Currently, Kubernetes supports four **With stub domain configurations**: If stub domains and upstream DNS servers are configured, DNS queries are routed according to the following flow: -#. The query is first sent to the DNS caching layer in coredns. +#. The query is first sent to the DNS caching layer in CoreDNS. #. From the caching layer, the suffix of the request is examined and then the request is forwarded to the corresponding DNS: - - Names with the cluster suffix, for example, **.cluster.local**: The request is sent to coredns. + - Names with the cluster suffix, for example, **.cluster.local**: The request is sent to CoreDNS. - Names with the stub domain suffix, for example, **.acme.local**: The request is sent to the configured custom DNS resolver that listens, for example, on 1.2.3.4. - Names that do not match the suffix (for example, **widget.com**): The request is forwarded to the upstream DNS. -.. figure:: /_static/images/en-us_image_0000001568902577.png +.. 
figure:: /_static/images/en-us_image_0000001647576960.png :alt: **Figure 1** Routing **Figure 1** Routing diff --git a/umn/source/add-ons/everest_system_resource_add-on_mandatory.rst b/umn/source/add-ons/everest_system_resource_add-on_mandatory.rst index 472ff66..64a2095 100644 --- a/umn/source/add-ons/everest_system_resource_add-on_mandatory.rst +++ b/umn/source/add-ons/everest_system_resource_add-on_mandatory.rst @@ -8,15 +8,15 @@ everest (System Resource Add-On, Mandatory) Introduction ------------ -Everest is a cloud native container storage system. Based on the Container Storage Interface (CSI), clusters of Kubernetes v1.15.6 or later obtain access to cloud storage services. +everest is a cloud native container storage system, which enables clusters of Kubernetes v1.15.6 or later to access cloud storage services through the Container Storage Interface. **everest is a system resource add-on. It is installed by default when a cluster of Kubernetes v1.15 or later is created.** -Notes and Constraints ---------------------- +Constraints +----------- - If your cluster is upgraded from v1.13 to v1.15, :ref:`storage-driver ` is replaced by everest (v1.1.6 or later) for container storage. The takeover does not affect the original storage functions. -- In version 1.2.0 of the everest add-on, **key authentication** is optimized when OBS is used. After the everest add-on is upgraded from a version earlier than 1.2.0, you need to restart all workloads that use OBS in the cluster. Otherwise, workloads may not be able to use OBS. +- In version 1.2.0 of the everest add-on, **key authentication** is optimized when OBS is used. After the everest add-on is upgraded from a version earlier than 1.2.0, restart all workloads that use OBS in the cluster. Otherwise, workloads may not be able to use OBS. - By default, this add-on is installed in **clusters of v1.15 and later**. For clusters of v1.13 and earlier, the :ref:`storage-driver ` add-on is installed by default. Installing the Add-on @@ -26,48 +26,112 @@ This add-on has been installed by default. If it is uninstalled due to some reas #. Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane, locate **everest** on the right, and click **Install**. -#. Select **Standalone**, **HA**, or **Custom** for **Add-on Specifications**. +#. On the **Install Add-on** page, configure the specifications. - The everest add-on contains the following containers. You can adjust the specifications as required. + .. table:: **Table 1** everest parameters - - **everest-csi-controller**: A Deployment workload. This container is responsible for creating, deleting, snapshotting, expanding, attaching, and detaching volumes. If the cluster version is 1.19 or later and the add-on version is 1.2.\ *x*, the pod of the everest-csi-driver component also has an everest-localvolume-manager container by default. This container manages the creation of LVM storage pools and local PVs on the node. 
+ +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+===============================================================================================================================================================================================================================================+ + | Add-on Specifications | Select **Single**, **Custom**, or **HA** for **Add-on Specifications**. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Pods | Number of pods that will be created to match the selected add-on specifications. | + | | | + | | If you select **Custom**, you can adjust the number of pods as required. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Multi-AZ | - **Preferred**: Deployment pods of the add-on are preferentially scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, the pods are scheduled to a single AZ. | + | | - **Required**: Deployment pods of the add-on will be forcibly scheduled to nodes in different AZs. If there are fewer AZs than pods, the extra pods will fail to run. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Containers | The everest add-on contains the everest-csi-controller and everest-csi-driver components. For details, see :ref:`Components `. | + | | | + | | If you select **Custom**, you can adjust the component specifications as required. The CPU and memory request values can be increased based on the number of nodes and PVCs. For details, see :ref:`Table 2 `. | + | | | + | | In non-typical scenarios, the formulas for estimating the limit values are as follows: | + | | | + | | - everest-csi-controller | + | | | + | | - CPU limit: 250m for 200 or fewer nodes, 350m for 1000 nodes, and 500m for 2000 nodes | + | | - Memory limit = (200 MiB + Number of nodes x 1 MiB + Number of PVCs x 0.2 MiB) x 1.2 | + | | | + | | - everest-csi-driver | + | | | + | | - CPU limit: 300 m for 200 or fewer nodes, 500 m for 1,000 nodes, and 800 m for 2,000 nodes | + | | - Memory limit: 300 MiB for 200 or fewer nodes, 600 MiB for 1000 nodes, and 900 MiB for 2000 nodes | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - .. note:: + .. _cce_10_0066__table10463555206: - If you select **Custom**, the recommended **everest-csi-controller** memory configuration is as follows: + .. 
table:: **Table 2** Recommended configuration limits in typical scenarios - - If the number of pods and PVCs is less than 2000, set the memory upper limit to 600 MiB. - - If the number of pods and PVCs is less than 5000, set the memory upper limit to 1 GiB. + +------------------------+----------+------------------+-----------------------------------------------------------+--------------------------------------------------------------+-----------------------------------------------------------+--------------------------------------------------------------+ + | Configuration Scenario | | | everest-csi-controller | | everest-csi-driver | | + +========================+==========+==================+===========================================================+==============================================================+===========================================================+==============================================================+ + | Nodes | PVs/PVCs | Add-on Instances | CPU (The limit value is the same as the requested value.) | Memory (The limit value is the same as the requested value.) | CPU (The limit value is the same as the requested value.) | Memory (The limit value is the same as the requested value.) | + +------------------------+----------+------------------+-----------------------------------------------------------+--------------------------------------------------------------+-----------------------------------------------------------+--------------------------------------------------------------+ + | 50 | 1000 | 2 | 250 m | 600 MiB | 300 m | 300 MiB | + +------------------------+----------+------------------+-----------------------------------------------------------+--------------------------------------------------------------+-----------------------------------------------------------+--------------------------------------------------------------+ + | 200 | 1,000 | 2 | 250 m | 1 GiB | 300 m | 300 MiB | + +------------------------+----------+------------------+-----------------------------------------------------------+--------------------------------------------------------------+-----------------------------------------------------------+--------------------------------------------------------------+ + | 1000 | 1000 | 2 | 350 m | 2 GiB | 500 m | 600 MiB | + +------------------------+----------+------------------+-----------------------------------------------------------+--------------------------------------------------------------+-----------------------------------------------------------+--------------------------------------------------------------+ + | 1000 | 5000 | 2 | 450 m | 3 GiB | 500 m | 600 MiB | + +------------------------+----------+------------------+-----------------------------------------------------------+--------------------------------------------------------------+-----------------------------------------------------------+--------------------------------------------------------------+ + | 2000 | 5000 | 2 | 550 m | 4 GiB | 800 m | 900 MiB | + +------------------------+----------+------------------+-----------------------------------------------------------+--------------------------------------------------------------+-----------------------------------------------------------+--------------------------------------------------------------+ + | 2000 | 10,000 | 2 | 650 m | 5 GiB | 800 m | 900 MiB | + 
+------------------------+----------+------------------+-----------------------------------------------------------+--------------------------------------------------------------+-----------------------------------------------------------+--------------------------------------------------------------+ - - **everest-csi-driver**: A DaemonSet workload. This container is responsible for mounting and unmounting PVs and resizing file systems. If the add-on version is 1.2.\ *x* and the region where the cluster is located supports node-attacher, the pod of the everest-csi-driver component also contains an everest-node-attacher container. This container is responsible for distributed EVS attaching. This configuration item is available in some regions. +#. Configure the add-on parameters. - .. note:: + .. table:: **Table 3** everest add-on parameters - If you select **Custom**, it is recommended that the **everest-csi-driver** memory limit be greater than or equal to 300 MiB. If the value is too small, the add-on container cannot be started and the add-on is unavailable. + +------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +====================================+===================================================================================================================================================================================================================================+ + | csi_attacher_worker_threads | Number of worker nodes that can concurrently attach EVS volumes. The default value is **60**. | + +------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | csi_attacher_detach_worker_threads | Number of worker nodes that can concurrently detach EVS volumes. The default value is **60**. | + +------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | volume_attaching_flow_ctrl | Maximum number of EVS volumes that can be attached by the everest add-on within 1 minute. The default value is **0**, indicating that the performance of attaching EVS volumes is determined by the underlying storage resources. 
| + +------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | cluster_id | Cluster ID | + +------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | default_vpc_id | ID of the VPC to which the cluster belongs | + +------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | disable_auto_mount_secret | Whether the default AK/SK can be used when an object bucket or parallel file system is mounted. The default value is **false**. | + +------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | enable_node_attacher | Whether to enable the attacher on the agent to process the `VolumeAttachment `__. | + +------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | flow_control | This field is left blank by default. You do not need to configure this parameter. | + +------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | over_subscription | Overcommitment ratio of the local storage pool (**local_storage**). The default value is **80**. If the size of the local storage pool is 100 GB, it can be overcommitted to 180 GB. | + +------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | project_id | ID of the project to which a cluster belongs | + +------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -#. Whether to deploy the add-on instance across multiple AZs. + .. note:: - - **Preferred**: Deployment pods of the add-on are preferentially scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, the pods are scheduled to a single AZ. - - **Required**: Deployment pods of the add-on are forcibly scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, not all pods can run. + In everest 1.2.26 or later, the performance of attaching a large number of EVS volumes has been optimized. The following parameters can be configured: -#. 
Set related parameters. + - csi_attacher_worker_threads + - csi_attacher_detach_worker_threads + - volume_attaching_flow_ctrl - In everest 1.2.26 or later, the performance of attaching a large number of EVS volumes is optimized. The following three parameters are provided: - - - **csi_attacher_worker_threads**: number of workers that can concurrently mount EVS volumes. The default value is **60**. - - **csi_attacher_detach_worker_threads**: number of workers that can concurrently unmount EVS volumes. The default value is **60**. - - **volume_attaching_flow_ctrl**: maximum number of EVS volumes that can be mounted by the everest add-on within one minute. The default value is **0**, indicating that the EVS volume mounting performance is determined by the underlying storage resources. - - The preceding three parameters are associated with each other and are constrained by the underlying storage resources in the region where the cluster is located. If you want to mount a large number of volumes (more than 500 EVS volumes per minute), you can contact the customer service personnel and configure the parameters under their guidance to prevent the everest add-on from running abnormally due to improper parameter settings. - - Other parameters - - - **cluster_id**: cluster ID - - **default_vpc_id**: ID of the VPC to which the data warehouse cluster belongs - - **disable_auto_mount_secret**: indicates whether the default AK/SK can be used when an object bucket or parallel file system is mounted. The default value is **false**. - - **enable_node_attacher**: indicates whether to enable the attacher on the agent to process the `VolumeAttachment `__. - - **flow_control**: This parameter is left blank by default. - - **over_subscription**: overcommitment ratio of the local storage pool (**local_storage**). The default value is **80**. If the size of the local storage pool is 100 GB, you can overcommit 180 GB. - - **project_id**: ID of the project to which the cluster belongs. + The preceding parameters are associated with each other and are constrained by the underlying storage resources in the region where the cluster is located. To attach a large number of volumes (more than 500 EVS volumes per minute), contact customer service and configure the parameters under their guidance to prevent the everest add-on from running abnormally due to improper parameter settings. #. Click **Install**. + +.. _cce_10_0066__section0377457163618: + +Components +---------- + +.. table:: **Table 4** everest components + + +------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+ + | Container Component | Description | Resource Type | + +========================+================================================================================================================================================================================================================================================================================================================================================================================+===============+ + | everest-csi-controller | Used to create, delete, snapshot, expand, attach, and detach storage volumes. 
If the cluster version is 1.19 or later and the add-on version is 1.2.\ *x*, the pod of the everest-csi-controller component also has an everest-localvolume-manager container by default. This container manages the creation of LVM storage pools and local PVs on the node. | Deployment | + +------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+ + | everest-csi-driver | Used to mount and unmount PVs and resize file systems. If the add-on version is 1.2.\ *x* and the region where the cluster is located supports node-attacher, the pod of the everest-csi-driver component also contains an everest-node-attacher container. This container is responsible for distributed EVS attaching. This configuration item is available in some regions. | DaemonSet | + +------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+ diff --git a/umn/source/add-ons/gpu-beta.rst b/umn/source/add-ons/gpu-beta.rst index 9803358..2fc10eb 100644 --- a/umn/source/add-ons/gpu-beta.rst +++ b/umn/source/add-ons/gpu-beta.rst @@ -8,49 +8,75 @@ gpu-beta Introduction ------------ -gpu-beta is a device management add-on that supports GPUs in containers. If GPU nodes are used in the cluster, the gpu-beta add-on must be installed. +gpu-beta is a device management add-on that supports GPUs in containers. If GPU nodes are used in the cluster, this add-on must be installed. -Notes and Constraints ---------------------- +Constraints +----------- - The driver to be downloaded must be a **.run** file. - Only NVIDIA Tesla drivers are supported, not GRID drivers. - When installing or reinstalling the add-on, ensure that the driver download address is correct and accessible. CCE does not verify the address validity. - The gpu-beta add-on only enables you to download the driver and execute the installation script. The add-on status only indicates that how the add-on is running, not whether the driver is successfully installed. +- CCE does not guarantee the compatibility between the GPU driver version and the CDUA library version of your application. You need to check the compatibility by yourself. +- If a GPU driver has been added to a custom OS image, CCE cannot ensure that the GPU driver is compatible with other GPU components such as the monitoring components used in CCE. Installing the Add-on --------------------- -#. Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane, locate **gpu-beta** on the right, and click **Install**. -#. Configure the driver link. +#. Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane, locate **gpu-beta** or **gpu-device-plugin** on the right, and click **Install**. +#. On the **Install Add-on** page, configure the specifications. - .. important:: + .. 
table:: **Table 1** Add-on specifications - - If the download link is a public network address, for example, **https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run**, bind an EIP to each GPU node. For details about how to obtain the driver link, see :ref:`Obtaining the Driver Link from Public Network `. - - If the download link is an OBS URL, you do not need to bind an EIP to GPU nodes. - - Ensure that the NVIDIA driver version matches the GPU node. - - After the driver version is changed, restart the node for the change to take effect. + +-----------------------------------+----------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+========================================================================================+ + | Add-on Specifications | Select **Default** or **Custom**. | + +-----------------------------------+----------------------------------------------------------------------------------------+ + | Containers | CPU and memory quotas of the container allowed for the selected add-on specifications. | + | | | + | | If you select **Custom**, you can adjust the container specifications as required. | + +-----------------------------------+----------------------------------------------------------------------------------------+ + +#. Configure the add-on parameters. + + - **NVIDIA Driver**: Enter the link for downloading the NVIDIA driver. All GPU nodes in the cluster will use this driver. + + .. important:: + + - If the download link is a public network address, for example, **https://us.download.nvidia.com/tesla/470.103.01/NVIDIA-Linux-x86_64-470.103.01.run**, bind an EIP to each GPU node. For details about how to obtain the driver link, see :ref:`Obtaining the Driver Link from Public Network `. + - If the download link is an OBS URL, you do not need to bind an EIP to GPU nodes. For details about how to obtain the driver link, see :ref:`Obtaining the Driver Link from OBS `. + - Ensure that the NVIDIA driver version matches the GPU node. + - After the driver version is changed, restart the node for the change to take effect. #. Click **Install**. + .. note:: + + Uninstalling the add-on will clear the GPU driver on the nodes. As a result, GPU pods newly scheduled to the nodes cannot run properly, but running GPU pods are not affected. + Verifying the Add-on -------------------- After the add-on is installed, run the **nvidia-smi** command on the GPU node and the container that schedules GPU resources to verify the availability of the GPU device and driver. -GPU node: +- GPU node: -.. code-block:: + .. code-block:: - cd /opt/cloud/cce/nvidia/bin && ./nvidia-smi + # If the add-on version is earlier than 2.0.0, run the following command: + cd /opt/cloud/cce/nvidia/bin && ./nvidia-smi -Container: + # If the add-on version is 2.0.0 or later and the driver installation path is changed, run the following command: + cd /usr/local/nvidia/bin && ./nvidia-smi -.. code-block:: +- Container: - cd /usr/local/nvidia/bin && ./nvidia-smi + .. code-block:: -If GPU information is returned, the device is available and the add-on is successfully installed. + cd /usr/local/nvidia/bin && ./nvidia-smi + +If GPU information is returned, the device is available and the add-on has been installed. |image1| @@ -68,7 +94,7 @@ Obtaining the Driver Link from Public Network .. _cce_10_0141__fig11696366517: - .. figure:: /_static/images/en-us_image_0000001518062808.png + .. 
figure:: /_static/images/en-us_image_0000001695896741.png :alt: **Figure 1** Setting parameters **Figure 1** Setting parameters @@ -77,7 +103,7 @@ Obtaining the Driver Link from Public Network .. _cce_10_0141__fig7873421145213: - .. figure:: /_static/images/en-us_image_0000001517743660.png + .. figure:: /_static/images/en-us_image_0000001647577072.png :alt: **Figure 2** Driver information **Figure 2** Driver information @@ -90,9 +116,35 @@ Obtaining the Driver Link from Public Network .. _cce_10_0141__fig5901194614534: - .. figure:: /_static/images/en-us_image_0000001517903240.png + .. figure:: /_static/images/en-us_image_0000001647577080.png :alt: **Figure 3** Obtaining the link **Figure 3** Obtaining the link -.. |image1| image:: /_static/images/en-us_image_0000001518062812.png +.. _cce_10_0141__section14922133914508: + +Obtaining the Driver Link from OBS +---------------------------------- + +#. Upload the driver to OBS and set the driver file to public read. + + .. note:: + + When the node is restarted, the driver will be downloaded and installed again. Ensure that the OBS bucket link of the driver is valid. + +#. In the bucket list, click a bucket name, and then the **Overview** page of the bucket is displayed. +#. In the navigation pane, choose **Objects**. +#. Select the name of the target object and copy the driver link on the object details page. + +Components +---------- + +.. table:: **Table 2** GPU component + + +-------------------------+----------------------------------------------------+---------------+ + | Container Component | Description | Resource Type | + +=========================+====================================================+===============+ + | nvidia-driver-installer | Used for installing an NVIDIA driver on GPU nodes. | DaemonSet | + +-------------------------+----------------------------------------------------+---------------+ + +.. |image1| image:: /_static/images/en-us_image_0000001647417812.png diff --git a/umn/source/add-ons/index.rst b/umn/source/add-ons/index.rst index e622c8b..95e469c 100644 --- a/umn/source/add-ons/index.rst +++ b/umn/source/add-ons/index.rst @@ -6,14 +6,14 @@ Add-ons ======= - :ref:`Overview ` -- :ref:`coredns (System Resource Add-On, Mandatory) ` -- :ref:`storage-driver (System Resource Add-On, Discarded) ` +- :ref:`CoreDNS (System Resource Add-On, Mandatory) ` - :ref:`everest (System Resource Add-On, Mandatory) ` - :ref:`npd ` - :ref:`autoscaler ` - :ref:`metrics-server ` - :ref:`gpu-beta ` -- :ref:`volcano ` +- :ref:`Volcano ` +- :ref:`storage-driver (System Resource Add-On, Discarded) ` .. toctree:: :maxdepth: 1 @@ -21,10 +21,10 @@ Add-ons overview coredns_system_resource_add-on_mandatory - storage-driver_system_resource_add-on_discarded everest_system_resource_add-on_mandatory npd autoscaler metrics-server gpu-beta volcano + storage-driver_system_resource_add-on_discarded diff --git a/umn/source/add-ons/metrics-server.rst b/umn/source/add-ons/metrics-server.rst index 5212d15..e5d9628 100644 --- a/umn/source/add-ons/metrics-server.rst +++ b/umn/source/add-ons/metrics-server.rst @@ -9,7 +9,7 @@ From version 1.8 onwards, Kubernetes provides resource usage metrics, such as th metrics-server is an aggregator for monitoring data of core cluster resources. You can quickly install this add-on on the CCE console. -After metrics-server is installed, you can create an HPA policy on the **Workload Scaling** tab page of the **Auto Scaling** page. For details, see :ref:`Creating an HPA Policy for Workload Auto Scaling `. 
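As a quick sanity check (a minimal sketch assuming kubectl access to the cluster; the Deployment name **nginx** is only an illustrative placeholder, not part of the add-on), you can confirm that metrics-server is serving resource metrics and create a basic CPU-based HPA from the command line:

.. code-block::

   # Verify that the Metrics API is returning data (requires metrics-server to be running).
   kubectl top nodes
   kubectl top pods -A

   # Create a CPU-based HPA for an existing Deployment (example name only).
   kubectl autoscale deployment nginx --cpu-percent=70 --min=2 --max=5

   # Check the HPA status.
   kubectl get hpa

If **kubectl top** returns per-node and per-pod usage, the Metrics API is available and HPA policies created on the console or with kubectl can scale workloads based on it.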
+After metrics-server is installed, you can create an HPA policy on the **Workload Scaling** tab page of the **Auto Scaling** page. For details, see :ref:`HPA `. The official community project and documentation are available at https://github.com/kubernetes-sigs/metrics-server. @@ -17,14 +17,36 @@ Installing the Add-on --------------------- #. Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane, locate **metrics-server** on the right, and click **Install**. -#. Select **Single**, **Custom**, or **HA** for **Add-on Specifications**. +#. On the **Install Add-on** page, configure the specifications. - - **Pods**: Set the number of pods based on service requirements. - - **Multi AZ**: + .. table:: **Table 1** metrics-server configuration - - **Preferred**: Deployment pods of the add-on are preferentially scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, the pods are scheduled to a single AZ. - - **Required**: Deployment pods of the add-on are forcibly scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, not all pods can run. - - - **Containers**: Set a proper container quota based on service requirements. + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+==========================================================================================================================================================================================================================+ + | Add-on Specifications | Select **Single**, **Custom**, or **HA** for **Add-on Specifications**. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Pods | Number of pods that will be created to match the selected add-on specifications. | + | | | + | | If you select **Custom**, you can adjust the number of pods as required. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Multi-AZ | - **Preferred**: Deployment pods of the add-on are preferentially scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, the pods are scheduled to a single AZ. | + | | - **Required**: Deployment pods of the add-on will be forcibly scheduled to nodes in different AZs. If there are fewer AZs than pods, the extra pods will fail to run. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Containers | CPU and memory quotas of the container allowed for the selected add-on specifications. | + | | | + | | If you select **Custom**, you can adjust the container specifications as required. 
| + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ #. Click **Install**. + +Components +---------- + +.. table:: **Table 2** metrics-server components + + +----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+ + | Component | Description | Resource Type | + +================+============================================================================================================================================================================+===============+ + | metrics-server | Aggregator for the monitored data of cluster core resources, which is used to collect and aggregate resource usage metrics obtained through the Metrics API in the cluster | Deployment | + +----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+ diff --git a/umn/source/add-ons/npd.rst b/umn/source/add-ons/npd.rst index a858441..dd88359 100644 --- a/umn/source/add-ons/npd.rst +++ b/umn/source/add-ons/npd.rst @@ -16,10 +16,10 @@ Constraints ----------- - When using this add-on, do not format or partition node disks. -- Each npd process occupies 30 mCPUs and 100 MiB memory. +- Each npd process occupies 30 m CPU and 100 MB memory. -Permission Description ----------------------- +Permissions +----------- To monitor kernel logs, the npd add-on needs to read the host **/dev/kmsg**. Therefore, the privileged mode must be enabled. For details, see `privileged `__. @@ -31,81 +31,55 @@ In addition, CCE mitigates risks according to the least privilege principle. Onl Installing the Add-on --------------------- -#. Log in to the CCE console and access the cluster. Choose **Add-ons** from the navigation pane, locate **npd** on the right, and click **Install**. +#. Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane, locate **npd** on the right, and click **Install**. #. On the **Install Add-on** page, configure the specifications. .. table:: **Table 1** npd configuration - +-----------------------+----------------------------------------------------------------------------+ - | Parameter | Description | - +=======================+============================================================================+ - | Add-on specifications | The specifications can be **Custom**. | - +-----------------------+----------------------------------------------------------------------------+ - | Number of pods | If you select **Custom**, adjust the number of pods as required. | - +-----------------------+----------------------------------------------------------------------------+ - | Containers | If you select **Custom**, adjust the container specifications as required. | - +-----------------------+----------------------------------------------------------------------------+ + +-----------------------+------------------------------------------------------------------------------------+ + | Parameter | Description | + +=======================+====================================================================================+ + | Add-on Specifications | The specifications can be **Custom**. 
| + +-----------------------+------------------------------------------------------------------------------------+ + | Pods | If you select **Custom**, you can adjust the number of pods as required. | + +-----------------------+------------------------------------------------------------------------------------+ + | Containers | If you select **Custom**, you can adjust the container specifications as required. | + +-----------------------+------------------------------------------------------------------------------------+ #. Configure the add-on parameters. - Only 1.16.0 and later versions support the configurations. + Only v1.16.0 and later versions support the configurations. - .. table:: **Table 2** npd add-on parameters + .. table:: **Table 2** npd parameters - +-----------------------------------+------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+==================================================================================================================+ - | common.image.pullPolicy | An image pulling policy. The default value is **IfNotPresent**. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------+ - | feature_gates | A feature gate | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------+ - | npc.maxTaintedNode | Check how many nodes can npc add taints to for avoiding the impact when a single fault occurs on multiple nodes. | - | | | - | | The int format and percentage format are supported. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------+ - | npc.nodeAffinity | Node affinity of the controller | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------+ - -#. Configure scheduling policies of the add-on. + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+===========================================================================================================================+ + | common.image.pullPolicy | An image pulling policy. The default value is **IfNotPresent**. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------+ + | feature_gates | A feature gate | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------+ + | npc.maxTaintedNode | The maximum number of nodes that npc can add taints to when a single fault occurs on multiple nodes for minimizing impact | + | | | + | | The value can be in int or percentage format. 
| + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------+ + | npc.nodeAffinity | Node affinity of the controller | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------+ .. note:: - - Scheduling policies do not take effect on add-on instances of the DaemonSet type. - - When configuring multi-AZ deployment or node affinity, ensure that there are nodes meeting the scheduling policy and that resources are sufficient in the cluster. Otherwise, the add-on cannot run. - - .. table:: **Table 3** Configurations for add-on scheduling - - +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+====================================================================================================================================================================================================================================================+ - | Multi AZ | - **Preferred**: Deployment pods of the add-on are preferentially scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, the pods are scheduled to a single AZ. | - | | - **Required**: Deployment pods of the add-on are forcibly scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, not all pods can run. | - +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Node affinity | - **Incompatibility**: Node affinity is disabled for the add-on. | - | | | - | | - **Node Affinity**: Specify the nodes where the add-on is deployed. If you do not specify the node pool, the add-on will be randomly scheduled based on the default cluster scheduling policy. | - | | | - | | - **Specified Node Pool Scheduling**: Specify the node pool where the add-on is deployed. If you do not specify the node pool, the add-on will be randomly scheduled based on the default cluster scheduling policy. | - | | | - | | - Customize affinity: Enter the labels of the nodes where the add-on is to be deployed to implement more flexible scheduling policies. If not entered, random scheduling will be performed based on the default scheduling policy of the cluster. | - | | | - | | If multiple custom affinity policies are configured, ensure that there are nodes that meet all the affinity policies in the cluster. Otherwise, the add-on cannot run. 
| - +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Taints and Tolerations | Using both taints and tolerations allows (not forcibly) the Deployment pod of the add-on to be scheduled to a node with the matching taints, and controls the pod eviction policies after the node where the pod is located is tainted. | - | | | - | | The add-on adds the default tolerance policy for the **node.kubernetes.io/not-ready** and **node.kubernetes.io/unreachable** taints, respectively. The tolerance time window is 60s. | - +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + Only some parameters are listed here. For more information, see the details of the open-source project node-problem-detector. #. Click **Install**. Components ---------- -.. table:: **Table 4** npd component +.. table:: **Table 3** npd components +-------------------------+------------------------------------------------------------+---------------+ - | Component | Description | Resource Type | + | Container Component | Description | Resource Type | +=========================+============================================================+===============+ | node-problem-controller | Isolate faults basically based on fault detection results. | Deployment | +-------------------------+------------------------------------------------------------+---------------+ @@ -125,7 +99,7 @@ Check items cover events and statuses. For event-related check items, when a problem occurs, npd reports an event to the API server. The event type can be **Normal** (normal event) or **Warning** (abnormal event). - .. table:: **Table 5** Event-related check items + .. table:: **Table 4** Event-related check items +-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------+ | Check Item | Function | Description | @@ -151,11 +125,11 @@ Check items cover events and statuses. - Status-related - For status-related check items, when a problem occurs, npd reports an event to the API server and changes the node status synchronously. This function can be used together with :ref:`Node-problem-controller fault isolation ` to isolate nodes. + For status-related check items, when a problem occurs, npd reports an event to the API server and changes the node status synchronously. This function can be used together with :ref:`Node-problem-controller fault isolation ` to isolate nodes. **If the check period is not specified in the following check items, the default period is 30 seconds.** - .. table:: **Table 6** Checking system components + .. 
table:: **Table 5** Checking system components +-----------------------------------+-----------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ | Check Item | Function | Description | @@ -191,7 +165,7 @@ Check items cover events and statuses. | KubeProxyProblem | | | +-----------------------------------+-----------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ - .. table:: **Table 7** Checking system metrics + .. table:: **Table 6** Checking system metrics +--------------------------------+------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+ | Check Item | Function | Description | @@ -211,7 +185,7 @@ Check items cover events and statuses. | | | | | | | Currently, additional data disks are not supported. | +--------------------------------+------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+ - | Insufficient file handles | Check whether FD file handles are used up. | - Default threshold: 90% | + | Insufficient file handles | Check if the FD file handles are used up. | - Default threshold: 90% | | | | - Usage: the first value in **/proc/sys/fs/file-nr** | | FDProblem | | - Maximum value: the third value in **/proc/sys/fs/file-nr** | +--------------------------------+------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+ @@ -224,7 +198,7 @@ Check items cover events and statuses. | PIDProblem | | - Maximum value: smaller value between **/proc/sys/kernel/pid_max** and **/proc/sys/kernel/threads-max**. | +--------------------------------+------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+ - .. table:: **Table 8** Checking the storage + .. 
table:: **Table 7** Checking the storage +--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Check Item | Function | Description | @@ -319,44 +293,44 @@ Check items cover events and statuses. | | | If I/O requests are not responded and the **await** data is not updated, this check item is invalid. | +--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - .. table:: **Table 9** Other check items + .. table:: **Table 8** Other check items - +--------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | Check Item | Function | Description | - +==========================+=========================================================================================================================================================================================================+=========================================================================================================================================+ - | Abnormal NTP | Check whether the node clock synchronization service ntpd or chronyd is running properly and whether a system time drift is caused. 
| Default clock offset threshold: 8000 ms | - | | | | - | NTPProblem | | | - +--------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | Process D error | Check whether there is a process D on the node. | Default threshold: 10 abnormal processes detected for three consecutive times | - | | | | - | ProcessD | | Source: | - | | | | - | | | - /proc/{PID}/stat | - | | | - Alternately, you can run the **ps aux** command. | - | | | | - | | | Exceptional scenario: ProcessD ignores the resident D processes (heartbeat and update) on which the SDI driver on the BMS node depends. | - +--------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | Process Z error | Check whether the node has processes in Z state. | | - | | | | - | ProcessZ | | | - +--------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | ResolvConf error | Check whether the ResolvConf file is lost. | Object: **/etc/resolv.conf** | - | | | | - | ResolvConfFileProblem | Check whether the ResolvConf file is normal. | | - | | | | - | | Exceptional definition: No upstream domain name resolution server (nameserver) is included. | | - +--------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ - | Existing scheduled event | Check whether scheduled live migration events exist on the node. A live migration plan event is usually triggered by a hardware fault and is an automatic fault rectification method at the IaaS layer. | Source: | - | | | | - | ScheduledEvent | Typical scenario: The host is faulty. For example, the fan is damaged or the disk has bad sectors. As a result, live migration is triggered for VMs. | - http://169.254.169.254/meta-data/latest/events/scheduled | - | | | | - | | | This check item is an Alpha feature and is disabled by default. 
| - +--------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ + +--------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Check Item | Function | Description | + +==========================+=========================================================================================================================================================================================================+========================================================================================================================================================+ + | Abnormal NTP | Check whether the node clock synchronization service ntpd or chronyd is running properly and whether a system time drift is caused. | Default clock offset threshold: 8000 ms | + | | | | + | NTPProblem | | | + +--------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Process D error | Check whether there is a process D on the node. | Default threshold: 10 abnormal processes detected for three consecutive times | + | | | | + | ProcessD | | Source: | + | | | | + | | | - /proc/{PID}/stat | + | | | - Alternately, you can run the **ps aux** command. | + | | | | + | | | Exceptional scenario: The ProcessD check item ignores the resident D processes (heartbeat and update) on which the SDI driver on the BMS node depends. | + +--------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Process Z error | Check whether the node has processes in Z state. | | + | | | | + | ProcessZ | | | + +--------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+ + | ResolvConf error | Check whether the ResolvConf file is lost. | Object: **/etc/resolv.conf** | + | | | | + | ResolvConfFileProblem | Check whether the ResolvConf file is normal. | | + | | | | + | | Exceptional definition: No upstream domain name resolution server (nameserver) is included. 
| | + +--------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Existing scheduled event | Check whether scheduled live migration events exist on the node. A live migration plan event is usually triggered by a hardware fault and is an automatic fault rectification method at the IaaS layer. | Source: | + | | | | + | ScheduledEvent | Typical scenario: The host is faulty. For example, the fan is damaged or the disk has bad sectors. As a result, live migration is triggered for VMs. | - http://169.254.169.254/meta-data/latest/events/scheduled | + | | | | + | | | This check item is an Alpha feature and is disabled by default. | + +--------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+ The kubelet component has the following default check items, which have bugs or defects. You can fix them by upgrading the cluster or using npd. - .. table:: **Table 10** Default kubelet check items + .. table:: **Table 9** Default kubelet check items +-----------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Check Item | Function | Description | @@ -375,7 +349,7 @@ Check items cover events and statuses. | DiskPressure | | | +-----------------------------+------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -.. _cce_10_0132__en-us_topic_0000001244261007_section1471610580474: +.. _cce_10_0132__section1471610580474: Node-problem-controller Fault Isolation --------------------------------------- @@ -388,7 +362,7 @@ Node-problem-controller Fault Isolation The open source NPD plug-in provides fault detection but not fault isolation. CCE enhances the node-problem-controller (NPC) based on the open source NPD. This component is implemented based on the Kubernetes `node controller `__. For faults reported by NPD, NPC automatically adds taints to nodes for node fault isolation. -.. table:: **Table 11** Parameters +.. 
table:: **Table 10** Parameters +-----------------------+--------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ | Parameter | Description | Default | diff --git a/umn/source/add-ons/overview.rst b/umn/source/add-ons/overview.rst index cdb5d04..3869bca 100644 --- a/umn/source/add-ons/overview.rst +++ b/umn/source/add-ons/overview.rst @@ -9,18 +9,18 @@ CCE provides multiple types of add-ons to extend cluster functions and meet feat .. important:: - CCE uses Helm templates to deploy add-ons. To modify or upgrade an add-on, perform operations on the **Add-ons** page or use open APIs. Do not directly modify resources related to add-ons in the background. Otherwise, add-on exceptions or other unexpected problems may occur. + CCE uses Helm charts to deploy add-ons. To modify or upgrade an add-on, perform operations on the **Add-ons** page or use open add-on management APIs. Do not directly modify resources related to add-ons in the background. Otherwise, add-on exceptions or other unexpected problems may occur. .. table:: **Table 1** Add-on list +-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Add-on Name | Introduction | +=========================================================================+=================================================================================================================================================================================================================================================================================================================================+ - | :ref:`coredns (System Resource Add-On, Mandatory) ` | The coredns add-on is a DNS server that provides domain name resolution services for Kubernetes clusters. coredns chains plug-ins to provide additional features. | + | :ref:`CoreDNS (System Resource Add-On, Mandatory) ` | CoreDNS is a DNS server that provides domain name resolution for Kubernetes clusters through chain plug-ins. | +-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | :ref:`storage-driver (System Resource Add-On, Discarded) ` | storage-driver is a FlexVolume driver used to support IaaS storage services such as EVS, SFS, and OBS. | +-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`everest (System Resource Add-On, Mandatory) ` | Everest is a cloud native container storage system. 
Based on the Container Storage Interface (CSI), clusters of Kubernetes v1.15.6 or later obtain access to cloud storage services. | + | :ref:`everest (System Resource Add-On, Mandatory) ` | everest is a cloud native container storage system, which enables clusters of Kubernetes v1.15.6 or later to use cloud storage through the Container Storage Interface (CSI). | +-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | :ref:`npd ` | node-problem-detector (npd for short) is an add-on that monitors abnormal events of cluster nodes and connects to a third-party monitoring platform. It is a daemon running on each node. It collects node issues from different daemons and reports them to the API server. The npd add-on can run as a DaemonSet or a daemon. | +-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -28,7 +28,95 @@ CCE provides multiple types of add-ons to extend cluster functions and meet feat +-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | :ref:`metrics-server ` | metrics-server is an aggregator for monitoring data of core cluster resources. | +-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`gpu-beta ` | gpu-beta is a device management add-on that supports GPUs in containers. It supports only NVIDIA drivers. | + | :ref:`gpu-device-plugin (formerly gpu-beta) ` | gpu-device-plugin is a device management add-on that supports GPUs in containers. It supports only NVIDIA drivers. | +-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | :ref:`volcano ` | Volcano provides general-purpose, high-performance computing capabilities, such as job scheduling, heterogeneous chip management, and job running management, serving end users through computing frameworks for different industries, such as AI, big data, gene sequencing, and rendering. 
| +-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +Add-on Lifecycle +---------------- + +An add-on lifecycle involves all the statuses of the add-on from installation to uninstallation. + +.. table:: **Table 2** Add-on statuses + + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Status | Attribute | Description | + +=======================+=======================+==============================================================================================================================================================================+ + | Running | Stable state | The add-on is running properly, all add-on instances are deployed properly, and the add-on can be used properly. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Partially ready | Stable state | The add-on is running properly, but some add-on instances are not properly deployed. In this state, the add-on functions may be unavailable. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Unavailable | Stable state | The add-on malfunctions, and all add-on instances are not properly deployed. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Installing | Intermediate state | The add-on is being deployed. | + | | | | + | | | If all instances cannot be scheduled due to incorrect add-on configuration or insufficient resources, the system sets the add-on status to **Unavailable** 10 minutes later. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Installation failed | Stable state | Install add-on failed. Uninstall it and try again. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Upgrading | Intermediate state | The add-on is being upgraded. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Upgrade failed | Stable state | Upgrade add-on failed. Upgrade it again, or uninstall it and try again. 
| + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Rolling back | Intermediate state | The add-on is rolling back. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Rollback failed | Stable state | The add-on rollback failed. Retry the rollback, or uninstall it and try again. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Deleting | Intermediate state | The add-on is being deleted. | + | | | | + | | | If this state stays for a long time, an exception occurred. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Deletion failed | Stable state | Delete add-on failed. Try again. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Unknown | Stable state | No add-on chart found. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +.. note:: + + When an add-on is in an intermediate state such as **Installing** or **Deleting**, you are not allowed to edit or uninstall the add-on. + +Related Operations +------------------ + +You can perform the operations described in :ref:`Table 3 ` on the **Add-ons** page. + +.. _cce_10_0277__table1619535674020: + +.. table:: **Table 3** Related operations + + +-----------------------+---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------+ + | Operation | Description | Procedure | + +=======================+=======================================+====================================================================================================================================+ + | Install | Install a specified add-on. | #. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose **Add-ons**. | + | | | | + | | | #. Click **Install** under the target add-on. | + | | | | + | | | Each add-on has different configuration parameters. For details, see the corresponding chapter. | + | | | | + | | | #. Click **OK**. | + +-----------------------+---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------+ + | Upgrade | Upgrade an add-on to the new version. | #. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose **Add-ons**. | + | | | | + | | | #. If an add-on can be upgraded, the **Upgrade** button is displayed under it. 
| + | | | | + | | | Click **Upgrade**. Each add-on has different configuration parameters. For details, see the corresponding chapter. | + | | | | + | | | #. Click **OK**. | + +-----------------------+---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------+ + | Edit | Edit add-on parameters. | #. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose **Add-ons**. | + | | | | + | | | #. Click **Edit** under the target add-on. | + | | | | + | | | Each add-on has different configuration parameters. For details, see the corresponding chapter. | + | | | | + | | | #. Click **OK**. | + +-----------------------+---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------+ + | Uninstall | Uninstall an add-on from the cluster. | #. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose **Add-ons**. | + | | | | + | | | #. Click **Uninstall** under the target add-on. | + | | | | + | | | #. In the displayed dialog box, click **Yes**. | + | | | | + | | | This operation cannot be undone. | + +-----------------------+---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/add-ons/storage-driver_system_resource_add-on_discarded.rst b/umn/source/add-ons/storage-driver_system_resource_add-on_discarded.rst index 5f212e1..bb655d6 100644 --- a/umn/source/add-ons/storage-driver_system_resource_add-on_discarded.rst +++ b/umn/source/add-ons/storage-driver_system_resource_add-on_discarded.rst @@ -12,11 +12,11 @@ storage-driver functions as a standard Kubernetes FlexVolume plug-in to allow co **storage-driver is a system resource add-on. It is installed by default when a cluster of Kubernetes v1.13 or earlier is created.** -Notes and Constraints ---------------------- +Constraints +----------- -- For clusters created in CCE, Kubernetes v1.15.11 is a transitional version in which the FlexVolume plug-in (storage-driver) is compatible with the CSI plug-in (:ref:`everest `). Clusters of v1.17 and later versions do not support FlexVolume anymore. You need to use the everest add-on. -- The FlexVolume plug-in will be maintained by Kubernetes developers, but new functionality will only be added to CSI. You are advised not to create storage that connects to the FlexVolume plug-in (storage-driver) in CCE anymore. Otherwise, the storage resources may not function normally. +- For clusters created in CCE, Kubernetes v1.15.11 is a transitional version in which the FlexVolume add-on (storage-driver) is compatible with the CSI add-on (:ref:`everest `). Clusters of v1.17 and later versions do not support FlexVolume anymore. Use the everest add-on. +- The FlexVolume add-on will be maintained by Kubernetes developers, but new functionality will only be added to :ref:`everest (System Resource Add-On, Mandatory) `. Do not create CCE storage that connects to the FlexVolume add-on (storage-driver) anymore. Otherwise, storage may malfunction. - This add-on can be installed only in **clusters of v1.13 or earlier**. By default, the :ref:`everest ` add-on is installed when clusters of v1.15 or later are created. .. 
note:: @@ -30,5 +30,5 @@ This add-on has been installed by default. If it is uninstalled due to some reas If storage-driver is not installed in a cluster, perform the following steps to install it: -#. Log in to the CCE console, click the cluster name, and access the cluster console. Choose **Add-ons** in the navigation pane, locate **storage-driver** on the right, and click **Install**. +#. Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane, locate **storage-driver** on the right, and click **Install**. #. Click **Install** to install the add-on. Note that the storage-driver has no configurable parameters and can be directly installed. diff --git a/umn/source/add-ons/volcano.rst b/umn/source/add-ons/volcano.rst index 359789c..63e23d4 100644 --- a/umn/source/add-ons/volcano.rst +++ b/umn/source/add-ons/volcano.rst @@ -2,15 +2,15 @@ .. _cce_10_0193: -volcano +Volcano ======= Introduction ------------ -Volcano is a batch processing platform based on Kubernetes. It provides a series of features required by machine learning, deep learning, bioinformatics, genomics, and other big data applications, as a powerful supplement to Kubernetes capabilities. +`Volcano `__ is a batch processing platform based on Kubernetes. It provides a series of features required by machine learning, deep learning, bioinformatics, genomics, and other big data applications, as a powerful supplement to Kubernetes capabilities. -Volcano provides general-purpose, high-performance computing capabilities, such as job scheduling engine, heterogeneous chip management, and job running management, serving end users through computing frameworks for different industries, such as AI, big data, gene sequencing, and rendering. (Volcano has been open-sourced in GitHub.) +Volcano provides general-purpose, high-performance computing capabilities, such as job scheduling, heterogeneous chip management, and job running management, serving end users through computing frameworks for different industries, such as AI, big data, gene sequencing, and rendering. Volcano provides job scheduling, job management, and queue management for computing applications. Its main features are as follows: @@ -18,42 +18,84 @@ Volcano provides job scheduling, job management, and queue management for comput - Advanced scheduling capabilities are provided for batch computing and high-performance computing scenarios, including group scheduling, preemptive priority scheduling, packing, resource reservation, and task topology. - Queues can be effectively managed for scheduling jobs. Complex job scheduling capabilities such as queue priority and multi-level queues are supported. -Open source community: https://github.com/volcano-sh/volcano +Volcano has been open-sourced in GitHub at https://github.com/volcano-sh/volcano. + +Install and configure the Volcano add-on in CCE clusters. For details, see :ref:`Volcano Scheduling `. + +.. note:: + + When using Volcano as a scheduler, use it to schedule all workloads in the cluster. This prevents resource scheduling conflicts caused by simultaneous working of multiple schedulers. Installing the Add-on --------------------- -#. Log in to the CCE console, click the cluster name, and access the cluster console. Choose **Add-ons** in the navigation pane, locate **volcano** on the right, and click **Install**. +#. Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane, locate **volcano** on the right, and click **Install**. -#. 
Select **Standalone**, **Custom**, or **HA** for **Add-on Specifications**. +#. On the **Install Add-on** page, configure the specifications. - If you select **Custom**, the recommended values of **volcano-controller** and **volcano-scheduler** are as follows: + .. table:: **Table 1** Volcano specifications - - If the number of nodes is less than 100, retain the default configuration. That is, the CPU request value is **500m**, and the limit value is **2000m**. The memory request value is **500Mi**, and the limit value is **2000Mi**. - - If the number of nodes is greater than 100, increase the CPU request value by **500m** and the memory request value by **1000Mi** each time 100 nodes (10000 pods) are added. You are advised to increase the CPU limit value by **1500m** and the memory limit by **1000Mi**. + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+========================================================================================================================================================================================================================================================================================================================================================================+ + | Add-on Specifications | Select **Single**, **Custom**, or **HA** for **Add-on Specifications**. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Pods | Number of pods that will be created to match the selected add-on specifications. | + | | | + | | If you select **Custom**, you can adjust the number of pods as required. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Multi-AZ | - **Preferred**: Deployment pods of the add-on will be preferentially scheduled to nodes in different AZs. If all the nodes in the cluster are deployed in the same AZ, the pods will be scheduled to that AZ. | + | | - **Required**: Deployment pods of the add-on will be forcibly scheduled to nodes in different AZs. If there are fewer AZs than pods, the extra pods will fail to run. 
| + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Containers | CPU and memory quotas of the container allowed for the selected add-on specifications. | + | | | + | | If you select **Custom**, the recommended values for **volcano-controller** and **volcano-scheduler** are as follows: | + | | | + | | - If the number of nodes is less than 100, retain the default configuration. The requested CPU is 500 m, and the limit is 2000 m. The requested memory is 500 MiB, and the limit is 2000 MiB. | + | | - If the number of nodes is greater than 100, increase the requested CPU by 500 m and the requested memory by 1000 MiB each time 100 nodes (10,000 pods) are added. Increase the CPU limit by 1500 m and the memory limit by 1000 MiB. | + | | | + | | .. note:: | + | | | + | | Recommended formula for calculating the request value: | + | | | + | | - CPU request value: Calculate the number of target nodes multiplied by the number of target pods, perform interpolation search based on the number of nodes in the cluster multiplied by the number of target pods in :ref:`Table 2 `, and round up the request value and limit value that are closest to the specifications. | + | | | + | | For example, for 2000 nodes and 20,000 pods, Number of target nodes x Number of target pods = 40 million, which is close to the specification of 700/70000 (Number of cluster nodes x Number of pods = 49 million). According to the following table, you are advised to set the CPU request value to 4000 m and the limit value to 5500 m. | + | | | + | | - Memory request value: It is recommended that 2.4 GiB memory be allocated to every 1,000 nodes and 1 GiB memory be allocated to every 10,000 pods. The memory request value is the sum of these two values. (The obtained value may be different from the recommended value in :ref:`Table 2 `. You can use either of them.) | + | | | + | | Memory request = Number of target nodes/1000 x 2.4 GiB + Number of target pods/10000 x 1 GiB | + | | | + | | For example, for 2000 nodes and 20,000 pods, the memory request value is 6.8 GiB, that is, 2000/1000 x 2.4 GiB + 20000/10000 x 1 GiB. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - .. table:: **Table 1** Recommended values for volcano-controller and volcano-scheduler + .. 
_cce_10_0193__table4742829185912: - +--------------------+----------------+--------------+--------------------+------------------+ - | Number of Node/Pod | CPU Request(m) | CPU Limit(m) | Memory Request(Mi) | Memory Limit(Mi) | - +====================+================+==============+====================+==================+ - | 50/5k | 500 | 2000 | 500 | 2000 | - +--------------------+----------------+--------------+--------------------+------------------+ - | 100/1w | 1000 | 2500 | 1500 | 2500 | - +--------------------+----------------+--------------+--------------------+------------------+ - | 200/2w | 1500 | 3000 | 2500 | 3500 | - +--------------------+----------------+--------------+--------------------+------------------+ - | 300/3w | 2000 | 3500 | 3500 | 4500 | - +--------------------+----------------+--------------+--------------------+------------------+ - | 400/4w | 2500 | 4000 | 4500 | 5500 | - +--------------------+----------------+--------------+--------------------+------------------+ + .. table:: **Table 2** Recommended values for volcano-controller and volcano-scheduler -#. Determine whether to deploy the add-on pods across multiple AZs. + +-------------------------+-----------------+---------------+----------------------+--------------------+ + | Nodes/Pods in a Cluster | CPU Request (m) | CPU Limit (m) | Memory Request (MiB) | Memory Limit (MiB) | + +=========================+=================+===============+======================+====================+ + | 50/5,000 | 500 | 2000 | 500 | 2000 | + +-------------------------+-----------------+---------------+----------------------+--------------------+ + | 100/10,000 | 1000 | 2500 | 1500 | 2500 | + +-------------------------+-----------------+---------------+----------------------+--------------------+ + | 200/20,000 | 1500 | 3000 | 2500 | 3500 | + +-------------------------+-----------------+---------------+----------------------+--------------------+ + | 300/30,000 | 2000 | 3500 | 3500 | 4500 | + +-------------------------+-----------------+---------------+----------------------+--------------------+ + | 400/40,000 | 2500 | 4000 | 4500 | 5500 | + +-------------------------+-----------------+---------------+----------------------+--------------------+ + | 500/50,000 | 3000 | 4500 | 5500 | 6500 | + +-------------------------+-----------------+---------------+----------------------+--------------------+ + | 600/60,000 | 3500 | 5000 | 6500 | 7500 | + +-------------------------+-----------------+---------------+----------------------+--------------------+ + | 700/70,000 | 4000 | 5500 | 7500 | 8500 | + +-------------------------+-----------------+---------------+----------------------+--------------------+ - - **Preferred**: Deployment pods of the add-on are preferentially scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, the pods are scheduled to a single AZ. - - **Required**: Deployment pods of the add-on are forcibly scheduled to nodes in different AZs. If the nodes in the cluster do not meet the requirements of multiple AZs, not all pods can run. +#. Configure the add-on parameters. -#. Configure parameters of the volcano default scheduler. For details, see :ref:`Table 2 `. + Configure parameters of the default volcano scheduler. For details, see :ref:`Table 4 `. .. 
code-block:: @@ -78,89 +120,232 @@ Installing the Add-on - name: 'nodeemptydirvolume' - name: 'nodeCSIscheduling' - name: 'networkresource' + tolerations: + - effect: NoExecute + key: node.kubernetes.io/not-ready + operator: Exists + tolerationSeconds: 60 + - effect: NoExecute + key: node.kubernetes.io/unreachable + operator: Exists + tolerationSeconds: 60 + + .. table:: **Table 3** Advanced Volcano configuration parameters + + +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ + | Plug-in | Function | Description | Demonstration | + +========================+============================================================================================================================================================================================================+===========================================================================================================================================================================================================================+=========================================================================+ + | default_scheduler_conf | Used to schedule pods. It consists of a series of actions and plug-ins and features high scalability. You can specify and implement actions and plug-ins based on your requirements. | It consists of actions and tiers. | None | + | | | | | + | | | - **actions**: defines the types and sequence of actions to be executed by the scheduler. | | + | | | - **tiers**: configures the plug-in list. | | + +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ + | actions | Actions to be executed in each scheduling phase. The configured action sequence is the scheduler execution sequence. For details, see `Actions `__. | The following options are supported: | .. code-block:: | + | | | | | + | | The scheduler traverses all jobs to be scheduled and performs actions such as enqueue, allocate, preempt, reclaim, and backfill in the configured sequence to find the most appropriate node for each job. | - **enqueue**: uses a series of filtering algorithms to filter out tasks to be scheduled and sends them to the queue to wait for scheduling. After this action, the task status changes from **pending** to **inqueue**. | actions: 'allocate, backfill' | + | | | - **allocate**: selects the most suitable node based on a series of pre-selection and selection algorithms. | | + | | | - **preempt**: performs preemption scheduling for tasks with higher priorities in the same queue based on priority rules. | .. 
note:: | + | | | - **backfill**: schedules pending tasks as much as possible to maximize the utilization of node resources. | | + | | | | When configuring **actions**, use either **preempt** or **enqueue**. | + +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ + | plugins | Implementation details of algorithms in actions based on different scenarios. For details, see `Plugins `__. | For details, see :ref:`Table 4 `. | None | + +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ + | tolerations | Tolerance of the add-on to node taints. | By default, the add-on can run on nodes with the **node.kubernetes.io/not-ready** or **node.kubernetes.io/unreachable** taint and the taint effect value is **NoExecute**, but it'll be evicted in 60 seconds. | .. code-block:: | + | | | | | + | | | | tolerations: | + | | | | - effect: NoExecute | + | | | | key: node.kubernetes.io/not-ready | + | | | | operator: Exists | + | | | | tolerationSeconds: 60 | + | | | | - effect: NoExecute | + | | | | key: node.kubernetes.io/unreachable | + | | | | operator: Exists | + | | | | tolerationSeconds: 60 | + +------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ .. _cce_10_0193__table562185146: - .. table:: **Table 2** Volcano Plugins + .. 
table:: **Table 4** Supported plug-ins - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ - | Add-on | Function | Description | Demonstration | - +============================+=============================================================================================================================================================================================================================+==========================================================================================================================+=============================================================+ - | binpack | Schedules pods to nodes with high resource utilization to reduce resource fragments. | - **binpack.weight**: Weight of the binpack plugin. | .. code-block:: | - | | | - **binpack.cpu**: ratio of CPU resources to all resources. Defaults to **1**. | | - | | | - **binpack.memory**: Ratio of memory resources to all resources. Defaults to **1**. | - plugins: | - | | | - **binpack.resources**: resource type. | - name: binpack | - | | | | arguments: | - | | | | binpack.weight: 10 | - | | | | binpack.cpu: 1 | - | | | | binpack.memory: 1 | - | | | | binpack.resources: nvidia.com/gpu, example.com/foo | - | | | | binpack.resources.nvidia.com/gpu: 2 | - | | | | binpack.resources.example.com/foo: 3 | - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ - | conformance | The conformance plugin considers that the tasks in namespace **kube-system** have a higher priority. These tasks will not be preempted. | ``-`` | ``-`` | - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ - | gang | The gang plugin considers a group of pods as a whole to allocate resources. | ``-`` | ``-`` | - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ - | priority | The priority plugin schedules pods based on the custom workload priority. 
| ``-`` | ``-`` | - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ - | overcommit | Resources in a cluster are scheduled after being accumulated in a certain multiple to improve the workload enqueuing efficiency. If all workloads are Deployments, remove this plugin or set the raising factor to **2.0**. | **overcommit-factor**: Raising factor. Defaults to **1.2**. | .. code-block:: | - | | | | | - | | | | - plugins: | - | | | | - name: overcommit | - | | | | arguments: | - | | | | overcommit-factor: 2.0 | - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ - | drf | The DRF plugin schedules resources based on the container group Dominate Resource. The smallest Dominate Resource would be selected for priority scheduling. | ``-`` | ``-`` | - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ - | predicates | Determines whether a task is bound to a node by using a series of evaluation algorithms, such as node/pod affinity, taint tolerance, node port repetition, volume limits, and volume zone matching. | ``-`` | ``-`` | - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ - | nodeorder | The nodeorder plugin scores all nodes for a task by using a series of scoring algorithms. | - **nodeaffinity.weight**: Pods are scheduled based on the node affinity. Defaults to **1**. | .. code-block:: | - | | | - **podaffinity.weight**: Pods are scheduled based on the pod affinity. Defaults to **1**. | | - | | | - **leastrequested.weight**: Pods are scheduled to the node with the least resources. Defaults to **1**. | - plugins: | - | | | - **balancedresource.weight**: Pods are scheduled to the node with balanced resource. Defaults to **1**. | - name: nodeorder | - | | | - **mostrequested.weight**: Pods are scheduled to the node with the most requested resources. Defaults to **0**. | arguments: | - | | | - **tainttoleration.weight**: Pods are scheduled to the node with a high taint tolerance. Defaults to **1**. 
| leastrequested.weight: 1 | - | | | - **imagelocality.weight**: Pods are scheduled to the node where the required images exist. Defaults to **1**. | mostrequested.weight: 0 | - | | | - **selectorspread.weight**: Pods are evenly scheduled to different nodes. Defaults to **0**. | nodeaffinity.weight: 1 | - | | | - **volumebinding.weight**: Pods are scheduled to the node with the local PV delayed binding policy. Defaults to **1**. | podaffinity.weight: 1 | - | | | - **podtopologyspread.weight**: Pods are scheduled based on the pod topology. Defaults to **2**. | balancedresource.weight: 1 | - | | | | tainttoleration.weight: 1 | - | | | | imagelocality.weight: 1 | - | | | | volumebinding.weight: 1 | - | | | | podtopologyspread.weight: 2 | - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ - | cce-gpu-topology-predicate | GPU-topology scheduling preselection algorithm | ``-`` | ``-`` | - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ - | cce-gpu-topology-priority | GPU-topology scheduling priority algorithm | ``-`` | ``-`` | - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ - | cce-gpu | Works with the gpu add-on of CCE to support GPU resource allocation and decimal GPU configuration. | ``-`` | ``-`` | - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ - | numaaware | NUMA topology scheduling | weight: Weight of the numa-aware plugin. | ``-`` | - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ - | networkresource | The ENI requirement node can be preselected and filtered. The parameters are transferred by CCE and do not need to be manually configured. 
| NetworkType: Network type (eni or vpc-router). | ``-`` | - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ - | nodelocalvolume | The nodelocalvolume plugin filters out nodes that do not meet local volume requirements can be filtered out. | ``-`` | ``-`` | - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ - | nodeemptydirvolume | The nodeemptydirvolume plugin filters out nodes that do not meet the emptyDir requirements. | ``-`` | ``-`` | - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ - | nodeCSIscheduling | The nodeCSIscheduling plugin filters out nodes that have the everest component exception. | ``-`` | ``-`` | - +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + | Plug-in | Function | Description | Demonstration | + 
+============================+===================================================================================================================================================================================================================================================================================+=============================================================================================================================================================================================================================================================================+=============================================================+ + | binpack | Schedule pods to nodes with high resource usage (not allocating pods to light-loaded nodes) to reduce resource fragments. | **arguments**: | .. code-block:: | + | | | | | + | | | - **binpack.weight**: weight of the binpack plug-in. | - plugins: | + | | | - **binpack.cpu**: ratio of CPUs to all resources. The parameter value defaults to **1**. | - name: binpack | + | | | - **binpack.memory**: ratio of memory resources to all resources. The parameter value defaults to **1**. | arguments: | + | | | - **binpack.resources**: other custom resource types requested by the pod, for example, **nvidia.com/gpu**. Multiple types can be configured and be separated by commas (,). | binpack.weight: 10 | + | | | - **binpack.resources.**\ **: weight of your custom resource in all resources. Multiple types of resources can be added. ** indicates the resource type defined in **binpack.resources**, for example, **binpack.resources.nvidia.com/gpu**. | binpack.cpu: 1 | + | | | | binpack.memory: 1 | + | | | | binpack.resources: nvidia.com/gpu, example.com/foo | + | | | | binpack.resources.nvidia.com/gpu: 2 | + | | | | binpack.resources.example.com/foo: 3 | + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + | conformance | Prevent key pods, such as the pods in the **kube-system** namespace from being preempted. | None | .. code-block:: | + | | | | | + | | | | - plugins: | + | | | | - name: 'priority' | + | | | | - name: 'gang' | + | | | | enablePreemptable: false | + | | | | - name: 'conformance' | + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + | gang | Consider a group of pods as a whole for resource allocation. 
This plug-in checks whether the number of scheduled pods in a job meets the minimum requirements for running the job. If yes, all pods in the job will be scheduled. If no, the pods will not be scheduled. | **enablePreemptable**: | .. code-block:: | + | | | | | + | | .. note:: | - **true**: Preemption enabled | - plugins: | + | | | - **false**: Preemption not enabled | - name: priority | + | | If a gang scheduling policy is used, if the remaining resources in the cluster are greater than or equal to half of the minimum number of resources for running a job but less than the minimum of resources for running the job, autoscaler scale-outs will not be triggered. | | - name: gang | + | | | | enablePreemptable: false | + | | | | - name: conformance | + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + | priority | Schedule based on custom load priorities. | None | .. code-block:: | + | | | | | + | | | | - plugins: | + | | | | - name: priority | + | | | | - name: gang | + | | | | enablePreemptable: false | + | | | | - name: conformance | + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + | overcommit | Resources in a cluster are scheduled after being accumulated in a certain multiple to improve the workload enqueuing efficiency. If all workloads are Deployments, remove this plugin or set the raising factor to **2.0**. | **arguments**: | .. code-block:: | + | | | | | + | | .. note:: | - **overcommit-factor**: inflation factor, which defaults to **1.2**. | - plugins: | + | | | | - name: overcommit | + | | This plug-in is supported in Volcano 1.6.5 and later versions. 
| | arguments: | + | | | | overcommit-factor: 2.0 | + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + | drf | The Dominant Resource Fairness (DRF) scheduling algorithm, which schedules jobs based on their dominant resource share. Jobs with a smaller resource share will be scheduled with a higher priority. | None | .. code-block:: | + | | | | | + | | | | - plugins: | + | | | | - name: 'drf' | + | | | | - name: 'predicates' | + | | | | - name: 'nodeorder' | + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + | predicates | Determine whether a task is bound to a node by using a series of evaluation algorithms, such as node/pod affinity, taint tolerance, node repetition, volume limits, and volume zone matching. | None | .. code-block:: | + | | | | | + | | | | - plugins: | + | | | | - name: 'drf' | + | | | | - name: 'predicates' | + | | | | - name: 'nodeorder' | + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + | nodeorder | A common algorithm for selecting nodes. Nodes are scored in simulated resource allocation to find the most suitable node for the current job. | Scoring parameters: | .. code-block:: | + | | | | | + | | | - **nodeaffinity.weight**: Pods are scheduled based on node affinity. This parameter defaults to **1**. | - plugins: | + | | | - **podaffinity.weight**: Pods are scheduled based on pod affinity. This parameter defaults to **1**. | - name: nodeorder | + | | | - **leastrequested.weight**: Pods are scheduled to the node with the least requested resources. This parameter defaults to **1**. | arguments: | + | | | - **balancedresource.weight**: Pods are scheduled to the node with balanced resource allocation. This parameter defaults to **1**. 
| leastrequested.weight: 1 | + | | | - **mostrequested.weight**: Pods are scheduled to the node with the most requested resources. This parameter defaults to **0**. | mostrequested.weight: 0 | + | | | - **tainttoleration.weight**: Pods are scheduled to the node with a high taint tolerance. This parameter defaults to **1**. | nodeaffinity.weight: 1 | + | | | - **imagelocality.weight**: Pods are scheduled to the node where the required images exist. This parameter defaults to **1**. | podaffinity.weight: 1 | + | | | - **selectorspread.weight**: Pods are evenly scheduled to different nodes. This parameter defaults to **0**. | balancedresource.weight: 1 | + | | | - **volumebinding.weight**: Pods are scheduled to the node with the local PV delayed binding policy. This parameter defaults to **1**. | tainttoleration.weight: 1 | + | | | - **podtopologyspread.weight**: Pods are scheduled based on the pod topology. This parameter defaults to **2**. | imagelocality.weight: 1 | + | | | | volumebinding.weight: 1 | + | | | | podtopologyspread.weight: 2 | + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + | cce-gpu-topology-predicate | GPU-topology scheduling preselection algorithm | None | .. code-block:: | + | | | | | + | | | | - plugins: | + | | | | - name: 'cce-gpu-topology-predicate' | + | | | | - name: 'cce-gpu-topology-priority' | + | | | | - name: 'cce-gpu' | + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + | cce-gpu-topology-priority | GPU-topology scheduling priority algorithm | None | .. 
code-block:: | + | | | | | + | | | | - plugins: | + | | | | - name: 'cce-gpu-topology-predicate' | + | | | | - name: 'cce-gpu-topology-priority' | + | | | | - name: 'cce-gpu' | + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + | cce-gpu | GPU resource allocation that supports decimal GPU configurations by working with the gpu add-on. | None | .. code-block:: | + | | | | | + | | | | - plugins: | + | | | | - name: 'cce-gpu-topology-predicate' | + | | | | - name: 'cce-gpu-topology-priority' | + | | | | - name: 'cce-gpu' | + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + | numa-aware | NUMA affinity scheduling. | **arguments**: | .. code-block:: | + | | | | | + | | | - **weight**: weight of the numa-aware plug-in | - plugins: | + | | | | - name: 'nodelocalvolume' | + | | | | - name: 'nodeemptydirvolume' | + | | | | - name: 'nodeCSIscheduling' | + | | | | - name: 'networkresource' | + | | | | arguments: | + | | | | NetworkType: vpc-router | + | | | | - name: numa-aware | + | | | | arguments: | + | | | | weight: 10 | + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + | networkresource | The ENI requirement node can be preselected and filtered. The parameters are transferred by CCE and do not need to be manually configured. | **arguments**: | .. 
code-block:: | + | | | | | + | | | - **NetworkType**: network type (**eni** or **vpc-router**) | - plugins: | + | | | | - name: 'nodelocalvolume' | + | | | | - name: 'nodeemptydirvolume' | + | | | | - name: 'nodeCSIscheduling' | + | | | | - name: 'networkresource' | + | | | | arguments: | + | | | | NetworkType: vpc-router | + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + | nodelocalvolume | Filter out nodes that do not meet local volume requirements. | None | .. code-block:: | + | | | | | + | | | | - plugins: | + | | | | - name: 'nodelocalvolume' | + | | | | - name: 'nodeemptydirvolume' | + | | | | - name: 'nodeCSIscheduling' | + | | | | - name: 'networkresource' | + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + | nodeemptydirvolume | Filter out nodes that do not meet the emptyDir requirements. | None | .. code-block:: | + | | | | | + | | | | - plugins: | + | | | | - name: 'nodelocalvolume' | + | | | | - name: 'nodeemptydirvolume' | + | | | | - name: 'nodeCSIscheduling' | + | | | | - name: 'networkresource' | + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ + | nodeCSIscheduling | Filter out nodes with malfunctional everest. | None | .. 
code-block:: | + | | | | | + | | | | - plugins: | + | | | | - name: 'nodelocalvolume' | + | | | | - name: 'nodeemptydirvolume' | + | | | | - name: 'nodeCSIscheduling' | + | | | | - name: 'networkresource' | + +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+ #. Click **Install**. -Modifying the volcano-scheduler Configuration Using the Console ---------------------------------------------------------------- +Components +---------- + +.. table:: **Table 5** Volcano components + + +---------------------+-------------------------------------------------------------------------------------------------------------------+---------------+ + | Container Component | Description | Resource Type | + +=====================+===================================================================================================================+===============+ + | volcano-scheduler | Schedule pods. | Deployment | + +---------------------+-------------------------------------------------------------------------------------------------------------------+---------------+ + | volcano-controller | Synchronize CRDs. | Deployment | + +---------------------+-------------------------------------------------------------------------------------------------------------------+---------------+ + | volcano-admission | Webhook server, which verifies and modifies resources such as pods and jobs | Deployment | + +---------------------+-------------------------------------------------------------------------------------------------------------------+---------------+ + | volcano-agent | Cloud native hybrid agent, which is used for node QoS assurance, CPU burst, and dynamic resource oversubscription | DaemonSet | + +---------------------+-------------------------------------------------------------------------------------------------------------------+---------------+ + | resource-exporter | Report the NUMA topology information of nodes. | DaemonSet | + +---------------------+-------------------------------------------------------------------------------------------------------------------+---------------+ + +Modifying the volcano-scheduler Configurations Using the Console +---------------------------------------------------------------- + +Volcano scheduler is the component responsible for pod scheduling. It consists of a series of actions and plug-ins. Actions should be executed in every step. Plugins provide the action algorithm details in different scenarios. volcano-scheduler is highly scalable. You can specify and implement actions and plug-ins based on your requirements. Volcano allows you to configure the scheduler during installation, upgrade, and editing. The configuration will be synchronized to volcano-scheduler-configmap. -This section describes how to configure the volcano scheduler. +This section describes how to configure volcano-scheduler. .. note:: - Only Volcano of v1.7.1 and later support this function. 
On the new plug-in page, options such as **plugins.eas_service** and **resource_exporter_enable** are replaced by **default_scheduler_conf**. + Only Volcano of v1.7.1 and later support this function. On the new plugin page, options such as **plugins.eas_service** and **resource_exporter_enable** are replaced by **default_scheduler_conf**. -Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane. On the right of the page, locate **volcano** and click **Install** or **Upgrade**. In the **Parameters** area, configure the volcano scheduler parameters. +Log in to the CCE console and access the cluster console. Choose **Add-ons** in the navigation pane. On the right of the page, locate **volcano** and click **Install** or **Upgrade**. In the **Parameters** area, configure the volcano-scheduler parameters. - Using **resource_exporter**: @@ -235,7 +420,7 @@ Log in to the CCE console and access the cluster console. Choose **Add-ons** in "server_key": "" } - After this function is enabled, you can use the functions of the numa-aware plug-in and resource_exporter at the same time. + After this function is enabled, you can use the functions of the numa-aware plugin and resource_exporter at the same time. - Using **eas_service**: @@ -395,8 +580,8 @@ Log in to the CCE console and access the cluster console. Choose **Add-ons** in "server_key": "" } -Retaining the Original volcano-scheduler-configmap Configuration ----------------------------------------------------------------- +Retaining the Original volcano-scheduler-configmap Configurations +----------------------------------------------------------------- If you want to use the original configuration after the plug-in is upgraded, perform the following steps: @@ -436,7 +621,7 @@ If you want to use the original configuration after the plug-in is upgraded, per - name: nodeCSIscheduling - name: networkresource -#. Enter the customized content in the **Parameters** on the console. +#. Enter the customized content in the **Parameters** area on the console. .. code-block:: @@ -518,3 +703,22 @@ If you want to use the original configuration after the plug-in is upgraded, per .. note:: When this function is used, the original content in volcano-scheduler-configmap will be overwritten. Therefore, you must check whether volcano-scheduler-configmap has been modified during the upgrade. If yes, synchronize the modification to the upgrade page. + +Uninstalling the Volcano Add-on +------------------------------- + +After the add-on is uninstalled, all custom Volcano resources (:ref:`Table 6 `) will be deleted, including the created resources. Reinstalling the add-on will not inherit or restore the tasks before the uninstallation. It is a good practice to uninstall the Volcano add-on only when no custom Volcano resources are being used in the cluster. + +.. _cce_10_0193__table148801381540: + +.. 
table:: **Table 6** Custom Volcano resources + + ============ ===================== =========== ============== + Item API Group API Version Resource Level + ============ ===================== =========== ============== + Command bus.volcano.sh v1alpha1 Namespaced + Job batch.volcano.sh v1alpha1 Namespaced + Numatopology nodeinfo.volcano.sh v1alpha1 Cluster + PodGroup scheduling.volcano.sh v1beta1 Namespaced + Queue scheduling.volcano.sh v1beta1 Cluster + ============ ===================== =========== ============== diff --git a/umn/source/auto_scaling/overview.rst b/umn/source/auto_scaling/overview.rst index b10fff2..5e48a71 100644 --- a/umn/source/auto_scaling/overview.rst +++ b/umn/source/auto_scaling/overview.rst @@ -12,7 +12,7 @@ Context More and more applications are developed based on Kubernetes. It becomes increasingly important to quickly scale out applications on Kubernetes to cope with service peaks and to scale in applications during off-peak hours to save resources and reduce costs. -In a Kubernetes cluster, auto scaling involves pods and nodes. A pod is an application instance. Each pod contains one or more containers and runs on a node (VM or bare-metal server). If a cluster does not have sufficient nodes to run new pods, you need to add nodes to the cluster to ensure service running. +In a Kubernetes cluster, auto scaling involves pods and nodes. A pod is an application instance. Each pod contains one or more containers and runs on a node (VM or bare-metal server). If a cluster does not have sufficient nodes to run new pods, add nodes to the cluster to ensure service running. In CCE, auto scaling is used for online services, large-scale computing and training, deep learning GPU or shared GPU training and inference, periodic load changes, and many other scenarios. @@ -31,11 +31,11 @@ Components .. table:: **Table 1** Workload scaling components - +------+-------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------+ - | Type | Component Name | Component Description | Reference | - +======+=====================================+====================================================================================================================================================================================+=======================================================================+ - | HPA | :ref:`metrics-server ` | A built-in component of Kubernetes, which enables horizontal scaling of pods. It adds the application-level cooldown time window and scaling threshold functions based on the HPA. 
| :ref:`Creating an HPA Policy for Workload Auto Scaling ` | - +------+-------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------+ + +------+-------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------+ + | Type | Component Name | Component Description | Reference | + +======+=====================================+====================================================================================================================================================================================+==========================+ + | HPA | :ref:`metrics-server ` | A built-in component of Kubernetes, which enables horizontal scaling of pods. It adds the application-level cooldown time window and scaling threshold functions based on the HPA. | :ref:`HPA ` | + +------+-------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------+ **Node scaling components are described as follows:** diff --git a/umn/source/auto_scaling/scaling_a_node/creating_a_node_scaling_policy.rst b/umn/source/auto_scaling/scaling_a_node/creating_a_node_scaling_policy.rst index 3ee3258..4cdebc3 100644 --- a/umn/source/auto_scaling/scaling_a_node/creating_a_node_scaling_policy.rst +++ b/umn/source/auto_scaling/scaling_a_node/creating_a_node_scaling_policy.rst @@ -17,15 +17,19 @@ Prerequisites Before using the node scaling function, you must install the :ref:`autoscaler ` add-on of v1.13.8 or later in the cluster. -Notes and Constraints ---------------------- +Constraints +----------- - Auto scaling policies apply to node pools. When the number of nodes in a node pool is 0 and the scaling policy is based on CPU or memory usage, node scaling is not triggered. +- When autoscaler is used, some taints or annotations may affect auto scaling. Therefore, do not use the following taints or annotations in clusters: + + - **ignore-taint.cluster-autoscaler.kubernetes.io**: The taint works on nodes. Kubernetes-native autoscaler supports protection against abnormal scale outs and periodically evaluates the proportion of available nodes in the cluster. When the proportion of non-ready nodes exceeds 45%, protection will be triggered. In this case, all nodes with the **ignore-taint.cluster-autoscaler.kubernetes.io** taint in the cluster are filtered out from the autoscaler template and recorded as non-ready nodes, which affects cluster scaling. + - **cluster-autoscaler.kubernetes.io/enable-ds-eviction**: The annotation works on pods, which determines whether DaemonSet pods can be evicted by autoscaler. For details, see `Well-Known Labels, Annotations and Taints `__. Procedure --------- -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. Choose **Node Scaling** in the navigation pane. - If **Uninstalled** is displayed next to the add-on name, click **Install**, set add-on parameters as required, and click **Install** to install the add-on. 
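For reference, the following is a minimal sketch of where the **cluster-autoscaler.kubernetes.io/enable-ds-eviction** annotation described in the constraints above would appear on a DaemonSet pod template, so that existing workloads can be checked before node scaling is enabled. The DaemonSet name, namespace, and image are illustrative only and are not taken from the CCE documentation.

.. code-block::

   apiVersion: apps/v1
   kind: DaemonSet
   metadata:
     name: log-agent            # illustrative name
     namespace: default
   spec:
     selector:
       matchLabels:
         app: log-agent
     template:
       metadata:
         labels:
           app: log-agent
         annotations:
           # "true" lets autoscaler evict this DaemonSet pod during a node scale-in;
           # as noted above, avoid relying on this annotation in clusters that use node scaling.
           cluster-autoscaler.kubernetes.io/enable-ds-eviction: "true"
       spec:
         containers:
         - name: agent
           image: nginx:latest    # placeholder image

One quick way to spot existing uses in a cluster is to search the pod manifests, for example with **kubectl get pods -A -o yaml | grep enable-ds-eviction**.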
diff --git a/umn/source/auto_scaling/scaling_a_node/managing_node_scaling_policies.rst b/umn/source/auto_scaling/scaling_a_node/managing_node_scaling_policies.rst index e99d4b6..4e2e9cb 100644 --- a/umn/source/auto_scaling/scaling_a_node/managing_node_scaling_policies.rst +++ b/umn/source/auto_scaling/scaling_a_node/managing_node_scaling_policies.rst @@ -15,7 +15,7 @@ Viewing a Node Scaling Policy You can view the associated node pool, rules, and scaling history of a node scaling policy and rectify faults according to the error information displayed. -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. Choose **Node Scaling** in the navigation pane and click |image1| in front of the policy to be viewed. #. In the expanded area, the **Associated Node Pools**, **Rules**, and **Scaling History** tab pages are displayed. If the policy is abnormal, locate and rectify the fault based on the error information. @@ -23,14 +23,14 @@ You can view the associated node pool, rules, and scaling history of a node scal You can also disable or enable auto scaling on the **Node Pools** page. - a. Log in to the CCE console and access the cluster console. - b. In the navigation pane, choose **Nodes** and switch to the **Node Pools** tab page. - c. Click **Edit** of the node pool to be operated. In the **Edit Node Pool** dialog box that is displayed, set the limits of the number of nodes. + a. Log in to the CCE console and click the cluster name to access the cluster console. + b. In the navigation pane, choose **Nodes** and switch to the **Node Pools** tab. + c. Locate the row containing the target node pool and click **Update Node Pool**. In the window that slides out from the right, enable **Auto Scaling**, and configure **Max. Nodes**, **Min. Nodes**, and **Cooldown Period**. Deleting a Node Scaling Policy ------------------------------ -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. Choose **Node Scaling** in the navigation pane and choose **More** > **Delete** next to the policy to be deleted. #. In the **Delete Node Scaling Policy** dialog box displayed, confirm whether to delete the policy. #. Click **Yes** to delete the policy. @@ -38,7 +38,7 @@ Deleting a Node Scaling Policy Editing a Node Scaling Policy ----------------------------- -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. Choose **Node Scaling** in the navigation pane and click **Edit** in the **Operation** column of the policy to be edited. #. On the **Edit Node Scaling Policy** page displayed, modify policy parameter values listed in :ref:`Table 1 `. #. After the configuration is complete, click **OK**. @@ -46,7 +46,7 @@ Editing a Node Scaling Policy Cloning a Node Scaling Policy ----------------------------- -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. Choose **Node Scaling** in the navigation pane and choose **More** > **Clone** next to the policy to be cloned. #. On the **Clone Node Scaling Policy** page displayed, certain parameters have been cloned. Add or modify other policy parameters based on service requirements. #. Click **OK**. 
@@ -54,8 +54,8 @@ Cloning a Node Scaling Policy Enabling or Disabling a Node Scaling Policy ------------------------------------------- -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. Choose **Node Scaling** in the navigation pane and click **Disable** in the **Operation** column of the policy to be disabled. If the policy is in the disabled state, click **Enable** in the **Operation** column of the policy. #. In the dialog box displayed, confirm whether to disable or enable the node policy. -.. |image1| image:: /_static/images/en-us_image_0000001517743464.png +.. |image1| image:: /_static/images/en-us_image_0000001695896485.png diff --git a/umn/source/auto_scaling/scaling_a_node/node_scaling_mechanisms.rst b/umn/source/auto_scaling/scaling_a_node/node_scaling_mechanisms.rst index 0fab9e6..d3c1859 100644 --- a/umn/source/auto_scaling/scaling_a_node/node_scaling_mechanisms.rst +++ b/umn/source/auto_scaling/scaling_a_node/node_scaling_mechanisms.rst @@ -5,9 +5,9 @@ Node Scaling Mechanisms ======================= -Kubernetes HPA is designed for pods. However, if the cluster resources are insufficient, you can only add nodes. Scaling of cluster nodes could be laborious. Now with clouds, you can add or delete nodes by simply calling APIs. +HPA is designed for pod-level scaling and can dynamically adjust the number of replicas based on workload metrics. However, if cluster resources are insufficient and new replicas cannot run, you can only scale out the cluster. -`autoscaler `__ is a component provided by Kubernetes for auto scaling of cluster nodes based on the pod scheduling status and resource usage. +`autoscaler `__ is an auto scaling component provided by Kubernetes. It automatically scales in or out nodes in a cluster based on the pod scheduling status and resource usage. It supports multiple scaling modes, such as multi-AZ, multi-pod-specifications, metric triggering, and periodic triggering, to meet the requirements of different node scaling scenarios. Prerequisites ------------- @@ -17,20 +17,27 @@ Before using the node scaling function, you must install the :ref:`autoscaler `__) + - Pods that cannot be scheduled to other nodes due to constraints such as affinity and anti-affinity policies + - Pods that have the **cluster-autoscaler.kubernetes.io/safe-to-evict: 'false'** annotation + - Pods (except those created by DaemonSets in the kube-system namespace) that exist in the kube-system namespace on the node + - Pods that are not created by the controller (Deployment/ReplicaSet/job/StatefulSet) + + .. note:: + + When a node meets the scale-in conditions, autoscaler adds the **DeletionCandidateOfClusterAutoscaler** taint to the node in advance to prevent pods from being scheduled to the node. After the autoscaler add-on is uninstalled, if the taint still exists on the node, manually delete it. autoscaler Architecture ----------------------- @@ -39,7 +46,7 @@ autoscaler Architecture .. _cce_10_0296__fig114831750115719: -.. figure:: /_static/images/en-us_image_0000001569182553.png +.. figure:: /_static/images/en-us_image_0000001695737013.png :alt: **Figure 1** autoscaler architecture **Figure 1** autoscaler architecture @@ -50,10 +57,34 @@ autoscaler Architecture - **Simulator**: Finds the nodes that meet the scale-in conditions in the scale-in scenario. 
- **Expander**: Selects an optimal node from the node pool picked out by the Estimator based on the user-defined policy in the scale-out scenario. Currently, the Expander has the following policies: - - **Random**: Selects a node pool randomly. If you have not specified a policy, **Random** is set by default. - - **most-Pods**: Selects the node pool that can host the largest number of unschedulable pods after the scale-out. If multiple node pools meet the requirement, a random node pool will be selected. - - **least-waste**: Selects the node pool that has the least CPU or memory resource waste after scale-out. - - **price**: Selects the node pool in which the to-be-added nodes cost least for scale-out. - - **priority**: Selects the node pool with the highest weight. The weights are user-defined. + .. table:: **Table 1** **Expander policies supported by CCE** -Currently, CCE supports all policies except **price**. By default, CCE add-ons use the **least-waste** policy. + +-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Policy | Description | Application Scenario | Example | + +=================+====================================================================================================================================================================================================================================================+==================================================================================================================================================================================================================================================================================================+=========================================================================================================================================================================================================+ + | Random | Randomly selects a schedulable node pool to perform the scale-out. | This policy is typically used as a basic backup for other complex policies. Only use this policy if the other policies cannot be used. | Assume that auto scaling is enabled for node pools 1 and 2 in the cluster and the scale-out upper limit is not reached. The policy for scaling out the number of replicas for a workload is as follows: | + | | | | | + | | | | #. Pending pods trigger the autoscaler to determine the scale-out process. | + | | | | #. autoscaler simulates the scheduling phase and evaluates that the pending pods can be scheduled to the added nodes in both node pools 1 and 2. | + | | | | #. autoscaler randomly selects node pool 1 or node pool 2 for scale-out. 
| + +-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | most-pods | A combined policy. It takes precedence over the random policy. | This policy is based on the maximum number of pods that can be scheduled. | Assume that auto scaling is enabled for node pools 1 and 2 in the cluster and the scale-out upper limit is not reached. The policy for scaling out the number of replicas for a workload is as follows: | + | | | | | + | | Preferentially selects the node pool that can schedule the most pods after scale-out. If multiple node pools meet the condition, the random policy is used for further decision-making. | | #. Pending pods trigger the autoscaler to determine the scale-out process. | + | | | | #. autoscaler simulates the scheduling phase and evaluates that some pending pods can be scheduled to the added nodes in both node pools 1 and 2. | + | | | | #. autoscaler evaluates that node pool 1 can schedule 20 new pods and node pool 2 can schedule only 10 new pods after scale-out. Therefore, autoscaler selects node pool 1 for scale-out. | + +-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | least-waste | A combined policy. It takes precedence over the random policy. | This policy uses the minimum waste score of CPU or memory resources as the selection criteria. | Assume that auto scaling is enabled for node pools 1 and 2 in the cluster and the scale-out upper limit is not reached. The policy for scaling out the number of replicas for a workload is as follows: | + | | | | | + | | autoscaler evaluates the overall CPU or memory allocation rate of the node pools and selects the node pool with the minimum CPU or memory waste. If multiple node pools meet the condition, the random policy is used for further decision-making. | The formula for calculating the minimum waste score (wastedScore) is as follows: | #. Pending pods trigger the autoscaler to determine the scale-out process. | + | | | | #. autoscaler simulates the scheduling phase and evaluates that some pending pods can be scheduled to the added nodes in both node pools 1 and 2. 
| + | | | - wastedCPU = (Total number of CPUs of the nodes to be scaled out - Total number of CPUs of the pods to be scheduled)/Total number of CPUs of the nodes to be scaled out | #. autoscaler evaluates that the minimum waste score of node pool 1 after scale-out is smaller than that of node pool 2. Therefore, autoscaler selects node pool 1 for scale-out. | + | | | - wastedMemory = (Total memory size of nodes to be scaled out - Total memory size of pods to be scheduled)/Total memory size of nodes to be scaled out | | + | | | - wastedScore = wastedCPU + wastedMemory | | + +-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | priority | A combined policy. The priorities for the policies are as follows: priority > least-waste > random. | This policy allows you to configure and manage the priorities of node pools or scaling groups through the console or API, while the least-waste policy can reduce the resource waste ratio in common scenarios. This policy has wider applicability and is used as the default selection policy. | Assume that auto scaling is enabled for node pools 1 and 2 in the cluster and the scale-out upper limit is not reached. The policy for scaling out the number of replicas for a workload is as follows: | + | | | | | + | | It is an enhanced least-waste policy configured based on the node pool or scaling group priority. If multiple node pools meet the condition, the least-waste policy is used for further decision-making. | | #. Pending pods trigger the autoscaler to determine the scale-out process. | + | | | | #. autoscaler simulates the scheduling phase and evaluates that some pending pods can be scheduled to the added nodes in both node pools 1 and 2. | + | | | | #. autoscaler evaluates that node pool 1 has a higher priority than node pool 2. Therefore, autoscaler selects node pool 1 for scale-out. 
| + +-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/auto_scaling/scaling_a_workload/creating_an_hpa_policy_for_workload_auto_scaling.rst b/umn/source/auto_scaling/scaling_a_workload/hpa.rst similarity index 91% rename from umn/source/auto_scaling/scaling_a_workload/creating_an_hpa_policy_for_workload_auto_scaling.rst rename to umn/source/auto_scaling/scaling_a_workload/hpa.rst index 9bff10e..4703a5c 100644 --- a/umn/source/auto_scaling/scaling_a_workload/creating_an_hpa_policy_for_workload_auto_scaling.rst +++ b/umn/source/auto_scaling/scaling_a_workload/hpa.rst @@ -2,18 +2,20 @@ .. _cce_10_0208: -Creating an HPA Policy for Workload Auto Scaling -================================================ +HPA +=== Horizontal Pod Autoscaling (HPA) in Kubernetes implements horizontal scaling of pods. In a CCE HPA policy, you can configure different cooldown time windows and scaling thresholds for different applications based on the Kubernetes HPA. Prerequisites ------------- -To use HPA policies, you need to install an add-on that can provide the metrics API, such as metrics-server and prometheus. +To use HPA, install an add-on that provides metrics APIs. Select one of the following add-ons based on your cluster version and actual requirements. -Notes and Constraints ---------------------- +- :ref:`metrics-server `: provides basic resource usage metrics, such as container CPU and memory usage. It is supported by all cluster versions. + +Constraints +----------- - HPA policies can be created only for clusters of v1.13 or later. @@ -21,14 +23,14 @@ Notes and Constraints For clusters of v1.19.10 and later, if an HPA policy is used to scale out a workload with EVS volume mounted, a new pod cannot be started because EVS disks cannot be attached. -Procedure ---------- +Creating an HPA Policy +---------------------- -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. In the navigation pane, choose **Workload Scaling**. Then click **Create HPA Policy** in the upper right corner. -#. Set policy parameters. +#. Configure the parameters. .. _cce_10_0208__table8638121213265: @@ -49,7 +51,7 @@ Procedure +--------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Cooldown Period | Interval between a scale-in and a scale-out. The unit is minute. 
**The interval cannot be shorter than 1 minute.** | | | | - | | **This parameter is supported only from clusters of v1.15 to v1.23.** | + | | **This parameter is supported only in clusters of v1.15 to v1.23.** | | | | | | This parameter indicates the interval between consecutive scaling operations. The cooldown period ensures that a scaling operation is initiated only when the previous one is completed and the system is running stably. | +--------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -82,14 +84,10 @@ Procedure +--------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Custom Policy (supported only in clusters of v1.15 or later) | .. note:: | | | | - | | Before setting a custom policy, you need to install an add-on that supports custom metric collection in the cluster, for example, prometheus add-on. | + | | Before creating a custom policy, install an add-on that supports custom metric collection (for example, prometheus) in the cluster. Ensure that the add-on can collect and report the custom metrics of the workloads. | | | | | | - **Metric Name**: name of the custom metric. You can select a name as prompted. | - | | | - | | For details, see :ref:`Custom Monitoring `. | - | | | | | - **Metric Source**: Select an object type from the drop-down list. You can select **Pod**. | - | | | | | - **Desired Value**: the average metric value of all pods. Number of pods to be scaled (rounded up) = (Current metric value/Desired value) x Number of current pods | | | | | | .. note:: | diff --git a/umn/source/auto_scaling/scaling_a_workload/index.rst b/umn/source/auto_scaling/scaling_a_workload/index.rst index b6508fa..88590f9 100644 --- a/umn/source/auto_scaling/scaling_a_workload/index.rst +++ b/umn/source/auto_scaling/scaling_a_workload/index.rst @@ -6,7 +6,7 @@ Scaling a Workload ================== - :ref:`Workload Scaling Mechanisms ` -- :ref:`Creating an HPA Policy for Workload Auto Scaling ` +- :ref:`HPA ` - :ref:`Managing Workload Scaling Policies ` .. toctree:: @@ -14,5 +14,5 @@ Scaling a Workload :hidden: workload_scaling_mechanisms - creating_an_hpa_policy_for_workload_auto_scaling + hpa managing_workload_scaling_policies diff --git a/umn/source/auto_scaling/scaling_a_workload/managing_workload_scaling_policies.rst b/umn/source/auto_scaling/scaling_a_workload/managing_workload_scaling_policies.rst index 77d4239..8356785 100644 --- a/umn/source/auto_scaling/scaling_a_workload/managing_workload_scaling_policies.rst +++ b/umn/source/auto_scaling/scaling_a_workload/managing_workload_scaling_policies.rst @@ -16,8 +16,8 @@ Checking an HPA Policy You can view the rules, status, and events of an HPA policy and handle exceptions based on the error information displayed. #. Log in to the CCE console and click the cluster name to access the cluster console. -#. In the navigation pane, choose **Workload Scaling**. On the **HPA Policies** tab page, click |image1| next to the target HPA policy. 
-#. In the expanded area, you can view the **Rules**, **Status**, and **Events** tab pages. If the policy is abnormal, locate and rectify the fault based on the error information. +#. In the navigation pane, choose **Policies**. On the **HPA Policies** tab page, click |image1| next to the target HPA policy. +#. In the expanded area, view the **Rule** and **Status** tabs. Click **View Events** in the **Operation** column. If the policy malfunctions, locate and rectify the fault based on the error message displayed on the page. .. note:: @@ -25,7 +25,7 @@ You can view the rules, status, and events of an HPA policy and handle exception a. Log in to the CCE console and click the cluster name to access the cluster console. b. In the navigation pane, choose **Workloads**. Click the workload name to view its details. - c. On the workload details page, switch to the **Auto Scaling** tab page to view the HPA policies. You can also view the scaling policies you configured in **Workload Scaling**. + c. On the workload details page, click the **Auto Scaling** tab to view the HPA policies. You can also view the scaling policies you configured in the **Workload Scaling** page. .. table:: **Table 1** Event types and names @@ -65,22 +65,22 @@ Updating an HPA Policy An HPA policy is used as an example. #. Log in to the CCE console and click the cluster name to access the cluster console. -#. In the navigation pane, choose **Workload Scaling**. Click **More** > **Edit** in the **Operation** column of the target HPA policy. -#. On the **Edit HPA Policy** page, set the policy parameters listed in :ref:`Table 1 `. +#. On the cluster console, choose **Workload Scaling** in the navigation pane. Locate the row that contains the target policy and choose **More** > **Edit** in the **Operation** column. +#. On the **Edit HPA Policy** page, configure the parameters as listed in :ref:`Table 1 `. #. Click **OK**. Editing the YAML File (HPA Policy) ---------------------------------- #. Log in to the CCE console and click the cluster name to access the cluster console. -#. In the navigation pane, choose **Workload Scaling**. Click **Edit YAML** in the **Operation** column of the target HPA policy. +#. In the navigation pane, choose **Policies**. Choose **Edit YAML** in the **Operation** column of the target HPA policy. #. In the **Edit YAML** dialog box displayed, edit or download the YAML file. Deleting an HPA Policy ---------------------- #. Log in to the CCE console and click the cluster name to access the cluster console. -#. In the navigation pane, choose **Workload Scaling**. Click **More** > **Delete** in the **Operation** column of the target policy. +#. In the navigation pane, choose **Policies**. Choose **Delete** > **Delete** in the **Operation** column of the target policy. #. In the dialog box displayed, click **Yes**. -.. |image1| image:: /_static/images/en-us_image_0000001568902521.png +.. 
|image1| image:: /_static/images/en-us_image_0000001695737185.png diff --git a/umn/source/auto_scaling/using_hpa_and_ca_for_auto_scaling_of_workloads_and_nodes.rst b/umn/source/auto_scaling/using_hpa_and_ca_for_auto_scaling_of_workloads_and_nodes.rst index c7b3de8..c5bd04c 100644 --- a/umn/source/auto_scaling/using_hpa_and_ca_for_auto_scaling_of_workloads_and_nodes.rst +++ b/umn/source/auto_scaling/using_hpa_and_ca_for_auto_scaling_of_workloads_and_nodes.rst @@ -10,7 +10,7 @@ Application Scenarios The best way to handle surging traffic is to automatically adjust the number of machines based on the traffic volume or resource usage, which is called scaling. -In CCE, the resources that can be used by containers are fixed during application deployment. Therefore, in auto scaling, pods are scaled first. The node resource usage increases only after the number of pods increases. Then, nodes can be scaled based on the node resource usage. How to configure auto scaling in CCE? +When pods or containers are used for deploying applications, an upper limit of available resources typically needs to be set for the pods or containers to prevent unlimited usage of node resources during peak hours. However, after the upper limit is reached, an application error may occur. To resolve this issue, scale out the number of pods so that the workload is shared among more pods. If the node resource usage increases to the point where newly added pods cannot be scheduled, scale out the number of nodes based on the node resource usage. Solution -------- @@ -23,7 +23,7 @@ As shown in :ref:`Figure 1 ` is the image repository address, which can be obtained on the SWR console. - - **[Organization name]**: name of the organization created in :ref:`4 `. - - **[Image name 2:Tag 2]**: desired image name and tag to be displayed on the SWR console. + - *{Image name 1:Tag 1}*: name and tag of the local image to be uploaded. + - *{Image repository address}*: the domain name at the end of the login command in :ref:`login command `. It can be obtained on the SWR console. + - *{Organization name}*: name of the :ref:`created organization `. + - *{Image name 2:Tag 2}*: desired image name and tag to be displayed on the SWR console. Example: @@ -130,7 +130,7 @@ Creating a Node Pool and a Node Scaling Policy #. Log in to the CCE console, access the created cluster, click **Nodes** on the left, click the **Node Pools** tab, and click **Create Node Pool** in the upper right corner. -#. Set node pool parameters, add a node with 2 vCPUs and 4 GiB memory, and enable auto scaling. +#. Set node pool parameters, add a node with 2 vCPUs and 4 GB memory, and enable auto scaling. - **Nodes**: Set it to **1**, indicating that one node is created by default when a node pool is created. - **Auto Scaling**: Enable the option, meaning that nodes will be automatically created or deleted in the node pool based on the cluster loads. @@ -174,7 +174,7 @@ Use the hpa-example image to create a Deployment with one replica. The image pat spec: containers: - name: container-1 - image: 'hpa-example:latest ' # Replace it with the address of the image you uploaded to SWR. + image: 'hpa-example:latest' # Replace it with the address of the image you uploaded to SWR. resources: limits: # The value of limits must be the same as that of requests to prevent flapping during scaling. cpu: 500m @@ -231,7 +231,9 @@ There are two other annotations. 
One annotation defines the CPU thresholds, indi - type: Resource resource: name: cpu - targetAverageUtilization: 50 + target: + type: Utilization + averageUtilization: 50 Set the parameters as follows if you are using the console. @@ -263,7 +265,7 @@ Observing the Auto Scaling Process .. note:: - If no EIP is displayed, the cluster node has not been assigned any EIP. You need to create one, bind it to the node, and synchronize node data. . + If no EIP is displayed, the cluster node has not been assigned any EIP. Allocate one, bind it to the node, and synchronize node data. . Observe the scaling process of the workload. @@ -379,7 +381,7 @@ Summary Using HPA and CA can easily implement auto scaling in most scenarios. In addition, the scaling process of nodes and pods can be easily observed. -.. |image1| image:: /_static/images/en-us_image_0000001518222700.png -.. |image2| image:: /_static/images/en-us_image_0000001568902661.png -.. |image3| image:: /_static/images/en-us_image_0000001569182741.png -.. |image4| image:: /_static/images/en-us_image_0000001569023029.png +.. |image1| image:: /_static/images/en-us_image_0000001647577020.png +.. |image2| image:: /_static/images/en-us_image_0000001647577036.png +.. |image3| image:: /_static/images/en-us_image_0000001647417772.png +.. |image4| image:: /_static/images/en-us_image_0000001695737425.png diff --git a/umn/source/best_practice/devops/interconnecting_gitlab_with_swr_and_cce_for_ci_cd.rst b/umn/source/best_practice/devops/interconnecting_gitlab_with_swr_and_cce_for_ci_cd.rst index 3d403b6..40f70b8 100644 --- a/umn/source/best_practice/devops/interconnecting_gitlab_with_swr_and_cce_for_ci_cd.rst +++ b/umn/source/best_practice/devops/interconnecting_gitlab_with_swr_and_cce_for_ci_cd.rst @@ -232,4 +232,4 @@ FAQs .. |image6| image:: /_static/images/en-us_image_0000001701704289.png .. |image7| image:: /_static/images/en-us_image_0000001653584820.png .. |image8| image:: /_static/images/en-us_image_0000001653584824.png -.. |image9| image:: /_static/images/en-us_image_0000001701704285.png +.. |image9| image:: /_static/images/en-us_image_0000001667910920.png diff --git a/umn/source/best_practice/storage/dynamically_creating_and_mounting_subdirectories_of_an_sfs_turbo_file_system.rst b/umn/source/best_practice/storage/dynamically_creating_and_mounting_subdirectories_of_an_sfs_turbo_file_system.rst index 9587660..bf39928 100644 --- a/umn/source/best_practice/storage/dynamically_creating_and_mounting_subdirectories_of_an_sfs_turbo_file_system.rst +++ b/umn/source/best_practice/storage/dynamically_creating_and_mounting_subdirectories_of_an_sfs_turbo_file_system.rst @@ -169,8 +169,8 @@ Creating a Deployment and Mounting an Existing Volume to the Deployment **kubectl create -f deployment-test.yaml** -Dynamically Creating a subpath Volume for a StatefulSet Deployment ------------------------------------------------------------------- +Dynamically Creating a subpath Volume for a StatefulSet +------------------------------------------------------- #. Create a YAML file for a StatefulSet, for example, **statefulset-test.yaml**. 
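As a rough, illustrative sketch only (the StorageClass name and image are assumptions rather than values taken from this document), such a StatefulSet typically declares a volumeClaimTemplate so that one subdirectory volume is dynamically provisioned for each replica:

.. code-block:: yaml

   apiVersion: apps/v1
   kind: StatefulSet
   metadata:
     name: statefulset-test
   spec:
     serviceName: statefulset-test             # a matching headless Service is assumed to exist
     replicas: 2
     selector:
       matchLabels:
         app: statefulset-test
     template:
       metadata:
         labels:
           app: statefulset-test
       spec:
         containers:
         - name: container-1
           image: nginx:latest                  # placeholder image
           volumeMounts:
           - name: data
             mountPath: /data
     volumeClaimTemplates:                      # one PVC (subdirectory) is created for each replica
     - metadata:
         name: data
       spec:
         accessModes:
         - ReadWriteMany
         resources:
           requests:
             storage: 10Gi
         storageClassName: sfsturbo-subpath-sc  # assumed name of the subpath StorageClass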
diff --git a/umn/source/change_history.rst b/umn/source/change_history.rst index 9a82306..4a31bf2 100644 --- a/umn/source/change_history.rst +++ b/umn/source/change_history.rst @@ -10,6 +10,13 @@ Change History +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Released On | What's New | +===================================+=======================================================================================================================================================================================================================================+ + | 2023-11-06 | - Deleted section "Storage Management: Flexvolume (Deprecated)". | + | | - Deleted section "Kubernetes Version Support Mechanism". | + | | - Added :ref:`Kubernetes Version Policy `. | + | | - Updated :ref:`Networking `. | + | | - Updated :ref:`Storage `. | + | | - Deleted the description of CentOS 7.7. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | 2023-08-15 | - Added :ref:`FAQs `. | | | - Added :ref:`Differences Between Helm v2 and Helm v3 and Adaptation Solutions `. | | | - Added :ref:`Deploying an Application Through the Helm v2 Client `. | diff --git a/umn/source/charts/deploying_an_application_through_the_helm_v3_client.rst b/umn/source/charts/deploying_an_application_through_the_helm_v3_client.rst deleted file mode 100644 index ba8568d..0000000 --- a/umn/source/charts/deploying_an_application_through_the_helm_v3_client.rst +++ /dev/null @@ -1,120 +0,0 @@ -:original_name: cce_10_0144.html - -.. _cce_10_0144: - -Deploying an Application Through the Helm v3 Client -=================================================== - -Prerequisites -------------- - -The Kubernetes cluster created on CCE has been connected to kubectl. For details, see :ref:`Using kubectl `. - -.. _cce_10_0144__en-us_topic_0226102212_en-us_topic_0179003017_section3719193213815: - -Installing Helm v3 ------------------- - -This document uses Helm v3.3.0 as an example. - -For other versions, visit https://github.com/helm/helm/releases. - -#. Download the Helm client from the VM connected to the cluster. - - .. code-block:: - - wget https://get.helm.sh/helm-v3.3.0-linux-amd64.tar.gz - -#. Decompress the Helm package. - - .. code-block:: - - tar -xzvf helm-v3.3.0-linux-amd64.tar.gz - -#. Copy Helm to the system path, for example, **/usr/local/bin/helm**. - - .. code-block:: - - mv linux-amd64/helm /usr/local/bin/helm - -#. Query the Helm version. - - .. code-block:: - - helm version - version.BuildInfo{Version:"v3.3.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"} - -Installing the Helm Chart -------------------------- - -If the charts provided by CCE do not meet requirements, download a chart and install it. - -You can obtain the required chart in the **stable** directory on this `website `__, download the chart, and upload it to the node. - -#. Download and decompress the obtained chart. Generally, the chart is in ZIP format. - - .. code-block:: - - unzip chart.zip - -#. Install the Helm chart. - - .. code-block:: - - helm install aerospike/ --generate-name - -#. 
After the installation is complete, run the **helm list** command to check the status of the chart releases. - -Common Issues -------------- - -- The following error message is displayed after the **helm version** command is run: - - .. code-block:: - - Client: - &version.Version{SemVer:"v3.3.0", - GitCommit:"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6", GitTreeState:"clean"} - E0718 11:46:10.132102 7023 portforward.go:332] an error occurred - forwarding 41458 -> 44134: error forwarding port 44134 to pod - d566b78f997eea6c4b1c0322b34ce8052c6c2001e8edff243647748464cd7919, uid : unable - to do port forwarding: socat not found. - Error: cannot connect to Tiller - - The preceding information is displayed because the socat is not installed. Run the following command to install the socat: - - **yum install socat -y** - -- When you run the **yum install socat -y** command on a node running EulerOS 2.9 and the following error message is displayed: - - No match for argument: socat - - Error: Unable to find a match: socat - - Manually download the socat image and run the following command to install it: - - **rpm -i socat-1.7.3.2-8.oe1.x86_64.rpm** - -- When the socat has been installed and the following error message is displayed after the **helm version** command is run: - - .. code-block:: - - $ helm version - Client: &version.Version{SemVer:"v3.3.0", GitCommit:"021cb0ac1a1b2f888144ef5a67b8dab6c2d45be6", GitTreeState:"clean"} - Error: cannot connect to Tiller - - The Helm chart reads the configuration certificate from the **.Kube/config** file to communicate with Kubernetes. The preceding error indicates that the kubectl configuration is incorrect. In this case, reconnect the cluster to kubectl. For details, see :ref:`Using kubectl `. - -- Storage fails to be created after you have connected to cloud storage services. - - This issue may be caused by the **annotation** field in the created PVC. Change the chart name and install the chart again. - -- If kubectl is not properly configured, the following error message is displayed after the **helm install** command is run: - - .. code-block:: - - # helm install prometheus/ --generate-name - WARNING: This chart is deprecated - Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version?timeout=32s": dial tcp [::1]:8080: connect: connection refused - - **Solution**: Configure kubeconfig for the node. For details, see :ref:`Using kubectl `. diff --git a/umn/source/clusters/cluster_overview/basic_cluster_information.rst b/umn/source/clusters/cluster_overview/basic_cluster_information.rst index 50114de..760a907 100644 --- a/umn/source/clusters/cluster_overview/basic_cluster_information.rst +++ b/umn/source/clusters/cluster_overview/basic_cluster_information.rst @@ -9,43 +9,18 @@ Basic Cluster Information For developers, Kubernetes is a cluster operating system. Kubernetes provides service discovery, scaling, load balancing, self-healing, and even leader election, freeing developers from infrastructure-related configurations. -When using Kubernetes, it is like you run a large number of servers as one and the method for deploying applications in Kubernetes is always the same. +Cluster Network +--------------- -Kubernetes Cluster Architecture -------------------------------- +A cluster network can be divided into three network types: -A Kubernetes cluster consists of master nodes (Masters) and worker nodes (Nodes). Applications are deployed on worker nodes, and you can specify the nodes for deployment. 
+- Node network: IP addresses are assigned to nodes in a cluster. +- Container network: IP addresses are assigned to containers in a cluster for communication. Currently, multiple container network models are supported, and each model has its own working mechanism. +- Service network: A Service is a Kubernetes object used to access containers. Each Service has a static IP address. -.. note:: +When you create a cluster, select a proper CIDR block for each network. Ensure that the CIDR blocks do not conflict with each other and have sufficient available IP addresses. **You cannot change the container network model after the cluster is created.** Plan the container network model properly in advance. - For a cluster created on CCE, the master node is hosted by CCE. You only need to create a node. - -The following figure shows the architecture of a Kubernetes cluster. - - -.. figure:: /_static/images/en-us_image_0000001568822869.png - :alt: **Figure 1** Kubernetes cluster architecture - - **Figure 1** Kubernetes cluster architecture - -**Master node** - -A master node is the machine where the control plane components run, including API server, Scheduler, Controller manager, and etcd. - -- API server: functions as a transit station for components to communicate with each other, receives external requests, and writes information to etcd. -- Controller manager: performs cluster-level functions, such as component replication, node tracing, and node fault fixing. -- Scheduler: schedules containers to nodes based on various conditions (such as available resources and node affinity). -- etcd: serves as a distributed data storage component that stores cluster configuration information. - -In a production environment, multiple master nodes are deployed to ensure high cluster availability. For example, you can deploy three master nodes for your CCE cluster. - -**Worker node** - -A worker node is a compute node in a cluster, that is, a node running containerized applications. A worker node has the following components: - -- kubelet: communicates with the container runtime, interacts with the API server, and manages containers on the node. -- kube-proxy: serves as an access proxy between application components. -- Container runtime: functions as the software for running containers. You can download images to build your container runtime, such as Docker. +You are advised to learn about the cluster network and container network models before creating a cluster. For details, see :ref:`Container Network Models `. Master Nodes and Cluster Scale ------------------------------ @@ -54,21 +29,6 @@ When you create a cluster on CCE, you can have one or three master nodes. Three The master node specifications decide the number of nodes that can be managed by a cluster. You can select the cluster management scale, for example, 50 or 200 nodes. -Cluster Network ---------------- - -From the perspective of the network, all nodes in a cluster are located in a VPC, and containers are running on the nodes. You need to configure node-node, node-container, and container-container communication. - -A cluster network can be divided into three network types: - -- Node network: IP addresses are assigned to nodes in a cluster. -- Container network: IP addresses are assigned to containers in a cluster for communication. Currently, multiple container network models are supported, and each model has its own working mechanism. -- Service network: A Service is a Kubernetes object used to access containers. 
Each Service has a fixed IP address. - -When you create a cluster, select a proper CIDR block for each network. Ensure that the CIDR blocks do not conflict with each other and have sufficient available IP addresses. **You cannot change the container network model after the cluster is created.** Plan the container network model properly in advance. - -You are advised to learn about the cluster network and container network models before creating a cluster. For details, see :ref:`Container Network Models `. - Cluster Lifecycle ----------------- @@ -81,10 +41,6 @@ Cluster Lifecycle +-------------+-------------------------------------------------------------------+ | Running | A cluster is running properly. | +-------------+-------------------------------------------------------------------+ - | Scaling-out | A node is being added to a cluster. | - +-------------+-------------------------------------------------------------------+ - | Scaling-in | A node is being deleted from a cluster. | - +-------------+-------------------------------------------------------------------+ | Hibernating | A cluster is hibernating. | +-------------+-------------------------------------------------------------------+ | Awaking | A cluster is being woken up. | diff --git a/umn/source/clusters/cluster_overview/comparing_iptables_and_ipvs.rst b/umn/source/clusters/cluster_overview/comparing_iptables_and_ipvs.rst deleted file mode 100644 index 65b82de..0000000 --- a/umn/source/clusters/cluster_overview/comparing_iptables_and_ipvs.rst +++ /dev/null @@ -1,42 +0,0 @@ -:original_name: cce_10_0349.html - -.. _cce_10_0349: - -Comparing iptables and IPVS -=========================== - -kube-proxy is a key component of a Kubernetes cluster. It is responsible for load balancing and forwarding between a Service and its backend pod. - -CCE supports two forwarding modes: iptables and IPVS. - -- IPVS allows higher throughput and faster forwarding. This mode applies to scenarios where the cluster scale is large or the number of Services is large. -- iptables is the traditional kube-proxy mode. This mode applies to the scenario where the number of Services is small or a large number of short connections are concurrently sent on the client. - -Notes and Constraints ---------------------- - -In a cluster using the IPVS proxy mode, if the ingress and Service use the same ELB load balancer, the ingress cannot be accessed from the nodes and containers in the cluster because kube-proxy mounts the LoadBalancer Service address to the ipvs-0 bridge. This bridge intercepts the traffic of the load balancer connected to the ingress. You are advised to use different ELB load balancers for the ingress and Service. - -iptables --------- - -iptables is a Linux kernel function that provides a large amount of data packet processing and filtering capabilities. It allows flexible sequences of rules to be attached to various hooks in the packet processing pipeline. When iptables is used, kube-proxy implements NAT and load balancing in the NAT pre-routing hook. - -kube-proxy is an O(n) algorithm, in which *n* increases with the cluster scale. The cluster scale refers to the number of Services and backend pods. - -IPVS ----- - -IP Virtual Server (IPVS) is constructed on top of Netfilter and implements transport-layer load balancing as part of the Linux kernel. IPVS can direct requests for TCP- and UDP-based services to the real servers, and make services of the real servers appear as virtual services on a single IP address. 
- -In the IPVS mode, kube-proxy uses IPVS load balancing instead of iptables. IPVS is designed to balance loads for a large number of Services. It has a set of optimized APIs and uses optimized search algorithms instead of simply searching for rules from a list. - -The complexity of the connection process of IPVS-based kube-proxy is O(1). In other words, in most cases, the connection processing efficiency is irrelevant to the cluster scale. - -IPVS involves multiple load balancing algorithms, such as round-robin, shortest expected delay, least connections, and various hashing methods. However, iptables has only one algorithm for random selection. - -Compared with iptables, IPVS has the following advantages: - -#. Provides better scalability and performance for large clusters. -#. Supports better load balancing algorithms than iptables. -#. Supports functions including server health check and connection retries. diff --git a/umn/source/clusters/cluster_overview/index.rst b/umn/source/clusters/cluster_overview/index.rst index 873092a..7b37878 100644 --- a/umn/source/clusters/cluster_overview/index.rst +++ b/umn/source/clusters/cluster_overview/index.rst @@ -6,17 +6,13 @@ Cluster Overview ================ - :ref:`Basic Cluster Information ` -- :ref:`CCE Turbo Clusters and CCE Clusters ` -- :ref:`Comparing iptables and IPVS ` -- :ref:`Release Notes ` -- :ref:`Cluster Patch Version Release Notes ` +- :ref:`Kubernetes Release Notes ` +- :ref:`Release Notes for CCE Cluster Versions ` .. toctree:: :maxdepth: 1 :hidden: basic_cluster_information - cce_turbo_clusters_and_cce_clusters - comparing_iptables_and_ipvs - release_notes/index - cluster_patch_version_release_notes + kubernetes_release_notes/index + release_notes_for_cce_cluster_versions diff --git a/umn/source/clusters/cluster_overview/kubernetes_release_notes/index.rst b/umn/source/clusters/cluster_overview/kubernetes_release_notes/index.rst new file mode 100644 index 0000000..b5cc152 --- /dev/null +++ b/umn/source/clusters/cluster_overview/kubernetes_release_notes/index.rst @@ -0,0 +1,22 @@ +:original_name: cce_10_0068.html + +.. _cce_10_0068: + +Kubernetes Release Notes +======================== + +- :ref:`Kubernetes 1.25 Release Notes ` +- :ref:`Kubernetes 1.23 Release Notes ` +- :ref:`Kubernetes 1.21 Release Notes ` +- :ref:`Kubernetes 1.19 Release Notes ` +- :ref:`Kubernetes 1.17 (EOM) Release Notes ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + kubernetes_1.25_release_notes + kubernetes_1.23_release_notes + kubernetes_1.21_release_notes + kubernetes_1.19_release_notes + kubernetes_1.17_eom_release_notes diff --git a/umn/source/clusters/cluster_overview/release_notes/cce_kubernetes_1.17_release_notes.rst b/umn/source/clusters/cluster_overview/kubernetes_release_notes/kubernetes_1.17_eom_release_notes.rst similarity index 96% rename from umn/source/clusters/cluster_overview/release_notes/cce_kubernetes_1.17_release_notes.rst rename to umn/source/clusters/cluster_overview/kubernetes_release_notes/kubernetes_1.17_eom_release_notes.rst index ee6e2ec..400d298 100644 --- a/umn/source/clusters/cluster_overview/release_notes/cce_kubernetes_1.17_release_notes.rst +++ b/umn/source/clusters/cluster_overview/kubernetes_release_notes/kubernetes_1.17_eom_release_notes.rst @@ -1,9 +1,9 @@ -:original_name: cce_10_0471.html +:original_name: cce_whsnew_0007.html -.. _cce_10_0471: +.. 
_cce_whsnew_0007: -CCE Kubernetes 1.17 Release Notes -================================= +Kubernetes 1.17 (EOM) Release Notes +=================================== CCE has passed the Certified Kubernetes Conformance Program and is a certified Kubernetes offering. This section describes the updates in CCE Kubernetes 1.17. diff --git a/umn/source/clusters/cluster_overview/release_notes/cce_kubernetes_1.19_release_notes.rst b/umn/source/clusters/cluster_overview/kubernetes_release_notes/kubernetes_1.19_release_notes.rst similarity index 96% rename from umn/source/clusters/cluster_overview/release_notes/cce_kubernetes_1.19_release_notes.rst rename to umn/source/clusters/cluster_overview/kubernetes_release_notes/kubernetes_1.19_release_notes.rst index 9720144..e3bab81 100644 --- a/umn/source/clusters/cluster_overview/release_notes/cce_kubernetes_1.19_release_notes.rst +++ b/umn/source/clusters/cluster_overview/kubernetes_release_notes/kubernetes_1.19_release_notes.rst @@ -1,16 +1,16 @@ -:original_name: cce_10_0470.html +:original_name: cce_whsnew_0010.html -.. _cce_10_0470: +.. _cce_whsnew_0010: -CCE Kubernetes 1.19 Release Notes -================================= +Kubernetes 1.19 Release Notes +============================= CCE has passed the Certified Kubernetes Conformance Program and is a certified Kubernetes offering. This section describes the updates in CCE Kubernetes 1.19. Resource Changes and Deprecations --------------------------------- -**Kubernetes 1.19 Release Notes** +**Kubernetes v1.19 Release Notes** - vSphere in-tree volumes can be migrated to vSphere CSI drivers. The in-tree vSphere Volume plugin is no longer used and will be deleted in later versions. - **apiextensions.k8s.io/v1beta1** has been deprecated. You are advised to use **apiextensions.k8s.io/v1**. @@ -27,7 +27,7 @@ Resource Changes and Deprecations - The alpha feature **ResourceLimitsPriorityFunction** has been deleted. - **storage.k8s.io/v1beta1** has been deprecated. You are advised to use **storage.k8s.io/v1**. -**Kubernetes 1.18 Release Notes** +**Kubernetes v1.18 Release Notes** - kube-apiserver diff --git a/umn/source/clusters/cluster_overview/release_notes/cce_kubernetes_1.21_release_notes.rst b/umn/source/clusters/cluster_overview/kubernetes_release_notes/kubernetes_1.21_release_notes.rst similarity index 97% rename from umn/source/clusters/cluster_overview/release_notes/cce_kubernetes_1.21_release_notes.rst rename to umn/source/clusters/cluster_overview/kubernetes_release_notes/kubernetes_1.21_release_notes.rst index a40fb66..cd09dcf 100644 --- a/umn/source/clusters/cluster_overview/release_notes/cce_kubernetes_1.21_release_notes.rst +++ b/umn/source/clusters/cluster_overview/kubernetes_release_notes/kubernetes_1.21_release_notes.rst @@ -1,9 +1,9 @@ -:original_name: cce_10_0469.html +:original_name: cce_bulletin_0026.html -.. _cce_10_0469: +.. _cce_bulletin_0026: -CCE Kubernetes 1.21 Release Notes -================================= +Kubernetes 1.21 Release Notes +============================= CCE has passed the Certified Kubernetes Conformance Program and is a certified Kubernetes offering. This section describes the updates in CCE Kubernetes 1.21. 
diff --git a/umn/source/clusters/cluster_overview/release_notes/cce_kubernetes_1.23_release_notes.rst b/umn/source/clusters/cluster_overview/kubernetes_release_notes/kubernetes_1.23_release_notes.rst similarity index 93% rename from umn/source/clusters/cluster_overview/release_notes/cce_kubernetes_1.23_release_notes.rst rename to umn/source/clusters/cluster_overview/kubernetes_release_notes/kubernetes_1.23_release_notes.rst index 9ddd2cb..dc70ef6 100644 --- a/umn/source/clusters/cluster_overview/release_notes/cce_kubernetes_1.23_release_notes.rst +++ b/umn/source/clusters/cluster_overview/kubernetes_release_notes/kubernetes_1.23_release_notes.rst @@ -1,19 +1,15 @@ -:original_name: cce_10_0468.html +:original_name: cce_bulletin_0027.html -.. _cce_10_0468: +.. _cce_bulletin_0027: -CCE Kubernetes 1.23 Release Notes -================================= +Kubernetes 1.23 Release Notes +============================= CCE has passed the Certified Kubernetes Conformance Program and is a certified Kubernetes offering. This section describes the updates in CCE Kubernetes 1.23. Resource Changes and Deprecations --------------------------------- -**Changes in CCE 1.23** - -- The web-terminal add-on is no longer supported. Use kubectl instead. - **Kubernetes 1.23 Release Notes** - FlexVolume is deprecated. Use CSI. diff --git a/umn/source/clusters/cluster_overview/kubernetes_release_notes/kubernetes_1.25_release_notes.rst b/umn/source/clusters/cluster_overview/kubernetes_release_notes/kubernetes_1.25_release_notes.rst new file mode 100644 index 0000000..cea8442 --- /dev/null +++ b/umn/source/clusters/cluster_overview/kubernetes_release_notes/kubernetes_1.25_release_notes.rst @@ -0,0 +1,165 @@ +:original_name: cce_bulletin_0058.html + +.. _cce_bulletin_0058: + +Kubernetes 1.25 Release Notes +============================= + +CCE has passed the Certified Kubernetes Conformance Program and is a certified Kubernetes offering. This document describes the changes made in Kubernetes 1.25 compared with Kubernetes 1.23. + +Indexes +------- + +- :ref:`New Features ` +- :ref:`Deprecations and Removals ` +- :ref:`Enhanced Kubernetes 1.25 on CCE ` +- :ref:`References ` + +.. _cce_bulletin_0058__en-us_topic_0000001596950457_en-us_topic_0000001389397618_en-us_topic_0000001430891141_en-us_topic_0000001072975092_section51381161799: + +New Features +------------ + +**Kubernetes 1.25** + +- Pod Security Admission is stable. PodSecurityPolicy is deprecated. + + PodSecurityPolicy is replaced by Pod Security Admission. For details about the migration, see `Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller `__. + +- The ephemeral container is stable. + + An `ephemeral container `__ is a container that runs temporarily in an existing pod. It is useful for troubleshooting, especially when kubectl exec cannot be used to check a container that breaks down or its image lacks a debugging tool. + +- Support for cgroups v2 enters the stable phase. + + Kubernetes supports cgroups v2. cgroups v2 provides some improvements over cgroup v1. For details, see `About cgroup v2 `__. + +- SeccompDefault moves to beta. + + To enable this feature, add the startup parameter **--seccomp-default=true** to kubelet. In this way, **seccomp** is set to **RuntimeDefault** by default, improving system security. Clusters of v1.25 no longer support **seccomp.security.alpha.kubernetes.io/pod** and **container.seccomp.security.alpha.kubernetes.io/annotation**. 
Replace them with the **securityContext.seccompProfile** field in pods or containers. For details, see `Configure a Security Context for a Pod or Container `__. + + .. note:: + + After this feature is enabled, the system calls required by the application may be restricted by the runtime. Ensure that the debugging is performed in the test environment, so that application is not affected. + +- The EndPort in the network policy moves to stable. + + EndPort in Network Policy is stable. This feature is incorporated in version 1.21. EndPort is added to NetworkPolicy. You can specify a port range. + +- Local ephemeral storage capacity isolation is stable. + + This feature provides support for capacity isolation of local ephemeral storage between pods, such as EmptyDir. If a pod's consumption of shared resources exceeds the limit, it will be evicted. + +- The CRD verification expression language moves to beta. + + This makes it possible to declare how to validate custom resources using `Common Expression Language (CEL) `__. For details, see `Extend the Kubernetes API with CustomResourceDefinitions `__. + +- KMS v2 APIs are introduced. + + The KMS v2 alpha1 API is introduced to add performance, rotation, and observability improvements. This API uses AES-GCM to replace AES-CBC and uses DEK to encrypt data at rest (Kubernetes Secrets). No additional operation is required during this process. Additionally, data can be read through AES-GCM and AES-CBC. For details, see `Using a KMS provider for data encryption `__. + +- Pod network readiness is introduced. + + Kubernetes 1.25 introduces Alpha support for PodHasNetwork. This status is in the **status** field of the pod. For details, see `Pod network readiness `__. + +- The two features used for application rollout are stable. + + - In Kubernetes 1.25, **minReadySeconds** for StatefulSets is stable. It allows each pod to wait for an expected period of time to slow down the rollout of a StatefulSet. For details, see `Minimum ready seconds `__. + - In Kubernetes 1.25, **maxSurge** for DaemonSets is stable. It allows a DaemonSet workload to run multiple instances of the same pod on one node during a rollout. This minimizes DaemonSet downtime for users. DaemonSet does not allow **maxSurge** and **hostPort** to be used at the same time because two active pods cannot share the same port on the same node. For details, see `Perform a Rolling Update on a DaemonSet `__. + +- Alpha support for running pods with user namespaces is provided. + + This feature maps the **root** user in a pod to a non-zero ID outside the container. In this way, the container runs as the **root** user and the node runs as a regular unprivileged user. This feature is still in the internal test phase. The UserNamespacesStatelessPodsSupport gate needs to be enabled, and the container runtime must support this function. For details, see `Kubernetes 1.25: alpha support for running Pods with user namespaces `__. + +**Kubernetes 1.24** + +- Dockershim is removed from kubelet. + + Dockershim was marked deprecated in Kubernetes 1.20 and officially removed from kubelet in Kubernetes 1.24. If you want to use Docker container, switch to cri-dockerd or other runtimes that support CRI, such as containerd and CRI-O. + + .. note:: + + Check whether there are agents or applications that depend on Docker Engine. For example, if **docker ps**, **docker run**, and **docker inspect** are used, ensure that multiple runtimes are compatible and switch to the standard CRI. + +- Beta APIs are disabled by default. 
**Kubernetes 1.24** + +- Dockershim is removed from kubelet. + + Dockershim was marked deprecated in Kubernetes 1.20 and officially removed from kubelet in Kubernetes 1.24. If you want to keep using Docker Engine, switch to cri-dockerd, or migrate to another runtime that supports the CRI, such as containerd or CRI-O. + + .. note:: + + Check whether there are agents or applications that depend on Docker Engine, for example, ones that run **docker ps**, **docker run**, or **docker inspect**. Ensure that they are compatible with the new runtime and switch to the standard CRI. + +- Beta APIs are disabled by default. + + The Kubernetes community found that 90% of cluster administrators did not care about the beta APIs and left them enabled. However, beta features are not recommended for production because enabling these APIs by default in production environments incurs risks. Therefore, in 1.24 and later versions, new beta APIs are disabled by default, while existing beta APIs retain their original settings. + +- OpenAPI v3 is supported. + + In Kubernetes 1.24 and later versions, OpenAPI v3 is enabled by default. + +- Storage capacity tracking is stable. + + In Kubernetes 1.24 and later versions, the CSIStorageCapacity API supports exposing the available storage capacity. This ensures that pods are scheduled to nodes with sufficient storage capacity, which reduces pod scheduling delays caused by volume creation and mounting failures. For details, see `Storage Capacity `__. + +- gRPC container probe moves to beta. + + In Kubernetes 1.24 and later versions, the gRPC probe is in beta, and the GRPCContainerProbe feature gate is enabled by default. For details about how to use this probe, see `Configure Probes `__. + +- LegacyServiceAccountTokenNoAutoGeneration is enabled by default. + + The LegacyServiceAccountTokenNoAutoGeneration feature is in beta state. By default, this feature is enabled and no secret tokens are automatically generated for service accounts. To use a token that never expires, create a secret and mount it, as shown in the sketch after this list. For details, see `Service account token Secrets `__. + +- IP address conflict is prevented. + + In Kubernetes 1.24, `an IP address pool is soft reserved for the static IP addresses of Services `__. After you manually enable this function, Service IP addresses are automatically allocated from the IP address pool to minimize IP address conflicts. + +- Clusters are compiled based on Go 1.18. + + Kubernetes clusters of version 1.24 and later are compiled based on Go 1.18. By default, certificate signatures that use the SHA-1 hash algorithm, such as SHA1WithRSA and ECDSAWithSHA1, are no longer supported for certificate verification. Use certificates generated with the SHA-256 algorithm instead. + +- The maximum number of unavailable StatefulSet replicas is configurable. + + In Kubernetes 1.24 and later versions, the **maxUnavailable** parameter can be configured for StatefulSets so that pods can be stopped more quickly during a rolling update. + +- Alpha support for non-graceful node shutdown is introduced. + + Non-graceful node shutdown is introduced as alpha in Kubernetes v1.24. A node shutdown is considered graceful only if kubelet's node shutdown manager can detect the upcoming node shutdown action. For details, see `Non-graceful node shutdown handling `__.
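As a sketch of the manual token creation mentioned in the LegacyServiceAccountTokenNoAutoGeneration item above, the following manifest asks Kubernetes to populate a long-lived token for an existing service account. The names **my-sa-token** and **my-service-account** are placeholders.

.. code-block:: yaml

   apiVersion: v1
   kind: Secret
   metadata:
     name: my-sa-token                                          # placeholder Secret name
     annotations:
       kubernetes.io/service-account.name: my-service-account   # existing service account (placeholder)
   type: kubernetes.io/service-account-token

After the Secret is created, the token controller fills in its **data.token** field, and the Secret can then be mounted into pods or read like any other Secret.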
.. _cce_bulletin_0058__en-us_topic_0000001596950457_section1096111394018: + +Deprecations and Removals ------------------------- + +**Kubernetes 1.25** + +- The iptables chain ownership is cleaned up. + + Kubernetes typically creates iptables chains to ensure that data packets can reach their destination. These iptables chains and their names are for internal use only; they were never intended to be part of any Kubernetes API/ABI guarantees. For details, see `Kubernetes's IPTables Chains Are Not API `__. + + In versions later than Kubernetes 1.25, kubelet uses IPTablesCleanup to migrate, in phases, the Kubernetes-generated iptables chains used by components outside of Kubernetes, so that iptables chains such as KUBE-MARK-DROP, KUBE-MARK-MASQ, and KUBE-POSTROUTING will no longer be created in the NAT table. For more details, see `Cleaning Up IPTables Chain Ownership `__. + +- The in-tree storage drivers of cloud service vendors are removed. + +**Kubernetes 1.24** + +- In Kubernetes 1.24 and later versions, Service.Spec.LoadBalancerIP is deprecated because it cannot be used with dual-stack protocols. Use custom annotations instead. +- In Kubernetes 1.24 and later versions, the **--address**, **--insecure-bind-address**, **--port**, and **--insecure-port=0** parameters are removed from **kube-apiserver**. +- In Kubernetes 1.24 and later versions, the startup parameters **--port=0** and **--address** are removed from **kube-controller-manager** and **kube-scheduler**. +- In Kubernetes 1.24 and later versions, **kube-apiserver --audit-log-version** and **--audit-webhook-version** support only **audit.k8s.io/v1**. **audit.k8s.io/v1[alpha|beta]1** is removed in Kubernetes 1.24, and only **audit.k8s.io/v1** can be used. +- In Kubernetes 1.24 and later versions, the startup parameter **--network-plugin** is removed from kubelet. This Docker-specific parameter was available only when the container runtime was Docker, and it is removed together with Dockershim. +- In Kubernetes 1.24 and later versions, dynamic log sanitization has been deprecated and removed. This function introduced a log filter to the logs of all Kubernetes system components to prevent sensitive information from being leaked through logs. However, it may block logs and has therefore been deprecated. For more details, see `Dynamic log sanitization `__ and `KEP-1753 `__. +- The VolumeSnapshot v1beta1 CRD is deprecated in Kubernetes 1.20 and removed in Kubernetes 1.24. Use VolumeSnapshot v1 instead. +- In Kubernetes 1.24 and later versions, the Service annotation **tolerate-unready-endpoints**, which was deprecated in Kubernetes 1.11, is replaced by **Service.spec.publishNotReadyAddresses**. +- In Kubernetes 1.24 and later versions, the **metadata.clusterName** field is deprecated and will be removed in the next version. +- In Kubernetes 1.24 and later versions, the logic for kube-proxy to listen on NodePorts is removed. If NodePorts conflict with the kernel parameter **net.ipv4.ip_local_port_range**, TCP connections may occasionally fail, which leads to health check failures or service exceptions. Before the upgrade, ensure that the NodePorts used in the cluster do not conflict with **net.ipv4.ip_local_port_range** on any node in the cluster. For more details, see `Kubernetes PR `__. + +.. _cce_bulletin_0058__en-us_topic_0000001596950457_section115291322132513: + +Enhanced Kubernetes 1.25 on CCE ------------------------------- + +During a version maintenance period, CCE periodically updates Kubernetes 1.25 and provides enhanced functions. + +For details about cluster version updates, see :ref:`Release Notes for CCE Cluster Versions `. + +..
_cce_bulletin_0058__en-us_topic_0000001596950457_en-us_topic_0000001389397618_en-us_topic_0000001430891141_en-us_topic_0000001072975092_en-us_topic_0261805759_en-us_topic_0261793154_section1272182810583: + +References +---------- + +For more details about the performance comparison and function evolution between Kubernetes 1.25 and other versions, see the following documents: + +- `Kubernetes 1.25 Release Notes `__ +- `Kubernetes 1.24 Release Notes `__ diff --git a/umn/source/clusters/cluster_overview/release_notes/cce_kubernetes_1.25_release_notes.rst b/umn/source/clusters/cluster_overview/release_notes/cce_kubernetes_1.25_release_notes.rst deleted file mode 100644 index 28b686b..0000000 --- a/umn/source/clusters/cluster_overview/release_notes/cce_kubernetes_1.25_release_notes.rst +++ /dev/null @@ -1,38 +0,0 @@ -:original_name: cce_10_0467.html - -.. _cce_10_0467: - -CCE Kubernetes 1.25 Release Notes -================================= - -CCE has passed the Certified Kubernetes Conformance Program and is a certified Kubernetes offering. This section describes the updates in CCE Kubernetes 1.25. - -Resource Changes and Deprecations ---------------------------------- - -**Kubernetes 1.25 Release Notes** - -- PodSecurityPolicy is replaced by Pod Security Admission. For details about the migration, see `Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller `__. -- SeccompDefault is in Beta. To enable this feature, you need to add the startup parameter **--seccomp-default=true** to kubelet. In this way, seccomp is set to **RuntimeDefault** by default, improving the system security. Clusters of v1.25 no longer use **seccomp.security.alpha.kubernetes.io/pod** and **container.seccomp.security.alpha.kubernetes.io/annotation** to use seccomp. Replace them with the **securityContext.seccompProfile** field in the pod or container. For details, see `Configure a Security Context for a Pod or Container `__. - - .. note:: - - After the feature is enabled, the system calls required by the application may be restricted by the runtime. Ensure that the debugging is performed in the test environment and the application is not affected. - -- EndPort in Network Policy is stable. This feature is incorporated in version 1.21. EndPort is added to NetworkPolicy for you to specify a port range. -- Since clusters of v1.25, Kubernetes does not support certificate authentication generated using the SHA1WithRSA or ECDSAWithSHA1 algorithm. You are advised to use the SHA256 algorithm. - -**Kubernetes 1.24 Release Notes** - -- Beta APIs are disabled by default. When some long-term beta APIs are removed from Kubernetes, 90% cluster administrators do not care about the beta APIs. Beta features are not recommended in the production environment. However, due to the default enabling policy, these APIs are enabled in the production environment, incurring risks. Therefore, in v1.24 and later versions, beta APIs are disabled by default except for the enabled beta APIs. -- The LegacyServiceAccountTokenNoAutoGeneration feature is in beta state. By default, this feature is enabled and no more secret token will be automatically generated for the service account. If you want to use a token that never expires, you need to create a secret and mount it. For details, see `Service account token secrets `__. -- **service.alpha.kubernetes.io/tolerate-unready-endpoints** is replaced by **Service.spec.publishNotReadyAddresses**. 
-- The **Service.Spec.LoadBalancerIP** tag is deprecated and may be removed in later versions. Use a customized annotation. - -References ----------- - -For more details about the performance comparison and function evolution between Kubernetes 1.25 and other versions, see the following documents: - -- `Kubernetes v1.25 Release Notes `__ -- `Kubernetes v1.24 Release Notes `__ diff --git a/umn/source/clusters/cluster_overview/release_notes/index.rst b/umn/source/clusters/cluster_overview/release_notes/index.rst deleted file mode 100644 index b1553ac..0000000 --- a/umn/source/clusters/cluster_overview/release_notes/index.rst +++ /dev/null @@ -1,22 +0,0 @@ -:original_name: cce_10_0068.html - -.. _cce_10_0068: - -Release Notes -============= - -- :ref:`CCE Kubernetes 1.25 Release Notes ` -- :ref:`CCE Kubernetes 1.23 Release Notes ` -- :ref:`CCE Kubernetes 1.21 Release Notes ` -- :ref:`CCE Kubernetes 1.19 Release Notes ` -- :ref:`CCE Kubernetes 1.17 Release Notes ` - -.. toctree:: - :maxdepth: 1 - :hidden: - - cce_kubernetes_1.25_release_notes - cce_kubernetes_1.23_release_notes - cce_kubernetes_1.21_release_notes - cce_kubernetes_1.19_release_notes - cce_kubernetes_1.17_release_notes diff --git a/umn/source/clusters/cluster_overview/cluster_patch_version_release_notes.rst b/umn/source/clusters/cluster_overview/release_notes_for_cce_cluster_versions.rst similarity index 54% rename from umn/source/clusters/cluster_overview/cluster_patch_version_release_notes.rst rename to umn/source/clusters/cluster_overview/release_notes_for_cce_cluster_versions.rst index 3004fb4..c193bbc 100644 --- a/umn/source/clusters/cluster_overview/cluster_patch_version_release_notes.rst +++ b/umn/source/clusters/cluster_overview/release_notes_for_cce_cluster_versions.rst @@ -2,81 +2,90 @@ .. _cce_10_0405: -Cluster Patch Version Release Notes -=================================== +Release Notes for CCE Cluster Versions +====================================== Version 1.25 ------------ -.. table:: **Table 1** Release notes of v1.25 patch +.. important:: - +---------------------------+------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------+-----------------------------+ - | CCE Cluster Patch Version | Kubernetes Version | Feature Updates | Optimization | Vulnerability Fixing | - +===========================+======================================================================================================+=========================================================================================================================================+===========================================================================================+=============================+ - | v1.25.3-r0 | `v1.25.5 `__ | None | Enhances the network stability when the specifications of CCE Turbo clusters are changed. | Fixed some security issues. 
| - +---------------------------+------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------+-----------------------------+ - | v1.25.1-r0 | `v1.25.5 `__ | The CCE v1.25 cluster is released for the first time. For more information, see :ref:`CCE Kubernetes 1.25 Release Notes `. | None | None | - +---------------------------+------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------+-----------------------------+ + All nodes in the CCE clusters of version 1.25, except the ones running EulerOS 2.5, use containerd by default. + +.. table:: **Table 1** Release notes for the v1.25 patch + + +---------------------------+------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------+-----------------------------+ + | CCE Cluster Patch Version | Kubernetes Version | Feature Updates | Optimization | Vulnerability Fixing | + +===========================+======================================================================================================+============================================================================================================================================+==========================================================================================+=============================+ + | v1.25.3-r0 | `v1.25.5 `__ | None | Enhanced network stability of CCE Turbo clusters when their specifications are modified. | Fixed some security issues. | + +---------------------------+------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------+-----------------------------+ + | v1.25.1-r0 | `v1.25.5 `__ | CCE clusters of v1.25 are released for the first time. For more information, see :ref:`Kubernetes 1.25 Release Notes `. | None | None | + +---------------------------+------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------+-----------------------------+ Version 1.23 ------------ -.. table:: **Table 2** Release notes of v1.23 patch +.. 
table:: **Table 2** Release notes for the v1.23 patch - +---------------------------+--------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ - | CCE Cluster Patch Version | Kubernetes Version | Feature Updates | Optimization | Vulnerability Fixing | - +===========================+========================================================================================================+=========================================================================================================================================+=============================================================================================+=========================================================================+ - | v1.23.8-r0 | `v1.23.11 `__ | None | - Enhances Docker reliability during the upgrade. | Fixed some security issues. | - | | | | - Optimizes the time synchronization of nodes. | | - +---------------------------+--------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ - | v1.23.5-r0 | `v1.23.11 `__ | - Supports device fault detection and isolation for GPU nodes. | - The ETCD version of the master node is upgraded to the Kubernetes version 3.5.6. | Fixed some security issues and the following CVE vulnerabilities: | - | | | - Supports custom security groups by cluster. | - Scheduling is optimized. Pods are evenly distributed across AZs when pods are scaled in. | | - | | | - The node-level, Trunkport ENIs can be pre-bound for CCE Turbo clusters. | - The memory usage of kube-apiserver is optimized when CRDs are frequently updated. | - `CVE-2022-3294 `__ | - | | | - containerd is supported. | | - `CVE-2022-3162 `__ | - | | | | | - `CVE-2022-3172 `__ | - | | | | | - `CVE-2021-25749 `__ | - +---------------------------+--------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ - | v1.23.1-r0 | `v1.23.4 `__ | The CCE v1.23 cluster is released for the first time. For more information, see :ref:`CCE Kubernetes 1.23 Release Notes `. 
| None | None | - +---------------------------+--------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ + +---------------------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ + | CCE Cluster Patch Version | Kubernetes Version | Feature Updates | Optimization | Vulnerability Fixing | + +===========================+========================================================================================================+============================================================================================================================================+=====================================================================================================+=========================================================================+ + | v1.23.8-r0 | `v1.23.11 `__ | None | - Enhanced Docker reliability during upgrades. | Fixed some security issues. | + | | | | - Optimized node time synchronization. | | + +---------------------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ + | v1.23.5-r0 | `v1.23.11 `__ | - Fault detection and isolation are supported on GPU nodes. | - The ETCD version of the master node has been upgraded to the Kubernetes version 3.5.6. | Fixed some security issues and the following CVE vulnerabilities: | + | | | - Security groups can be customized by cluster. | - Scheduling is optimized so that pods are evenly distributed across AZs after pods are scaled in. | | + | | | - CCE Turbo clusters support ENIs pre-binding by node. | - Optimized the memory usage of kube-apiserver when CRDs are frequently updated. | - `CVE-2022-3294 `__ | + | | | - containerd is supported. | | - `CVE-2022-3162 `__ | + | | | | | - `CVE-2022-3172 `__ | + | | | | | - `CVE-2021-25749 `__ | + +---------------------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ + | v1.23.1-r0 | `v1.23.4 `__ | CCE clusters of v1.23 are released for the first time. For more information, see :ref:`Kubernetes 1.23 Release Notes `. 
| None | None | + +---------------------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ Version 1.21 ------------ -.. table:: **Table 3** Release notes of v1.21 patch +.. table:: **Table 3** Release notes for the v1.21 patch - +---------------------------+----------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------+ - | CCE Cluster Patch Version | Kubernetes Version | Feature Updates | Optimization | Vulnerability Fixing | - +===========================+======================================================================================================================+=========================================================================================================================================+==================================================================================================+=======================================================================+ - | v1.21.10-r0 | `v1.21.14 `__ | None | - Enhances Docker reliability during the upgrade. | Fixed some security issues. | - | | | | - Optimizes the time synchronization of nodes. | | - | | | | - Optimizes the stability of pulling images when Docker is running after the node is restarted. | | - +---------------------------+----------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------+ - | v1.21.7-r0 | `v1.21.14 `__ | - Supports device fault detection and isolation for GPU nodes. | The stability of the ELB and ingress is optimized during massive connections. | Fixed some security issues and the following CVE vulnerabilities: | - | | | - Supports custom security groups by cluster. | | | - | | | | | - `CVE-2022-3294 `__ | - | | | | | - `CVE-2022-3162 `__ | - | | | | | - `CVE-2022-3172 `__ | - +---------------------------+----------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------+ - | v1.21.1-r0 | `v1.21.7 `__ | The CCE v1.21 cluster is released for the first time. For more information, see :ref:`CCE Kubernetes 1.21 Release Notes `. 
| None | None | - +---------------------------+----------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------+ + +---------------------------+----------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------+-----------------------------------------------------------------------+ + | CCE Cluster Patch Version | Kubernetes Version | Feature Updates | Optimization | Vulnerability Fixing | + +===========================+======================================================================================================================+============================================================================================================================================+===============================================================================================+=======================================================================+ + | v1.21.10-r0 | `v1.21.14 `__ | None | - Enhanced Docker reliability during upgrades. | Fixed some security issues. | + | | | | - Optimized node time synchronization. | | + | | | | - Enhanced the stability of the Docker runtime for pulling images after nodes are restarted. | | + +---------------------------+----------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------+-----------------------------------------------------------------------+ + | v1.21.7-r0 | `v1.21.14 `__ | - Fault detection and isolation are supported on GPU nodes. | Improved the stability of LoadBalancer Services/ingresses with a large number of connections. | Fixed some security issues and the following CVE vulnerabilities: | + | | | - Security groups can be customized by cluster. | | | + | | | - CCE Turbo clusters support ENIs pre-binding by node. | | - `CVE-2022-3294 `__ | + | | | | | - `CVE-2022-3162 `__ | + | | | | | - `CVE-2022-3172 `__ | + +---------------------------+----------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------+-----------------------------------------------------------------------+ + | v1.21.1-r0 | `v1.21.7 `__ | CCE clusters of v1.21 are released for the first time. For more information, see :ref:`Kubernetes 1.21 Release Notes `. 
| None | None | + +---------------------------+----------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------+-----------------------------------------------------------------------+ Version 1.19 ------------ -.. table:: **Table 4** Release notes of v1.19 patch +.. table:: **Table 4** Release notes of the v1.19 patch - +---------------------------+--------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------+ - | CCE Cluster Patch Version | Kubernetes Version | Feature Updates | Optimization | Vulnerability Fixing | - +===========================+========================================================================================================+=========================================================================================================================================+======================================================================================================+=======================================================================+ - | v1.19.16-r20 | `v1.19.16 `__ | None | - Cloud Native 2.0 Network supports the subnet specified by the namespace. | Fixed some security issues. | - | | | | - Optimizes the stability of pulling images when Docker is running after the node is restarted. | | - | | | | - Optimizes the ENI allocation performance of CCE Turbo clusters in non-full pre-binding scenarios. | | - +---------------------------+--------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------+ - | v1.19.16-r4 | `v1.19.16 `__ | - Supports device fault detection and isolation for GPU nodes. | - Scheduling is optimized in the node taint scenario. | Fixed some security issues and the following CVE vulnerabilities: | - | | | - Supports custom security groups by cluster. | - The stability of the ELB and ingress is optimized during massive connections. | | - | | | - The node-level, Trunkport ENIs can be pre-bound for CCE Turbo clusters. | - The memory usage of kube-apiserver is optimized when CRDs are frequently updated. 
| - `CVE-2022-3294 `__ | - | | | | | - `CVE-2022-3162 `__ | - | | | | | - `CVE-2022-3172 `__ | - +---------------------------+--------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------+ - | v1.19.10-r0 | `v1.19.10 `__ | The CCE v1.19 cluster is released for the first time. For more information, see :ref:`CCE Kubernetes 1.19 Release Notes `. | None | None | - +---------------------------+--------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------+ + +---------------------------+--------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ + | CCE Cluster Patch Version | Kubernetes Version | Feature Updates | Optimization | Vulnerability Fixing | + +===========================+========================================================================================================+==========================================================================================================================================+======================================================================================================================+=========================================================================+ + | v1.19.16-r20 | `v1.19.16 `__ | None | - Cloud Native 2.0 Networks allow you to specify subnets for a namespace. | Fixed some security issues. | + | | | | - Enhanced the stability of the Docker runtime for pulling images after nodes are restarted. | | + | | | | - Optimized the performance of CCE Turbo clusters in allocating ENIs if not all ENIs are pre-bound. | | + +---------------------------+--------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ + | v1.19.16-r4 | `v1.19.16 `__ | - Fault detection and isolation are supported on GPU nodes. | - Scheduling is optimized on taint nodes. | Fixed some security issues and the following CVE vulnerabilities: | + | | | - Security groups can be customized by cluster. | - Enhanced the long-term running stability of containerd when cores are bound. | | + | | | - CCE Turbo clusters support ENIs pre-binding by node. 
| - Improved the stability of LoadBalancer Services/ingresses with a large number of connections. | - `CVE-2022-3294 `__ | + | | | | - Optimized the memory usage of kube-apiserver when CRDs are frequently updated. | - `CVE-2022-3162 `__ | + | | | | | - `CVE-2022-3172 `__ | + +---------------------------+--------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ + | v1.19.16-r0 | `v1.19.16 `__ | None | Enhanced the stability in updating LoadBalancer Services when workloads are upgraded and nodes are scaled in or out. | Fixed some security issues and the following CVE vulnerabilities: | + | | | | | | + | | | | | - `CVE-2021-25741 `__ | + | | | | | - `CVE-2021-25737 `__ | + +---------------------------+--------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ + | v1.19.10-r0 | `v1.19.10 `__ | CCE clusters of v1.19 are released for the first time. For more information, see :ref:`Kubernetes 1.19 Release Notes `. | None | None | + +---------------------------+--------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+ diff --git a/umn/source/clusters/using_kubectl_to_run_a_cluster/customizing_a_cluster_certificate_san.rst b/umn/source/clusters/connecting_to_a_cluster/accessing_a_cluster_using_a_custom_domain_name.rst similarity index 89% rename from umn/source/clusters/using_kubectl_to_run_a_cluster/customizing_a_cluster_certificate_san.rst rename to umn/source/clusters/connecting_to_a_cluster/accessing_a_cluster_using_a_custom_domain_name.rst index f56d612..0b81ac5 100644 --- a/umn/source/clusters/using_kubectl_to_run_a_cluster/customizing_a_cluster_certificate_san.rst +++ b/umn/source/clusters/connecting_to_a_cluster/accessing_a_cluster_using_a_custom_domain_name.rst @@ -2,8 +2,8 @@ .. _cce_10_0367: -Customizing a Cluster Certificate SAN -===================================== +Accessing a Cluster Using a Custom Domain Name +============================================== Scenario -------- @@ -12,8 +12,14 @@ A **Subject Alternative Name (SAN)** can be signed in to a cluster server certif If the client cannot directly access the private IP or EIP of the cluster, you can sign the IP address or DNS domain name that can be directly accessed by the client into the cluster server certificate to enable two-way authentication on the client, which improves security. Typical use cases include DNAT access and domain name access. 
-Notes and Constraints --------------------- +Typical domain name access scenarios: + +- Add the corresponding domain name mapping when specifying the DNS domain name address in the host domain name configuration on the client, or when configuring **/etc/hosts** on the client host. +- Use domain name access in the intranet. DNS allows you to configure mappings between cluster EIPs and custom domain names. After an EIP is updated, you can continue to use two-way authentication and the domain name to access the cluster without downloading the **kubeconfig.json** file again. +- Add A records on a self-built DNS server. + +Constraints +----------- This feature is available only to clusters of v1.19 and later. @@ -32,11 +38,4 @@ Customizing a SAN 3. If a custom domain name needs to be bound to an EIP, ensure that an EIP has been configured. -Typical Domain Name Access Scenarios ------------------------------------- - -- Add the response domain name mapping when specifying the DNS domain name address in the host domain name configuration on the client, or configuring **/etc/hosts** on the client host. -- Use domain name access in the intranet. DNS allows you to configure mappings between cluster EIPs and custom domain names. After an EIP is updated, you can continue to use two-way authentication and the domain name to access the cluster without downloading the **kubeconfig.json** file again. -- Add A records on a self-built DNS server. - -.. |image1| image:: /_static/images/en-us_image_0000001517743644.png +.. |image1| image:: /_static/images/en-us_image_0000001695737529.png diff --git a/umn/source/clusters/connecting_to_a_cluster/connecting_to_a_cluster_using_an_x.509_certificate.rst b/umn/source/clusters/connecting_to_a_cluster/connecting_to_a_cluster_using_an_x.509_certificate.rst new file mode 100644 index 0000000..47a5648 --- /dev/null +++ b/umn/source/clusters/connecting_to_a_cluster/connecting_to_a_cluster_using_an_x.509_certificate.rst @@ -0,0 +1,41 @@ +:original_name: cce_10_0175.html + +.. _cce_10_0175: + +Connecting to a Cluster Using an X.509 Certificate ================================================== + +Scenario -------- + +This section describes how to obtain the cluster certificate from the console and use it to access Kubernetes clusters. + +Procedure --------- + +#. Log in to the CCE console and click the cluster name to access the cluster console. + +#. Choose **Cluster Information** from the navigation pane and click **Download** next to **Authentication Mode** in the **Connection Information** area. + +#. In the **Download X.509 Certificate** dialog box displayed, select the certificate expiration time and download the X.509 certificate of the cluster as prompted. + + + .. figure:: /_static/images/en-us_image_0000001647417220.png :alt: **Figure 1** Downloading a certificate + + **Figure 1** Downloading a certificate + + .. important:: + + - The downloaded certificate contains three files: **client.key**, **client.crt**, and **ca.crt**. Keep these files secure. + - Certificates are not required for mutual access between containers in a cluster. + +#. Call native Kubernetes APIs using the cluster certificate. + + For example, run the **curl** command to call an API to view the pod information. In the following information,\ *192.168.***.***:5443* indicates the IP address of the API server in the cluster. + + ..
code-block:: + + curl --cacert ./ca.crt --cert ./client.crt --key ./client.key https://192.168.***.***:5443/api/v1/namespaces/default/pods/ + + For more cluster APIs, see `Kubernetes APIs `__. diff --git a/umn/source/clusters/using_kubectl_to_run_a_cluster/connecting_to_a_cluster_using_kubectl.rst b/umn/source/clusters/connecting_to_a_cluster/connecting_to_a_cluster_using_kubectl.rst similarity index 84% rename from umn/source/clusters/using_kubectl_to_run_a_cluster/connecting_to_a_cluster_using_kubectl.rst rename to umn/source/clusters/connecting_to_a_cluster/connecting_to_a_cluster_using_kubectl.rst index 5adbb7b..f3e940a 100644 --- a/umn/source/clusters/using_kubectl_to_run_a_cluster/connecting_to_a_cluster_using_kubectl.rst +++ b/umn/source/clusters/connecting_to_a_cluster/connecting_to_a_cluster_using_kubectl.rst @@ -10,10 +10,10 @@ Scenario This section uses a CCE cluster as an example to describe how to connect to a CCE cluster using kubectl. -Permission Description ----------------------- +Permissions +----------- -When you access a cluster using kubectl, CCE uses the **kubeconfig.json** file generated on the cluster for authentication. This file contains user information, based on which CCE determines which Kubernetes resources can be accessed by kubectl. The permissions recorded in a **kubeconfig.json** file vary from user to user. +When you access a cluster using kubectl, CCE uses **kubeconfig.json** generated on the cluster for authentication. This file contains user information, based on which CCE determines which Kubernetes resources can be accessed by kubectl. The permissions recorded in a **kubeconfig.json** file vary from user to user. For details about user permissions, see :ref:`Cluster Permissions (IAM-based) and Namespace Permissions (Kubernetes RBAC-based) `. @@ -24,10 +24,10 @@ Using kubectl To connect to a Kubernetes cluster from a PC, you can use kubectl, a Kubernetes command line tool. You can log in to the CCE console, click the name of the cluster to be connected, and view the access address and kubectl connection procedure on the cluster details page. -CCE allows you to access a cluster through a **VPC network** or a **public network**. +CCE allows you to access a cluster through a private network or a public network. -- **Intra-VPC access**: The client that accesses the cluster must be in the same VPC as the cluster. -- **Public access**:The client that accesses the cluster must be able to access public networks and the cluster has been bound with a public network IP. +- Intranet access: The client that accesses the cluster must be in the same VPC as the cluster. +- Public access: The client that accesses the cluster must be able to access public networks and the cluster has been bound with a public network IP. .. important:: @@ -35,7 +35,7 @@ CCE allows you to access a cluster through a **VPC network** or a **public netwo Download kubectl and the configuration file. Copy the file to your client, and configure kubectl. After the configuration is complete, you can access your Kubernetes clusters. Procedure: -#. Download kubectl. +#. **Download kubectl.** Prepare a computer that can access the public network and install kubectl in CLI mode. You can run the **kubectl version** command to check whether kubectl has been installed. If kubectl has been installed, skip this step. @@ -59,14 +59,14 @@ Download kubectl and the configuration file. Copy the file to your client, and c #. .. 
_cce_10_0107__li34691156151712: - Obtain the kubectl configuration file (kubeconfig). + **Obtain the kubectl configuration file (kubeconfig).** - On the **Connection Information** pane on the cluster details page, click **Configure** next to **kubectl**. On the window displayed, download the configuration file. + In the **Connection Information** pane on the cluster details page, click **Configure** next to **kubectl**. On the window displayed, download the configuration file. .. note:: - The kubectl configuration file **kubeconfig.json** is used for cluster authentication. If the file is leaked, your clusters may be attacked. - - By default, two-way authentication is disabled for domain names in the current cluster. You can run the **kubectl config use-context externalTLSVerify** command to enable two-way authentication. For details, see :ref:`Two-Way Authentication for Domain Names `. For a cluster that has been bound to an EIP, if the authentication fails (x509: certificate is valid) when two-way authentication is used, you need to bind the EIP again and download **kubeconfig.json** again. + - By default, two-way authentication is disabled for domain names in the current cluster. You can run the **kubectl config use-context externalTLSVerify** command to enable two-way authentication. For details, see :ref:`Two-Way Authentication for Domain Names `. For a cluster that has been bound to an EIP, if the authentication fails (x509: certificate is valid) when two-way authentication is used, bind the EIP again and download **kubeconfig.json** again. - The Kubernetes permissions assigned by the configuration file downloaded by IAM users are the same as those assigned to the IAM users on the CCE console. - If the KUBECONFIG environment variable is configured in the Linux OS, kubectl preferentially loads the KUBECONFIG environment variable instead of **$home/.kube/config**. @@ -113,7 +113,7 @@ Download kubectl and the configuration file. Copy the file to your client, and c Two-Way Authentication for Domain Names --------------------------------------- -Currently, CCE supports two-way authentication for domain names. +CCE supports two-way authentication for domain names. - Two-way authentication is disabled for domain names by default. You can run the **kubectl config use-context externalTLSVerify** command to switch to the externalTLSVerify context to enable it. @@ -121,13 +121,13 @@ Currently, CCE supports two-way authentication for domain names. - Asynchronous cluster synchronization takes about 5 to 10 minutes. You can view the synchronization result in **Synchronize Certificate** in **Operation Records**. -- For a cluster that has been bound to an EIP, if the authentication fails (x509: certificate is valid) when two-way authentication is used, you need to bind the EIP again and download **kubeconfig.json** again. +- For a cluster that has been bound to an EIP, if the authentication fails (x509: certificate is valid) when two-way authentication is used, bind the EIP again and download **kubeconfig.json** again. - If the domain name two-way authentication is not supported, **kubeconfig.json** contains the **"insecure-skip-tls-verify": true** field, as shown in :ref:`Figure 1 `. To use two-way authentication, you can download the **kubeconfig.json** file again and enable two-way authentication for the domain names. .. _cce_10_0107__fig1941342411: - .. figure:: /_static/images/en-us_image_0000001568822965.png + .. 
figure:: /_static/images/en-us_image_0000001726718109.png :alt: **Figure 1** Two-way authentication disabled for domain names **Figure 1** Two-way authentication disabled for domain names diff --git a/umn/source/clusters/connecting_to_a_cluster/index.rst b/umn/source/clusters/connecting_to_a_cluster/index.rst new file mode 100644 index 0000000..62a38e4 --- /dev/null +++ b/umn/source/clusters/connecting_to_a_cluster/index.rst @@ -0,0 +1,18 @@ +:original_name: cce_10_0140.html + +.. _cce_10_0140: + +Connecting to a Cluster +======================= + +- :ref:`Connecting to a Cluster Using kubectl ` +- :ref:`Connecting to a Cluster Using an X.509 Certificate ` +- :ref:`Accessing a Cluster Using a Custom Domain Name ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + connecting_to_a_cluster_using_kubectl + connecting_to_a_cluster_using_an_x.509_certificate + accessing_a_cluster_using_a_custom_domain_name diff --git a/umn/source/clusters/creating_a_cce_cluster.rst b/umn/source/clusters/creating_a_cce_cluster.rst deleted file mode 100644 index 5a27e29..0000000 --- a/umn/source/clusters/creating_a_cce_cluster.rst +++ /dev/null @@ -1,105 +0,0 @@ -:original_name: cce_10_0028.html - -.. _cce_10_0028: - -Creating a CCE Cluster -====================== - -On the CCE console, you can easily create Kubernetes clusters. Kubernetes can manage container clusters at scale. A cluster manages a group of node resources. - -In CCE, you can create a CCE cluster to manage VMs. By using high-performance network models, hybrid clusters provide a multi-scenario, secure, and stable runtime environment for containers. - -Constraints ------------ - -- During the node creation, software packages are downloaded from OBS using the domain name. You need to use a private DNS server to resolve the OBS domain name, and configure the DNS server address of the subnet where the node resides with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS, ensure that the DNS server in use can resolve the OBS domain name. -- You can create a maximum of 50 clusters in a single region. -- After a cluster is created, the following items cannot be changed: - - - Cluster type - - Number of master nodes in the cluster - - AZ of a master node - - Network configuration of the cluster, such as the VPC, subnet, container CIDR block, Service CIDR block, and kube-proxy (forwarding) settings - - Network model. For example, change **Tunnel network** to **VPC network**. - -Procedure ---------- - -#. Log in to the CCE console. Choose **Clusters**. On the displayed page, click **Create** next to **CCE cluster**. - -#. Set cluster parameters. - - **Basic Settings** - - - **Cluster Name** - - - **Cluster Version**: Select the Kubernetes version used by the cluster. - - - **Cluster Scale**: maximum number of nodes that can be managed by the cluster. - - - **HA**: distribution mode of master nodes. By default, master nodes are randomly distributed in different AZs to improve DR capabilities. - - You can also expand advanced settings and customize the master node distribution mode. The following two modes are supported: - - - **Random**: Master nodes are created in different AZs for DR. - - **Custom**: You can determine the location of each master node. - - - **Host**: Master nodes are created on different hosts in the same AZ. - - **Custom**: You can determine the location of each master node. - - **Network Settings** - - The cluster network settings cover nodes, containers, and Services. 
For details about the cluster networking and container network models, see :ref:`Overview `. - - - **Network Model**: CCE clusters support **VPC network** and **tunnel network** models. For details, see :ref:`VPC Network ` and :ref:`Container Tunnel Network `. - - **VPC**: Select the VPC to which the cluster belongs. If no VPC is available, click **Create VPC** to create one. The VPC cannot be changed after creation. - - **Master Node Subnet**: Select the subnet where the master node is deployed. If no subnet is available, click **Create Subnet** to create one. The subnet cannot be changed after creation. - - **Container CIDR Block**: Set the CIDR block used by containers. - - **Service CIDR Block**: CIDR block for Services used by containers in the same cluster to access each other. The value determines the maximum number of Services you can create. The value cannot be changed after creation. - - **Advanced Settings** - - - **Request Forwarding**: The IPVS and iptables modes are supported. For details, see :ref:`Comparing iptables and IPVS `. - - **CPU Manager**: For details, see :ref:`Binding CPU Cores `. - - **Certificate Authentication**: - - - **Default**: The X509-based authentication mode is enabled by default. X509 is a commonly used certificate format. - - - **Custom:** The cluster can identify users based on the header in the request body for authentication. - - You need to upload your **CA root certificate**, **client certificate**, and **private key** of the client certificate. - - .. caution:: - - - Upload a file **smaller than 1 MiB**. The CA certificate and client certificate can be in **.crt** or **.cer** format. The private key of the client certificate can only be uploaded **unencrypted**. - - The validity period of the client certificate must be longer than five years. - - The uploaded CA certificate is used for both the authentication proxy and the kube-apiserver aggregation layer configuration. **If the certificate is invalid, the cluster cannot be created**. - - Starting from v1.25, Kubernetes no longer supports certificate authentication generated using the SHA1WithRSA or ECDSAWithSHA1 algorithm. You are advised to use the SHA256 algorithm. - - - **Description**: The value can contain a maximum of 200 English characters. - -#. Click **Next: Add-on Configuration**. - - **Domain Name Resolution**: Uses the :ref:`coredns ` add-on, installed by default, to resolve domain names and connect to the cloud DNS server. - - **Container Storage**: Uses the :ref:`everest ` add-on, installed by default, to provide container storage based on CSI and connect to cloud storage services such as EVS. - - **Service logs** - - - Using ICAgent: - - A log collector provided by Application Operations Management (AOM), reporting logs to AOM and Log Tank Service (LTS) according to the log collection rules you configured. - - You can collect stdout logs as required. - - **Overload Control**: If overload control is enabled, concurrent requests are dynamically controlled based on the resource pressure of master nodes to keep them and the cluster available. - -#. After setting the parameters, click **Next: Confirm**. After confirming that the cluster configuration information is correct, select **I have read and understand the preceding instructions** and click **Submit**. - - It takes about 6 to 10 minutes to create a cluster. You can click **Back to Cluster List** to perform other operations on the cluster or click **Go to Cluster Events** to view the cluster details. 
- -Related Operations ------------------- - -- Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. -- Add nodes to the cluster. For details, see :ref:`Creating a Node `. diff --git a/umn/source/clusters/creating_a_cce_turbo_cluster.rst b/umn/source/clusters/creating_a_cce_turbo_cluster.rst deleted file mode 100644 index 49279dc..0000000 --- a/umn/source/clusters/creating_a_cce_turbo_cluster.rst +++ /dev/null @@ -1,111 +0,0 @@ -:original_name: cce_10_0298.html - -.. _cce_10_0298: - -Creating a CCE Turbo Cluster -============================ - -CCE Turbo clusters run on a cloud native infrastructure that features software-hardware synergy to support passthrough networking, high security and reliability, and intelligent scheduling. - -CCE Turbo clusters are paired with the Cloud Native Network 2.0 model for large-scale, high-performance container deployment. Containers are assigned IP addresses from the VPC CIDR block. Containers and nodes can belong to different subnets. Access requests from external networks in a VPC can be directly routed to container IP addresses, which greatly improves networking performance. **It is recommended** that you go through :ref:`Cloud Native Network 2.0 ` to understand the features and network planning of each CIDR block of Cloud Native Network 2.0. - -Notes and Constraints ---------------------- - -- During the node creation, software packages are downloaded from OBS using the domain name. You need to use a private DNS server to resolve the OBS domain name, and configure the DNS server address of the subnet where the node resides with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS, ensure that the DNS server in use can resolve the OBS domain name. -- You can create a maximum of 50 clusters in a single region. -- CCE Turbo clusters support only Cloud Native Network 2.0. For details about this network model, see :ref:`Cloud Native Network 2.0 `. -- After a cluster is created, the following items cannot be changed: - - - Cluster type - - Number of master nodes in the cluster - - AZ of a master node - - Network configuration of the cluster, such as the VPC, subnet, container CIDR block, Service CIDR block, and kube-proxy (forwarding) settings. - - Network model. For example, change **Tunnel network** to **VPC network**. - -Procedure ---------- - -#. Log in to the CCE console. Choose **Clusters**. On the displayed page, click **Create** next to **CCE Turbo cluster**. - -#. Specify cluster parameters. - - **Basic Settings** - - - **Cluster Name** - - - **Cluster Version**: Select the Kubernetes version used by the cluster. - - - **Cluster Scale**: Select the maximum number of nodes that can be managed by the cluster. After the creation is complete, only scale-out is supported, but not scale-in. - - - **HA**: distribution mode of master nodes. By default, master nodes are randomly distributed in different AZs to improve DR capabilities. - - You can also expand advanced settings and customize the master node distribution mode. The following modes are supported: - - - **Host**: Master nodes are created on different hosts in the same AZ. - - **Custom**: You can determine the location of each master node. - - **Network Settings** - - The cluster network settings cover nodes, containers, and Services. For details about the cluster networking and container network models, see :ref:`Overview `. 
- - - **Network Model**: CCE Turbo clusters support only **Cloud Native Network 2.0**. For details, see :ref:`Cloud Native Network 2.0 `. - - **VPC**: Select the VPC to which the cluster belongs. If no VPC is available, click **Create VPC** to create one. The value cannot be changed after creation. - - **Master Node Subnet**: Select the subnet where the master node is deployed. If no subnet is available, click **Create Subnet** to create one. A master node requires at least four IP addresses, which cannot be changed after creation. - - **Pod Subnet**: Select the subnet where the container is located. If no subnet is available, click **Create Subnet** to create one. The pod subnet determines the maximum number of containers in the cluster. You can add pod subnets after creating the cluster. - - **Service CIDR Block**: CIDR block for :ref:`Services ` used by containers in the same cluster to access each other. The value determines the maximum number of Services you can create. The value cannot be changed after creation. - - **Advanced Settings** - - - **Request Forwarding**: The IPVS and iptables modes are supported. For details, see :ref:`Comparing iptables and IPVS `. - - - **CPU Manager**: For details, see :ref:`Binding CPU Cores `. - - - **Resource Tag**: - - You can add resource tags to classify resources. - - You can create **predefined tags** in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use predefined tags to improve tag creation and resource migration efficiency. - - - **Certificate Authentication**: - - - **Default**: The X509-based authentication mode is enabled by default. X509 is a commonly used certificate format. - - - **Custom:** The cluster can identify users based on the header in the request body for authentication. - - You need to upload your **CA root certificate**, **client certificate**, and **private key** of the client certificate. - - .. caution:: - - - Upload a file **smaller than 1 MB**. The CA certificate and client certificate can be in **.crt** or **.cer** format. The private key of the client certificate can only be uploaded **unencrypted**. - - The validity period of the client certificate must be longer than five years. - - The uploaded CA certificate is used for both the authentication proxy and the kube-apiserver aggregation layer configuration. **If the certificate is invalid, the cluster cannot be created**. - - Starting from v1.25, Kubernetes no longer supports certificate authentication generated using the SHA1WithRSA or ECDSAWithSHA1 algorithm. You are advised to use the SHA256 algorithm. - - - **Description**: The value can contain a maximum of 200 English characters. - -#. Click **Next: Add-on Configuration**. - - **Domain Name Resolution**: Uses the :ref:`coredns ` add-on, installed by default, to resolve domain names and connect to the cloud DNS server. - - **Container Storage**: The :ref:`everest ` add-on is installed by default to provide container storage based on CSI and connect to cloud storage services such as EVS. - - **Service log** - - - **ICAgent**: - - A log collector provided by Application Operations Management (AOM), reporting logs to AOM and Log Tank Service (LTS) according to the log collection rules you configured. - - You can collect stdout logs as required. - - **Overload Control**: If overload control is enabled, concurrent requests are dynamically controlled based on the resource pressure of master nodes to keep them and the cluster available. 
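After the cluster is created, you can also verify from the command line that the default add-ons described above are running. The sketch below assumes kubectl access to the cluster; the exact Deployment, DaemonSet, and pod names may vary with the add-on versions installed.

.. code-block:: bash

   # Default add-ons such as CoreDNS and the Everest CSI driver are deployed
   # into the kube-system namespace.
   kubectl get deployments -n kube-system

   # Inspect the add-on pods, for example CoreDNS for domain name resolution
   # and the Everest components for container storage.
   kubectl get pods -n kube-system -o wide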
- -#. After configuring the parameters, click **Next: Confirm**. - - It takes about 6 to 10 minutes to create a cluster. You can click **Back to Cluster List** to perform other operations on the cluster or click **Go to Cluster Events** to view the cluster details. - -Related Operations ------------------- - -- Using kubectl to connect to the cluster: :ref:`Connecting to a Cluster Using kubectl ` -- Add nodes to the cluster. For details, see :ref:`Creating a Node `. diff --git a/umn/source/clusters/cluster_overview/cce_turbo_clusters_and_cce_clusters.rst b/umn/source/clusters/creating_a_cluster/cce_turbo_clusters_and_cce_clusters.rst similarity index 72% rename from umn/source/clusters/cluster_overview/cce_turbo_clusters_and_cce_clusters.rst rename to umn/source/clusters/creating_a_cluster/cce_turbo_clusters_and_cce_clusters.rst index 2a5b2e4..b3e3108 100644 --- a/umn/source/clusters/cluster_overview/cce_turbo_clusters_and_cce_clusters.rst +++ b/umn/source/clusters/creating_a_cluster/cce_turbo_clusters_and_cce_clusters.rst @@ -8,35 +8,35 @@ CCE Turbo Clusters and CCE Clusters Comparison Between CCE Turbo Clusters and CCE Clusters ------------------------------------------------------ -The following table lists the differences between CCE Turbo clusters and CCE clusters: +The following table lists the differences between CCE Turbo clusters and CCE clusters. .. table:: **Table 1** Cluster types - +-----------------+-----------------------------+--------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+ - | Dimension | Sub-dimension | CCE Turbo cluster | CCE cluster | - +=================+=============================+================================================================================================================================+================================================================================================+ - | Cluster | Positioning | Next-gen container cluster, with accelerated computing, networking, and scheduling. Designed for Cloud Native 2.0 | Standard cluster for common commercial use | - +-----------------+-----------------------------+--------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+ - | | Node type | Deployment of VMs | Hybrid deployment of VMs and bare metal servers | - +-----------------+-----------------------------+--------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+ - | Network | Model | **Cloud Native Network 2.0**: applies to large-scale and high-performance scenarios. | **Cloud-native network 1.0**: applies to common, smaller-scale scenarios. | - | | | | | - | | | Max networking scale: 2,000 nodes | - Container tunnel network model | - | | | | - VPC network model | - +-----------------+-----------------------------+--------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+ - | | Network performance | Flattens the VPC network and container network into one. 
No performance loss. | Overlays the VPC network with the container network, causing certain performance loss. | - +-----------------+-----------------------------+--------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+ - | | Container network isolation | Associates pods with security groups. Unifies security isolation in and out the cluster via security groups' network policies. | - Container tunnel network model: supports network policies for intra-cluster communications. | - | | | | - VPC network model: supports no isolation. | - +-----------------+-----------------------------+--------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+ - | Security | Isolation | - VM: runs common containers, isolated by cgroups. | Common containers are deployed and isolated by cgroups. | - +-----------------+-----------------------------+--------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+ + +-----------------+-----------------------------+--------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------+ + | Category | Subcategory | CCE Turbo Cluster | CCE Cluster | + +=================+=============================+================================================================================================================================+========================================================================================+ + | Cluster | Positioning | Next-gen container cluster designed for Cloud Native 2.0, with accelerated computing, networking, and scheduling | Standard cluster for common commercial use | + +-----------------+-----------------------------+--------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------+ + | | Node type | Deployment of VMs | Hybrid deployment of VMs and bare-metal servers | + +-----------------+-----------------------------+--------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------+ + | Networking | Model | **Cloud Native Network 2.0**: applies to large-scale and high-performance scenarios. | **Cloud Native Network 1.0**: applies to common, smaller-scale scenarios. | + | | | | | + | | | Max networking scale: 2,000 nodes | - Tunnel network model | + | | | | - VPC network model | + +-----------------+-----------------------------+--------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------+ + | | Performance | Flattens the VPC network and container network into one, achieving zero performance loss. 
| Overlays the VPC network with the container network, causing certain performance loss. | + +-----------------+-----------------------------+--------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------+ + | | Container network isolation | Associates pods with security groups. Unifies security isolation in and out the cluster via security groups' network policies. | - Tunnel network model: supports network policies for intra-cluster communications. | + | | | | - VPC network model: supports no isolation. | + +-----------------+-----------------------------+--------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------+ + | Security | Isolation | - VM: runs common containers, isolated by cgroups. | Runs common containers, isolated by cgroups. | + +-----------------+-----------------------------+--------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------+ QingTian Architecture --------------------- |image1| -The QingTian architecture consists of data plane (software-hardware synergy) and management plane (Alkaid Smart Cloud Brain). The data plane innovates in five dimensions: simplified data center, diversified computing power, QingTian cards, ultra-fast engines, and simplified virtualization, to fully offload and accelerate compute, storage, networking, and security components. VMs, bare metal servers, and containers can run together. As a distributed operating system, the Alkaid Smart Cloud Brain focuses on the cloud, AI, and 5G, and provide all-domain scheduling to achieve cloud-edge-device collaboration and governance. +The QingTian architecture consists of data plane (software-hardware synergy) and management plane (Alkaid Smart Cloud Brain). The data plane innovates in five dimensions: simplified data center, diversified computing power, QingTian cards, ultra-fast engines, and simplified virtualization, to fully offload and accelerate compute, storage, networking, and security components. VMs, bare-metal servers, and containers can run together. As a distributed operating system, the Alkaid Smart Cloud Brain focuses on the cloud, AI, and 5G, and provides all-domain scheduling to achieve cloud-edge-device collaboration and governance. -.. |image1| image:: /_static/images/en-us_image_0000001517743452.png +.. |image1| image:: /_static/images/en-us_image_0000001647576704.png diff --git a/umn/source/clusters/creating_a_cluster/comparing_iptables_and_ipvs.rst b/umn/source/clusters/creating_a_cluster/comparing_iptables_and_ipvs.rst new file mode 100644 index 0000000..26a0083 --- /dev/null +++ b/umn/source/clusters/creating_a_cluster/comparing_iptables_and_ipvs.rst @@ -0,0 +1,43 @@ +:original_name: cce_10_0349.html + +.. _cce_10_0349: + +Comparing iptables and IPVS +=========================== + +kube-proxy is a key component of a Kubernetes cluster. It is used for load balancing and forwarding data between a Service and its backend pods. + +CCE supports the iptables and IPVS forwarding modes. + +- IPVS allows higher throughput and faster forwarding. 
This mode applies to scenarios where the cluster scale is large or the number of Services is large. +- iptables is the traditional kube-proxy mode. This mode applies to the scenario where the number of Services is small or there are a large number of short concurrent connections on the client. When there are more than 1,000 Services in the cluster, network delay may occur. + +Constraints +----------- + +- In a cluster using the IPVS proxy mode, if the ingress and Service use the same ELB load balancer, the ingress cannot be accessed from the nodes and containers in the cluster because kube-proxy mounts the LoadBalancer Service address to the ipvs-0 bridge. This bridge intercepts the traffic of the load balancer connected to the ingress. You are advised to use different ELB load balancers for the ingress and Service. +- In iptables mode, the ClusterIP cannot be pinged. In IPVS mode, the ClusterIP can be pinged. + +iptables +-------- + +iptables is a Linux kernel function for processing and filtering a large number of data packets. It allows flexible sequences of rules to be attached to various hooks in the packet processing pipeline. When iptables is used, kube-proxy implements NAT and load balancing in the NAT pre-routing hook. + +kube-proxy is an O(n) algorithm, in which *n* increases with the cluster scale. The cluster scale refers to the number of Services and backend pods. + +IPVS +---- + +IP Virtual Server (IPVS) is constructed on top of Netfilter and balances transport-layer loads as part of the Linux kernel. IPVS can direct requests for TCP- or UDP-based services to the real servers, and make services of the real servers appear as virtual services on a single IP address. + +In the IPVS mode, kube-proxy uses IPVS load balancing instead of iptables. IPVS is designed to balance loads for a large number of Services. It has a set of optimized APIs and uses optimized search algorithms instead of simply searching for rules from a list. + +The complexity of the connection process of IPVS-based kube-proxy is O(1). In most cases, the connection processing efficiency is irrelevant to the cluster scale. + +IPVS involves multiple load balancing algorithms, such as round-robin, shortest expected delay, least connections, and various hashing methods. However, iptables has only one algorithm for random selection. + +Compared with iptables, IPVS has the following advantages: + +#. Provides better scalability and performance for large clusters. +#. Supports better load balancing algorithms than iptables. +#. Supports functions including server health check and connection retries. diff --git a/umn/source/clusters/creating_a_cluster/creating_a_cluster.rst b/umn/source/clusters/creating_a_cluster/creating_a_cluster.rst new file mode 100644 index 0000000..4440dba --- /dev/null +++ b/umn/source/clusters/creating_a_cluster/creating_a_cluster.rst @@ -0,0 +1,118 @@ +:original_name: cce_10_0028.html + +.. _cce_10_0028: + +Creating a Cluster +================== + +On the CCE console, you can easily create Kubernetes clusters. After a cluster is created, the master node is hosted by CCE. You only need to create worker nodes. In this way, you can implement cost-effective O&M and efficient service deployment. + +Constraints +----------- + +- During the node creation, software packages are downloaded from OBS using the domain name. Use a private DNS server to resolve the OBS domain name, and configure the DNS server address of the subnet where the node resides with a private DNS server address. 
When you create a subnet, the private DNS server is used by default. If you change the subnet DNS, ensure that the DNS server in use can resolve the OBS domain name. +- You can create a maximum of 50 clusters in a single region. +- After a cluster is created, the following items cannot be changed: + + - Cluster type + - Number of master nodes in the cluster + - AZ of a master node + - Network configuration of the cluster, such as the VPC, subnet, container CIDR block, Service CIDR block, and kube-proxy (:ref:`request forwarding `) settings. + - Network model. For example, change **Tunnel network** to **VPC network**. + +Procedure +--------- + +#. Log in to the CCE console. + +#. Choose **Clusters**. On the displayed page, select the type of the cluster to be created and click **Create**. + +#. Specify cluster parameters. + + **Basic Settings** + + - **Cluster Name**: indicates the name of the cluster to be created. The cluster name must be unique under the same account. + + - **Cluster Version**: Select the Kubernetes version used by the cluster. + + - **Cluster Scale**: maximum number of nodes that can be managed by the cluster. + + - HA: distribution mode of master nodes. By default, master nodes are randomly distributed in different AZs to improve DR capabilities. + + You can also expand advanced settings and customize the master node distribution mode. The following two modes are supported: + + - **Random**: Master nodes are created in different AZs for DR. + - **Custom**: You can determine the location of each master node. + + - **Host**: Master nodes are created on different hosts in the same AZ. + - **Custom**: You can determine the location of each master node. + + **Network Settings** + + The cluster network settings cover nodes, containers, and Services. For details about the cluster networking and container network models, see :ref:`Overview `. + + - Network Model: CCE clusters support **VPC network** and **Tunnel network**. CCE Turbo clusters support **Cloud Native Network 2.0.**. For details, see :ref:`Overview `. + - **VPC**: Select the VPC to which the cluster belongs. If no VPC is available, click **Create VPC** to create one. The value cannot be changed after creation. + - **Master Node Subnet**: Select the subnet where the master node is deployed. If no subnet is available, click **Create Subnet** to create one. The subnet cannot be changed after creation. + - **Container CIDR Block** (CCE Cluster): Specify the CIDR block used by containers, which determines the maximum number of containers in the cluster. + - **Default Pod Subnet** (CCE Turbo Cluster): Select the subnet where the container is located. If no subnet is available, click **Create Subnet**. The pod subnet determines the maximum number of containers in the cluster. You can add pod subnets after creating the cluster. + - **Service CIDR Block**: CIDR block for Services used by containers in the same cluster to access each other. The value determines the maximum number of Services you can create. The value cannot be changed after creation. + + **Advanced Settings** + + - .. _cce_10_0028__li1895772174715: + + **Request Forwarding**: The IPVS and iptables modes are supported. For details, see :ref:`Comparing iptables and IPVS `. + + - **CPU Manager**: When enabled, CPU cores will be exclusively allocated to workload pods. For details, see :ref:`CPU Policy `. + + - Resource Tag: + + You can add resource tags to classify resources. 
+ + - **Certificate Authentication**: + + - **Default**: The X509-based authentication mode is enabled by default. X509 is a commonly used certificate format. + + - **Custom:** The cluster can identify users based on the header in the request body for authentication. + + Upload your **CA root certificate**, **client certificate**, and **private key** of the client certificate. + + .. caution:: + + - Upload a file **smaller than 1 MiB**. The CA certificate and client certificate can be in **.crt** or **.cer** format. The private key of the client certificate can only be uploaded **unencrypted**. + - The validity period of the client certificate must be longer than five years. + - The uploaded CA certificate is used for both the authentication proxy and the kube-apiserver aggregation layer configuration. **If the certificate is invalid, the cluster cannot be created**. + - Starting from v1.25, Kubernetes no longer supports certificate authentication generated using the SHA1WithRSA or ECDSAWithSHA1 algorithm. You are advised to use the SHA256 algorithm. + + - **Description**: The description cannot exceed 200 characters. + +#. Click **Next: Add-on Configuration**. + + **Domain Name Resolution**: + + - **Domain Name Resolution**: The :ref:`coredns ` add-on is installed by default to resolve domain names and connect to the cloud DNS server. + + **Container Storage**: The :ref:`everest ` add-on is installed by default to provide container storage based on CSI and connect to cloud storage services such as EVS. + + **Fault Detection**: The :ref:`npd ` add-on is installed by default to provide node fault detection and isolation for the cluster, helping you identify node problems in a timely manner. + + **Data Plane Logs** + + - Using ICAgent: + + A log collector provided by Application Operations Management (AOM), reporting logs to AOM and Log Tank Service (LTS) according to the log collection rules you configured. + + You can collect stdout logs as required. + + **Overload Control**: If enabled, concurrent requests are dynamically controlled based on the resource pressure of master nodes to keep them and the cluster available. For details, see :ref:`Cluster Overload Control `. + +#. After the parameters are specified, click **Next: Confirm**. The cluster resource list is displayed. Confirm the information and click **Submit**. + + It takes about 6 to 10 minutes to create a cluster. You can click **Back to Cluster List** to perform other operations on the cluster or click **Go to Cluster Events** to view the cluster details. + +Related Operations +------------------ + +- After creating a cluster, you can use the Kubernetes command line (CLI) tool kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. +- Add nodes to the cluster. For details, see :ref:`Creating a Node `. diff --git a/umn/source/clusters/creating_a_cluster/index.rst b/umn/source/clusters/creating_a_cluster/index.rst new file mode 100644 index 0000000..a2ee791 --- /dev/null +++ b/umn/source/clusters/creating_a_cluster/index.rst @@ -0,0 +1,18 @@ +:original_name: cce_10_0298.html + +.. _cce_10_0298: + +Creating a Cluster +================== + +- :ref:`CCE Turbo Clusters and CCE Clusters ` +- :ref:`Creating a Cluster ` +- :ref:`Comparing iptables and IPVS ` + +.. 
toctree:: + :maxdepth: 1 + :hidden: + + cce_turbo_clusters_and_cce_clusters + creating_a_cluster + comparing_iptables_and_ipvs diff --git a/umn/source/clusters/index.rst b/umn/source/clusters/index.rst index cf95826..5dc0afe 100644 --- a/umn/source/clusters/index.rst +++ b/umn/source/clusters/index.rst @@ -6,23 +6,17 @@ Clusters ======== - :ref:`Cluster Overview ` -- :ref:`Creating a CCE Turbo Cluster ` -- :ref:`Creating a CCE Cluster ` -- :ref:`Using kubectl to Run a Cluster ` +- :ref:`Creating a Cluster ` +- :ref:`Connecting to a Cluster ` - :ref:`Upgrading a Cluster ` - :ref:`Managing a Cluster ` -- :ref:`Obtaining a Cluster Certificate ` -- :ref:`Changing Cluster Scale ` .. toctree:: :maxdepth: 1 :hidden: cluster_overview/index - creating_a_cce_turbo_cluster - creating_a_cce_cluster - using_kubectl_to_run_a_cluster/index + creating_a_cluster/index + connecting_to_a_cluster/index upgrading_a_cluster/index managing_a_cluster/index - obtaining_a_cluster_certificate - changing_cluster_scale diff --git a/umn/source/clusters/changing_cluster_scale.rst b/umn/source/clusters/managing_a_cluster/changing_cluster_scale.rst similarity index 86% rename from umn/source/clusters/changing_cluster_scale.rst rename to umn/source/clusters/managing_a_cluster/changing_cluster_scale.rst index 2c291cd..35db840 100644 --- a/umn/source/clusters/changing_cluster_scale.rst +++ b/umn/source/clusters/managing_a_cluster/changing_cluster_scale.rst @@ -10,8 +10,8 @@ Scenario CCE allows you to change the number of nodes managed in a cluster. -Notes and Constraints ---------------------- +Constraints +----------- - This function is supported for clusters of v1.15 and later versions. - Starting from v1.15.11, the number of nodes in a cluster can be changed to 2000. The number of nodes in a single master node cannot be changed to 1000 or more. @@ -25,12 +25,12 @@ Procedure #. Log in to the CCE console. In the navigation pane, choose **Clusters**. -#. Click |image1| next to the cluster whose specifications need to be changed. +#. Click |image1| next to the cluster whose specifications need to be modified. -#. On the page displayed, select a new flavor as required. +#. On the page displayed, select a new cluster scale. -#. Click **OK**. +#. Click **Next** to confirm the specifications and click **OK**. You can click **Operation Records** in the upper left corner to view the cluster change history. The status changes from **Executing** to **Successful**, indicating that the cluster specifications are successfully changed. -.. |image1| image:: /_static/images/en-us_image_0000001518062664.png +.. |image1| image:: /_static/images/en-us_image_0000001647417520.png diff --git a/umn/source/clusters/managing_a_cluster/cluster_configuration_management.rst b/umn/source/clusters/managing_a_cluster/cluster_configuration_management.rst index 0f3b2f4..a73746f 100644 --- a/umn/source/clusters/managing_a_cluster/cluster_configuration_management.rst +++ b/umn/source/clusters/managing_a_cluster/cluster_configuration_management.rst @@ -10,19 +10,135 @@ Scenario CCE allows you to manage cluster parameters, through which you can let core components work under your very requirements. -Notes and Constraints ---------------------- +Constraints +----------- -This function is supported only for clusters of **v1.15 and later**. It is not displayed for versions earlier than v1.15. +This function is supported only in clusters of **v1.15 and later**. It is not displayed for versions earlier than v1.15. Procedure --------- #. Log in to the CCE console. 
In the navigation pane, choose **Clusters**. #. Click |image1| next to the target cluster. -#. On the **Manage Component** page on the right, change the values of the following Kubernetes parameters: +#. On the **Manage Components** page on the right, change the values of the Kubernetes parameters listed in the following table. - .. table:: **Table 1** Extended controller parameters + .. table:: **Table 1** kube-apiserver parameters + + +----------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | Value | + +========================================+====================================================================================================================================================================================================================================+===================================================================================================================================+ + | default-not-ready-toleration-seconds | Tolerance time when a node is in the **NotReady** state. | Default: 300s | + | | | | + | | By default, this tolerance is added to each pod. | | + +----------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ + | default-unreachable-toleration-seconds | Tolerance time when a node is in the **unreachable** state. | Default: 300s | + | | | | + | | By default, this tolerance is added to each pod. | | + +----------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ + | max-mutating-requests-inflight | Maximum number of concurrent mutating requests. When the value of this parameter is exceeded, the server rejects requests. | Manual configuration is no longer supported since cluster v1.21. The value is automatically specified based on the cluster scale. | + | | | | + | | The value **0** indicates no limitation. This parameter is related to the cluster scale. You are advised not to change the value. | - **200** for clusters with 50 or 200 nodes | + | | | - **500** for clusters with 1,000 nodes | + | | | - **1000** for clusters with 2,000 nodes | + +----------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ + | max-requests-inflight | Maximum number of concurrent non-mutating requests. 
When the value of this parameter is exceeded, the server rejects requests. | Manual configuration is no longer supported since cluster v1.21. The value is automatically specified based on the cluster scale. | + | | | | + | | The value **0** indicates no limitation. This parameter is related to the cluster scale. You are advised not to change the value. | - **400** for clusters with 50 or 200 nodes | + | | | - **1000** for clusters with 1,000 nodes | + | | | - **2000** for clusters with 2,000 nodes | + +----------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ + | service-node-port-range | NodePort port range. After changing the value, go to the security group page and change the TCP/UDP port range of node security groups 30000 to 32767. Otherwise, ports other than the default port cannot be accessed externally. | Default: | + | | | | + | | | 30000-32767 | + | | | | + | | | Value range: | + | | | | + | | | Min > 20105 | + | | | | + | | | Max < 32768 | + +----------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ + | support-overload | Cluster overload control. If enabled, concurrent requests are dynamically controlled based on the resource pressure of master nodes to keep them and the cluster available. | - false: Overload control is disabled. | + | | | - true: Overload control is enabled. | + | | This parameter is supported only by clusters of v1.23 or later. | | + +----------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ + + .. table:: **Table 2** kube-scheduler parameters + + +-----------------------+------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | Parameter | Description | Value | + +=======================+==================================================================+=========================================================================================+ + | kube-api-qps | Query per second (QPS) to use while talking with kube-apiserver. | - If the number of nodes in a cluster is less than 1000, the default value is **100**. | + | | | - If a cluster contains 1000 or more nodes, the default value is **200**. | + +-----------------------+------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | kube-api-burst | Burst to use while talking with kube-apiserver. | - If the number of nodes in a cluster is less than 1000, the default value is **100**. 
| + | | | - If a cluster contains 1000 or more nodes, the default value is **200**. | + +-----------------------+------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + + .. table:: **Table 3** kube-controller-manager parameters + + +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | Parameter | Description | Value | + +=======================================+========================================================================================================================================================================+=========================================================================================+ + | concurrent-deployment-syncs | Number of Deployments that are allowed to synchronize concurrently. | Default: 5 | + +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | concurrent-endpoint-syncs | Number of endpoints that are allowed to synchronize concurrently. | Default: 5 | + +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | concurrent-gc-syncs | Number of garbage collector workers that are allowed to synchronize concurrently. | Default: 20 | + +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | concurrent-job-syncs | Number of jobs that can be synchronized at the same time. | Default: 5 | + +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | concurrent-namespace-syncs | Number of namespaces that are allowed to synchronize concurrently. | Default: 10 | + +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | concurrent-replicaset-syncs | Number of ReplicaSets that are allowed to synchronize concurrently. 
| Default: 5 | + +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | concurrent-resource-quota-syncs | Number of resource quotas that are allowed to synchronize concurrently. | Default: 5 | + +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | concurrent-service-syncs | Number of Services that are allowed to synchronize concurrently. | Default: 10 | + +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | concurrent-serviceaccount-token-syncs | Number of service account tokens that are allowed to synchronize concurrently. | Default: 5 | + +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | concurrent-ttl-after-finished-syncs | Number of TTL-after-finished controller workers that are allowed to synchronize concurrently. | Default: 5 | + +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | concurrent-rc-syncs | Number of replication controllers that are allowed to synchronize concurrently. | Default: 5 | + | | | | + | | .. note:: | | + | | | | + | | This parameter is used only in clusters of v1.21 to v1.23. In clusters of v1.25 and later, this parameter is deprecated (officially deprecated from v1.25.3-r0 on). | | + +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | horizontal-pod-autoscaler-sync-period | How often HPA audits metrics in a cluster. | Default: 15 seconds | + +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | kube-api-qps | Query per second (QPS) to use while talking with kube-apiserver. | - If the number of nodes in a cluster is less than 1000, the default value is **100**. | + | | | - If a cluster contains 1000 or more nodes, the default value is **200**. 
| + +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | kube-api-burst | Burst to use while talking with kube-apiserver. | - If the number of nodes in a cluster is less than 1000, the default value is **100**. | + | | | - If a cluster contains 1000 or more nodes, the default value is **200**. | + +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + | terminated-pod-gc-threshold | Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. | Default: 1000 | + | | | | + | | If <= 0, the terminated pod garbage collector is disabled. | | + +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------+ + + .. table:: **Table 4** eni parameters (supported only by CCE Turbo clusters) + + +----------------------------+------------------------------------------------------------------------------------------------------+-----------------------+ + | Parameter | Description | Value | + +============================+======================================================================================================+=======================+ + | nic-minimum-target | Minimum number of ENIs bound to a node at the cluster level | Default: 10 | + +----------------------------+------------------------------------------------------------------------------------------------------+-----------------------+ + | nic-maximum-target | Maximum number of ENIs pre-bound to a node at the cluster level | Default: 0 | + +----------------------------+------------------------------------------------------------------------------------------------------+-----------------------+ + | nic-warm-target | Number of ENIs pre-bound to a node at the cluster level | Default: 2 | + +----------------------------+------------------------------------------------------------------------------------------------------+-----------------------+ + | nic-max-above-warm-target | Reclaim number of ENIs pre-bound to a node at the cluster level | Default: 2 | + +----------------------------+------------------------------------------------------------------------------------------------------+-----------------------+ + | prebound-subeni-percentage | Low threshold of the number of bound ENIs: High threshold of the number of bound ENIs | Default: 0:0 | + | | | | + | | .. note:: | | + | | | | + | | This parameter is being discarded. Use the dynamic pre-binding parameters of the other four ENIs. | | + +----------------------------+------------------------------------------------------------------------------------------------------+-----------------------+ + + .. 
table:: **Table 5** Extended controller configuration parameters (supported only by clusters of v1.21 and later) +-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ | Parameter | Description | Value | @@ -33,119 +149,6 @@ Procedure | | - **true**: auto creation enabled For details about the resource quota defaults, see :ref:`Setting a Resource Quota `. | | +-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - .. table:: **Table 2** kube-apiserver parameters - - +----------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | Value | - +========================================+===============================================================================================================================================================================================================================================+===================================================================================================================================+ - | default-not-ready-toleration-seconds | notReady tolerance time, in seconds. NoExecute that is added by default to every pod that does not already have such a toleration. | Default: 300s | - +----------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ - | default-unreachable-toleration-seconds | unreachable tolerance time, in seconds. NoExecute that is added by default to every pod that does not already have such a toleration. | Default: 300s | - +----------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ - | max-mutating-requests-inflight | Maximum number of concurrent mutating requests. When the value of this parameter is exceeded, the server rejects requests. | Manual configuration is no longer supported since cluster v1.21. The value is automatically specified based on the cluster scale. | - | | | | - | | The value **0** indicates no limitation. 
| - **200** for clusters with 50 or 200 nodes | - | | | - **500** for clusters with 1,000 nodes | - | | | - **1000** for clusters with 2,000 nodes | - +----------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ - | max-requests-inflight | Maximum number of concurrent non-mutating requests. When the value of this parameter is exceeded, the server rejects requests. | Manual configuration is no longer supported since cluster v1.21. The value is automatically specified based on the cluster scale. | - | | | | - | | The value **0** indicates no limitation. | - **400** for clusters with 50 or 200 nodes | - | | | - **1000** for clusters with 1,000 nodes | - | | | - **2000** for clusters with 2,000 nodes | - +----------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ - | service-node-port-range | NodePort port range. After changing the value, you need to go to the security group page to change the TCP/UDP port range of node security groups 30000 to 32767. Otherwise, ports other than the default port cannot be accessed externally. | Default: | - | | | | - | | | 30000-32767 | - | | | | - | | | Options: | - | | | | - | | | min>20105 | - | | | | - | | | max<32768 | - +----------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ - | support-overload | Cluster overload control. If enabled, concurrent requests are dynamically controlled based on the resource pressure of master nodes to keep them and the cluster available. | - false: Overload control is disabled. | - | | | - true: Overload control is enabled. | - +----------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ - - .. 
table:: **Table 3** kube-controller-manager parameters - - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | Parameter | Description | Value | - +=======================================+========================================================================================================================================================================+=======================+ - | concurrent-deployment-syncs | Number of Deployments that are allowed to synchronize concurrently. | Default: 5 | - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | concurrent-endpoint-syncs | Number of endpoints that are allowed to synchronize concurrently. | Default: 5 | - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | concurrent-gc-syncs | Number of garbage collector workers that are allowed to synchronize concurrently. | Default: 20 | - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | concurrent-job-syncs | Number of jobs that can be synchronized at the same time. | Default: 5 | - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | concurrent-namespace-syncs | Number of namespaces that are allowed to synchronize concurrently. | Default: 10 | - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | concurrent-replicaset-syncs | Number of ReplicaSets that are allowed to synchronize concurrently. | Default: 5 | - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | concurrent-resource-quota-syncs | Number of resource quotas that are allowed to synchronize concurrently. | Default: 5 | - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | concurrent-service-syncs | Number of Services that are allowed to synchronize concurrently. | Default: 10 | - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | concurrent-serviceaccount-token-syncs | Number of service account tokens that are allowed to synchronize concurrently. 
| Default: 5 | - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | concurrent-ttl-after-finished-syncs | Number of TTL-after-finished controller workers that are allowed to synchronize concurrently. | Default: 5 | - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | concurrent_rc_syncs | Number of replication controllers that are allowed to synchronize concurrently. | Default: 5 | - | | | | - | | .. note:: | | - | | | | - | | This parameter is used only in clusters of v1.19 or earlier. | | - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | concurrent-rc-syncs | Number of replication controllers that are allowed to synchronize concurrently. | Default: 5 | - | | | | - | | .. note:: | | - | | | | - | | This parameter is used only in clusters of v1.21 to v1.23. In clusters of v1.25 and later, this parameter is deprecated (officially deprecated from v1.25.3-r0 on). | | - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | horizontal-pod-autoscaler-sync-period | How often HPA audits metrics in a cluster. | Default: 15 seconds | - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | kube-api-qps | Query per second (QPS) to use while talking with kube-apiserver. | Default: 100 | - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | kube-api-burst | Burst to use while talking with kube-apiserver. | Default: 100 | - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | terminated-pod-gc-threshold | Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. | Default: 1000 | - | | | | - | | If <= 0, the terminated pod garbage collector is disabled. | | - +---------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - - .. 
table:: **Table 4** kube-scheduler parameters - - +----------------+------------------------------------------------------------------+--------------+ - | Parameter | Description | Value | - +================+==================================================================+==============+ - | kube-api-qps | Query per second (QPS) to use while talking with kube-apiserver. | Default: 100 | - +----------------+------------------------------------------------------------------+--------------+ - | kube-api-burst | Burst to use while talking with kube-apiserver. | Default: 100 | - +----------------+------------------------------------------------------------------+--------------+ - - .. table:: **Table 5** eni parameters (supported only by CCE Turbo clusters) - - +----------------------------+----------------------------------------------------------------------------------------------+-----------------------+ - | Parameter | Description | Value | - +============================+==============================================================================================+=======================+ - | nic-minimum-target | Minimum number of ENIs bound to a node at the cluster level | Default: 10 | - +----------------------------+----------------------------------------------------------------------------------------------+-----------------------+ - | nic-maximum-target | Maximum number of ENIs pre-bound to a node at the cluster level | Default: 0 | - +----------------------------+----------------------------------------------------------------------------------------------+-----------------------+ - | nic-warm-target | Number of ENIs pre-bound to a node at the cluster level | Default: 2 | - +----------------------------+----------------------------------------------------------------------------------------------+-----------------------+ - | nic-max-above-warm-target | Reclaim number of ENIs pre-bound to a node at the cluster level | Default: 2 | - +----------------------------+----------------------------------------------------------------------------------------------+-----------------------+ - | prebound-subeni-percentage | Low threshold of the number of bound ENIs : High threshold of the number of bound ENIs | Default: 0:0 | - | | | | - | | .. note:: | | - | | | | - | | This parameter is discarded. Use the other four dynamic preheating parameters of the ENI. | | - +----------------------------+----------------------------------------------------------------------------------------------+-----------------------+ - #. Click **OK**. References @@ -155,4 +158,4 @@ References - `kube-controller-manager `__ - `kube-scheduler `__ -.. |image1| image:: /_static/images/en-us_image_0000001517903048.png +.. |image1| image:: /_static/images/en-us_image_0000001695896409.png diff --git a/umn/source/clusters/managing_a_cluster/cluster_overload_control.rst b/umn/source/clusters/managing_a_cluster/cluster_overload_control.rst index e3b1094..a3d8023 100644 --- a/umn/source/clusters/managing_a_cluster/cluster_overload_control.rst +++ b/umn/source/clusters/managing_a_cluster/cluster_overload_control.rst @@ -8,10 +8,10 @@ Cluster Overload Control Scenario -------- -If overload control is enabled, concurrent requests are dynamically controlled based on the resource pressure of master nodes to keep them and the cluster available. +If enabled, concurrent requests are dynamically controlled based on the resource pressure of master nodes to keep them and the cluster available. 
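Because overload control is available only on newer clusters (see the version constraint below), it can be worth confirming the server version before trying to enable the feature. The following is a minimal, hedged sketch: it assumes kubectl access to the target cluster is already configured, and the command shown is standard kubectl rather than anything specific to this console workflow.

.. code-block:: console

   # Overload control requires a cluster of v1.23 or later.
   # Check the server (master) version reported by kube-apiserver; the exact output
   # format depends on the kubectl release, but it always includes a "Server Version" line.
   $ kubectl version

If the reported server version is earlier than v1.23, upgrade the cluster before enabling overload control.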
-Notes and Constraints ---------------------- +Constraints +----------- The cluster version must be 1.23 or later. diff --git a/umn/source/clusters/managing_a_cluster/deleting_a_cluster.rst b/umn/source/clusters/managing_a_cluster/deleting_a_cluster.rst index e1bae06..2e76c47 100644 --- a/umn/source/clusters/managing_a_cluster/deleting_a_cluster.rst +++ b/umn/source/clusters/managing_a_cluster/deleting_a_cluster.rst @@ -5,11 +5,6 @@ Deleting a Cluster ================== -Scenario --------- - -This section describes how to delete a cluster. - Precautions ----------- @@ -21,32 +16,35 @@ Precautions - ELB load balancers associated with Services and ingresses (only the automatically created load balancers are deleted); - Manually created cloud storage resources associated with PVs or imported cloud storage resources (only the cloud storage resources automatically created by PVCs are deleted) -- A hibernated cluster cannot be deleted. Wake up the cluster and try again. +- If you delete a cluster that is not running (for example, unavailable), associated resources, such as storage and networking resources, will remain. -- If a cluster whose status is Unavailable is deleted, some storage resources of the cluster may need to be manually deleted. -Procedure ---------- +Deleting a Cluster +------------------ + +.. important:: + + A hibernated cluster cannot be deleted. Wake up the cluster and try again. #. Log in to the CCE console. In the navigation pane, choose **Clusters**. #. Click |image1| next to the cluster to be deleted. -#. In the displayed dialog box, select the resources to be released. +#. In the displayed **Delete Cluster** dialog box, select the resources to be released. - - Delete cloud storage resources attached to workloads in the cluster. + - Delete cloud storage resources associated with workloads in the cluster. .. note:: - Before you delete the PVCs and volumes, pay attention to the following rules: + When deleting underlying cloud storage resources bound to storage volumes in a cluster, pay attention to following constraints: - - The underlying storage resources are deleted according to the reclaim policy you defined. - - If there are a large number of files (more than 1,000) in the OBS bucket, manually clear the files and then delete the cluster. + - The underlying storage resources are deleted according to the reclamation policy you defined for the storage volumes. For example, if the reclamation policy of storage volumes is **Retain**, the underlying storage resources will be retained after the cluster is deleted. + - If there are more than 1,000 files in the OBS bucket, manually clear the files and then delete the cluster. - - Delete networking resources, such as load balancers in a cluster. (Only automatically created load balancers can be deleted.) + - Delete network resources such as load balancers in a cluster. (Only automatically created load balancers will be deleted). #. Click **Yes** to start deleting the cluster. The delete operation takes 1 to 3 minutes to complete. -.. |image1| image:: /_static/images/en-us_image_0000001569023085.png +.. 
|image1| image:: /_static/images/en-us_image_0000001695896837.png diff --git a/umn/source/clusters/managing_a_cluster/hibernating_and_waking_up_a_cluster.rst b/umn/source/clusters/managing_a_cluster/hibernating_and_waking_up_a_cluster.rst index 4eb9494..af2d638 100644 --- a/umn/source/clusters/managing_a_cluster/hibernating_and_waking_up_a_cluster.rst +++ b/umn/source/clusters/managing_a_cluster/hibernating_and_waking_up_a_cluster.rst @@ -14,8 +14,8 @@ After a cluster is hibernated, resources such as workloads cannot be created or A hibernated cluster can be quickly woken up and used normally. -Notes and Constraints ---------------------- +Constraints +----------- During cluster wakeup, the master node may fail to be started due to insufficient resources. As a result, the cluster fails to be woken up. Wait for a while and wake up the cluster again. @@ -33,5 +33,5 @@ Waking Up a Cluster #. Click |image2| next to the cluster to be woken up. #. When the cluster status changes from **Waking up** to **Running**, the cluster is woken up. It takes about 3 to 5 minutes to wake up the cluster. -.. |image1| image:: /_static/images/en-us_image_0000001517743460.png -.. |image2| image:: /_static/images/en-us_image_0000001569182589.png +.. |image1| image:: /_static/images/en-us_image_0000001695896449.png +.. |image2| image:: /_static/images/en-us_image_0000001695737165.png diff --git a/umn/source/clusters/managing_a_cluster/index.rst b/umn/source/clusters/managing_a_cluster/index.rst index 68d0b13..1decea2 100644 --- a/umn/source/clusters/managing_a_cluster/index.rst +++ b/umn/source/clusters/managing_a_cluster/index.rst @@ -6,15 +6,17 @@ Managing a Cluster ================== - :ref:`Cluster Configuration Management ` +- :ref:`Cluster Overload Control ` +- :ref:`Changing Cluster Scale ` - :ref:`Deleting a Cluster ` - :ref:`Hibernating and Waking Up a Cluster ` -- :ref:`Cluster Overload Control ` .. toctree:: :maxdepth: 1 :hidden: cluster_configuration_management + cluster_overload_control + changing_cluster_scale deleting_a_cluster hibernating_and_waking_up_a_cluster - cluster_overload_control diff --git a/umn/source/clusters/obtaining_a_cluster_certificate.rst b/umn/source/clusters/obtaining_a_cluster_certificate.rst deleted file mode 100644 index d98cf05..0000000 --- a/umn/source/clusters/obtaining_a_cluster_certificate.rst +++ /dev/null @@ -1,31 +0,0 @@ -:original_name: cce_10_0175.html - -.. _cce_10_0175: - -Obtaining a Cluster Certificate -=============================== - -Scenario --------- - -This section describes how to obtain the cluster certificate from the console and use it to access Kubernetes clusters. - -Procedure ---------- - -#. Log in to the CCE console and access the cluster console. - -#. Choose **Cluster Information** from the navigation pane and click **Download** next to **Authentication Mode** in the **Connection Information** area. - -#. In the **Download X.509 Certificate** dialog box displayed, select the certificate expiration time and download the X.509 certificate of the cluster as prompted. - - - .. figure:: /_static/images/en-us_image_0000001568822637.png - :alt: **Figure 1** Downloading a certificate - - **Figure 1** Downloading a certificate - - .. important:: - - - The downloaded certificate contains three files: **client.key**, **client.crt**, and **ca.crt**. Keep these files secure. - - Certificates are not required for mutual access between containers in a cluster. 
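For reference, once the certificate package described above (**client.key**, **client.crt**, and **ca.crt**) has been downloaded and unpacked, the files can be wired into a kubeconfig entry so that kubectl can authenticate to the cluster. The sketch below is only illustrative: **my-cce-cluster**, **my-cce-user**, **my-cce-context**, and **<api-server-address>** are placeholder names that do not come from this document, and the real API server address and port must be taken from the cluster's connection information on the console.

.. code-block:: console

   # Register the cluster endpoint and its CA certificate (placeholder address).
   $ kubectl config set-cluster my-cce-cluster \
       --server=https://<api-server-address> \
       --certificate-authority=ca.crt \
       --embed-certs=true

   # Register the client certificate and private key from the downloaded package.
   $ kubectl config set-credentials my-cce-user \
       --client-certificate=client.crt \
       --client-key=client.key \
       --embed-certs=true

   # Bind the cluster and the user into a context and switch to it.
   $ kubectl config set-context my-cce-context \
       --cluster=my-cce-cluster \
       --user=my-cce-user
   $ kubectl config use-context my-cce-context

   # Verify connectivity.
   $ kubectl get nodes

Keep the generated kubeconfig as secure as the certificate files themselves, since it embeds the client key.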
diff --git a/umn/source/clusters/upgrading_a_cluster/before_you_start.rst b/umn/source/clusters/upgrading_a_cluster/before_you_start.rst index f64e4e0..443df1d 100644 --- a/umn/source/clusters/upgrading_a_cluster/before_you_start.rst +++ b/umn/source/clusters/upgrading_a_cluster/before_you_start.rst @@ -7,56 +7,297 @@ Before You Start Before the upgrade, you can check whether your cluster can be upgraded and which versions are available on the CCE console. For details, see :ref:`Upgrade Overview `. -Notes ------ +.. _cce_10_0302__section16520163082115: -- **Upgraded clusters cannot be rolled back. Therefore, perform the upgrade during off-peak hours to minimize the impact on your services.** -- Do not **shut down, restart, or delete nodes** during cluster upgrade. Otherwise, the upgrade fails. -- Before upgrading a cluster, **disable auto scaling policies** to prevent node scaling during the upgrade. Otherwise, the upgrade fails. -- If you locally modify the configuration of a cluster node, the cluster upgrade may fail or the configuration may be lost after the upgrade. Therefore, modify the configurations on the CCE console (cluster or node pool list page) so that they will be automatically inherited during the upgrade. -- During the cluster upgrade, the running workload services will not be interrupted, but access to the API server will be temporarily interrupted. -- Before upgrading the cluster, check whether the cluster is healthy. -- To ensure data security, you are advised to back up data before upgrading the cluster. During the upgrade, you are not advised to perform any operations on the cluster. -- During the cluster upgrade, the **node.kubernetes.io/upgrade** taint (the effect is **NoSchedule**) is added to the node. After the cluster upgrade is complete, the taint is removed. Do not add taints with the same key name on the node. Even if the taints have different effects, they may be deleted by the system by mistake after the upgrade. +Precautions +----------- + +Before upgrading a cluster, pay attention to the following points: + +- **Upgrading a cluster cannot be rolled back. Perform an upgrade at a proper time to minimize the impact on your services.** To ensure data security, you back up your data before an upgrade. +- Before upgrading a cluster, **ensure that no** :ref:`high-risk operations ` **are performed in the cluster**. Otherwise, the cluster upgrade may fail or the configuration may be lost after the upgrade. Common high-risk operations include modifying cluster node configurations locally and modifying the configurations of the listeners managed by CCE on the ELB console. Instead, modify configurations on the CCE console so that the modifications can be automatically inherited during the upgrade. +- Before upgrading a cluster, ensure the cluster is working properly. +- Before upgrading a cluster, learn about the features and differences of each cluster version in :ref:`Kubernetes Release Notes ` to prevent exceptions due to the use of an incompatible cluster version. For example, check whether any APIs deprecated in the target version are used in the cluster. Otherwise, calling the APIs may fail after the upgrade. For details, see :ref:`Deprecated APIs `. + +During a cluster upgrade, pay attention to the following points that may affect your services: + +- During a cluster upgrade, do not perform any operation on the cluster. Do not **stop, restart, or delete nodes** during cluster upgrade. Otherwise, the upgrade will fail. 
+- During a cluster upgrade, the running workloads will not be interrupted, but access to the API server will be temporarily interrupted. +- During a cluster upgrade, the **node.kubernetes.io/upgrade** taint (equivalent to **NoSchedule**) will be added to the nodes in the cluster. The taint will be removed after the cluster is upgraded. Do not add taints with the same key name on a node. Even if the taints have different effects, they may be deleted by the system by mistake after the upgrade. Constraints ----------- -- Currently, only CCE clusters consisting of VM nodes and CCE Turbo clusters can be upgraded. +- CCE clusters and CCE Turbo clusters with VM nodes can be upgraded. +- If there are any nodes created using a private image, the cluster cannot be upgraded. +- After the cluster is upgraded, if the containerd vulnerability of the container engine is fixed in :ref:`Kubernetes Release Notes `, manually restart containerd for the upgrade to take effect. The same applies to the existing pods. +- If you mount the **docker.sock** file on a node to a pod using the hostPath mode, that is, the Docker in Docker scenario, Docker will restart during the upgrade, but the **docker.sock** file does not change. As a result, your services may malfunction. You are advised to mount the **docker.sock** file by mounting the directory. +- When clusters using the tunnel network model are upgraded to v1.19.16-r4, v1.21.7-r0, v1.23.5-r0, v1.25.1-r0, or later, the SNAT rule whose destination address is the container CIDR block but the source address is not the container CIDR block will be removed. If you have configured VPC routes to directly access all pods outside the cluster, only the pods on the corresponding nodes can be directly accessed after the upgrade. -- Currently, clusters using private images cannot be upgraded. +.. _cce_10_0302__section1143094820148: -- After the cluster is upgraded, if the containerd vulnerability of the container engine is fixed in :ref:`Cluster Version Release Notes `, you need to manually restart containerd for the upgrade to take effect. The same applies to the existing pods. +Deprecated APIs +--------------- -- If initContainer or Istio is used in the in-place upgrade of a cluster of v1.15, pay attention to the following restrictions: +With the evolution of Kubernetes APIs, APIs are periodically reorganized or upgraded, and old APIs are deprecated and finally deleted. The following tables list the deprecated APIs in each Kubernetes community version. For details about more deprecated APIs, see `Deprecated API Migration Guide `__. - In kubelet 1.16 and later versions, `QoS classes `__ are different from those in earlier versions. In kubelet 1.15 and earlier versions, only containers in **spec.containers** are counted. In kubelet 1.16 and later versions, containers in both **spec.containers** and **spec.initContainers** are counted. The QoS class of a pod will change after the upgrade. As a result, the container in the pod restarts. You are advised to modify the QoS class of the service container before the upgrade to avoid this problem. For details, see :ref:`Table 1 `. +- :ref:`APIs Deprecated in Kubernetes v1.25 ` +- :ref:`APIs Deprecated in Kubernetes v1.22 ` +- :ref:`APIs Deprecated in Kubernetes v1.16 ` - .. _cce_10_0302__table10713231143911: +.. note:: - .. table:: **Table 1** QoS class changes before and after the upgrade + When an API is deprecated, the existing resources are not affected. 
However, when you create or edit the resources, the API version will be intercepted. - +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ - | Init Container (Calculated Based on spec.initContainers) | Service Container (Calculated Based on spec.containers) | Pod (Calculated Based on spec.containers and spec.initContainers) | Impacted or Not | - +==========================================================+=========================================================+===================================================================+=================+ - | Guaranteed | Besteffort | Burstable | Yes | - +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ - | Guaranteed | Burstable | Burstable | No | - +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ - | Guaranteed | Guaranteed | Guaranteed | No | - +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ - | Besteffort | Besteffort | Besteffort | No | - +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ - | Besteffort | Burstable | Burstable | No | - +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ - | Besteffort | Guaranteed | Burstable | Yes | - +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ - | Burstable | Besteffort | Burstable | Yes | - +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ - | Burstable | Burstable | Burstable | No | - +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ - | Burstable | Guaranteed | Burstable | Yes | - +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ +.. _cce_10_0302__table555192311179: + +.. 
table:: **Table 1** Deprecated APIs in Kubernetes v1.25 + + +-------------------------+--------------------------+-----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Resource Name | Deprecated API Version | Substitute API Version | Change Description | + +=========================+==========================+=====================================================+===============================================================================================================================================================================================================================================================================================================+ + | CronJob | batch/v1beta1 | batch/v1 | None | + | | | | | + | | | (This API is available since v1.21.) | | + +-------------------------+--------------------------+-----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | EndpointSlice | discovery.k8s.io/v1beta1 | discovery.k8s.io/v1 | Pay attention to the following changes: | + | | | | | + | | | (This API is available since v1.21.) | - In each endpoint, the **topology["kubernetes.io/hostname"]** field has been deprecated. Replace it with the **nodeName** field. | + | | | | - In each endpoint, the **topology["kubernetes.io/zone"]** field has been deprecated. Replace it with the **zone** field. | + | | | | - The **topology** field is replaced with **deprecatedTopology** and cannot be written in v1. | + +-------------------------+--------------------------+-----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Event | events.k8s.io/v1beta1 | events.k8s.io/v1 | Pay attention to the following changes: | + | | | | | + | | | (This API is available since v1.19.) | - The **type** field can only be set to **Normal** or **Warning**. | + | | | | - The **involvedObject** field is renamed **regarding**. | + | | | | - The **action**, **reason**, **reportingController**, and **reportingInstance** fields are mandatory for creating a new **events.k8s.io/v1** event. | + | | | | - Use **eventTime** instead of the deprecated **firstTimestamp** field (this field has been renamed **deprecatedFirstTimestamp** and is not allowed to appear in the new **events.k8s.io/v1** event object). | + | | | | - Use **series.lastObservedTime** instead of the deprecated **lastTimestamp** field (this field has been renamed **deprecatedLastTimestamp** and is not allowed to appear in the new **events.k8s.io/v1** event object). 
| + | | | | - Use **series.count** instead of the deprecated **count** field (this field has been renamed **deprecatedCount** and is not allowed to appear in the new **events.k8s.io/v1** event object). | + | | | | - Use **reportingController** instead of the deprecated **source.component** field (this field has been renamed **deprecatedSource.component** and is not allowed to appear in the new **events.k8s.io/v1** event object). | + | | | | - Use **reportingInstance** instead of the deprecated **source.host** field (this field has been renamed **deprecatedSource.host** and is not allowed to appear in the new **events.k8s.io/v1** event object). | + +-------------------------+--------------------------+-----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | HorizontalPodAutoscaler | autoscaling/v2beta1 | autoscaling/v2 | None | + | | | | | + | | | (This API is available since v1.23.) | | + +-------------------------+--------------------------+-----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PodDisruptionBudget | policy/v1beta1 | policy/v1 | If **spec.selector** is set to null (**{}**) in **PodDisruptionBudget** of **policy/v1**, all pods in the namespace are selected. (In **policy/v1beta1**, an empty **spec.selector** means that no pod will be selected.) If **spec.selector** is not specified, pod will be selected in neither API version. | + | | | | | + | | | (This API is available since v1.21.) | | + +-------------------------+--------------------------+-----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PodSecurityPolicy | policy/v1beta1 | None | Since v1.25, the PodSecurityPolicy resource no longer provides APIs of the **policy/v1beta1** version, and the PodSecurityPolicy access controller is deleted. | + | | | | | + | | | | Replace it with :ref:`Configuring Pod Security Admission `. | + +-------------------------+--------------------------+-----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | RuntimeClass | node.k8s.io/v1beta1 | node.k8s.io/v1 (This API is available since v1.20.) 
| None | + +-------------------------+--------------------------+-----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +.. _cce_10_0302__table133341432194513: + +.. table:: **Table 2** Deprecated APIs in Kubernetes v1.22 + + +--------------------------------+--------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Resource Name | Deprecated API Version | Substitute API Version | Change Description | + +================================+======================================+======================================+========================================================================================================================================================================================================================================================================================================================================+ + | MutatingWebhookConfiguration | admissionregistration.k8s.io/v1beta1 | admissionregistration.k8s.io/v1 | - The default value of **webhooks[*].failurePolicy** is changed from **Ignore** to **Fail** in v1. | + | | | | - The default value of **webhooks[*].matchPolicy** is changed from **Exact** to **Equivalent** in v1. | + | ValidatingWebhookConfiguration | | (This API is available since v1.16.) | - The default value of **webhooks[*].timeoutSeconds** is changed from **30s** to **10s** in v1. | + | | | | - The default value of **webhooks[*].sideEffects** is deleted, and this field must be specified. In v1, the value can only be **None** or **NoneOnDryRun**. | + | | | | - The default value of **webhooks[*].admissionReviewVersions** is deleted. In v1, this field must be specified. (**AdmissionReview** v1 and v1beta1 are supported.) | + | | | | - **webhooks[*].name** must be unique in the list of objects created through **admissionregistration.k8s.io/v1**. | + +--------------------------------+--------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | CustomResourceDefinition | apiextensions.k8s.io/v1beta1 | apiextensions/v1 | - The default value of **spec.scope** is no longer **Namespaced**. This field must be explicitly specified. | + | | | | - **spec.version** is deleted from v1. Use **spec.versions** instead. | + | | | (This API is available since v1.16.) | - **spec.validation** is deleted from v1. Use **spec.versions[*].schema** instead. | + | | | | - **spec.subresources** is deleted from v1. Use **spec.versions[*].subresources** instead. | + | | | | - **spec.additionalPrinterColumns** is deleted from v1. Use **spec.versions[*].additionalPrinterColumns** instead. 
| + | | | | - **spec.conversion.webhookClientConfig** is moved to **spec.conversion.webhook.clientConfig** in v1. | + | | | | - **spec.conversion.conversionReviewVersions** is moved to **spec.conversion.webhook.conversionReviewVersions** in v1. | + | | | | | + | | | | - **spec.versions[*].schema.openAPIV3Schema** becomes a mandatory field when the **CustomResourceDefinition** object of the v1 version is created, and its value must be a `structural schema `__. | + | | | | - **spec.preserveUnknownFields: true** cannot be specified when the **CustomResourceDefinition** object of the v1 version is created. This configuration must be specified using **x-kubernetes-preserve-unknown-fields: true** in the schema definition. | + | | | | - In v1, the **JSONPath** field in the **additionalPrinterColumns** entry is renamed **jsonPath** (patch `#66531 `__). | + +--------------------------------+--------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | APIService | apiregistration/v1beta1 | apiregistration.k8s.io/v1 | None | + | | | | | + | | | (This API is available since v1.10.) | | + +--------------------------------+--------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | TokenReview | authentication.k8s.io/v1beta1 | authentication.k8s.io/v1 | None | + | | | | | + | | | (This API is available since v1.6.) | | + +--------------------------------+--------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | LocalSubjectAccessReview | authorization.k8s.io/v1beta1 | authorization.k8s.io/v1 | **spec.group** was renamed **spec.groups** in v1 (patch `#32709 `__). | + | | | | | + | SelfSubjectAccessReview | | (This API is available since v1.16.) | | + | | | | | + | SubjectAccessReview | | | | + | | | | | + | SelfSubjectRulesReview | | | | + +--------------------------------+--------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | CertificateSigningRequest | certificates.k8s.io/v1beta1 | certificates.k8s.io/v1 | Pay attention to the following changes in **certificates.k8s.io/v1**: | + | | | | | + | | | (This API is available since v1.19.) 
| - For an API client that requests a certificate: | + | | | | | + | | | | - **spec.signerName** becomes a mandatory field (see `Known Kubernetes Signers `__). In addition, the **certificates.k8s.io/v1** API cannot be used to create requests whose signer is **kubernetes.io/legacy-unknown**. | + | | | | - **spec.usages** now becomes a mandatory field, which cannot contain duplicate string values and can contain only known usage strings. | + | | | | | + | | | | - For an API client that needs to approve or sign a certificate: | + | | | | | + | | | | - **status.conditions** cannot contain duplicate types. | + | | | | - The **status.conditions[*].status** field is now mandatory. | + | | | | - The **status.certificate** must be PEM-encoded and can contain only the **CERTIFICATE** data block. | + +--------------------------------+--------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Lease | coordination.k8s.io/v1beta1 | coordination.k8s.io/v1 | None | + | | | | | + | | | (This API is available since v1.14.) | | + +--------------------------------+--------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Ingress | networking.k8s.io/v1beta1 | networking.k8s.io/v1 | - The **spec.backend** field is renamed **spec.defaultBackend**. | + | | | | - The **serviceName** field of the backend is renamed **service.name**. | + | | extensions/v1beta1 | (This API is available since v1.19.) | - The backend **servicePort** field represented by a number is renamed **service.port.number**. | + | | | | - The backend **servicePort** field represented by a string is renamed **service.port.name**. | + | | | | - The **pathType** field is mandatory for all paths to be specified. The options are **Prefix**, **Exact**, and **ImplementationSpecific**. To match the behavior of not defining the path type in v1beta1, use **ImplementationSpecific**. | + +--------------------------------+--------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | IngressClass | networking.k8s.io/v1beta1 | networking.k8s.io/v1 | None | + | | | | | + | | | (This API is available since v1.19.) 
| | + +--------------------------------+--------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | ClusterRole | rbac.authorization.k8s.io/v1beta1 | rbac.authorization.k8s.io/v1 | None | + | | | | | + | ClusterRoleBinding | | (This API is available since v1.8.) | | + | | | | | + | Role | | | | + | | | | | + | RoleBinding | | | | + +--------------------------------+--------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PriorityClass | scheduling.k8s.io/v1beta1 | scheduling.k8s.io/v1 | None | + | | | | | + | | | (This API is available since v1.14.) | | + +--------------------------------+--------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | CSIDriver | storage.k8s.io/v1beta1 | storage.k8s.io/v1 | - CSIDriver is available in **storage.k8s.io/v1** since v1.19. | + | | | | - CSINode is available in **storage.k8s.io/v1** since v1.17. | + | CSINode | | | - StorageClass is available in **storage.k8s.io/v1** since v1.6. | + | | | | - VolumeAttachment is available in **storage.k8s.io/v1** since v1.13. | + | StorageClass | | | | + | | | | | + | VolumeAttachment | | | | + +--------------------------------+--------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +.. _cce_10_0302__table115511655135720: + +.. 
table:: **Table 3** Deprecated APIs in Kubernetes v1.16 + + +-------------------+------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Resource Name | Deprecated API Version | Substitute API Version | Change Description | + +===================+========================+======================================+=========================================================================================================================================================================================================================================================+ + | NetworkPolicy | extensions/v1beta1 | networking.k8s.io/v1 | None | + | | | | | + | | | (This API is available since v1.8.) | | + +-------------------+------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | DaemonSet | extensions/v1beta1 | apps/v1 | - The **spec.templateGeneration** field is deleted. | + | | | | - **spec.selector** is now a mandatory field and cannot be changed after the object is created. The label of an existing template can be used as a selector for seamless migration. | + | | apps/v1beta2 | (This API is available since v1.9.) | - The default value of **spec.updateStrategy.type** is changed to **RollingUpdate** (the default value in the **extensions/v1beta1** API version is **OnDelete**). | + +-------------------+------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Deployment | extensions/v1beta1 | apps/v1 | - The **spec.rollbackTo** field is deleted. | + | | | | - **spec.selector** is now a mandatory field and cannot be changed after the Deployment is created. The label of an existing template can be used as a selector for seamless migration. | + | | apps/v1beta1 | (This API is available since v1.9.) | - The default value of **spec.progressDeadlineSeconds** is changed to 600 seconds (the default value in **extensions/v1beta1** is unlimited). | + | | | | - The default value of **spec.revisionHistoryLimit** is changed to **10**. (In the **apps/v1beta1** API version, the default value of this field is **2**. In the **extensions/v1beta1** API version, all historical records are retained by default.) | + | | apps/v1beta2 | | - The default values of **maxSurge** and **maxUnavailable** are changed to **25%**. (In the **extensions/v1beta1** API version, these fields default to **1**.) 
| + +-------------------+------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | StatefulSet | apps/v1beta1 | apps/v1 | - **spec.selector** is now a mandatory field and cannot be changed after the StatefulSet is created. The label of an existing template can be used as a selector for seamless migration. | + | | | | - The default value of **spec.updateStrategy.type** is changed to **RollingUpdate** (the default value in the **apps/v1beta1** API version is **OnDelete**). | + | | apps/v1beta2 | (This API is available since v1.9.) | | + +-------------------+------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | ReplicaSet | extensions/v1beta1 | apps/v1 | **spec.selector** is now a mandatory field and cannot be changed after the object is created. The label of an existing template can be used as a selector for seamless migration. | + | | | | | + | | apps/v1beta1 | (This API is available since v1.9.) | | + | | | | | + | | apps/v1beta2 | | | + +-------------------+------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PodSecurityPolicy | extensions/v1beta1 | policy/v1beta1 | PodSecurityPolicy for the **policy/v1beta1** API version will be removed in v1.25. | + | | | | | + | | | (This API is available since v1.10.) 
| | + +-------------------+------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +Version Differences +------------------- + ++-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Upgrade Path | Version Difference | Self-Check | ++=======================+===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ +| v1.19 to v1.21 | The bug of **exec probe timeouts** is fixed in Kubernetes 1.21. Before this bug is fixed, the exec probe does not consider the **timeoutSeconds** field. Instead, the probe will run indefinitely, even beyond its configured deadline. It will stop until the result is returned. If this field is not specified, the default value **1** is used. This field takes effect after the upgrade. If the probe runs over 1 second, the application health check may fail and the application may restart frequently. | Before the upgrade, check whether the timeout is properly set for the exec probe. 
| ++-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| | kube-apiserver of CCE 1.19 or later requires that the Subject Alternative Names (SANs) field be configured for the certificate of your webhook server. Otherwise, kube-apiserver fails to call the webhook server after the upgrade, and containers cannot be started properly. | Before the upgrade, check whether the SAN field is configured in the certificate of your webhook server. | +| | | | +| | Root cause: X.509 `CommonName `__ is discarded in Go 1.15. kube-apiserver of CCE 1.19 is compiled using Go 1.15. If your webhook certificate does not have SANs, kube-apiserver does not process the **CommonName** field of the X.509 certificate as the host name by default. As a result, the authentication fails. | - If you do not have your own webhook server, you can skip this check. | +| | | - If the field is not set, you are advised to use the SAN field to specify the IP address and domain name supported by the certificate. | ++-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| v1.15 to v1.19 | The control plane of CCE clusters of v1.19 is incompatible with kubelet v1.15. If a node fails to be upgraded or the node to be upgraded restarts after the master node is successfully upgraded, there is a high probability that the node is in the **NotReady** status. | #. 
In normal cases, this scenario is not triggered. | +| | | #. After the master node is upgraded, do not suspend the upgrade so the node can be quickly upgraded. | +| | This is because the node failed to be upgraded restarts the kubelet and trigger the node registration. In clusters of v1.15, the default registration tags (**failure-domain.beta.kubernetes.io/is-baremetal** and **kubernetes.io/availablezone**) are regarded as invalid tags by the clusters of v1.19. | #. If a node fails to be upgraded and cannot be restored, evict applications on the node as soon as possible. Contact technical support and skip the node upgrade. After the upgrade is complete, reset the node. | +| | | | +| | The valid tags in the clusters of v1.19 are **node.kubernetes.io/baremetal** and **failure-domain.beta.kubernetes.io/zone**. | | ++-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| | In CCE 1.15 and 1.19 clusters, the Docker storage driver file system is switched from XFS to Ext4. As a result, the import package sequence in the pods of the upgraded Java application may be abnormal, causing pod exceptions. | Before the upgrade, check the Docker configuration file **/etc/docker/daemon.json** on the node. Check whether the value of **dm.fs** is **xfs**. | +| | | | +| | | - If the value is **ext4** or the storage driver is Overlay, you can skip the next steps. | +| | | - If the value is **xfs**, you are advised to deploy applications in the cluster of the new version in advance to test whether the applications are compatible with the new cluster version. | +| | | | +| | | .. 
code-block:: | +| | | | +| | | { | +| | | "storage-driver": "devicemapper", | +| | | "storage-opts": [ | +| | | "dm.thinpooldev=/dev/mapper/vgpaas-thinpool", | +| | | "dm.use_deferred_removal=true", | +| | | "dm.fs=xfs", | +| | | "dm.use_deferred_deletion=true" | +| | | ] | +| | | } | ++-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| | kube-apiserver of CCE 1.19 or later requires that the Subject Alternative Names (SANs) field be configured for the certificate of your webhook server. Otherwise, kube-apiserver fails to call the webhook server after the upgrade, and containers cannot be started properly. | Before the upgrade, check whether the SAN field is configured in the certificate of your webhook server. | +| | | | +| | Root cause: X.509 `CommonName `__ is discarded in Go 1.15. kube-apiserver of CCE 1.19 is compiled using Go 1.15. The **CommonName** field is processed as the host name. As a result, the authentication fails. | - If you do not have your own webhook server, you can skip this check. | +| | | - If the field is not set, you are advised to use the SAN field to specify the IP address and domain name supported by the certificate. | +| | | | +| | | .. important:: | +| | | | +| | | NOTICE: | +| | | To mitigate the impact of version differences on cluster upgrade, CCE performs special processing during the upgrade from 1.15 to 1.19 and still supports certificates without SANs. However, no special processing is required for subsequent upgrades. You are advised to rectify your certificate as soon as possible. 
| ++-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| | In clusters of v1.17.17 and later, CCE automatically creates pod security policies (PSPs) for you, which restrict the creation of pods with unsafe configurations, for example, pods for which **net.core.somaxconn** under a sysctl is configured in the security context. | After an upgrade, you can allow insecure system configurations as required. For details, see :ref:`Configuring a Pod Security Policy `. | ++-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| | If initContainer or Istio is used in the in-place upgrade of a cluster of v1.15, pay attention to the following restrictions: | You are advised to modify the QoS class of the service container before the upgrade to avoid this problem. For details, see :ref:`Table 4 `. | +| | | | +| | In kubelet 1.16 and later versions, `QoS classes `__ are different from those in earlier versions. In kubelet 1.15 and earlier versions, only containers in **spec.containers** are counted. In kubelet 1.16 and later versions, containers in both **spec.containers** and **spec.initContainers** are counted. The QoS class of a pod will change after the upgrade. As a result, the container in the pod restarts. 
| | ++-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| v1.13 to v1.15 | After a VPC network cluster is upgraded, the master node occupies an extra CIDR block due to the upgrade of network components. If no container CIDR block is available for the new node, the pod scheduled to the node cannot run. | Generally, this problem occurs when the nodes in the cluster are about to fully occupy the container CIDR block. For example, the container CIDR block is 10.0.0.0/16, the number of available IP addresses is 65,536, and the VPC network is allocated a CIDR block with the fixed size (using the mask to determine the maximum number of container IP addresses allocated to each node). If the upper limit is 128, the cluster supports a maximum of 512 (65536/128) nodes, including the three master nodes. After the cluster is upgraded, each of the three master nodes occupies one CIDR block. As a result, 506 nodes are supported. | ++-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +.. _cce_10_0302__table10713231143911: + +.. 
table:: **Table 4** QoS class changes before and after the upgrade + + +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ + | Init Container (Calculated Based on spec.initContainers) | Service Container (Calculated Based on spec.containers) | Pod (Calculated Based on spec.containers and spec.initContainers) | Impacted or Not | + +==========================================================+=========================================================+===================================================================+=================+ + | Guaranteed | Besteffort | Burstable | Yes | + +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ + | Guaranteed | Burstable | Burstable | No | + +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ + | Guaranteed | Guaranteed | Guaranteed | No | + +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ + | Besteffort | Besteffort | Besteffort | No | + +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ + | Besteffort | Burstable | Burstable | No | + +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ + | Besteffort | Guaranteed | Burstable | Yes | + +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ + | Burstable | Besteffort | Burstable | Yes | + +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ + | Burstable | Burstable | Burstable | No | + +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ + | Burstable | Guaranteed | Burstable | Yes | + +----------------------------------------------------------+---------------------------------------------------------+-------------------------------------------------------------------+-----------------+ Upgrade Backup -------------- diff --git a/umn/source/clusters/upgrading_a_cluster/index.rst b/umn/source/clusters/upgrading_a_cluster/index.rst index b484737..d50e3a5 100644 --- a/umn/source/clusters/upgrading_a_cluster/index.rst +++ b/umn/source/clusters/upgrading_a_cluster/index.rst @@ -7,9 +7,8 @@ Upgrading a Cluster - :ref:`Upgrade Overview ` - :ref:`Before You Start ` -- :ref:`Post-Upgrade Verification ` -- :ref:`Performing Replace or Rolling Upgrade ` - :ref:`Performing In-place Upgrade ` +- :ref:`Performing Post-Upgrade Verification ` - :ref:`Migrating Services 
Across Clusters of Different Versions ` - :ref:`Troubleshooting for Pre-upgrade Check Exceptions ` @@ -19,8 +18,7 @@ Upgrading a Cluster upgrade_overview before_you_start - post-upgrade_verification/index - performing_replace_or_rolling_upgrade performing_in-place_upgrade + performing_post-upgrade_verification/index migrating_services_across_clusters_of_different_versions troubleshooting_for_pre-upgrade_check_exceptions/index diff --git a/umn/source/clusters/upgrading_a_cluster/migrating_services_across_clusters_of_different_versions.rst b/umn/source/clusters/upgrading_a_cluster/migrating_services_across_clusters_of_different_versions.rst index e0c092a..3ecc417 100644 --- a/umn/source/clusters/upgrading_a_cluster/migrating_services_across_clusters_of_different_versions.rst +++ b/umn/source/clusters/upgrading_a_cluster/migrating_services_across_clusters_of_different_versions.rst @@ -17,28 +17,28 @@ Prerequisites .. table:: **Table 1** Checklist before migration - +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Category | Description | - +===================================+============================================================================================================================================================================================================================================+ - | Cluster | NodeIP-related: Check whether node IP addresses (including EIPs) of the cluster before the migration have been used in other configurations or whitelists. | - +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Workloads | Record the number of workloads for post-migration check. | - +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Storage | #. Check whether the storage resources in use are provisioned by the cloud or by your organization. | - | | #. Change the automatically created storage to the existing storage in the new cluster. | - +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Network | #. Pay special attention to the ELB and ingress. | - | | #. Clusters of an earlier version support only the classic load balancer. To migrate services to a new cluster, you need to change load balancer type to shared load balancer. Then, the corresponding ELB service will be re-established. 
| - +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | O&M | Private configuration: Check whether kernel parameters or system data have been configured on nodes in the cluster. | - +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Category | Description | + +===================================+================================================================================================================================================================================================================================+ + | Cluster | NodeIP-related: Check whether node IP addresses (including EIPs) of the cluster before the migration have been used in other configurations or whitelists. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Workloads | Record the number of workloads for post-migration check. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Storage | #. Check whether the storage resources in use are provisioned by the cloud or by your organization. | + | | #. Change the automatically created storage to the existing storage in the new cluster. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Network | #. Pay special attention to the ELB and ingress. | + | | #. Clusters of an earlier version support only the classic load balancer. To migrate services to a new cluster, change load balancer type to shared load balancer. Then, the corresponding ELB service will be re-established. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | O&M | Private configuration: Check whether kernel parameters or system data have been configured on nodes in the cluster. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ Procedure --------- #. 
**Create a CCE cluster.** - Create a cluster with the same specifications and configurations as the cluster of the earlier version. For details, see :ref:`Creating a CCE Cluster `. + Create a cluster with the same specifications and configurations as the cluster of the earlier version. For details, see :ref:`Creating a Cluster `. #. **Add a node.** @@ -46,19 +46,23 @@ Procedure #. **Create a storage volume in the new cluster.** - Use an existing storage volume to create a PVC in the new cluster. The PVC name remains unchanged. For details, see :ref:`PVCs `. + Use an existing storage volume to create a PVC in the new cluster. The PVC name remains unchanged. For details, see :ref:`Using an Existing OBS Bucket Through a Static PV ` or :ref:`Using an Existing SFS Turbo File System Through a Static PV `. .. note:: - Storage switching supports only OBS buckets, SFS file systems, and shared EVS disks. If a non-shared EVS disk is used, you need to suspend the workloads in the old cluster to switch the storage resources. As a result, services will be interrupted. + Storage switching supports only OBS buckets and SFS Turbo file systems. If non-shared storage is used, suspend the workloads in the old cluster to switch the storage resources. As a result, services will be unavailable. #. **Create a workload in the new cluster.** - The workload name and specifications remain unchanged. For details about how to create a workload, see :ref:`Creating a Deployment ` or :ref:`Creating a StatefulSet `. For details about how to attach a storage volume to the workload, see :ref:`Creating a Deployment Mounted with an EVS Volume `. + The workload name and specifications remain unchanged. For details about how to create a workload, see :ref:`Creating a Deployment ` or :ref:`Creating a StatefulSet `. + +#. **Mount the storage again.** + + Mount the existing storage in the workload again. For details, see :ref:`Using an Existing OBS Bucket Through a Static PV ` or :ref:`Using an Existing SFS Turbo File System Through a Static PV `. #. **Create a Service in the new cluster.** - The Service name and specifications remain unchanged. For details about how to create a Service, see :ref:`Services `. + The Service name and specifications remain unchanged. For details about how to create a Service, see :ref:`Service `. #. **Commission services.** diff --git a/umn/source/clusters/upgrading_a_cluster/performing_in-place_upgrade.rst b/umn/source/clusters/upgrading_a_cluster/performing_in-place_upgrade.rst index c6a766c..78ce0ba 100644 --- a/umn/source/clusters/upgrading_a_cluster/performing_in-place_upgrade.rst +++ b/umn/source/clusters/upgrading_a_cluster/performing_in-place_upgrade.rst @@ -5,9 +5,6 @@ Performing In-place Upgrade =========================== -Scenario --------- - You can upgrade your clusters to a newer version on the CCE console. Before the upgrade, learn about the target version to which each CCE cluster can be upgraded in what ways, and the upgrade impacts. For details, see :ref:`Upgrade Overview ` and :ref:`Before You Start `. @@ -16,14 +13,14 @@ Description ----------- - An in-place upgrade updates the Kubernetes components on cluster nodes, without changing their OS version. -- Data plane nodes are upgraded in batches. By default, they are prioritized based on their CPU, memory, and PDB (Pod Disruption Budget, which is `Specifying a Disruption Budget for your Application `__). You can also set the priorities according to your service requirements. +- Data plane nodes are upgraded in batches. 
By default, they are prioritized based on their CPU, memory, and `PodDisruptionBudgets (PDBs) `__. You can also set the priorities according to your service requirements. Precautions ----------- - During the cluster upgrade, the system will automatically upgrade add-ons to a version compatible with the target cluster version. Do not uninstall or reinstall add-ons during the cluster upgrade. - Before the upgrade, ensure that all add-ons are running. If an add-on fails to be upgraded, rectify the fault and try again. -- During the upgrade, CCE checks the add-on running status. Some add-ons (such as coredns) require at least two nodes to run normally. In this case, at least two nodes must be available for the upgrade. +- During the upgrade, CCE checks the add-on running status. Some add-ons (such as CoreDNS) require at least two nodes to run normally. In this case, at least two nodes must be available for the upgrade. For more information, see :ref:`Before You Start `. @@ -34,7 +31,7 @@ The cluster upgrade goes through check, backup, configuration and upgrade, and v #. Log in to the CCE console and click the cluster name to access the cluster console. -#. In the navigation pane, choose **Cluster Upgrade**. You can view the recommended version on the right. +#. In the navigation pane, choose **Cluster Upgrade**. #. Select the cluster version to be upgraded and click **Check**. @@ -44,14 +41,16 @@ The cluster upgrade goes through check, backup, configuration and upgrade, and v - If your cluster has a new major version, you can select a version as required. - If your cluster is of the latest version, the check entry will be hidden. -#. Click **Start Check** and confirm the check. If there are abnormal or risky items in the cluster, handle the exceptions based on the check results displayed on the page and check again. +#. Perform the pre-upgrade check. Click **Start Check** and confirm the check. If there are abnormal or risky items in the cluster, handle the exceptions based on the check results displayed on the page and check again. - **Exceptions**: View the solution displayed on the page, handle the exceptions and check again. - **Risk Items**: may affect the cluster upgrade. Check the risk description and see whether you may be impacted. If no risk exists, click **OK** next to the risk item to manually skip this risk item and check again. After the check is passed, click **Next: Back Up**. -#. (Optional) Manually back up the data. Data is backed up during the upgrade following a default policy. You can click **Back Up** to manually back up data. If you do not need to manually back up data, click **Next: Configure & Upgrade**. +#. (Optional) Manually back up the cluster data. Data is backed up during the upgrade following a default policy. You can click **Back Up** to manually back up data. If you do not need to manually back up data, click **Next: Configure & Upgrade**. + + Manual backup will back up the entire master node. The backup process uses the Cloud Backup and Recovery (CBR) service and takes about 20 minutes. If there are many cloud backup tasks at the current site, the backup may take longer. The cluster cannot be upgraded during the backup. #. Configure the upgrade parameters. @@ -62,7 +61,7 @@ The cluster upgrade goes through check, backup, configuration and upgrade, and v If a red dot |image1| is displayed on the right of an add-on, the add-on is incompatible with the target cluster version. During the upgrade, the add-on will be uninstalled and then re-installed. 
Ensure that the add-on parameters are correctly configured. - **Node Upgrade Configuration:** You can set the maximum number of nodes to be upgraded in a batch. - - **Node Priority:** You can set priorities for nodes to be upgraded. If you do not set this parameter, the system will determine the nodes to upgrade in batches based on specific conditions. Before setting the node upgrade priority, you need to select a node pool. Nodes and node pools will be upgraded according to the priorities you specify. + - **Node Priority:** You can set priorities for nodes to be upgraded. If you do not set this parameter, the system will determine the nodes to upgrade in batches based on specific conditions. Before setting the node upgrade priority, select a node pool. Nodes and node pools will be upgraded according to the priorities you specify. - **Add Upgrade Priority**: Add upgrade priorities for node pools. - **Add Node Priority**: After adding a node pool priority, you can set the upgrade sequence of nodes in the node pool. The system upgrades nodes in the sequence you specify. If you skip this setting, the system upgrades nodes based on the default policy. @@ -79,4 +78,4 @@ The cluster upgrade goes through check, backup, configuration and upgrade, and v You can verify the cluster Kubernetes version on the **Clusters** page. -.. |image1| image:: /_static/images/en-us_image_0000001517743672.png +.. |image1| image:: /_static/images/en-us_image_0000001695737489.png diff --git a/umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/index.rst b/umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/index.rst similarity index 88% rename from umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/index.rst rename to umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/index.rst index 095e6cd..bc3cd00 100644 --- a/umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/index.rst +++ b/umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/index.rst @@ -2,8 +2,8 @@ .. _cce_10_0560: -Post-Upgrade Verification -========================= +Performing Post-Upgrade Verification +==================================== - :ref:`Service Verification ` - :ref:`Pod Check ` diff --git a/umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/new_node_check.rst b/umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/new_node_check.rst similarity index 59% rename from umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/new_node_check.rst rename to umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/new_node_check.rst index 4dc1a13..0befbf6 100644 --- a/umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/new_node_check.rst +++ b/umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/new_node_check.rst @@ -13,7 +13,7 @@ Check whether nodes can be created in the cluster. Procedure --------- -Go to the CCE console and access the cluster console. Choose **Nodes** in the navigation pane, and click **Create Node**. +Log in to the CCE console and access the cluster console. Choose **Nodes** in the navigation pane, and click **Create Node**. For details about node configurations, see :ref:`Creating a Node `. 
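
If you also have kubectl access to the cluster, a quick command-line spot check can confirm that the newly created node has registered and become **Ready** before you continue with the other verification items. This is only a sketch; **<new-node-name>** is a placeholder for the name of the node you just created.

.. code-block::

   # List all nodes and confirm the new node reports the Ready status
   kubectl get nodes -o wide

   # If the node stays NotReady, inspect its conditions and recent events
   kubectl describe node <new-node-name>
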
Solution -------- diff --git a/umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/new_pod_check.rst b/umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/new_pod_check.rst similarity index 92% rename from umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/new_pod_check.rst rename to umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/new_pod_check.rst index 1a7f45c..fe92b6a 100644 --- a/umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/new_pod_check.rst +++ b/umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/new_pod_check.rst @@ -16,13 +16,13 @@ Procedure After creating a node based on :ref:`New Node Check `, create a DaemonSet workload to create pods on each node. -Go to the CCE console, access the cluster console, and choose **Workloads** in the navigation pane. On the displayed page, switch to the **DaemonSets** tab page and click **Create Workload** or **Create from YAML** in the upper right corner. +Go to the CCE console, access the cluster console, and choose **Workloads** in the navigation pane. On the displayed page, switch to the **DaemonSets** tab page and click **Create Workload** or **Create from YAML** in the upper right corner. For details, see :ref:`Creating a DaemonSet `. You are advised to use the image for routine tests as the base image. You can deploy a pod by referring to the following YAML file. .. note:: - In this test, YAML deploys DaemonSet in the default namespace, uses **ngxin:perl** as the base image, requests 10 MB CPU and 10 Mi memory, and limits 100 MB CPU and 50 Mi memory. + In this test, YAML deploys a DaemonSet in the default namespace, uses **nginx:perl** as the base image, requests 10m CPU and 10 MiB memory, and limits 100m CPU and 50 MiB memory. .. code-block:: diff --git a/umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/node_and_container_network_check.rst b/umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/node_and_container_network_check.rst similarity index 82% rename from umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/node_and_container_network_check.rst rename to umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/node_and_container_network_check.rst index 2c5db59..541655b 100644 --- a/umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/node_and_container_network_check.rst +++ b/umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/node_and_container_network_check.rst @@ -19,8 +19,6 @@ The node status reflects whether the node component or network is normal. Go to the CCE console and access the cluster console. Choose **Nodes** in the navigation pane. You can filter node status by status to check whether there are abnormal nodes. -|image1| - The container network affects services. Check whether your services are available. Solution @@ -30,39 +28,37 @@ If the node status is abnormal, contact technical support. If the container network is abnormal and your services are affected, contact technical support and confirm the abnormal network access path.
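
Before contacting technical support, a quick in-cluster connectivity and DNS spot check can help narrow down which access path in the table below is affected. The commands are a sketch only: the **busybox:1.28** test image, **<service-name>**, **<namespace>**, and **<port>** are placeholders to replace with values from your own cluster.

.. code-block::

   # Check whether cluster-internal DNS resolution works (CoreDNS path)
   kubectl run dns-check --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local

   # Check pod-to-Service connectivity over a ClusterIP Service
   kubectl run net-check --rm -it --restart=Never --image=busybox:1.28 -- wget -qO- http://<service-name>.<namespace>.svc.cluster.local:<port>
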
-+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ -| Source | Destination | Destination Type | Possible Fault | -+==============================================+==============================================================================+======================================+======================================================================================================================================+ -| - Pods (inside a cluster) | Public IP address of Service ELB | Cluster traffic load balancing entry | No record. | -| - Nodes (inside a cluster) | | | | -| - Nodes in the same VPC (outside a cluster) | | | | -| - Other sources | | | | -+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ -| | Private IP address of Service ELB | Cluster traffic load balancing entry | No record. | -+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ -| | Public IP address of ingress ELB | Cluster traffic load balancing entry | No record. | -+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ -| | Private IP address of ingress ELB | Cluster traffic load balancing entry | No record. | -+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ -| | Public IP address of NodePort Service | Cluster traffic entry | The kube proxy configuration is overwritten. This fault has been rectified in the upgrade process. | -+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ -| | Private IP address of NodePort Service | Cluster traffic entry | No record. | -+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ -| | ClusterIP Service | Service network plane | No record. 
| -+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ -| | Non NodePort Service port | Container network | No record. | -+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ -| | Cross-node pods | Container network plane | No record. | -+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ -| | Pods on the same node | Container network plane | No record. | -+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ -| | Service and pod domain names are resolved by CoreDNS. | Domain name resolution | No record. | -+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ -| | External domain names are resolved based on the CoreDNS hosts configuration. | Domain name resolution | After the coredns add-on is upgraded, the configuration is overwritten. This fault has been rectified in the add-on upgrade process. | -+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ -| | External domain names are resolved based on the CoreDNS upstream server. | Domain name resolution | After the coredns add-on is upgraded, the configuration is overwritten. This fault has been rectified in the add-on upgrade process. | -+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ -| | External domain names are not resolved by CoreDNS. | Domain name resolution | No record. | -+----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ - -.. 
|image1| image:: /_static/images/en-us_image_0000001518062524.png ++----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------+ +| Source | Destination | Destination Type | Possible Fault | ++==============================================+==============================================================================+======================================+===========================================================================================================================+ +| - Pods (inside a cluster) | Public IP address of Service ELB | Cluster traffic load balancing entry | No record. | +| - Nodes (inside a cluster) | | | | +| - Nodes in the same VPC (outside a cluster) | | | | +| - Third-party clouds | | | | ++----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------+ +| | Private IP address of Service ELB | Cluster traffic load balancing entry | No record. | ++----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------+ +| | Public IP address of ingress ELB | Cluster traffic load balancing entry | No record. | ++----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------+ +| | Private IP address of ingress ELB | Cluster traffic load balancing entry | No record. | ++----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------+ +| | Public IP address of NodePort Service | Cluster traffic entry | The kube proxy configuration is overwritten. This fault has been rectified in the upgrade process. | ++----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------+ +| | Private IP address of NodePort Service | Cluster traffic entry | No record. | ++----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------+ +| | ClusterIP Service | Service network plane | No record. 
| ++----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------+ +| | Non NodePort Service port | Container network | No record. | ++----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------+ +| | Cross-node pods | Container network plane | No record. | ++----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------+ +| | Pods on the same node | Container network plane | No record. | ++----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------+ +| | Service and pod domain names are resolved by CoreDNS. | Domain name resolution | No record. | ++----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------+ +| | External domain names are resolved based on the CoreDNS hosts configuration. | Domain name resolution | After CoreDNS is upgraded, the configuration is overwritten. This fault has been rectified in the add-on upgrade process. | ++----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------+ +| | External domain names are resolved based on the CoreDNS upstream server. | Domain name resolution | After CoreDNS is upgraded, the configuration is overwritten. This fault has been rectified in the add-on upgrade process. | ++----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------+ +| | External domain names are not resolved by CoreDNS. | Domain name resolution | No record. 
| ++----------------------------------------------+------------------------------------------------------------------------------+--------------------------------------+---------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/node_label_and_taint_check.rst b/umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/node_label_and_taint_check.rst similarity index 69% rename from umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/node_label_and_taint_check.rst rename to umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/node_label_and_taint_check.rst index 61234d3..914eea2 100644 --- a/umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/node_label_and_taint_check.rst +++ b/umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/node_label_and_taint_check.rst @@ -8,8 +8,8 @@ Node Label and Taint Check Check Item ---------- -- Check whether the label is lost. -- Check whether there are unexpected taints. +- Check whether custom node labels are lost. +- Check whether there are any unexpected taints newly added on the node, which will affect workload scheduling. Procedure --------- @@ -19,7 +19,7 @@ Go to the CCE console, access the cluster console, and choose **Nodes** in the n Solution -------- -User labels are not changed during the cluster upgrade. If you find that labels are lost or added abnormally, contact technical support. +Custom labels will not be changed during a cluster upgrade. If you find that labels are lost or added unexpectedly, contact technical support. If you find a new taint (**node.kubernetes.io/upgrade**) on a node, the node may be skipped during the upgrade. For details, see :ref:`Node Skipping Check for Reset `. diff --git a/umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/node_skipping_check_for_reset.rst b/umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/node_skipping_check_for_reset.rst similarity index 89% rename from umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/node_skipping_check_for_reset.rst rename to umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/node_skipping_check_for_reset.rst index 69ab900..4ffe8db 100644 --- a/umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/node_skipping_check_for_reset.rst +++ b/umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/node_skipping_check_for_reset.rst @@ -8,7 +8,7 @@ Node Skipping Check for Reset Check Item ---------- -After the cluster is upgraded, you need to reset the nodes that fail to be upgraded. +After the cluster is upgraded, reset the nodes that fail to be upgraded. Procedure --------- diff --git a/umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/pod_check.rst b/umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/pod_check.rst new file mode 100644 index 0000000..33c6a0c --- /dev/null +++ b/umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/pod_check.rst @@ -0,0 +1,24 @@ +:original_name: cce_10_0562.html + +.. _cce_10_0562: + +Pod Check +========= + +Check Item +---------- + +- Check whether there are unexpected pods in the cluster. +- Check whether there are any pods that ran properly originally in the cluster restart unexpectedly. 
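
The two check items above can also be approximated from the command line. The following kubectl commands are a sketch only; interpreting the results still follows the console-based procedure below.

.. code-block::

   # List pods that are not Running or Succeeded in any namespace
   kubectl get pods --all-namespaces --field-selector=status.phase!=Running,status.phase!=Succeeded

   # Sort pods by restart count to spot containers that restarted unexpectedly
   kubectl get pods --all-namespaces --sort-by='.status.containerStatuses[0].restartCount'
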
+ +Procedure +--------- + +Log in to the CCE console and access the cluster console. Choose **Workloads** in the navigation pane. On the displayed page, switch to the **Pods** tab page. Select all namespaces, click **Status**, and check whether there are any abnormal pods. + +View the **Restarts** column to check whether there are pods that are restarted abnormally. + +Solution +-------- + +If there are abnormal pods in your cluster after the cluster upgrade, contact technical support. diff --git a/umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/service_verification.rst b/umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/service_verification.rst similarity index 100% rename from umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/service_verification.rst rename to umn/source/clusters/upgrading_a_cluster/performing_post-upgrade_verification/service_verification.rst diff --git a/umn/source/clusters/upgrading_a_cluster/performing_replace_or_rolling_upgrade.rst b/umn/source/clusters/upgrading_a_cluster/performing_replace_or_rolling_upgrade.rst deleted file mode 100644 index 2be4812..0000000 --- a/umn/source/clusters/upgrading_a_cluster/performing_replace_or_rolling_upgrade.rst +++ /dev/null @@ -1,95 +0,0 @@ -:original_name: cce_10_0120.html - -.. _cce_10_0120: - -Performing Replace or Rolling Upgrade -===================================== - -Scenario --------- - -You can upgrade your clusters to a newer Kubernetes version on the CCE console. - -Before the upgrade, learn about the target version to which each CCE cluster can be upgraded in what ways, and the upgrade impacts. For details, see :ref:`Upgrade Overview ` and :ref:`Before You Start `. - -Precautions ------------ - -- If the coredns add-on needs to be upgraded during the cluster upgrade, ensure that the number of nodes is greater than or equal to the number of coredns instances and all coredns instances are running. Otherwise, the upgrade will fail. Before upgrading a cluster of v1.13, you need to upgrade the coredns add-on to the latest version available for the cluster. -- When a cluster of v1.11 or earlier is upgraded to v1.13, the impacts on the cluster are as follows: - - - All cluster nodes will be restarted as their OSs are upgraded, which affects application running. - - The cluster signature certificate mechanism is changed. As a result, the original cluster certificate becomes invalid. You need to obtain the certificate or kubeconfig file again after the cluster is upgraded. - -- During the upgrade from one release of v1.13 to a later release of v1.13, applications in the cluster are interrupted for a short period of time only during the upgrade of network components. -- During the upgrade from Kubernetes 1.9 to 1.11, the kube-dns of the cluster will be uninstalled and replaced with CoreDNS, which may cause loss of the cascading DNS configuration in the kube-dns or temporary interruption of the DNS service. Back up the DNS address configured in the kube-dns so you can configure the domain name in the CoreDNS again when domain name resolution is abnormal. - -Procedure ---------- - -#. Log in to the CCE console and click the cluster name to access the cluster. - -#. In the navigation pane, choose **Cluster Upgrade**. You can view the new version available for upgrade on the right. Click **Upgrade**. - - .. note:: - - - If your cluster version is up-to-date, the **Upgrade** button is grayed out. 
- - If your cluster status is abnormal or there are abnormal add-ons, the **Upgrade** button is dimmed. Perform a check by referring to :ref:`Before You Start `. - -#. In the displayed **Pre-upgrade Check** dialog box, click **Check Now**. - -#. The pre-upgrade check starts. While the pre-upgrade check is in progress, the cluster status will change to **Pre-checking** and new nodes/applications will not be able to be deployed on the cluster. However, existing nodes and applications will not be affected. It takes 3 to 5 minutes to complete the pre-upgrade check. - -#. When the status of the pre-upgrade check is **Completed**, click **Upgrade Now**. - -#. On the cluster upgrade page, review or configure basic information by referring to :ref:`Table 1 `. - - .. _cce_10_0120__table924319911495: - - .. table:: **Table 1** Basic information - - +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+===================================================================================================================================================================================================+ - | Cluster Name | Review the name of the cluster to be upgraded. | - +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Current Version | Review the version of the cluster to be upgraded. | - +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Target Version | Review the target version after the upgrade. | - +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Node Upgrade Policy | **Replace** (replace upgrade): Worker nodes will be reset. Their OSs will be reinstalled, and data on the system and data disks will be cleared. Exercise caution when performing this operation. | - | | | - | | .. note:: | - | | | - | | - The lifecycle management function of the nodes and workloads in the cluster is unavailable. | - | | - APIs cannot be called temporarily. | - | | - Running workloads will be interrupted because nodes are reset during the upgrade. | - | | - Data in the system and data disks on the worker nodes will be cleared. Back up important data before resetting the nodes. | - | | - Data disks without LVM mounted to worker nodes need to be mounted again after the upgrade, and data on the disks will not be lost during the upgrade. | - | | - The EVS disk quota must be greater than 0. | - | | - The container IP addresses change, but the communication between containers is not affected. | - | | - Custom labels on the worker nodes will be cleared. | - | | - It takes about 12 minutes to complete the cluster upgrade. 
| - +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Login Mode | **Key Pair** | - | | | - | | Select the key pair used to log in to the node. You can select a shared key. | - | | | - | | A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click **Create Key Pair**. | - +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -#. Click **Next**. In the dialog box displayed, click **OK**. - -#. Upgrade add-ons. If an add-on needs to be upgraded, a red dot is displayed. Click the **Upgrade** button in the lower left corner of the add-on card view. After the upgrade is complete, click **Upgrade** in the lower right corner of the page. - - .. note:: - - - Master nodes will be upgraded first, and then the worker nodes will be upgraded concurrently. If there are a large number of worker nodes, they will be upgraded in different batches. - - Select a proper time window for the upgrade to reduce impacts on services. - - Clicking **OK** will start the upgrade immediately, and the upgrade cannot be canceled. Do not shut down or restart nodes during the upgrade. - -#. In the displayed **Upgrade** dialog box, read the information and click **OK**. Note that the cluster cannot be rolled back after the upgrade. - -#. Back to the cluster list, you can see that the cluster status is **Upgrading**. Wait until the upgrade is completed. - - After the upgrade is successful, you can view the cluster status and version on the cluster list or cluster details page. diff --git a/umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/pod_check.rst b/umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/pod_check.rst deleted file mode 100644 index c3740f5..0000000 --- a/umn/source/clusters/upgrading_a_cluster/post-upgrade_verification/pod_check.rst +++ /dev/null @@ -1,31 +0,0 @@ -:original_name: cce_10_0562.html - -.. _cce_10_0562: - -Pod Check -========= - -Check Item ----------- - -- Check whether unexpected pods exist in the cluster. -- Check whether there are pods restart unexpectedly in the cluster. - -Procedure ---------- - -Go to the CCE console and access the cluster console. Choose **Workloads** in the navigation pane. On the displayed page, switch to the **Pods** tab page. Select **All namespaces**, click **Status**, and check whether abnormal pods exist. - -|image1| - -View the **Restarts** column to check whether there are pods that are restarted abnormally. - -|image2| - -Solution --------- - -If there are abnormal pods in your cluster after the cluster upgrade, contact technical support. - -.. |image1| image:: /_static/images/en-us_image_0000001518222492.png -.. |image2| image:: /_static/images/en-us_image_0000001518062540.png diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/add-ons.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/add-ons.rst new file mode 100644 index 0000000..fe11e71 --- /dev/null +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/add-ons.rst @@ -0,0 +1,35 @@ +:original_name: cce_10_0433.html + +.. 
_cce_10_0433: + +Add-ons +======= + +Check Item +---------- + +Check the following aspects: + +- Check whether the add-on status is normal. +- Check whether the add-on supports the target version. + +Solution +-------- + +- **Scenario 1: The add-on status is abnormal.** + + Log in to the CCE console and go to the target cluster. Choose **O&M** > **Add-ons** to view and handle the abnormal add-on. + +- **Scenario 2: The target version does not support the current add-on.** + + The add-on cannot be automatically upgraded with the cluster. Log in to the CCE console and go to the target cluster. Choose **O&M** > **Add-ons** to manually upgrade the add-on. + +- **Scenario 3: After the add-on is upgraded to the latest version, the target cluster version is still not supported.** + + Log in to the CCE console and go to the target cluster. Choose **O&M** > **Add-ons** to manually uninstall the add-on. For details about the supported add-on versions and replacement solutions, see the :ref:`Help ` document. + +- **Scenario 4: The add-on configuration does not meet the upgrade requirements. Upgrade the add-on and try again.** + + As shown in the following figure, the error message "please upgrade addon [ ] in the page of addon managecheck and try again" is displayed during the pre-upgrade check. + + Log in to the CCE console and go to the target cluster. Choose **O&M** > **Add-ons** to manually upgrade the add-on. diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_blocklist.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/blocklist.rst similarity index 91% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_blocklist.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/blocklist.rst index 132914b..5b5b8a6 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_blocklist.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/blocklist.rst @@ -2,8 +2,8 @@ .. _cce_10_0432: -Checking the Blocklist -====================== +Blocklist +========= Check Item ---------- diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/cce-hpa-controller_restriction_check.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/cce-hpa-controller_restrictions.rst similarity index 82% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/cce-hpa-controller_restriction_check.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/cce-hpa-controller_restrictions.rst index d814ec8..0a01985 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/cce-hpa-controller_restriction_check.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/cce-hpa-controller_restrictions.rst @@ -2,8 +2,8 @@ ..
_cce_10_0479: -cce-hpa-controller Restriction Check -==================================== +cce-hpa-controller Restrictions +=============================== Check Item ---------- diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_add-on.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_add-on.rst deleted file mode 100644 index 380b24b..0000000 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_add-on.rst +++ /dev/null @@ -1,29 +0,0 @@ -:original_name: cce_10_0433.html - -.. _cce_10_0433: - -Checking the Add-on -=================== - -Check Item ----------- - -Check the following aspects: - -- Check whether the add-on status is normal. -- Check whether the add-on supports the target version. - -Solution --------- - -- **Scenario 1: The add-on status is abnormal.** - - Log in to the CCE console and go to the target cluster. Choose **O&M** > **Add-ons** to view and handle the abnormal add-on. - -- **Scenario 2: The target version does not support the current add-on.** - - The add-on cannot be automatically upgraded with the cluster. Log in to the CCE console and go to the target cluster. Choose **O&M** > **Add-ons** to manually upgrade the add-on. - -- **Scenario 3: The add-on does not support the target cluster even if the add-on is upgraded to the latest version. In this case, go to the cluster console and choose Cluster Information > O&M > Add-ons in the navigation pane to manually uninstall the add-on.** - - For details about the supported add-on versions and replacement solutions, see the :ref:`help document `. diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_node.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_node.rst deleted file mode 100644 index 7d517e9..0000000 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_node.rst +++ /dev/null @@ -1,58 +0,0 @@ -:original_name: cce_10_0431.html - -.. _cce_10_0431: - -Checking the Node -================= - -Check Item ----------- - -Check the following aspects: - -- Check whether the node is available. -- Check whether the node OS supports the upgrade. -- Check whether there are unexpected node pool tags in the node. -- Check whether the Kubernetes node name is consistent with the ECS name. - -Solution --------- - -- **Scenario 1: The node is unavailable.** - - Log in to the CCE console and access the cluster console. Choose **Nodes** in the navigation pane and check the node status. Ensure that the node is in the **Running** status. A node in the **Installing** or **Deleting** status cannot be upgraded. - - If the node status is abnormal, restore the node and retry the check task. - -- **Scenario 2: The node OS does not support the upgrade.** - - The following table lists the node OSs that support the upgrade. You can reset the node OS to an available OS in the list. - - .. 
table:: **Table 1** OSs that support the upgrade - - +--------------+-----------------------------------------------------------------------------------------------------------------------+ - | OS | Restriction | - +==============+=======================================================================================================================+ - | EulerOS 2.x | None | - +--------------+-----------------------------------------------------------------------------------------------------------------------+ - | CentOS 7.x | None | - +--------------+-----------------------------------------------------------------------------------------------------------------------+ - | Ubuntu 22.04 | Some sites cannot perform upgrade. If the check result shows the upgrade is not supported, contact technical support. | - +--------------+-----------------------------------------------------------------------------------------------------------------------+ - -- **Scenario 3: There are unexpected node pool tags in the node.** - - If a node is migrated from a node pool to the default node pool, the node pool label **cce.cloud.com/cce-nodepool** is retained, affecting cluster upgrade. Check whether the load scheduling on the node depends on the label. - - - If there is no dependency, delete the tag. - - If yes, modify the load balancing policy, remove the dependency, and then delete the tag. - -- **Scenario 4: The Kubernetes node name is consistent with the ECS name.** - - Kubernetes node name, which defaults to the node's private IP. If you select a cloud server name as the node name, the cluster cannot be upgraded. - - Log in to the CCE console and access the cluster console. Choose **Nodes** in the navigation pane, view the node label, and check whether the value of **kubernetes.io/hostname** is consistent with the ECS name. If they are the same, remove the node before the cluster upgrade. - - |image1| - -.. |image1| image:: /_static/images/en-us_image_0000001517903020.png diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_node_pool.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_node_pool.rst deleted file mode 100644 index 012260d..0000000 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_node_pool.rst +++ /dev/null @@ -1,45 +0,0 @@ -:original_name: cce_10_0436.html - -.. _cce_10_0436: - -Checking the Node Pool -====================== - -Check Item ----------- - -Check the following aspects: - -- Check the node status. -- Check whether the auto scaling function of the node pool is disabled. - -Solution --------- - -- **Scenario 1: The node pool status is abnormal.** - - Log in to the CCE console, go to the target cluster and choose **Nodes**. On the displayed page, click **Node Pools** tab and check the node pool status. If the node pool is being scaled, wait until the scaling is complete, and disable the auto scaling function by referring to :ref:`Scenario 2 `. - -- .. _cce_10_0436__li2791152121810: - - **Scenario 2: The auto scaling function of the node pool is enabled.** - - **Solution 1 (Recommended)** - - Log in to the CCE console and go to the target cluster. Choose **O&M** > **Add-ons** and uninstall the autoscaler add-on. - - .. note:: - - Before uninstalling the autoscaler add-on, click **Upgrade** to back up the configuration so that the add-on configuration can be restored during reinstallation. 
- - Before uninstalling the autoscaler add-on, choose **O&M** > **Node Scaling** and back up the current scaling policies so that they can be restored during reinstallation. These policies will be deleted when the autoscaler add-on is uninstalled. - - Obtain and back up the node scaling policy by clicking **Edit**. - - **Solution 2** - - If you do not want to uninstall the autoscaler add-on, log in to the CCE console and access the cluster detail page. Choose **Nodes** in the navigation pane. On the displayed page, click the **Node Pools** tab and click **Edit** of the corresponding node pool to disable the auto scaling function. - - .. note:: - - Before disabling the auto scaling function, back up the autoscaling configuration so that the configuration can be restored when the function is enabled. diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/compatibility_risk.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/compatibility_risks.rst similarity index 65% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/compatibility_risk.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/compatibility_risks.rst index 3dc0be0..37678cf 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/compatibility_risk.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/compatibility_risks.rst @@ -2,65 +2,69 @@ .. _cce_10_0441: -Compatibility Risk -================== +Compatibility Risks +=================== Check Item ---------- -Read the version compatibility differences and ensure that they are not affected. - -The patch upgrade does not involve version compatibility differences. +Read the version compatibility differences and ensure that they are not affected. The patch upgrade does not involve version compatibility differences. 
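The items in the **Self-Check** column of the table below can usually be verified with kubectl before starting the upgrade. For example, the exec-probe timeout self-check for the v1.19 to v1.21/v1.23 path could be scripted roughly as follows. This is only an illustrative sketch, not part of the official pre-upgrade check; it assumes kubectl access to the cluster and that the **jq** tool is installed on the operator host.

.. code-block::

   # List workloads whose containers define an exec liveness probe and print the
   # configured timeoutSeconds (Kubernetes defaults this field to 1 if it is omitted).
   kubectl get deployments,statefulsets,daemonsets -A -o json \
     | jq -r '.items[] | . as $w
         | $w.spec.template.spec.containers[]
         | select(.livenessProbe.exec != null)
         | "\($w.metadata.namespace)/\($w.metadata.name) container=\(.name) timeoutSeconds=\(.livenessProbe.timeoutSeconds // 1)"'

Readiness and startup probes can be inspected in the same way by replacing **livenessProbe** with **readinessProbe** or **startupProbe**. Any exec probe that routinely needs more than the configured timeout should have **timeoutSeconds** increased before the upgrade.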
Version compatibility --------------------- -+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Major Version Upgrade Path | Precaution | Self-Check | -+======================================+==============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+=============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ -| Upgrade from v1.19 to v1.21 or v1.23 | The bug of **exec probe timeouts** is fixed in Kubernetes 1.21. Before this bug fix, the exec probe does not consider the **timeoutSeconds** field. Instead, the probe will run indefinitely, even beyond its configured deadline. It will stop until the result is returned. If this field is not specified, the default value **1** is used. This field takes effect after the upgrade. If the probe runs over 1 second, the application health check may fail and the application may restart frequently. | Before the upgrade, check whether the timeout is properly set for the exec probe. 
| -+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| | kube-apiserver of CCE 1.19 or later requires that the Subject Alternative Names (SANs) field be configured for the certificate of your webhook server. Otherwise, kube-apiserver fails to call the webhook server after the upgrade, and containers cannot be started properly. | Before the upgrade, check whether the SAN field is configured in the certificate of your webhook server. | -| | | | -| | Root cause: X.509 `CommonName `__ is discarded in Go 1.15. kube-apiserver of CCE 1.19 is compiled using Go 1.15. If your webhook certificate does not have SANs, kube-apiserver does not process the **CommonName** field of the X.509 certificate as the host name by default. As a result, the authentication fails. | - If you do not have your own webhook server, you can skip this check. | -| | | - If the field is not set, you are advised to use the SAN field to specify the IP address and domain name supported by the certificate. | -+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| | Arm nodes are not supported in clusters of v1.21 and later. | Check whether your services will be affected if Arm nodes cannot be used. 
| -+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Upgrade from v1.15 to v1.19 | The control plane of in the clusters v1.19 is incompatible with kubelet v1.15. If a node fails to be upgraded or the node to be upgraded restarts after the master node is successfully upgraded, there is a high probability that the node is in the **NotReady** status. | #. In normal cases, this scenario is not triggered. | -| | | #. After the master node is upgraded, do not suspend the upgrade so the node can be quickly upgraded. | -| | This is because the node failed to be upgraded restarts the kubelet and trigger the node registration. In clusters of v1.15, the default registration tags (**failure-domain.beta.kubernetes.io/is-baremetal** and **kubernetes.io/availablezone**) are regarded as invalid tags by the clusters of v1.19. | #. If a node fails to be upgraded and cannot be restored, evict applications on the node as soon as possible. Contact technical support and skip the node upgrade. After the upgrade is complete, reset the node. | -| | | | -| | The valid tags in the clusters of v1.19 are **node.kubernetes.io/baremetal** and **failure-domain.beta.kubernetes.io/zone**. | | -+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| | In CCE 1.15 and 1.19 clusters, the Docker storage driver file system is switched from XFS to Ext4. 
As a result, the import package sequence in the pods of the upgraded Java application may be abnormal, causing pod exceptions. | Before the upgrade, check the Docker configuration file **/etc/docker/daemon.json** on the node. Check whether the value of **dm.fs** is **xfs**. | -| | | | -| | | - If the value is **ext4** or the storage driver is Overlay, you can skip the next steps. | -| | | - If the value is **xfs**, you are advised to deploy applications in the cluster of the new version in advance to test whether the applications are compatible with the new cluster version. | -| | | | -| | | .. code-block:: | -| | | | -| | | { | -| | | "storage-driver": "devicemapper", | -| | | "storage-opts": [ | -| | | "dm.thinpooldev=/dev/mapper/vgpaas-thinpool", | -| | | "dm.use_deferred_removal=true", | -| | | "dm.fs=xfs", | -| | | "dm.use_deferred_deletion=true" | -| | | ] | -| | | } | -+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| | kube-apiserver of CCE 1.19 or later requires that the Subject Alternative Names (SANs) field be configured for the certificate of your webhook server. Otherwise, kube-apiserver fails to call the webhook server after the upgrade, and containers cannot be started properly. | Before the upgrade, check whether the SAN field is configured in the certificate of your webhook server. | -| | | | -| | Root cause: X.509 `CommonName `__ is discarded in Go 1.15. kube-apiserver of CCE 1.19 is compiled using Go 1.15. The **CommonName** field is processed as the host name. As a result, the authentication fails. | - If you do not have your own webhook server, you can skip this check. | -| | | - If the field is not set, you are advised to use the SAN field to specify the IP address and domain name supported by the certificate. | -| | | | -| | | .. important:: | -| | | | -| | | NOTICE: | -| | | To mitigate the impact of version differences on cluster upgrade, CCE performs special processing during the upgrade from 1.15 to 1.19 and still supports certificates without SANs. However, no special processing is required for subsequent upgrades. You are advised to rectify your certificate as soon as possible. 
| -+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| | In clusters of v1.17.17 and later, CCE automatically creates pod security policies (PSPs) for you, which restrict the creation of pods with unsafe configurations, for example, pods for which **net.core.somaxconn** under a sysctl is configured in the security context. | After an upgrade, you can allow insecure system configurations as required. For details, see :ref:`Configuring a Pod Security Policy `. | -+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Upgrade from v1.13 to v1.15 | After a VPC network cluster is upgraded, the master node occupies an extra CIDR block due to the upgrade of network components. If no container CIDR block is available for the new node, the pod scheduled to the node cannot run. | This problem occurs when almost all CIDR blocks are occupied. For example, the container CIDR block is 10.0.0.0/16, the number of available IP addresses is 65,536, and the VPC network is allocated a CIDR block with the fixed size (using the mask to determine the maximum number of container IP addresses allocated to each node). If the upper limit is 128, the cluster supports a maximum of 512 (65536/128) nodes, including the three master nodes. After the cluster is upgraded, each of the three master nodes occupies one CIDR block. As a result, 506 nodes are supported. 
| -+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ ++-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Upgrade Path | Version Difference | Self-Check | ++=========================+===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ +| v1.23 to v1.25 | Since Kubernetes v1.25, PodSecurityPolicy has been replaced 
by pod Security Admission (:ref:`Configuring Pod Security Admission `). | - To migrate PodSecurityPolicy capabilities to Pod Security Admission, perform the following steps: | +| | | | +| | | #. Ensure that the cluster is of the latest CCE v1.23 version. | +| | | #. To migrate PodSecurityPolicy capabilities to Pod Security Admission, see :ref:`Migrating from Pod Security Policy to Pod Security Admission `. | +| | | #. After confirming that the functions are normal after the migration, upgrade the cluster to v1.25. | +| | | | +| | | - If you no longer need PodSecurityPolicy, you can delete PodSecurityPolicy from the cluster and upgrade the cluster to v1.25. | ++-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| v1.19 to v1.21 or v1.23 | The bug of **exec probe timeouts** is fixed in Kubernetes 1.21. Before this bug is fixed, the exec probe does not consider the **timeoutSeconds** field. Instead, the probe will run indefinitely, even beyond its configured deadline. It will stop until the result is returned. If this field is not specified, the default value **1** is used. This field takes effect after the upgrade. If the probe runs over 1 second, the application health check may fail and the application may restart frequently. | Before the upgrade, check whether the timeout is properly set for the exec probe. 
| ++-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| | kube-apiserver of CCE 1.19 or later requires that the Subject Alternative Names (SANs) field be configured for the certificate of your webhook server. Otherwise, kube-apiserver fails to call the webhook server after the upgrade, and containers cannot be started properly. | Before the upgrade, check whether the SAN field is configured in the certificate of your webhook server. | +| | | | +| | Root cause: X.509 `CommonName `__ is discarded in Go 1.15. kube-apiserver of CCE 1.19 is compiled using Go 1.15. If your webhook certificate does not have SANs, kube-apiserver does not process the **CommonName** field of the X.509 certificate as the host name by default. As a result, the authentication fails. | - If you do not have your own webhook server, you can skip this check. | +| | | - If the field is not set, you are advised to use the SAN field to specify the IP address and domain name supported by the certificate. | ++-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| v1.15 to v1.19 | The control plane of in the clusters v1.19 is incompatible with kubelet v1.15. If a node fails to be upgraded or the node to be upgraded restarts after the master node is successfully upgraded, there is a high probability that the node is in the **NotReady** status. | #. 
In normal cases, this scenario is not triggered. | +| | | #. After the master node is upgraded, do not suspend the upgrade so the node can be quickly upgraded. | +| | This is because the node failed to be upgraded restarts the kubelet and trigger the node registration. In clusters of v1.15, the default registration tags (**failure-domain.beta.kubernetes.io/is-baremetal** and **kubernetes.io/availablezone**) are regarded as invalid tags by the clusters of v1.19. | #. If a node fails to be upgraded and cannot be restored, evict applications on the node as soon as possible. Contact technical support and skip the node upgrade. After the upgrade is complete, reset the node. | +| | | | +| | The valid tags in the clusters of v1.19 are **node.kubernetes.io/baremetal** and **failure-domain.beta.kubernetes.io/zone**. | | ++-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| | In CCE 1.15 and 1.19 clusters, the Docker storage driver file system is switched from XFS to Ext4. As a result, the import package sequence in the pods of the upgraded Java application may be abnormal, causing pod exceptions. | Before the upgrade, check the Docker configuration file **/etc/docker/daemon.json** on the node. Check whether the value of **dm.fs** is **xfs**. | +| | | | +| | | - If the value is **ext4** or the storage driver is Overlay, you can skip the next steps. | +| | | - If the value is **xfs**, you are advised to deploy applications in the cluster of the new version in advance to test whether the applications are compatible with the new cluster version. | +| | | | +| | | .. 
code-block:: | +| | | | +| | | { | +| | | "storage-driver": "devicemapper", | +| | | "storage-opts": [ | +| | | "dm.thinpooldev=/dev/mapper/vgpaas-thinpool", | +| | | "dm.use_deferred_removal=true", | +| | | "dm.fs=xfs", | +| | | "dm.use_deferred_deletion=true" | +| | | ] | +| | | } | ++-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| | kube-apiserver of CCE 1.19 or later requires that the Subject Alternative Names (SANs) field be configured for the certificate of your webhook server. Otherwise, kube-apiserver fails to call the webhook server after the upgrade, and containers cannot be started properly. | Before the upgrade, check whether the SAN field is configured in the certificate of your webhook server. | +| | | | +| | Root cause: X.509 `CommonName `__ is discarded in Go 1.15. kube-apiserver of CCE 1.19 is compiled using Go 1.15. The **CommonName** field is processed as the host name. As a result, the authentication fails. | - If you do not have your own webhook server, you can skip this check. | +| | | - If the field is not set, you are advised to use the SAN field to specify the IP address and domain name supported by the certificate. | +| | | | +| | | .. important:: | +| | | | +| | | NOTICE: | +| | | To mitigate the impact of version differences on cluster upgrade, CCE performs special processing during the upgrade from 1.15 to 1.19 and still supports certificates without SANs. However, no special processing is required for subsequent upgrades. You are advised to rectify your certificate as soon as possible. 
| ++-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| | In clusters of v1.17.17 and later, CCE automatically creates pod security policies (PSPs) for you, which restrict the creation of pods with unsafe configurations, for example, pods for which **net.core.somaxconn** under a sysctl is configured in the security context. | After an upgrade, you can allow insecure system configurations as required. For details, see :ref:`Configuring a Pod Security Policy `. | ++-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| v1.13 to v1.15 | After a VPC network cluster is upgraded, the master node occupies an extra CIDR block due to the upgrade of network components. If no container CIDR block is available for the new node, the pod scheduled to the node cannot run. | Generally, this problem occurs when the nodes in the cluster are about to fully occupy the container CIDR block. For example, the container CIDR block is 10.0.0.0/16, the number of available IP addresses is 65,536, and the VPC network is allocated a CIDR block with the fixed size (using the mask to determine the maximum number of container IP addresses allocated to each node). If the upper limit is 128, the cluster supports a maximum of 512 (65536/128) nodes, including the three master nodes. 
After the cluster is upgraded, each of the three master nodes occupies one CIDR block. As a result, 506 nodes are supported. | ++-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/containerd.sock_check.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/containerd.sock.rst similarity index 87% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/containerd.sock_check.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/containerd.sock.rst index b536554..e64c85a 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/containerd.sock_check.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/containerd.sock.rst @@ -2,8 +2,8 @@ .. _cce_10_0457: -containerd.sock Check -===================== +containerd.sock +=============== Check Item ---------- @@ -17,5 +17,5 @@ Solution #. Log in to the node. #. Run the **rpm -qa \| grep docker \| grep euleros** command. If the command output is not empty, the Docker used on the node is Euler-docker. -#. Run the **ls /run/containerd/containerd.sock** command. If the file exists, Docker fails to be started. +#. Run the **ls /run/containerd/containerd.sock** command. If the file exists, Docker startup will fail. #. Run the **rm -rf /run/containerd/containerd.sock** command and perform the cluster upgrade check again. 
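Taken together, the node-side steps above can be run as a single check, roughly as follows. This is only a sketch of the documented procedure; run it as **root** on the affected node, and note that it applies only to nodes that use Euler-docker.

.. code-block::

   # Check whether Euler-docker is used on this node.
   if rpm -qa | grep docker | grep euleros > /dev/null; then
       # A leftover containerd.sock file prevents Docker from starting.
       if [ -e /run/containerd/containerd.sock ]; then
           rm -rf /run/containerd/containerd.sock
           echo "Removed /run/containerd/containerd.sock. Re-run the cluster upgrade check."
       fi
   fi

The **-e** test mirrors the **ls /run/containerd/containerd.sock** check in step 3; if the file is removed, perform the cluster upgrade check again as described in step 4.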
diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_coredns_configuration_consistency.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/coredns_configuration_consistency.rst similarity index 53% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_coredns_configuration_consistency.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/coredns_configuration_consistency.rst index 450a0f0..5a47f7d 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_coredns_configuration_consistency.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/coredns_configuration_consistency.rst @@ -2,22 +2,22 @@ .. _cce_10_0493: -Checking CoreDNS Configuration Consistency -========================================== +CoreDNS Configuration Consistency +================================= Check Item ---------- -Check whether the current CoreDNS key configuration Corefile is different from the Helm Release record. The difference may be overwritten during the add-on upgrade, affecting domain name resolution in the cluster. +Check whether the current CoreDNS key configuration Corefile is different from the Helm release record. The difference may be overwritten during the add-on upgrade, **affecting domain name resolution in the cluster**. Solution -------- -You can upgrade the coredns add-on separately after confirming the configuration differences. +You can upgrade CoreDNS separately after confirming the configuration differences. -#. For details about how to configure kubectl, see :ref:`Connecting to a Cluster Using kubectl `. +#. Configure kubectl, see :ref:`Connecting to a Cluster Using kubectl `. -#. .. _cce_10_0493__en-us_topic_0000001548755413_li1178291934910: +#. .. _cce_10_0493__li1178291934910: Obtain the Corefile that takes effect currently. @@ -26,9 +26,9 @@ You can upgrade the coredns add-on separately after confirming the configuration kubectl get cm -nkube-system coredns -o jsonpath='{.data.Corefile}' > corefile_now.txt cat corefile_now.txt -#. .. _cce_10_0493__en-us_topic_0000001548755413_li111544111811: +#. .. _cce_10_0493__li111544111811: - Obtain the Corefile in the Helm Release record (depending on Python 3). + Obtain the Corefile in the Helm release record (depending on Python 3). .. code-block:: @@ -47,24 +47,28 @@ You can upgrade the coredns add-on separately after confirming the configuration " > corefile_record.txt cat corefile_record.txt -#. Compare the output information of :ref:`2 ` and :ref:`3 `. +#. Compare the output differences between :ref:`2 ` and :ref:`3 `. .. code-block:: diff corefile_now.txt corefile_record.txt -y; - |image1| -#. Return to the CCE console and click the cluster name to go to the cluster console. On the **Add-ons** page, select the coredns add-on and click **Upgrade**. + .. figure:: /_static/images/en-us_image_0000001695896617.png + :alt: **Figure 1** Viewing output differences - To retain the differentiated configurations, use either of the following methods: + **Figure 1** Viewing output differences - - Set **parameterSyncStrategy** to **force**. You need to manually enter the differentiated configurations. For details, see :ref:`coredns (System Resource Add-On, Mandatory) `. +#. Return to the CCE console and click the cluster name to go to the cluster console. 
On the **Add-ons** page, select CoreDNS and click **Upgrade**. + + To retain the different configurations, use either of the following methods: + + - Set **parameterSyncStrategy** to **force**. Manually enter the differential configuration. For details, see :ref:`CoreDNS (System Resource Add-On, Mandatory) `. - If **parameterSyncStrategy** is set to **inherit**, differentiated configurations are automatically inherited. The system automatically parses, identifies, and inherits differentiated parameters. - |image2| + |image1| -#. Click **OK**. After the add-on upgrade is complete, check whether all CoreDNS instances are available and whether the Corefile meets the expectation. +#. Click **OK**. After the add-on upgrade is complete, check whether all CoreDNS instances are available and whether Corefile meets the expectation. .. code-block:: @@ -72,7 +76,6 @@ You can upgrade the coredns add-on separately after confirming the configuration #. Change the value of **parameterSyncStrategy** to **ensureConsistent** to enable configuration consistency verification. - Use the parameter configuration function of CCE add-on management to modify the Corefile configuration to avoid differences. + In addition, you are advised to use the parameter configuration function of CCE add-on management to modify the Corefile configuration to avoid differences. -.. |image1| image:: /_static/images/en-us_image_0000001628843805.png -.. |image2| image:: /_static/images/en-us_image_0000001668036886.png +.. |image1| image:: /_static/images/en-us_image_0000001716141253.png diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/crd_check.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/crds.rst similarity index 94% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/crd_check.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/crds.rst index 1740da7..91ed5a0 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/crd_check.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/crds.rst @@ -2,8 +2,8 @@ .. _cce_10_0444: -CRD Check -========= +CRDs +==== Check Item ---------- diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_deprecated_kubernetes_apis.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/discarded_kubernetes_apis.rst similarity index 64% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_deprecated_kubernetes_apis.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/discarded_kubernetes_apis.rst index dc9b337..537fef0 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_deprecated_kubernetes_apis.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/discarded_kubernetes_apis.rst @@ -2,8 +2,8 @@ .. 
_cce_10_0487: -Checking Deprecated Kubernetes APIs -=================================== +Discarded Kubernetes APIs +========================= Check Item ---------- @@ -17,11 +17,11 @@ The system scans the audit logs of the past day to check whether the user calls Solution -------- -**Description** +**Check Description** -The check result shows that your cluster calls a deprecated API of the target cluster version through kubectl or other applications. You can rectify the fault before the upgrade. Otherwise, the API will be intercepted by kube-apiserver after the upgrade. For details about each deprecated API, see `Deprecated API Migration Guide `__. +Based on the check result, it is detected that your cluster calls a deprecated API of the target cluster version using kubectl or other applications. You can rectify the fault before the upgrade. Otherwise, the API will be intercepted by kube-apiserver after the upgrade. For details about each deprecated API, see :ref:`Deprecated APIs `. -**Cases** +**Case Study** Ingresses of extensions/v1beta1 and networking.k8s.io/v1beta1 API are deprecated in clusters of v1.22. If you upgrade a CCE cluster from v1.19 or v1.21 to v1.23, existing resources are not affected, but the v1beta1 API version may be intercepted in the creation and editing scenarios. diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/discarded_kubernetes_resource.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/discarded_kubernetes_resource.rst deleted file mode 100644 index ea69a7d..0000000 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/discarded_kubernetes_resource.rst +++ /dev/null @@ -1,35 +0,0 @@ -:original_name: cce_10_0440.html - -.. _cce_10_0440: - -Discarded Kubernetes Resource -============================= - -Check Item ----------- - -Check whether there are discarded resources in the clusters. - -Solution --------- - -**Scenario 1: The PodSecurityPolicy resource object has been discarded since clusters of v1.25.** - -|image1| - -Run the **kubectl get psp -A** command in the cluster to obtain the existing PSP object. - -If these two objects are not used, skip the check. Otherwise, upgrade the corresponding functions to PodSecurity by referring to :ref:`Pod Security `. - -**Scenario 2: The discarded annotation (tolerate-unready-endpoints) exists in Services in clusters of 1.25 or later.** - -|image2| - -Check whether the Service in the log information contains the annotation **tolerate-unready-endpoints**. If yes, delete the annotation and add the following field to the spec of the corresponding Service to replace the annotation: - -.. code-block:: - - publishNotReadyAddresses: true - -.. |image1| image:: /_static/images/en-us_image_0000001569022901.png -.. |image2| image:: /_static/images/en-us_image_0000001517903056.png diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/discarded_kubernetes_resources.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/discarded_kubernetes_resources.rst new file mode 100644 index 0000000..0ce67cf --- /dev/null +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/discarded_kubernetes_resources.rst @@ -0,0 +1,28 @@ +:original_name: cce_10_0440.html + +.. 
_cce_10_0440: + +Discarded Kubernetes Resources +============================== + +Check Item +---------- + +Check whether there are discarded resources in the clusters. + +Solution +-------- + +**Scenario 1: The PodSecurityPolicy resource object has been discarded since clusters of 1.25.** + +Run the **kubectl get psp -A** command in the cluster to obtain the existing PSP object. + +If these two objects are not used, skip the check. Otherwise, upgrade the corresponding functions to PodSecurity by referring to :ref:`Pod Security `. + +**Scenario 2: The Service in the clusters of 1.25 or later has discarded annotation:** **tolerate-unready-endpoints.** + +Check whether the Service provided in the log information contains the annotation of **tolerate-unready-endpoints**. If yes, replace the annotation with the following fields: + +.. code-block:: + + publishNotReadyAddresses: true diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/enhanced_cpu_management_policy.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/enhanced_cpu_management_policy.rst deleted file mode 100644 index fa08db3..0000000 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/enhanced_cpu_management_policy.rst +++ /dev/null @@ -1,29 +0,0 @@ -:original_name: cce_10_0480.html - -.. _cce_10_0480: - -Enhanced CPU Management Policy -============================== - -Check Item ----------- - -Check whether the current cluster version and the target version support enhanced CPU policy. - -Solution --------- - -**Scenario**: The current cluster version uses the enhanced CPU management policy, but the target cluster version does not support the enhanced CPU management policy. - -Upgrade the cluster to a version that supports the enhanced CPU management policy. The following table lists the cluster versions that support the enhanced CPU management policy. - -.. table:: **Table 1** Cluster versions that support the enhanced CPU policy - - ================ ============================= - Cluster Version Enhanced CPU Policy Supported - ================ ============================= - v1.17 or earlier No - v1.19 No - v1.21 No - v1.23 or later Yes - ================ ============================= diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/enhanced_cpu_policies.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/enhanced_cpu_policies.rst new file mode 100644 index 0000000..d7f994b --- /dev/null +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/enhanced_cpu_policies.rst @@ -0,0 +1,29 @@ +:original_name: cce_10_0480.html + +.. _cce_10_0480: + +Enhanced CPU Policies +===================== + +Check Item +---------- + +Check whether the current cluster version and the target version support the enhanced CPU policy. + +Solution +-------- + +**Scenario**: Only the current cluster version supports the enhanced CPU policy function. The target version does not support the enhanced CPU policy function. + +Upgrade to a cluster version that supports the enhanced CPU policy function. The following table lists the cluster versions that support the enhanced CPU policy function. + +.. 
table:: **Table 1** List of cluster versions that support the enhanced CPU policy function + + ============================ =================== + Cluster Version Enhanced CPU Policy + ============================ =================== + Clusters of v1.17 or earlier Not supported + Clusters of v1.19 Not supported + Clusters of v1.21 Not supported + Clusters of v1.23 and later Supported + ============================ =================== diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/everest_restriction_check.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/everest_restrictions.rst similarity index 58% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/everest_restriction_check.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/everest_restrictions.rst index c5f52d6..39dc13f 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/everest_restriction_check.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/everest_restrictions.rst @@ -2,17 +2,15 @@ .. _cce_10_0478: -everest Restriction Check -========================= +everest Restrictions +==================== Check Item ---------- -Check whether the current everest add-on has compatibility restrictions. See :ref:`Table 1 `. +Check whether there are any compatibility restrictions on the current everest add-on. -.. _cce_10_0478__table1126154011128: - -.. table:: **Table 1** List of everest add-on versions that have compatibility restrictions +.. table:: **Table 1** List of everest add-on versions with compatibility restrictions +-----------------------------------+-----------------------------------+ | Add-on Name | Versions Involved | @@ -25,4 +23,4 @@ Check whether the current everest add-on has compatibility restrictions. See :re Solution -------- -The current everest add-on has compatibility restrictions and cannot be upgraded with the cluster upgrade. Contact technical support. +There are compatibility restrictions on the current everest add-on and it cannot be upgraded with the cluster upgrade. Contact technical support. diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_helm_chart.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/helm_charts.rst similarity index 91% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_helm_chart.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/helm_charts.rst index 2f30fc8..993c5ed 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_helm_chart.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/helm_charts.rst @@ -2,8 +2,8 @@ .. 
_cce_10_0434: -Checking the Helm Chart -======================= +Helm Charts +=========== Check Item ---------- diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/index.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/index.rst index 42dd896..ac73311 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/index.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/index.rst @@ -5,92 +5,92 @@ Troubleshooting for Pre-upgrade Check Exceptions ================================================ -- :ref:`Performing Pre-upgrade Check ` -- :ref:`Checking the Node ` -- :ref:`Checking the Blocklist ` -- :ref:`Checking the Add-on ` -- :ref:`Checking the Helm Chart ` -- :ref:`Checking the Master Node SSH Connectivity ` -- :ref:`Checking the Node Pool ` -- :ref:`Checking the Security Group ` -- :ref:`To-Be-Migrated Node ` -- :ref:`Discarded Kubernetes Resource ` -- :ref:`Compatibility Risk ` -- :ref:`Node CCEAgent Version ` +- :ref:`Pre-upgrade Check ` +- :ref:`Node Restrictions ` +- :ref:`Blocklist ` +- :ref:`Add-ons ` +- :ref:`Helm Charts ` +- :ref:`SSH Connectivity of Master Nodes ` +- :ref:`Node Pools ` +- :ref:`Security Groups ` +- :ref:`To-Be-Migrated Nodes ` +- :ref:`Discarded Kubernetes Resources ` +- :ref:`Compatibility Risks ` +- :ref:`Node CCE Agent Versions ` - :ref:`Node CPU Usage ` -- :ref:`CRD Check ` -- :ref:`Node Disk ` +- :ref:`CRDs ` +- :ref:`Node Disks ` - :ref:`Node DNS ` - :ref:`Node Key Directory File Permissions ` - :ref:`Kubelet ` - :ref:`Node Memory ` - :ref:`Node Clock Synchronization Server ` - :ref:`Node OS ` -- :ref:`Node CPU Count ` -- :ref:`Node Python Command ` +- :ref:`Node CPUs ` +- :ref:`Node Python Commands ` - :ref:`Node Readiness ` - :ref:`Node journald ` -- :ref:`containerd.sock Check ` -- :ref:`Internal Error ` -- :ref:`Node Mount Point ` -- :ref:`Kubernetes Node Taint ` -- :ref:`everest Restriction Check ` -- :ref:`cce-hpa-controller Restriction Check ` -- :ref:`Enhanced CPU Management Policy ` +- :ref:`containerd.sock ` +- :ref:`Internal Errors ` +- :ref:`Node Mount Points ` +- :ref:`Kubernetes Node Taints ` +- :ref:`everest Restrictions ` +- :ref:`cce-hpa-controller Restrictions ` +- :ref:`Enhanced CPU Policies ` - :ref:`Health of Worker Node Components ` - :ref:`Health of Master Node Components ` - :ref:`Memory Resource Limit of Kubernetes Components ` -- :ref:`Checking Deprecated Kubernetes APIs ` +- :ref:`Discarded Kubernetes APIs ` - :ref:`IPv6 Capabilities of a CCE Turbo Cluster ` - :ref:`Node NetworkManager ` - :ref:`Node ID File ` - :ref:`Node Configuration Consistency ` - :ref:`Node Configuration File ` -- :ref:`Checking CoreDNS Configuration Consistency ` +- :ref:`CoreDNS Configuration Consistency ` .. 
toctree:: :maxdepth: 1 :hidden: - performing_pre-upgrade_check - checking_the_node - checking_the_blocklist - checking_the_add-on - checking_the_helm_chart - checking_the_master_node_ssh_connectivity - checking_the_node_pool - checking_the_security_group - to-be-migrated_node - discarded_kubernetes_resource - compatibility_risk - node_cceagent_version + pre-upgrade_check + node_restrictions + blocklist + add-ons + helm_charts + ssh_connectivity_of_master_nodes + node_pools + security_groups + to-be-migrated_nodes + discarded_kubernetes_resources + compatibility_risks + node_cce_agent_versions node_cpu_usage - crd_check - node_disk + crds + node_disks node_dns node_key_directory_file_permissions kubelet node_memory node_clock_synchronization_server node_os - node_cpu_count - node_python_command + node_cpus + node_python_commands node_readiness node_journald - containerd.sock_check - internal_error - node_mount_point - kubernetes_node_taint - everest_restriction_check - cce-hpa-controller_restriction_check - enhanced_cpu_management_policy + containerd.sock + internal_errors + node_mount_points + kubernetes_node_taints + everest_restrictions + cce-hpa-controller_restrictions + enhanced_cpu_policies health_of_worker_node_components health_of_master_node_components memory_resource_limit_of_kubernetes_components - checking_deprecated_kubernetes_apis + discarded_kubernetes_apis ipv6_capabilities_of_a_cce_turbo_cluster node_networkmanager node_id_file node_configuration_consistency node_configuration_file - checking_coredns_configuration_consistency + coredns_configuration_consistency diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/internal_error.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/internal_errors.rst similarity index 86% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/internal_error.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/internal_errors.rst index a8ae92d..8696e78 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/internal_error.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/internal_errors.rst @@ -2,8 +2,8 @@ .. _cce_10_0458: -Internal Error -============== +Internal Errors +=============== Check Item ---------- diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/kubelet.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/kubelet.rst index 8550c19..aff6cf7 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/kubelet.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/kubelet.rst @@ -15,8 +15,8 @@ Solution - **Scenario 1: The kubelet status is abnormal.** - If the kubelet is abnormal, the node is unavailable. Restore the node and check again. + If the kubelet malfunctions, the node is unavailable. Restore the node and check again. For details, see -- **Scenario 2: The cce-pause version is abnormal.** +- **Scenario 2: The cce-pause version is incorrect.** The version of the pause container image on which kubelet depends is not cce-pause:3.1. If you continue the upgrade, pods will restart in batches. Currently, the upgrade is not supported. Contact technical support. 
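If you want to verify these kubelet check items manually before retrying the check, you can inspect the kubelet service and the pause image directly on the node. The following is only a minimal sketch; the **crictl** command assumes a containerd-based node, and **docker images** can be used instead on Docker-based nodes.

.. code-block::

   # Check whether kubelet is active; restart it if it is not, then recheck
   systemctl status kubelet
   systemctl restart kubelet

   # Check the version of the pause image that kubelet depends on (cce-pause:3.1 is expected)
   crictl images | grep cce-pause

If kubelet remains abnormal after the restart, or the pause image version is not cce-pause:3.1, contact technical support as described above.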
diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/kubernetes_node_taint.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/kubernetes_node_taints.rst similarity index 69% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/kubernetes_node_taint.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/kubernetes_node_taints.rst index eeb035e..b16dedb 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/kubernetes_node_taint.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/kubernetes_node_taints.rst @@ -2,20 +2,18 @@ .. _cce_10_0460: -Kubernetes Node Taint -===================== +Kubernetes Node Taints +====================== Check Item ---------- -Check whether the taint, as shown in :ref:`Table 1 `, exists on the node. - -.. _cce_10_0460__table1126154011128: +Check whether the taint needed for cluster upgrade exists on the node. .. table:: **Table 1** Taint checklist ========================== ========== - Name Impact + Taint Name Impact ========================== ========== node.kubernetes.io/upgrade NoSchedule ========================== ========== @@ -27,14 +25,16 @@ Scenario 1: The node is skipped during the cluster upgrade. #. For details about how to configure kubectl, see :ref:`Connecting to a Cluster Using kubectl `. -#. Check the kubelet version of the corresponding node. If the following information is expected: +#. Check the kubelet version of the corresponding node. The following information is expected: - |image1| + + .. figure:: /_static/images/en-us_image_0000001647417808.png + :alt: **Figure 1** kubelet version + + **Figure 1** kubelet version If the version of the node is different from that of other nodes, the node is skipped during the upgrade. Reset the node and upgrade the cluster again. For details about how to reset a node, see :ref:`Resetting a Node `. .. note:: Resetting a node will reset all node labels, which may affect workload scheduling. Before resetting a node, check and retain the labels that you have manually added to the node. - -.. |image1| image:: /_static/images/en-us_image_0000001568902601.png diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/memory_resource_limit_of_kubernetes_components.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/memory_resource_limit_of_kubernetes_components.rst index f9bd8ec..59074a7 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/memory_resource_limit_of_kubernetes_components.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/memory_resource_limit_of_kubernetes_components.rst @@ -13,10 +13,5 @@ Check whether the resources of Kubernetes components, such as etcd and kube-cont Solution -------- -Solution 1: Reducing Kubernetes resources - -Solution 2: :ref:`Expanding cluster scale ` - -|image1| - -.. |image1| image:: /_static/images/en-us_image_0000001579008782.png +- Solution 1: Reduce Kubernetes resources. +- Solution 2: :ref:`Scale out the cluster. 
` diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_cce_agent_versions.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_cce_agent_versions.rst new file mode 100644 index 0000000..980d5bb --- /dev/null +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_cce_agent_versions.rst @@ -0,0 +1,88 @@ +:original_name: cce_10_0442.html + +.. _cce_10_0442: + +Node CCE Agent Versions +======================= + +Check Item +---------- + +Check whether cce-agent on the current node is of the latest version. + +Solution +-------- + +- **Scenario 1: The error message "you cce-agent no update, please restart it" is displayed.** + + cce-agent does not need to be updated but is not restarted. In this case, log in to the node and manually restart cce-agent. + + Solution: Log in to the node and run the following command: + + .. code-block:: + + systemctl restart cce-agent + + Perform the pre-upgrade check again. + +- **Scenario 2: The error message "your cce-agent is not the latest version" is displayed.** + + cce-agent is not of the latest version, and the automatic update failed. This issue is typically caused by an invalid OBS path or the component version is outdated. + + Solution + + #. Log in to a node where the check succeeded, obtain the path of the cce-agent configuration file, and obtain the OBS address. + + .. code-block:: + + cat `ps aux | grep cce-agent | grep -v grep | awk -F '-f ' '{print $2}'` + + The OBS configuration address field in the configuration file is **packageFrom.addr**. + + + .. figure:: /_static/images/en-us_image_0000001695896445.png + :alt: **Figure 1** OBS address + + **Figure 1** OBS address + + #. Log in to a where the check failed, obtain the OBS address again by referring to the previous step, and check whether the OBS addresses are the same. If they are different, change the OBS address of the abnormal node to the correct address. + + #. Run the following commands to download the latest binary file: + + - x86 + + .. code-block:: + + curl -k "https://{OBS address you have obtained}/cluster-versions/base/cce-agent" > /tmp/cce-agent + + - Arm + + .. code-block:: + + curl -k "https://{OBS address you have obtained}/cluster-versions/base/cce-agent-arm" > /tmp/cce-agent-arm + + #. Replace the original cce-agent binary file. + + - x86 + + .. code-block:: + + mv -f /tmp/cce-agent /usr/local/bin/cce-agent + chmod 750 /usr/local/bin/cce-agent + chown root:root /usr/local/bin/cce-agent + + - Arm + + .. code-block:: + + mv -f /tmp/cce-agent-arm /usr/local/bin/cce-agent-arm + chmod 750 /usr/local/bin/cce-agent-arm + chown root:root /usr/local/bin/cce-agent-arm + + #. Restart cce-agent. + + .. code-block:: + + systemctl restart cce-agent + + If you have any questions about the preceding operations, contact technical support. diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_cceagent_version.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_cceagent_version.rst deleted file mode 100644 index 5ed2cd5..0000000 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_cceagent_version.rst +++ /dev/null @@ -1,70 +0,0 @@ -:original_name: cce_10_0442.html - -.. 
_cce_10_0442: - -Node CCEAgent Version -===================== - -Check Item ----------- - -Check whether cce-agent on the current node is of the latest version. - -Solution --------- - -If cce-agent is not of the latest version, the automatic update fails. This problem is usually caused by invalid OBS address or the version of the component is outdated. - -#. Log in to a normal node that passes the check, obtain the path of the cce-agent configuration file, and check the OBS address. - - .. code-block:: - - cat `ps aux | grep cce-agent | grep -v grep | awk -F '-f ''{print $2}'` - - The OBS configuration address field in the configuration file is **packageFrom.addr**. - - |image1| - -#. Log in to an abnormal node where the check fails, obtain the OBS address again by referring to the previous step, and check whether the OBS address is consistent. If they are different, change the OBS address of the abnormal node to the correct address. - -#. Run the following commands to download the latest binary file: - - - x86 - - .. code-block:: - - curl -k "https://{OBS address you have obtained}/cluster-versions/base/cce-agent" > /tmp/cce-agent - - - ARM - - .. code-block:: - - curl -k "https://{OBS address you have obtained}/cluster-versions/base/cce-agent-arm" > /tmp/cce-agent-arm - -#. Replace the original cce-agent binary file. - - - x86 - - .. code-block:: - - mv -f /tmp/cce-agent /usr/local/bin/cce-agent - chmod 750 /usr/local/bin/cce-agent - chown root:root /usr/local/bin/cce-agent - - - ARM - - .. code-block:: - - mv -f /tmp/cce-agent-arm /usr/local/bin/cce-agent-arm - chmod 750 /usr/local/bin/cce-agent-arm - chown root:root /usr/local/bin/cce-agent-arm - -#. Restart cce-agent. - - .. code-block:: - - systemctl restart cce-agent - - If you have any questions about the preceding operations, contact technical support. - -.. |image1| image:: /_static/images/en-us_image_0000001629186693.png diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_clock_synchronization_server.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_clock_synchronization_server.rst index 7b229c5..e2d013a 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_clock_synchronization_server.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_clock_synchronization_server.rst @@ -15,23 +15,28 @@ Solution - **Scenario 1: ntpd is running abnormally.** - Log in to the node and run the **systemctl status ntpd** command to query the running status of ntpd. If the command output is abnormal, run the **systemctl restart ntpd** command and query the status again. + Log in to the node and run the **systemctl status ntpd** command to obtain the running status of ntpd. If the command output is abnormal, run the **systemctl restart ntpd** command and obtain the status again. The normal command output is as follows: - |image1| + + .. figure:: /_static/images/en-us_image_0000001695737169.png + :alt: **Figure 1** Running status of ntpd + + **Figure 1** Running status of ntpd If the problem persists after ntpd is restarted, contact technical support. - **Scenario 2: chronyd is running abnormally.** - Log in to the node and run the **systemctl status chronyd** command to query the running status of chronyd. If the command output is abnormal, run the **systemctl restart chronyd** command and query the status again. 
+ Log in to the node and run the **systemctl status chronyd** command to obtain the running status of chronyd. If the command output is abnormal, run the **systemctl restart chronyd** command and obtain the status again. The normal command output is as follows: - |image2| + + .. figure:: /_static/images/en-us_image_0000001695896453.png + :alt: **Figure 2** Running status of chronyd + + **Figure 2** Running status of chronyd If the problem persists after chronyd is restarted, contact technical support. - -.. |image1| image:: /_static/images/en-us_image_0000001568902509.png -.. |image2| image:: /_static/images/en-us_image_0000001518062624.png diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_cpu_count.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_cpus.rst similarity index 90% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_cpu_count.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_cpus.rst index 6b3bbb7..c3ee325 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_cpu_count.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_cpus.rst @@ -2,8 +2,8 @@ .. _cce_10_0452: -Node CPU Count -============== +Node CPUs +========= Check Item ---------- diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_disk.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_disks.rst similarity index 84% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_disk.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_disks.rst index 8630d9a..210043d 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_disk.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_disks.rst @@ -2,8 +2,8 @@ .. _cce_10_0445: -Node Disk -========= +Node Disks +========== Check Item ---------- @@ -38,7 +38,7 @@ During the node upgrade, the key disks store the upgrade component package, and .. code-block:: - df -h /var/lib/docker + df -h /mnt/paas/kubernetes/kubelet - System disk: 10 GB for master nodes and 2 GB for worker nodes @@ -48,7 +48,7 @@ During the node upgrade, the key disks store the upgrade component package, and - **Scenario 2: The /tmp directory space is insufficient.** - Run the following command to check the space usage of the file system where the /tmp directory is located. Ensure that the space is greater than 500 MB and check again. + Run the following command to check the usage of the file system where the **/tmp** directory is located. Ensure that the space is greater than 500 MB and check again. .. 
code-block:: diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_dns.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_dns.rst index 69ab0fb..3d4de13 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_dns.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_dns.rst @@ -16,4 +16,4 @@ Check the following aspects: Solution -------- -During the node upgrade, you need to obtain the upgrade component package from OBS. If this check fails, contact technical support. +During the node upgrade, obtain the upgrade component package from OBS. If this check fails, contact technical support. diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_journald.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_journald.rst index 843a433..95d299f 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_journald.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_journald.rst @@ -13,12 +13,14 @@ Check whether journald of a node is normal. Solution -------- -Log in to the node and run the **systemctl is-active systemd-journald** command to query the running status of journald. If the command output is abnormal, run the **systemctl restart systemd-journald** command and query the status again. +Log in to the node and run the **systemctl is-active systemd-journald** command to obtain the running status of journald. If the command output is abnormal, run the **systemctl restart systemd-journald** command and obtain the status again. The normal command output is as follows: -|image1| + +.. figure:: /_static/images/en-us_image_0000001647576916.png + :alt: **Figure 1** Running status of journald + + **Figure 1** Running status of journald If the problem persists after journald is restarted, contact technical support. - -.. |image1| image:: /_static/images/en-us_image_0000001517903128.png diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_key_directory_file_permissions.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_key_directory_file_permissions.rst index e862e80..9cc87fd 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_key_directory_file_permissions.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_key_directory_file_permissions.rst @@ -13,8 +13,14 @@ Check whether the key directory **/var/paas** on the nodes contain files with ab Solution -------- -CCE uses the **/var/paas** directory to manage nodes and store file data whose owner and owner group are both paas. +- **Scenario 1: The error message "xx file permission has been changed!" is displayed.** -During the current cluster upgrade, the owner and owner group of the files in the **/var/paas** directory are reset to paas. + Solution: Enable CCE to use the **/var/paas** directory to manage nodes and store file data whose owner and owner group are both **paas**. -Check whether file data is stored in the **/var/paas** directory. If yes, do not use this directory, remove abnormal files from this directory, and check again. 
Otherwise, the upgrade is prohibited. + During the current cluster upgrade, the owner and owner group of the files in the **/var/paas** directory are reset to paas. + + Check whether file data is stored in the **/var/paas** directory. If yes, do not use this directory, remove abnormal files from this directory, and check again. Otherwise, the upgrade is prohibited. + +- **Scenario 2: The error message "user paas must have at least read and execute permissions on the root directory" is displayed.** + + Solution: Change the permission on the root directory to the default permission 555. If the permission on the root directory of the node is modified, user **paas** does not have the read permission on the root directory. As a result, restarting the component failed during the upgrade. diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_mount_point.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_mount_points.rst similarity index 79% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_mount_point.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_mount_points.rst index f454540..23b5779 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_mount_point.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_mount_points.rst @@ -2,8 +2,8 @@ .. _cce_10_0459: -Node Mount Point -================ +Node Mount Points +================= Check Item ---------- @@ -34,7 +34,7 @@ If network NFS (such as OBS, SFS, and SFS) is used by the node and the node is d - ps aux | grep "D " -#. If a process is in the D state, the problem occurs.You can only reset the node to solve the problem. Reset the node and upgrade the cluster again. For details about how to reset a node, see :ref:`Resetting a Node `. +#. If a process is in the D state, the problem occurs. You can only reset the node to solve the problem. Reset the node and upgrade the cluster again. For details about how to reset a node, see :ref:`Resetting a Node `. .. note:: diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_pools.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_pools.rst new file mode 100644 index 0000000..09438b5 --- /dev/null +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_pools.rst @@ -0,0 +1,18 @@ +:original_name: cce_10_0436.html + +.. _cce_10_0436: + +Node Pools +========== + +Check Item +---------- + +Check the node pool status. + +Solution +-------- + +**Scenario: The node pool malfunctions.** + +Log in to the CCE console, go to the target cluster and choose **Nodes**. On the displayed page, click **Node Pools** tab and check the node pool status. If the node pool is being scaled, wait until the node pool scaling is complete. 
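If you prefer the command line, you can also confirm with kubectl that the nodes in a pool are ready before retrying the check. This is only a sketch; **cce.cloud.com/cce-nodepool** is the node pool label referenced elsewhere in this document, and *my-pool* is a placeholder for your node pool name.

.. code-block::

   # Show the node pool label of each node, then list the nodes in one pool and confirm they are all Ready
   kubectl get nodes -L cce.cloud.com/cce-nodepool
   kubectl get nodes -l cce.cloud.com/cce-nodepool=my-pool

   # Inspect a node that is not Ready
   kubectl describe node <node-name>

If nodes are still being added or deleted, wait until the scaling operation is complete before continuing the upgrade.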
diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_python_command.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_python_commands.rst similarity index 88% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_python_command.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_python_commands.rst index e2493dc..65f3767 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_python_command.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_python_commands.rst @@ -2,8 +2,8 @@ .. _cce_10_0453: -Node Python Command -=================== +Node Python Commands +==================== Check Item ---------- diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_restrictions.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_restrictions.rst new file mode 100644 index 0000000..26f192a --- /dev/null +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/node_restrictions.rst @@ -0,0 +1,54 @@ +:original_name: cce_10_0431.html + +.. _cce_10_0431: + +Node Restrictions +================= + +Check Item +---------- + +Check the following aspects: + +- Check whether the node is available. +- Check whether the node OS supports the upgrade. +- Check whether there are unexpected node pool tags in the node. +- Check whether the Kubernetes node name is consistent with the ECS name. + +Solution +-------- + +- **Scenario 1: The node status is abnormal. Rectify the fault first.** + + Log in to the CCE console and access the cluster console. Choose **Nodes** in the navigation pane and check the node status. Ensure that the node is in the **Running** status. A node in the **Installing** or **Deleting** status cannot be upgraded. + + If the node status is abnormal, restore the node and retry the check task. + +- **Scenario 2: The node OS does not support the upgrade. Contact technical support.** + + The following table lists the node OSs that support the upgrade. You can reset the node OS to an available OS in the list. + + .. table:: **Table 1** OSs that support the upgrade + + +-------------+-----------------------------------------------------------------------------------------------------------------------+ + | OS | Constraint | + +=============+=======================================================================================================================+ + | EulerOS 2.x | None. | + +-------------+-----------------------------------------------------------------------------------------------------------------------+ + | Ubuntu | Some sites cannot perform upgrade. If the check result shows the upgrade is not supported, contact technical support. | + +-------------+-----------------------------------------------------------------------------------------------------------------------+ + +- **Scenario 3: The node belongs to the default node pool but contains a common node pool label, which affects the upgrade process.** + + If a node is migrated from a node pool to the default node pool, the node pool label **cce.cloud.com/cce-nodepool** is retained, affecting cluster upgrade. Check whether the load scheduling on the node depends on the label. 
+ + - If there is no dependency, delete the tag. + - If yes, modify the load balancing policy, remove the dependency, and then delete the tag. + +- **Scenario 4: CNIProblem taints are detected on the node. Remove the taints first.** + + The node contains a taint whose key is **node.cloudprovider.kubernetes.io/cni-problem**, and the effect is **NoSchedule**. The taint is added by the npd add-on. You are advised to upgrade the npd add-on to the latest version and check again. If the problem persists, contact technical support. + +- **Scenario 5: The Kubernetes node corresponding to the node does not exist. The node may be being deleted. Check again later.** + + Check again later. diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/performing_pre-upgrade_check.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/performing_pre-upgrade_check.rst deleted file mode 100644 index ffe8e21..0000000 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/performing_pre-upgrade_check.rst +++ /dev/null @@ -1,107 +0,0 @@ -:original_name: cce_10_0549.html - -.. _cce_10_0549: - -Performing Pre-upgrade Check -============================ - -The system performs a comprehensive pre-upgrade check before the cluster upgrade. If the cluster does not meet the pre-upgrade check conditions, the upgrade cannot continue. To prevent upgrade risks, you can perform pre-upgrade check according to the check items provided by this section. - -.. table:: **Table 1** Check items - - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Check Item | Description | - +=====================================================================+=========================================================================================================================================================================================================================================================+ - | :ref:`Checking the Node ` | - Check whether the node is available. | - | | - Check whether the node OS supports the upgrade. | - | | - Check whether there are unexpected node pool tags in the node. | - | | - Check whether the Kubernetes node name is consistent with the ECS name. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Checking the Blocklist ` | Check whether the current user is in the upgrade blocklist. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Checking the Add-on ` | - Check whether the add-on status is normal. | - | | - Check whether the add-on support the target version. 
| - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Checking the Helm Chart ` | Check whether the current HelmRelease record contains discarded Kubernetes APIs that are not supported by the target cluster version. If yes, the Helm chart may be unavailable after the upgrade. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Checking the Master Node SSH Connectivity ` | Check whether CCE can connect to your master nodes. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Checking the Node Pool ` | Check the node pool status. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Checking the Security Group ` | Check whether the security group allows the master node to access nodes using ICMP. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`To-Be-Migrated Node ` | Check whether the node needs to be migrated. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Discarded Kubernetes Resource ` | Check whether there are discarded resources in the clusters. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Compatibility Risk ` | Read the version compatibility differences and ensure that they are not affected. The patch upgrade does not involve version compatibility differences. 
| - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node CCEAgent Version ` | Check whether cce-agent on the current node is of the latest version. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node CPU Usage ` | Check whether the CPU usage of the node exceeds 90%. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`CRD Check ` | - Check whether the key CRD **packageversions.version.cce.io** of the cluster is deleted. | - | | - Check whether the cluster key CRD **network-attachment-definitions.k8s.cni.cncf.io** is deleted. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node Disk ` | - Check whether the key data disks on the node meet the upgrade requirements. | - | | - Check whether the **/tmp** directory has 500 MB available space. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node DNS ` | - Check whether the DNS configuration of the current node can resolve the OBS address. | - | | - Check whether the current node can access the OBS address of the storage upgrade component package. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node Key Directory File Permissions ` | Check whether the key directory **/var/paas** on the nodes contain files with abnormal owners or owner groups. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Kubelet ` | Check whether the kubelet on the node is running properly. 
| - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node Memory ` | Check whether the memory usage of the node exceeds 90%. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node Clock Synchronization Server ` | Check whether the clock synchronization server ntpd or chronyd of the node is running properly. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node OS ` | Check whether the OS kernel version of the node is supported by CCE. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node CPU Count ` | Check whether the number of CPUs on the master node is greater than 2. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node Python Command ` | Check whether the Python commands are available on a node. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node Readiness ` | Check whether the nodes in the cluster are ready. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node journald ` | Check whether journald of a node is normal. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`containerd.sock Check ` | Check whether the containerd.sock file exists on the node. This file affects the startup of container runtime in the Euler OS. 
| - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Internal Error ` | Before the upgrade, check whether an internal error occurs. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node Mount Point ` | Check whether inaccessible mount points exist on the node. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Kubernetes Node Taint ` | Check whether the taint needed for cluster upgrade exists on the node. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`everest Restriction Check ` | Check whether the current everest add-on has compatibility restrictions. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`cce-hpa-controller Restriction Check ` | Check whether the current cce-controller-hpa add-on has compatibility restrictions. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Enhanced CPU Management Policy ` | Check whether the current cluster version and the target version support enhanced CPU policy. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Health of Worker Node Components ` | Check whether the container runtime and network components on the worker nodes are healthy. 
| - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Health of Master Node Components ` | Check whether the Kubernetes, container runtime, and network components of the master nodes are healthy. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Memory Resource Limit of Kubernetes Components ` | Check whether the resources of Kubernetes components, such as etcd and kube-controller-manager, exceed the upper limit. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Checking Deprecated Kubernetes APIs ` | The system scans the audit logs of the past day to check whether the user calls the deprecated APIs of the target Kubernetes version. | - | | | - | | .. note:: | - | | | - | | Due to the limited time range of audit logs, this check item is only an auxiliary method. APIs to be deprecated may have been used in the cluster, but their usage is not included in the audit logs of the past day. Check the API usage carefully. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`IPv6 Capabilities of a CCE Turbo Cluster ` | If IPv6 is enabled for a CCE Turbo cluster, check whether the target cluster version supports IPv6. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node NetworkManager ` | Check whether NetworkManager of a node is normal. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node ID File ` | Check the ID file format. 
| - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node Configuration Consistency ` | When you upgrade a CCE cluster to v1.19 or later, the system checks whether the following configuration files have been modified in the background: | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Node Configuration File ` | Check whether the configuration files of key components exist on the node. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Checking CoreDNS Configuration Consistency ` | Check whether the current CoreDNS key configuration Corefile is different from the Helm release record. The difference may be overwritten during the add-on upgrade, **affecting domain name resolution in the cluster**. | - +---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/pre-upgrade_check.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/pre-upgrade_check.rst new file mode 100644 index 0000000..1905b06 --- /dev/null +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/pre-upgrade_check.rst @@ -0,0 +1,107 @@ +:original_name: cce_10_0549.html + +.. _cce_10_0549: + +Pre-upgrade Check +================= + +The system performs a comprehensive pre-upgrade check before the cluster upgrade. If the cluster does not meet the pre-upgrade check conditions, the upgrade cannot continue. To prevent upgrade risks, you can perform pre-upgrade check according to the check items provided by this section. + +.. table:: **Table 1** Check items + + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | No. | Check Item | Description | + +=======================+=====================================================================+=========================================================================================================================================================================================================================================================+ + | 1 | :ref:`Node Restrictions ` | - Check whether the node is available. 
| + | | | - Check whether the node OS supports the upgrade. | + | | | - Check whether there are unexpected node pool tags in the node. | + | | | - Check whether the Kubernetes node name is consistent with the ECS name. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 2 | :ref:`Blocklist ` | Check whether the current user is in the upgrade blocklist. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 3 | :ref:`Add-ons ` | - Check whether the add-on status is normal. | + | | | - Check whether the add-on support the target version. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 4 | :ref:`Helm Charts ` | Check whether the current HelmRelease record contains discarded Kubernetes APIs that are not supported by the target cluster version. If yes, the Helm chart may be unavailable after the upgrade. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 5 | :ref:`SSH Connectivity of Master Nodes ` | Check whether CCE can connect to your master nodes. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 6 | :ref:`Node Pools ` | Check the node pool status. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 7 | :ref:`Security Groups ` | Check whether the security group allows the master node to access nodes using ICMP. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 8 | :ref:`To-Be-Migrated Nodes ` | Check whether the node needs to be migrated. 
| + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 9 | :ref:`Discarded Kubernetes Resources ` | Check whether there are discarded resources in the clusters. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 10 | :ref:`Compatibility Risks ` | Read the version compatibility differences and ensure that they are not affected. The patch upgrade does not involve version compatibility differences. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 11 | :ref:`Node CCE Agent Versions ` | Check whether cce-agent on the current node is of the latest version. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 12 | :ref:`Node CPU Usage ` | Check whether the CPU usage of the node exceeds 90%. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 13 | :ref:`CRDs ` | - Check whether the key CRD **packageversions.version.cce.io** of the cluster is deleted. | + | | | - Check whether the cluster key CRD **network-attachment-definitions.k8s.cni.cncf.io** is deleted. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 14 | :ref:`Node Disks ` | - Check whether the key data disks on the node meet the upgrade requirements. | + | | | - Check whether the **/tmp** directory has 500 MiB available space. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 15 | :ref:`Node DNS ` | - Check whether the DNS configuration of the current node can resolve the OBS address. 
| + | | | - Check whether the current node can access the OBS address of the storage upgrade component package. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 16 | :ref:`Node Key Directory File Permissions ` | Check whether the key directory **/var/paas** on the nodes contain files with abnormal owners or owner groups. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 17 | :ref:`Kubelet ` | Check whether the kubelet on the node is running properly. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 18 | :ref:`Node Memory ` | Check whether the memory usage of the node exceeds 90%. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 19 | :ref:`Node Clock Synchronization Server ` | Check whether the clock synchronization server ntpd or chronyd of the node is running properly. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 20 | :ref:`Node OS ` | Check whether the OS kernel version of the node is supported by CCE. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 21 | :ref:`Node CPUs ` | Check whether the number of CPUs on the master node is greater than 2. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 22 | :ref:`Node Python Commands ` | Check whether the Python commands are available on a node. 
| + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 23 | :ref:`Node Readiness ` | Check whether the nodes in the cluster are ready. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 24 | :ref:`Node journald ` | Check whether journald of a node is normal. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 25 | :ref:`containerd.sock ` | Check whether the containerd.sock file exists on the node. This file affects the startup of container runtime in the Euler OS. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 26 | :ref:`Internal Errors ` | Before the upgrade, check whether an internal error occurs. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 27 | :ref:`Node Mount Points ` | Check whether inaccessible mount points exist on the node. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 28 | :ref:`Kubernetes Node Taints ` | Check whether the taint needed for cluster upgrade exists on the node. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 29 | :ref:`everest Restrictions ` | Check whether there are any compatibility restrictions on the current everest add-on. 
| + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 30 | :ref:`cce-hpa-controller Restrictions ` | Check whether the current cce-controller-hpa add-on has compatibility restrictions. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 31 | :ref:`Enhanced CPU Policies ` | Check whether the current cluster version and the target version support enhanced CPU policy. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 32 | :ref:`Health of Worker Node Components ` | Check whether the container runtime and network components on the worker nodes are healthy. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 33 | :ref:`Health of Master Node Components ` | Check whether the Kubernetes, container runtime, and network components of the master nodes are healthy. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 34 | :ref:`Memory Resource Limit of Kubernetes Components ` | Check whether the resources of Kubernetes components, such as etcd and kube-controller-manager, exceed the upper limit. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 35 | :ref:`Discarded Kubernetes APIs ` | The system scans the audit logs of the past day to check whether the user calls the deprecated APIs of the target Kubernetes version. | + | | | | + | | | .. note:: | + | | | | + | | | Due to the limited time range of audit logs, this check item is only an auxiliary method. APIs to be deprecated may have been used in the cluster, but their usage is not included in the audit logs of the past day. Check the API usage carefully. 
| + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 36 | :ref:`IPv6 Capabilities of a CCE Turbo Cluster ` | If IPv6 is enabled for a CCE Turbo cluster, check whether the target cluster version supports IPv6. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 37 | :ref:`Node NetworkManager ` | Check whether NetworkManager of a node is normal. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 38 | :ref:`Node ID File ` | Check the ID file format. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 39 | :ref:`Node Configuration Consistency ` | When you upgrade a CCE cluster to v1.19 or later, the system checks whether the following configuration files have been modified in the background. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 40 | :ref:`Node Configuration File ` | Check whether the configuration files of key components exist on the node. | + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | 41 | :ref:`CoreDNS Configuration Consistency ` | Check whether the current CoreDNS key configuration Corefile is different from the Helm release record. The difference may be overwritten during the add-on upgrade, **affecting domain name resolution in the cluster**. 
| + +-----------------------+---------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_security_group.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/security_groups.rst similarity index 62% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_security_group.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/security_groups.rst index db669df..e59e40c 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_security_group.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/security_groups.rst @@ -2,14 +2,18 @@ .. _cce_10_0437: -Checking the Security Group -=========================== +Security Groups +=============== Check Item ---------- Check whether the security group allows the master node to access nodes using ICMP. +.. note:: + + This check item is performed only for clusters using VPC networking. For clusters using other networking, skip this check item. + Solution -------- @@ -20,4 +24,4 @@ Log in to the VPC console, choose **Access Control** > **Security Groups**, and Click the security group of the node user and ensure that the following rules are configured to allow the master node to access the node using **ICMP**. -Otherwise, add a rule to the node security group. Set **Source** to **Security group**. +If the preceding security group rule is unavailable, add the rule with the following configurations to the node security group: Set **Protocol & Port** to **Protocols/ICMP** and **All**, and **Source** to **Security group** and the master security group. diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_master_node_ssh_connectivity.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/ssh_connectivity_of_master_nodes.rst similarity index 67% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_master_node_ssh_connectivity.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/ssh_connectivity_of_master_nodes.rst index 5e159fe..3b70cfd 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/checking_the_master_node_ssh_connectivity.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/ssh_connectivity_of_master_nodes.rst @@ -2,8 +2,8 @@ .. 
_cce_10_0435: -Checking the Master Node SSH Connectivity -========================================= +SSH Connectivity of Master Nodes +================================ Check Item ---------- diff --git a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/to-be-migrated_node.rst b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/to-be-migrated_nodes.rst similarity index 53% rename from umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/to-be-migrated_node.rst rename to umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/to-be-migrated_nodes.rst index 247cdb3..dd655c0 100644 --- a/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/to-be-migrated_node.rst +++ b/umn/source/clusters/upgrading_a_cluster/troubleshooting_for_pre-upgrade_check_exceptions/to-be-migrated_nodes.rst @@ -2,8 +2,8 @@ .. _cce_10_0439: -To-Be-Migrated Node -=================== +To-Be-Migrated Nodes +==================== Check Item ---------- @@ -13,11 +13,11 @@ Check whether the node needs to be migrated. Solution -------- -For the 1.15 cluster that is upgraded from 1.13 in rolling mode, you need to migrate (reset or create and replace) all nodes before performing the upgrade again. +For the 1.15 cluster that is upgraded from 1.13 in rolling mode, migrate (reset or create and replace) all nodes before performing the upgrade again. **Solution 1** -Go the CCE console and access the cluster console. Choose **Nodes** in the navigation pane and click **More** > **Reset Node** in the **Operation** column of the corresponding node. For details, see :ref:`Resetting a Node `. After the node is reset, retry the check task. +Go to the CCE console and access the cluster console. Choose **Nodes** in the navigation pane and click **More** > **Reset Node** in the **Operation** column of the corresponding node. For details, see :ref:`Resetting a Node `. After the node is reset, retry the check task. .. note:: diff --git a/umn/source/clusters/upgrading_a_cluster/upgrade_overview.rst b/umn/source/clusters/upgrading_a_cluster/upgrade_overview.rst index 7664f48..c15dcd6 100644 --- a/umn/source/clusters/upgrading_a_cluster/upgrade_overview.rst +++ b/umn/source/clusters/upgrading_a_cluster/upgrade_overview.rst @@ -11,29 +11,56 @@ After the latest Kubernetes version is available in CCE, CCE will describe the c You can use the CCE console to upgrade the Kubernetes version of a cluster. -An upgrade flag will be displayed on the cluster card view if there is a new version for the cluster to upgrade. +An upgrade tag will be displayed on the cluster card view if there is a new version for the cluster to upgrade. **How to check:** -Log in to the CCE console and check whether the message "New version available" is displayed in the lower left corner of the cluster. If yes, the cluster can be upgraded. If no, the cluster cannot be upgraded. +Log in to the CCE console and check whether the message "New version available" is displayed in the lower left corner of the cluster. If yes, the cluster can be upgraded. View the release notes for the latest version. For details, see :ref:`Release Notes for CCE Cluster Versions `. If no such a message is displayed, the cluster is of the latest version. -.. figure:: /_static/images/en-us_image_0000001568902653.png - :alt: **Figure 1** Cluster with the upgrade flag +.. 
figure:: /_static/images/en-us_image_0000001647417836.png + :alt: **Figure 1** Cluster with the upgrade tag - **Figure 1** Cluster with the upgrade flag + **Figure 1** Cluster with the upgrade tag -.. _cce_10_0197__section19981121648: +Cluster Upgrade Process +----------------------- + +The cluster upgrade process involves pre-upgrade check, backup, upgrade, and post-upgrade verification. + + +.. figure:: /_static/images/en-us_image_0000001647417828.png + :alt: **Figure 2** Process of upgrading a cluster + + **Figure 2** Process of upgrading a cluster + +After determining the target version of the cluster, read the :ref:`precautions ` carefully and prevent function incompatibility during the upgrade. + +#. **Pre-upgrade check** + + Before a cluster upgrade, CCE checks the compatibility of nodes, add-ons, and workloads in the cluster to reduce the probability of upgrade failures to the best extend. If any exception is detected, rectify the fault as prompted on the console. + +#. **Backup** + + During the upgrade, cluster data is backed up by default. You can also back up the entire master nodes as needed. Cloud Backup and Recovery (CBR) will be used for full-node backup. It takes about 20 minutes to back up one node. + +#. **Upgrade** + + During the upgrade, configure upgrade parameters, such as the step for add-on upgrade or node rolling upgrade. After the upgrade parameters are configured, the add-ons and nodes will be upgraded one by one. + +#. **Post-upgrade verification** + + After the upgrade, manually check services and ensure that services are not interrupted by the upgrade. Cluster Upgrade --------------- -The following table describes the target version to which each cluster version can be upgraded, the supported upgrade modes, and upgrade impacts. +The following table describes the target version to which each cluster version can be upgraded and the supported upgrade modes. -.. table:: **Table 1** Cluster upgrade paths and impacts +.. table:: **Table 1** Cluster upgrade +-----------------------+-----------------------+-----------------------+ - | Source Version | Target Version | Upgrade Modes | + | Source Version | Target Version | Upgrade Mode | +=======================+=======================+=======================+ | v1.23 | v1.25 | In-place upgrade | +-----------------------+-----------------------+-----------------------+ @@ -46,33 +73,29 @@ The following table describes the target version to which each cluster version c | | v1.21 | | +-----------------------+-----------------------+-----------------------+ | v1.17 | v1.19 | In-place upgrade | - | | | | - | v1.15 | | | + +-----------------------+-----------------------+-----------------------+ + | v1.15 | v1.19 | In-place upgrade | +-----------------------+-----------------------+-----------------------+ | v1.13 | v1.15 | Rolling upgrade | - | | | | - | | | Replace upgrade | +-----------------------+-----------------------+-----------------------+ Upgrade Modes ------------- -The upgrade processes are the same for master nodes. The differences between the upgrade modes of worker nodes are described as follows: +Different upgrade modes have different advantages and disadvantages. .. 
table:: **Table 2** Differences between upgrade modes and their advantages and disadvantages - +----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Upgrade Mode | Method | Advantage | Disadvantage | - +======================+==============================================================================================================================================================================================================================================================================================================+=========================================================================+=============================================================================================================================================================================================================================================+ - | **In-place upgrade** | Kubernetes components, network components, and CCE management components are upgraded on the node. During the upgrade, service pods and networks are not affected. The **SchedulingDisabled** label will be added to all existing nodes. After the upgrade is complete, you can properly use existing nodes. | You do not need to migrate services, ensuring service continuity. | In-place upgrade does not upgrade the OS of a node. If you want to upgrade the OS, clear the corresponding node data after the node upgrade is complete and reset the node to upgrade the OS to a new version. | - +----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | **Rolling upgrade** | Only the Kubernetes components and certain network components are upgraded on the node. The **SchedulingDisabled** label will be added to all existing nodes to ensure that the running applications are not affected. | Services are not interrupted. | - **After the upgrade is complete, you need to manually create nodes and gradually release the old nodes.** The new nodes are billed additionally. After services are migrated to the new nodes, the old nodes can be deleted. | - | | | | | - | | .. important:: | | - After the rolling upgrade is complete, if you want to continue the upgrade to a later version, you need to reset the old nodes first. Otherwise, the pre-upgrade check cannot be passed. Services may be interrupted during the upgrade. 
| - | | | | | - | | NOTICE: | | | - | | | | | - | | - **After the upgrade is complete, you need to manually create nodes and gradually release the old nodes**, thereby migrating your applications to the new nodes. In this mode, you can control the upgrade process. | | | - +----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | **Replace upgrade** | The latest worker node image is used to reset the node OS. | This is the fastest upgrade mode and requires few manual interventions. | Data or configurations on the node will be lost, and services will be interrupted for a period of time. | - +----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Upgrade Mode | Method | Advantage | Disadvantage | + +==================+==============================================================================================================================================================================================================================================================================================================+===================================================================+=================================================================================================================================================================================================================================+ + | In-place upgrade | Kubernetes components, network components, and CCE management components are upgraded on the node. During the upgrade, service pods and networks are not affected. The **SchedulingDisabled** label will be added to all existing nodes. After the upgrade is complete, you can properly use existing nodes. | You do not need to migrate services, ensuring service continuity. 
| In-place upgrade does not upgrade the OS of a node. If you want to upgrade the OS, clear the corresponding node data after the node upgrade is complete and reset the node to upgrade the OS to a new version. | + +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Rolling upgrade | Only the Kubernetes components and certain network components are upgraded on the node. The **SchedulingDisabled** label will be added to all existing nodes to ensure that the running applications are not affected. | Services are not interrupted. | - **After the upgrade is complete, manually create nodes and gradually release the old nodes.** The new nodes are billed additionally. After services are migrated to the new nodes, the old nodes can be deleted. | + | | | | | + | | .. important:: | | - After the rolling upgrade is complete, if you want to continue the upgrade to a later version, reset the old nodes first. Otherwise, the pre-upgrade check cannot be passed. Services may be interrupted during the upgrade. | + | | | | | + | | NOTICE: | | | + | | | | | + | | - **After the upgrade is complete, manually create nodes and gradually release the old nodes**, thereby migrating your applications to the new nodes. In this mode, you can control the upgrade process. | | | + +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/clusters/using_kubectl_to_run_a_cluster/common_kubectl_commands.rst b/umn/source/clusters/using_kubectl_to_run_a_cluster/common_kubectl_commands.rst deleted file mode 100644 index 6a3e620..0000000 --- a/umn/source/clusters/using_kubectl_to_run_a_cluster/common_kubectl_commands.rst +++ /dev/null @@ -1,453 +0,0 @@ -:original_name: cce_10_0139.html - -.. _cce_10_0139: - -Common kubectl Commands -======================= - -Getting Started ---------------- - -**get** - -The **get** command displays one or many resources of a cluster. - -This command prints a table of the most important information about all resources, including cluster nodes, running pods, Deployments, and Services. - -.. important:: - - A cluster can have multiple namespaces. If no namespace is specified, this command will run with the **--namespace=default** flag. - -Examples: - -To list all pods with detailed information: - -.. code-block:: - - kubectl get po -o wide - -To display pods in all namespaces: - -.. 
code-block:: - - kubectl get po --all-namespaces - -To list labels of pods in all namespaces: - -.. code-block:: - - kubectl get po --show-labels - -To list all namespaces of the node: - -.. code-block:: - - kubectl get namespace - -.. note:: - - To list information of other nodes, run this command with the **-s** flag. To list a specified type of resources, add the resource type to this command, for example, **kubectl get svc**, **kubectl get nodes**, and **kubectl get deploy**. - -To list a pod with a specified name in YAML output format: - -.. code-block:: - - kubectl get po -o yaml - -To list a pod with a specified name in JSON output format: - -.. code-block:: - - kubectl get po -o json - -.. code-block:: - - kubectl get po rc-nginx-2-btv4j -o=custom-columns=LABELS:.metadata.labels.app - -.. note:: - - **LABELS** indicates a comma separated list of user-defined column titles. **metadata.labels.app** indicates the data to be listed in either YAML or JSON output format. - -**create** - -The **create** command creates a cluster resource from a file or input. - -If there is already a resource descriptor (a YAML or JSON file), you can create the resource from the file by running the following command: - -.. code-block:: - - kubectl create -f filename - -**expose** - -The **expose** command exposes a resource as a new Kubernetes service. Possible resources include a pod, Service, and Deployment. - -.. code-block:: - - kubectl expose deployment deployname --port=81 --type=NodePort --target-port=80 --name=service-name - -.. note:: - - In the preceding command, **--port** indicates the port exposed by the Service, **--type** indicates the Service type, and **--target-port** indicates the port of the pod backing the Service. Visiting *ClusterIP*:*Port* allows you to access the applications in the cluster. - -**run** - -Examples: - -To run a particular image in the cluster: - -.. code-block:: - - kubectl run deployname --image=nginx:latest - -To run a particular image using a specified command: - -.. code-block:: - - kubectl run deployname -image=busybox --command -- ping baidu.com - -**set** - -The **set** command configures object resources. - -Example: - -To change the image of a deployment with the name specified in **deployname** to image 1.0: - -.. code-block:: - - kubectl set image deploy deployname containername=containername:1.0 - -**edit** - -The **edit** command edits a resource from the default editor. - -Examples: - -To update a pod: - -.. code-block:: - - kubectl edit po po-nginx-btv4j - -The example command yields the same effect as the following command: - -.. code-block:: - - kubectl get po po-nginx-btv4j -o yaml >> /tmp/nginx-tmp.yaml - vim /tmp/nginx-tmp.yaml - /*do some changes here */ - kubectl replace -f /tmp/nginx-tmp.yaml - -**explain** - -The **explain** command views documents or reference documents. - -Example: - -To get documentation of pods: - -.. code-block:: - - kubectl explain pod - -**delete** - -The **delete** command deletes resources by resource name or label. - -Example: - -To delete a pod with minimal delay: - -.. code-block:: - - kubectl delete po podname --now - -.. code-block:: - - kubectl delete -f nginx.yaml - kubectl delete deployment deployname - -Deployment Commands -------------------- - -**rolling-update\*** - -**rolling-update** is a very important command. It updates a running service with zero downtime. Pods are incrementally replaced by new ones. One pod is updated at a time. The old pod is deleted only after the new pod is up. 
New pods must be distinct from old pods by name, version, and label. Otherwise, an error message will be reported. - -.. code-block:: - - kubectl rolling-update poname -f newfilename - kubectl rolling-update poname -image=image:v2 - -If any problem occurs during the rolling update, run the command with the **-rollback** flag to abort the rolling update and revert to the previous pod. - -.. code-block:: - - kubectl rolling-update poname -rollback - -**rollout** - -The **rollout** command manages the rollout of a resource. - -Examples: - -To check the rollout status of a particular deployment: - -.. code-block:: - - kubectl rollout status deployment/deployname - -To view the rollout history of a particular deployment: - -.. code-block:: - - kubectl rollout history deployment/deployname - -To roll back to the previous deployment: (by default, a resource is rolled back to the previous version) - -.. code-block:: - - kubectl rollout undo deployment/test-nginx - -**scale** - -The **scale** command sets a new size for a resource by adjusting the number of resource replicas. - -.. code-block:: - - kubectl scale deployment deployname --replicas=newnumber - -**autoscale** - -The **autoscale** command automatically chooses and sets the number of pods. This command specifies the range for the number of pod replicas maintained by a replication controller. If there are too many pods, the replication controller terminates the extra pods. If there is too few, the replication controller starts more pods. - -.. code-block:: - - kubectl autoscale deployment deployname --min=minnumber --max=maxnumber - -Cluster Management Commands ---------------------------- - -**cordon, drain, uncordon\*** - -If a node to be upgraded is running many pods or is already down, perform the following steps to prepare the node for maintenance: - -#. Run the **cordon** command to mark a node as unschedulable. This means that new pods will not be scheduled onto the node. - - .. code-block:: - - kubectl cordon nodename - - Note: In CCE, **nodename** indicates the private network IP address of a node. - -#. Run the **drain** command to smoothly migrate the running pods from the node to another node. - - .. code-block:: - - kubectl drain nodename --ignore-daemonsets --ignore-emptydir - - **ignore-emptydir** ignores the pods that use emptyDirs. - -#. Perform maintenance operations on the node, such as upgrading the kernel and upgrading Docker. - -#. After node maintenance is completed, run the **uncordon** command to mark the node as schedulable. - - .. code-block:: - - kubectl uncordon nodename - -**cluster-info** - -To display the add-ons running in the cluster: - -.. code-block:: - - kubectl cluster-info - -To dump current cluster information to stdout: - -.. code-block:: - - kubectl cluster-info dump - -**top\*** - -The **top** command displays resource (CPU/memory/storage) usage. This command requires Heapster to be correctly configured and working on the server. - -**taint\*** - -The **taint** command updates the taints on one or more nodes. - -**certificate\*** - -The **certificate** command modifies the certificate resources. - -Fault Diagnosis and Debugging Commands --------------------------------------- - -**describe** - -The **describe** command is similar to the **get** command. The difference is that the **describe** command shows details of a specific resource or group of resources, whereas the **get** command lists one or more resources in a cluster. The **describe** command does not support the **-o** flag. 
For resources of the same type, resource details are printed out in the same format. - -.. note:: - - If the information about a resource is queried, you can use the get command to obtain more detailed information. If you want to check the status of a specific resource, for example, to check if a pod is in the running state, run the **describe** command to show more detailed status information. - - .. code-block:: - - kubectl describe po - -**logs** - -The **logs** command prints logs for a container in a pod or specified resource to stdout. To display logs in the **tail -f** mode, run this command with the **-f** flag. - -.. code-block:: - - kubectl logs -f podname - -**exec** - -The kubectl **exec** command is similar to the Docker **exec** command and executes a command in a container. If there are multiple containers in a pod, use the **-c** flag to choose a container. - -.. code-block:: - - kubectl exec -it podname bash - kubectl exec -it podname -c containername bash - -**port-forward\*** - -The **port-forward** command forwards one or more local ports to a pod. - -Example: - -To listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod: - -.. code-block:: - - kubectl port-forward podname 5000:6000 - -**proxy\*** - -The **proxy** command creates a proxy server between localhost and the Kubernetes API server. - -Example: - -To enable the HTTP REST APIs on the master node: - -.. code-block:: - - kubectl proxy -accept-hosts= '.*' -port=8001 -address= '0.0.0.0' - -**cp** - -The **cp** command copies files and directories to and from containers. - -.. code-block:: - - cp filename newfilename - -**auth\*** - -The **auth** command inspects authorization. - -**attach\*** - -The **attach** command is similar to the **logs -f** command and attaches to a process that is already running inside an existing container. To exit, run the **ctrl-c** command. If a pod contains multiple containers, to view the output of a specific container, use the **-c** flag and *containername* following *podname* to specify a container. - -.. code-block:: - - kubectl attach podname -c containername - -Advanced Commands ------------------ - -**replace** - -The **replace** command updates or replaces an existing resource by attributes including the number of replicas, labels, image versions, and ports. You can directly modify the original YAML file and then run the **replace** command. - -.. code-block:: - - kubectl replace -f filename - -.. important:: - - Resource names cannot be updated. - -**apply\*** - -The **apply** command provides a more strict control on resource updating than **patch** and **edit** commands. The **apply** command applies a configuration to a resource and maintains a set of configuration files in source control. Whenever there is an update, the configuration file is pushed to the server, and then the kubectl **apply** command applies the latest configuration to the resource. The Kubernetes compares the new configuration file with the original one and updates only the changed configuration instead of the whole file. The configuration that is not contained in the **-f** flag will remain unchanged. Unlike the **replace** command which deletes the resource and creates a new one, the **apply** command directly updates the original resource. Similar to the git operation, the **apply** command adds an annotation to the resource to mark the current apply. - -.. 
code-block:: - - kubectl apply -f - -**patch** - -If you want to modify attributes of a running container without first deleting the container or using the **replace** command, the **patch** command is to the rescue. The **patch** command updates field(s) of a resource using strategic merge patch, a JSON merge patch, or a JSON patch. For example, to change a pod label from **app=nginx1** to **app=nginx2** while the pod is running, use the following command: - -.. code-block:: - - kubectl patch pod podname -p '{"metadata":{"labels":{"app":"nginx2"}}}' - -**convent\*** - -The **convert** command converts configuration files between different API versions. - -Configuration Commands ----------------------- - -**label** - -The **label** command update labels on a resource. - -.. code-block:: - - kubectl label pods my-pod new-label=newlabel - -**annotate** - -The **annotate** command update annotations on a resource. - -.. code-block:: - - kubectl annotate pods my-pod icon-url=http://...... - -**completion** - -The **completion** command provides autocompletion for shell. - -Other Commands --------------- - -**api-versions** - -The **api-versions** command prints the supported API versions. - -.. code-block:: - - kubectl api-versions - -**api-resources** - -The **api-resources** command prints the supported API resources. - -.. code-block:: - - kubectl api-resources - -**config\*** - -The **config** command modifies kubeconfig files. An example use case of this command is to configure authentication information in API calls. - -**help** - -The **help** command gets all command references. - -**version** - -The **version** command prints the client and server version information for the current context. - -.. code-block:: - - kubectl version diff --git a/umn/source/clusters/using_kubectl_to_run_a_cluster/index.rst b/umn/source/clusters/using_kubectl_to_run_a_cluster/index.rst deleted file mode 100644 index 1e4e5b5..0000000 --- a/umn/source/clusters/using_kubectl_to_run_a_cluster/index.rst +++ /dev/null @@ -1,18 +0,0 @@ -:original_name: cce_10_0140.html - -.. _cce_10_0140: - -Using kubectl to Run a Cluster -============================== - -- :ref:`Connecting to a Cluster Using kubectl ` -- :ref:`Customizing a Cluster Certificate SAN ` -- :ref:`Common kubectl Commands ` - -.. toctree:: - :maxdepth: 1 - :hidden: - - connecting_to_a_cluster_using_kubectl - customizing_a_cluster_certificate_san - common_kubectl_commands diff --git a/umn/source/configmaps_and_secrets/cluster_secrets.rst b/umn/source/configmaps_and_secrets/cluster_secrets.rst index e062cd3..fec3e65 100644 --- a/umn/source/configmaps_and_secrets/cluster_secrets.rst +++ b/umn/source/configmaps_and_secrets/cluster_secrets.rst @@ -18,7 +18,7 @@ The functions of these secrets are described as follows. default-secret -------------- -The type of **default-secret** is **kubernetes.io/dockerconfigjson**. The data is the credential for logging in to the SWR image repository and is used to pull images from SWR. If you need to pull an image from SWR when creating a workload on CCE, set **imagePullSecrets** to **default-secret**. +The type of **default-secret** is **kubernetes.io/dockerconfigjson**. The data is the credential for logging in to the SWR image repository and is used to pull images from SWR. To pull an image from SWR when creating a workload on CCE, set **imagePullSecrets** to **default-secret**. .. 
code-block:: @@ -83,6 +83,6 @@ By default, Kubernetes creates a service account named **default** for each name Labels: Annotations: Image pull secrets: - Mountable secrets: default-token-vssmw - Tokens: default-token-vssmw + Mountable secrets: default-token-xxxxx + Tokens: default-token-xxxxx Events: diff --git a/umn/source/configmaps_and_secrets/creating_a_configmap.rst b/umn/source/configmaps_and_secrets/creating_a_configmap.rst index 3e7c1a2..e282f93 100644 --- a/umn/source/configmaps_and_secrets/creating_a_configmap.rst +++ b/umn/source/configmaps_and_secrets/creating_a_configmap.rst @@ -18,8 +18,8 @@ Benefits of ConfigMaps: - Deploy workloads in different environments. Multiple versions are supported for configuration files so that you can update and roll back workloads easily. - Quickly import configurations in the form of files to containers. -Notes and Constraints ---------------------- +Constraints +----------- - The size of a ConfigMap resource file cannot exceed 2 MB. - ConfigMaps cannot be used in `static pods `__. @@ -27,7 +27,7 @@ Notes and Constraints Procedure --------- -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. Choose **ConfigMaps and Secrets** in the navigation pane and click **Create ConfigMap** in the upper right corner. @@ -62,7 +62,7 @@ Creating a ConfigMap Using kubectl #. Configure the **kubectl** command to connect an ECS to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. -#. Create and edit the **cce-configmap.yaml** file. +#. Create a file named **cce-configmap.yaml** and edit it. **vi cce-configmap.yaml** @@ -123,4 +123,4 @@ After creating a configuration item, you can update or delete it as described in | | Follow the prompts to delete the ConfigMap. | +-----------------------------------+------------------------------------------------------------------------------------------------------+ -.. |image1| image:: /_static/images/en-us_image_0000001568902541.png +.. |image1| image:: /_static/images/en-us_image_0000001647576860.png diff --git a/umn/source/configmaps_and_secrets/creating_a_secret.rst b/umn/source/configmaps_and_secrets/creating_a_secret.rst index d6f061d..fc22ba8 100644 --- a/umn/source/configmaps_and_secrets/creating_a_secret.rst +++ b/umn/source/configmaps_and_secrets/creating_a_secret.rst @@ -10,15 +10,15 @@ Scenario A secret is a type of resource that holds sensitive data, such as authentication and key information. Its content is user-defined. After creating secrets, you can use them as files or environment variables in a containerized workload. -Notes and Constraints ---------------------- +Constraints +----------- Secrets cannot be used in `static pods `__. Procedure --------- -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. Choose **ConfigMaps and Secrets** in the navigation pane, click the **Secrets** tab, and click **Create Secret** in the upper right corner. @@ -28,36 +28,36 @@ Procedure .. 
table:: **Table 1** Parameters for creating a secret - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+===============================================================================================================================================+ - | Name | Name of the secret you create, which must be unique. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ - | Namespace | Namespace to which the secret belongs. If you do not specify this parameter, the value **default** is used by default. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ - | Description | Description of a secret. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ - | Type | Type of the secret you create. | - | | | - | | - Opaque: common secret. | - | | - kubernetes.io/dockerconfigjson: a secret that stores the authentication information required for pulling images from a private repository. | - | | - **kubernetes.io/tls**: Kubernetes TLS secret, which is used to store the certificate required by layer-7 load balancing Services. | - | | - **IngressTLS**: TLS secret provided by CCE to store the certificate required by layer-7 load balancing Services. | - | | - Other: another type of secret, which is specified manually. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ - | Secret Data | Workload secret data can be used in containers. | - | | | - | | - If **Secret Type** is **Opaque**, click |image1|. In the dialog box displayed, enter a key-value pair and select **Auto Base64 Encoding**. | - | | - If the secret type is kubernetes.io/dockerconfigjson, enter the account and password of the private image repository. | - | | - If **Secret Type** is **kubernetes.io/tls** or **IngressTLS**, upload the certificate file and private key file. | - | | | - | | .. note:: | - | | | - | | - A certificate is a self-signed or CA-signed credential used for identity authentication. | - | | - A certificate request is a request for a signature with a private key. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ - | Secret Label | Label of the secret. Enter a key-value pair and click **Add**. 
| - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+==================================================================================================================================================================================================================================================================================================================+ + | Name | Name of the secret you create, which must be unique. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Namespace | Namespace to which the secret belongs. If you do not specify this parameter, the value **default** is used by default. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Description | Description of a secret. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Type | Type of the secret you create. | + | | | + | | - Opaque: common secret. | + | | - kubernetes.io/dockerconfigjson: a secret that stores the authentication information required for pulling images from a private repository. | + | | - **kubernetes.io/tls**: Kubernetes TLS secret, which is used to store the certificate required by layer-7 load balancing Services. For details about examples of the kubernetes.io/tls secret and its description, see `TLS secrets `__. | + | | - **IngressTLS**: TLS secret provided by CCE to store the certificate required by layer-7 load balancing Services. | + | | - Other: another type of secret, which is specified manually. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Secret Data | Workload secret data can be used in containers. | + | | | + | | - If **Secret Type** is **Opaque**, click |image1|. In the dialog box displayed, enter a key-value pair and select **Auto Base64 Encoding**. 
| + | | - If **Secret Type** is **kubernetes.io/dockerconfigjson**, enter the account and password of the private image repository. | + | | - If **Secret Type** is **kubernetes.io/tls** or **IngressTLS**, upload the certificate file and private key file. | + | | | + | | .. note:: | + | | | + | | - A certificate is a self-signed or CA-signed credential used for identity authentication. | + | | - A certificate request is a request for a signature with a private key. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Secret Label | Label of the secret. Enter a key-value pair and click **Add**. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ #. After the configuration is complete, click **OK**. @@ -65,12 +65,12 @@ Procedure .. _cce_10_0153__section187197531454: -Secret Resource File Configuration ----------------------------------- +Secret Resource File Configuration Example +------------------------------------------ This section describes configuration examples of secret resource description files. -- Opaque +- Opaque type The **secret.yaml** file is defined as shown below. The **data** field is filled in as a key-value pair, and the **value** field must be encoded using Base64. For details about the Base64 encoding method, see :ref:`Base64 Encoding `. @@ -85,7 +85,7 @@ This section describes configuration examples of secret resource description fil : # Enter a key-value pair. The value must be encoded using Base64. type: Opaque -- kubernetes.io/dockerconfigjson +- kubernetes.io/dockerconfigjson type The **secret.yaml** file is defined as shown below. The value of **.dockerconfigjson** must be encoded using Base64. For details, see :ref:`Base64 Encoding `. @@ -102,13 +102,13 @@ This section describes configuration examples of secret resource description fil To obtain the **.dockerconfigjson** content, perform the following steps: - #. Obtain the login information of the image repository. + #. Obtain the following login information of the image repository. - Image repository address: The section uses *address* as an example. Replace it with the actual address. - Username: The section uses *username* as an example. Replace it with the actual username. - Password: The section uses *password* as an example. Replace it with the actual password. - #. Use Base64 to encode the key-value pair **username:password** and fill the encoded content in step :ref:`3 `. + #. Use Base64 to encode the key-value pair *username:password* and fill the encoded content in :ref:`3 `. .. code-block:: @@ -136,7 +136,7 @@ This section describes configuration examples of secret resource description fil The encoded content is the **.dockerconfigjson** content. -- kubernetes.io/tls +- kubernetes.io/tls type The value of **tls.crt** and **tls.key** must be encoded using Base64. For details, see :ref:`Base64 Encoding `. 
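For example, on a Linux host, the Base64-encoded content could be generated as follows (a minimal sketch; **tls.crt** and **tls.key** are assumed placeholder file names for the certificate and private key files):

.. code-block::

   # -w 0 disables line wrapping so that the output can be pasted into the YAML file as a single line.
   base64 -w 0 tls.crt
   base64 -w 0 tls.key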
@@ -152,7 +152,7 @@ This section describes configuration examples of secret resource description fil tls.key: LS0tLS1CRU*****VZLS0tLS0= # Private key content, which must be encoded using Base64. type: kubernetes.io/tls -- IngressTLS +- IngressTLS type The value of **tls.crt** and **tls.key** must be encoded using Base64. For details, see :ref:`Base64 Encoding `. @@ -182,7 +182,7 @@ Creating a Secret Using kubectl **vi cce-secret.yaml** - The following YAML file uses the Opaque type as an example. For details about other types, see :ref:`Secret Resource File Configuration `. + The following YAML file uses the Opaque type as an example. For details about other types, see :ref:`Secret Resource File Configuration Example `. .. code-block:: @@ -245,4 +245,4 @@ To Base64-encode a string, run the **echo -n content to be encoded \| base64** c root@ubuntu:~# echo -n "content to be encoded" | base64 ****** -.. |image1| image:: /_static/images/en-us_image_0000001518222636.png +.. |image1| image:: /_static/images/en-us_image_0000001695737281.png diff --git a/umn/source/configmaps_and_secrets/using_a_configmap.rst b/umn/source/configmaps_and_secrets/using_a_configmap.rst index 1645e45..af5a7ea 100644 --- a/umn/source/configmaps_and_secrets/using_a_configmap.rst +++ b/umn/source/configmaps_and_secrets/using_a_configmap.rst @@ -25,11 +25,11 @@ The following example shows how to use a ConfigMap. - When a ConfigMap is used in a workload, the workload and ConfigMap must be in the same cluster and namespace. - - When a ConfigMap is mounted as a data volume and is updated, Kubernetes updates the data in the data volume at the same time. + - When a ConfigMap is mounted as a data volume and the ConfigMap is updated, Kubernetes updates the data in the data volume at the same time. - When a ConfigMap data volume mounted in `subPath `__ mode is updated, Kubernetes cannot automatically update the data in the data volume. + For a ConfigMap data volume mounted in `subPath `__ mode, Kubernetes cannot automatically update data in the data volume when the ConfigMap is updated. - - When a ConfigMap is used as an environment variable, data cannot be automatically updated when the ConfigMap is updated. To update the data, you need to restart the pod. + - When a ConfigMap is used as an environment variable, data is not automatically updated when the ConfigMap is updated. To update the data, restart the pod. .. _cce_10_0015__section1737733192813: @@ -38,7 +38,7 @@ Setting Workload Environment Variables **Using the console** -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. In the navigation pane, choose **Workloads**. Then, click **Create Workload**. @@ -55,7 +55,7 @@ Setting Workload Environment Variables #. Configure other workload parameters and click **Create Workload**. - After the workload runs properly, :ref:`access the container ` and run the following command to check whether the ConfigMap has been set as an environment variable of the workload: + After the workload runs properly, :ref:`log in to the container ` and run the following statement to check whether the ConfigMap has been set as an environment variable of the workload: .. code-block:: @@ -69,7 +69,7 @@ Setting Workload Environment Variables **Using kubectl** -#. According to :ref:`Connecting to a Cluster Using kubectl `, configure the **kubectl** command to connect an ECS to the cluster. +#. Use kubectl to connect to the cluster. 
For details, see :ref:`Connecting to a Cluster Using kubectl `. #. Create a file named **nginx-configmap.yaml** and edit it. @@ -77,7 +77,7 @@ Setting Workload Environment Variables Content of the YAML file: - - **Added from ConfigMap**: To add all data in a ConfigMap to environment variables, use the **envFrom** parameter. The keys in the ConfigMap will become names of environment variables in a pod. + - **Added from a ConfigMap**: To add all data in a ConfigMap to environment variables, use the **envFrom** parameter. The keys in the ConfigMap will become names of environment variables in the workload. .. code-block:: @@ -125,13 +125,13 @@ Setting Workload Environment Variables containers: - name: container-1 image: nginx:latest - env: # Set environment variables in the workload. + env: # Set the environment variable in the workload. - name: SPECIAL_LEVEL # Name of the environment variable in the workload. - valueFrom: # Use valueFrom to specify an environment variable to reference a ConfigMap. + valueFrom: # Specify a ConfigMap to be referenced by the environment variable. configMapKeyRef: name: cce-configmap # Name of the referenced ConfigMap. key: SPECIAL_LEVEL # Key in the referenced ConfigMap. - - name: SPECIAL_TYPE # Add multiple environment variables. Multiple environment variables can be imported at the same time. + - name: SPECIAL_TYPE # Add multiple environment variables to import them at the same time. valueFrom: configMapKeyRef: name: cce-configmap @@ -143,7 +143,7 @@ Setting Workload Environment Variables **kubectl apply -f nginx-configmap.yaml** -#. View the environment variables in the pod. +#. View the environment variable in the pod. a. Run the following command to view the created pod: @@ -170,7 +170,7 @@ Setting Workload Environment Variables Hello CCE - The ConfigMap has been set as an environment variable of the workload. + The ConfigMap has been set as environment variables of the workload. .. _cce_10_0015__section17930105710189: @@ -181,7 +181,7 @@ You can use a ConfigMap as an environment variable to set commands or parameter **Using the console** -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. In the navigation pane, choose **Workloads**. Then, click **Create Workload**. @@ -201,9 +201,9 @@ You can use a ConfigMap as an environment variable to set commands or parameter -c echo $SPECIAL_LEVEL $SPECIAL_TYPE > /usr/share/nginx/html/index.html -#. Configure other workload parameters and click **Create Workload**. +#. Set other workload parameters and click **Create Workload**. - After the workload runs properly, :ref:`access the container ` and run the following command to check whether the ConfigMap has been set as an environment variable of the workload: + After the workload runs properly, :ref:`log in to the container ` and run the following statement to check whether the ConfigMap has been set as an environment variable of the workload: .. code-block:: @@ -217,13 +217,13 @@ You can use a ConfigMap as an environment variable to set commands or parameter **Using kubectl** -#. According to :ref:`Connecting to a Cluster Using kubectl `, configure the **kubectl** command to connect an ECS to the cluster. +#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. #. Create a file named **nginx-configmap.yaml** and edit it. 
**vi nginx-configmap.yaml** - As shown in the following example, the **cce-configmap** ConfigMap is imported to the workload. **SPECIAL_LEVEL** and **SPECIAL_TYPE** are environment variable names, that is, key names in the **cce-configmap** ConfigMap. + As shown in the following example, the **cce-configmap** ConfigMap is imported to the workload. *SPECIAL_LEVEL* and *SPECIAL_TYPE* are the environment variable names in the workload, that is, the key names in the **cce-configmap** ConfigMap. .. code-block:: @@ -289,54 +289,52 @@ You can use a ConfigMap as an environment variable to set commands or parameter Attaching a ConfigMap to the Workload Data Volume ------------------------------------------------- -The data stored in a ConfigMap can be referenced in a volume of type ConfigMap. You can mount such a volume to a specified container path. The platform supports the separation of workload codes and configuration files. ConfigMap volumes are used to store workload configuration parameters. Before that, you need to create ConfigMaps in advance. For details, see :ref:`Creating a ConfigMap `. +The data stored in a ConfigMap can be referenced in a volume of type ConfigMap. You can mount such a volume to a specified container path. The platform supports the separation of workload codes and configuration files. ConfigMap volumes are used to store workload configuration parameters. Before that, create ConfigMaps in advance. For details, see :ref:`Creating a ConfigMap `. **Using the console** -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. In the navigation pane, choose **Workloads**. Then, click **Create Workload**. When creating a workload, click **Data Storage** in the **Container Settings** area. Click **Add Volume** and select **ConfigMap** from the drop-down list. -#. Set the local volume type to **ConfigMap** and set parameters for adding a local volume, as shown in :ref:`Table 1 `. - - .. _cce_10_0015__table1776324831114: +#. Configure the parameters. .. table:: **Table 1** Mounting a ConfigMap volume - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ - | Option | Select the desired ConfigMap name. | - | | | - | | A ConfigMap must be created in advance. For details, see :ref:`Creating a ConfigMap `. 
| - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Add Container Path | Configure the following parameters: | - | | | - | | a. **Container Path**: Enter the path of the container, for example, **/tmp**. | - | | | - | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**; this action may cause container errors. You are advised to mount the container to an empty directory. If the directory is not empty, ensure that there are no files affecting container startup in the directory. Otherwise, such files will be replaced, resulting in failures to start the container and create the workload. | - | | | - | | .. important:: | - | | | - | | NOTICE: | - | | When the container is mounted to a high-risk directory, you are advised to use an account with minimum permissions to start the container; otherwise, high-risk files on the host machine may be damaged. | - | | | - | | b. **subPath**: Enter a subpath, for example, **tmp**. | - | | | - | | - A subpath is used to mount a local volume so that the same data volume is used in a single pod. | - | | - The subpath can be the key and value of a ConfigMap or secret. If the subpath is a key-value pair that does not exist, the data import does not take effect. | - | | - The data imported by specifying a subpath will not be updated along with the ConfigMap/secret updates. | - | | | - | | c. Set the permission to **Read-only**. Data volumes in the path are read-only. | - | | | - | | You can click |image3| to add multiple paths and subpaths. 
| - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+==============================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | ConfigMap | Select the desired ConfigMap. | + | | | + | | A ConfigMap must be created in advance. For details, see :ref:`Creating a ConfigMap `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Add Container Path | Configure the following parameters: | + | | | + | | a. **Mount Path**: Enter a path of the container, for example, **/tmp**. | + | | | + | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**; this action may cause container errors. You are advised to mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, causing container startup failures or workload creation failures. | + | | | + | | .. important:: | + | | | + | | NOTICE: | + | | When the container is mounted to a high-risk directory, you are advised to use an account with minimum permissions to start the container; otherwise, high-risk files on the host machine may be damaged. | + | | | + | | b. **Subpath**: Enter a subpath, for example, **tmp**. | + | | | + | | - A subpath is used to mount a local volume so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. | + | | - The subpath can be the key and value of a ConfigMap or secret. If the subpath is a key-value pair that does not exist, the data import does not take effect. 
| + | | - The data imported by specifying a subpath will not be updated along with the ConfigMap/secret updates. | + | | | + | | c. Set the permission to **Read-only**. Data volumes in the path are read-only. | + | | | + | | You can click |image3| to add multiple paths and subpaths. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ **Using kubectl** -#. According to :ref:`Connecting to a Cluster Using kubectl `, configure the **kubectl** command to connect an ECS to the cluster. +#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. #. Create a file named **nginx-configmap.yaml** and edit it. @@ -402,6 +400,6 @@ The data stored in a ConfigMap can be referenced in a volume of type ConfigMap. Hello -.. |image1| image:: /_static/images/en-us_image_0000001568822917.png -.. |image2| image:: /_static/images/en-us_image_0000001568902649.png -.. |image3| image:: /_static/images/en-us_image_0000001569023025.png +.. |image1| image:: /_static/images/en-us_image_0000001695896853.png +.. |image2| image:: /_static/images/en-us_image_0000001695896849.png +.. |image3| image:: /_static/images/en-us_image_0000001647577176.png diff --git a/umn/source/configmaps_and_secrets/using_a_secret.rst b/umn/source/configmaps_and_secrets/using_a_secret.rst index 9bdb188..11f523b 100644 --- a/umn/source/configmaps_and_secrets/using_a_secret.rst +++ b/umn/source/configmaps_and_secrets/using_a_secret.rst @@ -43,7 +43,7 @@ Setting Environment Variables of a Workload **Using the console** -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. In the navigation pane, choose **Workloads**. Then, click **Create Workload**. @@ -58,19 +58,19 @@ Setting Environment Variables of a Workload For example, after you import the value of **username** in secret **mysecret** as the value of workload environment variable **username**, an environment variable named **username** exists in the container. -#. Configure other workload parameters and click **Create Workload**. +#. Set other workload parameters and click **Create Workload**. - After the workload runs properly, :ref:`access the container ` and run the following command to check whether the secret has been set as an environment variable of the workload: + After the workload runs properly, :ref:`log in to the container ` and run the following statement to check whether the secret has been set as an environment variable of the workload: .. code-block:: printenv username - If the output is the same as that in the secret, the secret has been set as an environment variable of the workload. + If the output is the same as the content in the secret, the secret has been set as an environment variable of the workload. **Using kubectl** -#. According to :ref:`Connecting to a Cluster Using kubectl `, configure the **kubectl** command to connect an ECS to the cluster. +#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. #. 
Create a file named **nginx-secret.yaml** and edit it. @@ -78,7 +78,7 @@ Setting Environment Variables of a Workload Content of the YAML file: - - **Added from a secret**: To add all data in a secret to environment variables, use the **envFrom** parameter. The keys in the ConfigMap will become names of environment variables in a workload. + - **Added from a secret**: To add all data in a secret to environment variables, use the **envFrom** parameter. The keys in the secret will become names of environment variables in a workload. .. code-block:: @@ -105,7 +105,7 @@ Setting Environment Variables of a Workload imagePullSecrets: - name: default-secret - - **Added from a secret key**: When creating a workload, you can set a secret as an environment variable and use the **valueFrom** parameter to reference the key-value pair in the secret separately. + - **Added from a secret key**: When creating a workload, you can use a secret to set environment variables and use the **valueFrom** parameter to reference the key-value pair in the secret separately. .. code-block:: @@ -126,13 +126,13 @@ Setting Environment Variables of a Workload containers: - name: container-1 image: nginx:latest - env: # Set environment variables in the workload. - - name: SECRET_USERNAME # Name of the environment variable in the workload. - valueFrom: # Use envFrom to specify a secret to be referenced by environment variables. + env: # Set the environment variable in the workload. + - name: SECRET_USERNAME # Name of the environment variable in the workload. + valueFrom: # Use valueFrom to specify a secret to be referenced by environment variables. secretKeyRef: name: mysecret # Name of the referenced secret. - key: username # Name of the referenced key. - - name: SECRET_PASSWORD # Add multiple environment variables. Multiple environment variables can be imported at the same time. + key: username # Key in the referenced secret. + - name: SECRET_PASSWORD # Add multiple environment variables to import them at the same time. valueFrom: secretKeyRef: name: mysecret @@ -164,61 +164,59 @@ Setting Environment Variables of a Workload kubectl exec nginx-secret-*** -- printenv SPECIAL_USERNAME SPECIAL_PASSWORD - If the output is the same as that in the secret, the secret has been set as an environment variable of the workload. + If the output is the same as the content in the secret, the secret has been set as an environment variable of the workload. .. _cce_10_0016__section472505211214: Configuring the Data Volume of a Workload ----------------------------------------- -You can mount a secret as a volume to the specified container path. Contents in a secret are user-defined. Before that, you need to create a secret. For details, see :ref:`Creating a Secret `. +You can mount a secret as a volume to the specified container path. Contents in a secret are user-defined. Before that, create a secret. For details, see :ref:`Creating a Secret `. **Using the console** -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. In the navigation pane on the left, click **Workloads**. In the right pane, click the **Deployments** tab. Click **Create Workload** in the upper right corner. When creating a workload, click **Data Storage** in the **Container Settings** area. Click **Add Volume** and select **Secret** from the drop-down list. -#. Set the local volume type to **Secret** and set parameters for adding a local volume, as shown in :ref:`Table 1 `. 
+#. Configure the parameters. - .. _cce_10_0016__table861818920109: + .. table:: **Table 1** Mounting a Secret volume - .. table:: **Table 1** Secret - - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ - | Secret | Select the desired secret name. | - | | | - | | A secret must be created in advance. For details, see :ref:`Creating a Secret `. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Add Container Path | Configure the following parameters: | - | | | - | | a. **Container Path**: Enter the path of the container, for example, **/tmp**. | - | | | - | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**; this action may cause container errors. You are advised to mount the container to an empty directory. If the directory is not empty, ensure that there are no files affecting container startup in the directory. Otherwise, such files will be replaced, resulting in failures to start the container and create the workload. | - | | | - | | .. important:: | - | | | - | | NOTICE: | - | | When the container is mounted to a high-risk directory, you are advised to use an account with minimum permissions to start the container; otherwise, high-risk files on the host machine may be damaged. | - | | | - | | b. **subPath**: Enter a subpath, for example, **tmp**. | - | | | - | | - A subpath is used to mount a local volume so that the same data volume is used in a single pod. | - | | - The subpath can be the key and value of a ConfigMap or secret. If the subpath is a key-value pair that does not exist, the data import does not take effect. | - | | - The data imported by specifying a subpath will not be updated along with the ConfigMap/secret updates. | - | | | - | | c. Set the permission to **Read-only**. Data volumes in the path are read-only. | - | | | - | | You can click |image2| to add multiple paths and subpaths. 
| - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+==============================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | Secret | Select the desired secret. | + | | | + | | A secret must be created in advance. For details, see :ref:`Creating a Secret `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Add Container Path | Configure the following parameters: | + | | | + | | a. **Mount Path**: Enter a path of the container, for example, **/tmp**. | + | | | + | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**; this action may cause container errors. You are advised to mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, causing container startup failures or workload creation failures. | + | | | + | | .. important:: | + | | | + | | NOTICE: | + | | When the container is mounted to a high-risk directory, you are advised to use an account with minimum permissions to start the container; otherwise, high-risk files on the host machine may be damaged. | + | | | + | | b. **Subpath**: Enter a subpath, for example, **tmp**. | + | | | + | | - A subpath is used to mount a local volume so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. | + | | - The subpath can be the key and value of a ConfigMap or secret. If the subpath is a key-value pair that does not exist, the data import does not take effect. 
| + | | - The data imported by specifying a subpath will not be updated along with the ConfigMap/secret updates. | + | | | + | | c. Set the permission to **Read-only**. Data volumes in the path are read-only. | + | | | + | | You can click |image2| to add multiple paths and subpaths. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ **Using kubectl** -#. According to :ref:`Connecting to a Cluster Using kubectl `, configure the **kubectl** command to connect an ECS to the cluster. +#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. #. Create a file named **nginx-secret.yaml** and edit it. @@ -254,11 +252,11 @@ You can mount a secret as a volume to the specified container path. Contents in secret: secretName: mysecret # Name of the referenced secret. - You can also use the **items** field to control the mapping path of the secret key. For example, store the **username** in the **/etc/foo/my-group/my-username** directory of the container. + You can also use the **items** field to control the mapping path of secret keys. For example, store username in the **/etc/foo/my-group/my-username** directory in the container. .. note:: - - After the **items** field is used to specify the mapping path of the secret key, the keys that are not specified will not be created as files. For example, if the password key in the following example is not specified, the file will not be created. + - If you use the **items** field to specify the mapping path of the secret keys, the keys that are not specified will not be created as files. For example, if the **password** key in the following example is not specified, the file will not be created. - If you want to use all keys in a secret, you must list all keys in the **items** field. - All keys listed in the **items** field must exist in the corresponding secret. Otherwise, the volume is not created. @@ -291,7 +289,7 @@ You can mount a secret as a volume to the specified container path. Contents in secretName: mysecret # Name of the referenced secret. items: - key: username # Name of the referenced key. - path: my-group/my-username # Mapping path of the secret key. + path: my-group/my-username # Mapping path of the secret key #. Create a workload. @@ -317,7 +315,7 @@ You can mount a secret as a volume to the specified container path. Contents in kubectl exec nginx-secret-*** -- /etc/foo/username - The expected output is the same as that in the secret. + The expected output is the same as the content in the secret. -.. |image1| image:: /_static/images/en-us_image_0000001518062644.png -.. |image2| image:: /_static/images/en-us_image_0000001569182625.png +.. |image1| image:: /_static/images/en-us_image_0000001647417524.png +.. 
|image2| image:: /_static/images/en-us_image_0000001647576792.png diff --git a/umn/source/charts/converting_a_release_from_helm_v2_to_v3.rst b/umn/source/helm_chart/converting_a_release_from_helm_v2_to_v3.rst similarity index 96% rename from umn/source/charts/converting_a_release_from_helm_v2_to_v3.rst rename to umn/source/helm_chart/converting_a_release_from_helm_v2_to_v3.rst index 2bbe38f..aa2bb87 100644 --- a/umn/source/charts/converting_a_release_from_helm_v2_to_v3.rst +++ b/umn/source/helm_chart/converting_a_release_from_helm_v2_to_v3.rst @@ -12,8 +12,8 @@ CCE fully supports Helm v3. This section guides you to convert a Helm v2 release For details, see the `community documentation `__. -Note: ------ +Precautions +----------- - Helm v2 stores release information in ConfigMaps. Helm v3 does so in secrets. - When you query, update, or operate a Helm v2 release on the CCE console, CCE will attempt to convert the release to v3. If you operate in the background, convert the release by following the instructions below. @@ -33,7 +33,7 @@ Conversion Process (Without Using the Helm v3 Client) tar -xzvf helm-2to3_0.10.2_linux_amd64.tar.gz -3. Simulate the conversion. +3. Perform the simulated conversion. Take the test-convert release as an example. Run the following command to simulate the conversion: If the following information is displayed, the simulation is successful. @@ -46,7 +46,7 @@ Conversion Process (Without Using the Helm v3 Client) [Helm 3] Release "test-convert" will be created. [Helm 3] ReleaseVersion "test-convert.v1" will be created. -4. Perform the conversion. If the following information is displayed, the conversion is successful: +4. Perform the conversion. If the following information is displayed, the conversion is successful. .. code-block:: @@ -103,7 +103,7 @@ Conversion Process (Using the Helm v3 Client) https://github.com/helm/helm-2to3/releases/download/v0.10.2/helm-2to3_0.10.2_linux_amd64.tar.gz Installed plugin: 2to3 -#. Check the installed plug-in and ensure that the plug-in is installed. +#. Check whether the plug-in has been installed. .. code-block:: @@ -124,7 +124,7 @@ Conversion Process (Using the Helm v3 Client) [Helm 3] Release "test-convert" will be created. [Helm 3] ReleaseVersion "test-convert.v1" will be created. -#. Perform the formal conversion. If the following information is displayed, the conversion is successful: +#. Perform the conversion. If the following information is displayed, the conversion is successful. .. code-block:: diff --git a/umn/source/charts/deploying_an_application_from_a_chart.rst b/umn/source/helm_chart/deploying_an_application_from_a_chart.rst similarity index 95% rename from umn/source/charts/deploying_an_application_from_a_chart.rst rename to umn/source/helm_chart/deploying_an_application_from_a_chart.rst index 2d9716e..e9e6a40 100644 --- a/umn/source/charts/deploying_an_application_from_a_chart.rst +++ b/umn/source/helm_chart/deploying_an_application_from_a_chart.rst @@ -7,8 +7,8 @@ Deploying an Application from a Chart On the CCE console, you can upload a Helm chart package, deploy it, and manage the deployed pods. -Notes and Constraints ---------------------- +Constraints +----------- - The number of charts that can be uploaded by a single user is limited. The value displayed on the console of each region is the allowed quantity. - A chart with multiple versions consumes the same amount of portion of chart quota. @@ -74,7 +74,7 @@ The Redis workload is used as an example to illustrate the chart specifications. 
+-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | \* Chart.yaml | Basic information about the chart. | | | | - | | Note: Helm v3 bumps the apiVersion from v1 to v2. | + | | Note: The API version of Helm v3 is switched from v1 to v2. | +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | .helmignore | Files or data that does not need to read templates during workload installation. | +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -82,7 +82,7 @@ The Redis workload is used as an example to illustrate the chart specifications. Uploading a Chart ----------------- -#. Log in to the CCE console, click the cluster name, and access the cluster console. Choose **Charts** in the navigation pane and click **Upload Chart** in the upper right corner. +#. Log in to the CCE console and click the cluster name to access the cluster console. Choose **Charts** in the navigation pane and click **Upload Chart** in the upper right corner. #. Click **Select File**, select the chart to be uploaded, and click **Upload**. .. note:: @@ -92,7 +92,7 @@ Uploading a Chart Creating a Release ------------------ -#. Log in to the CCE console, click the cluster name, and access the cluster console. In the navigation pane, choose **Charts**. +#. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose **Charts**. #. On the **My Charts** tab page, click **Install** of the target chart. @@ -130,7 +130,7 @@ Creating a Release Upgrading a Chart-based Workload -------------------------------- -#. Log in to the CCE console, click the cluster name, and access the cluster console. Choose **Charts** in the navigation pane and click the **Releases** tab. +#. Log in to the CCE console and click the cluster name to access the cluster console. Choose **Charts** in the navigation pane and click the **Releases** tab. #. Click **Upgrade** in the row where the desired workload resides and set the parameters for the workload. #. Select a chart version for **Chart Version**. #. Follow the prompts to modify the chart parameters. Click **Upgrade**, and then click **Submit**. @@ -139,7 +139,7 @@ Upgrading a Chart-based Workload Rolling Back a Chart-based Workload ----------------------------------- -#. Log in to the CCE console, click the cluster name, and access the cluster console. 
Choose **Charts** in the navigation pane and click the **Releases** tab. +#. Log in to the CCE console and click the cluster name to access the cluster console. Choose **Charts** in the navigation pane and click the **Releases** tab. #. Click **More** > **Roll Back** for the workload to be rolled back, select the workload version, and click **Roll back** **to this version**. @@ -148,5 +148,5 @@ Rolling Back a Chart-based Workload Uninstalling a Chart-based Workload ----------------------------------- -#. Log in to the CCE console, click the cluster name, and access the cluster console. Choose **Charts** in the navigation pane and click the **Releases** tab. +#. Log in to the CCE console and click the cluster name to access the cluster console. Choose **Charts** in the navigation pane and click the **Releases** tab. #. Click **More** > **Uninstall** next to the release to be uninstalled, and click **Yes**. Exercise caution when performing this operation because releases cannot be restored after being uninstalled. diff --git a/umn/source/charts/deploying_an_application_through_the_helm_v2_client.rst b/umn/source/helm_chart/deploying_an_application_through_the_helm_v2_client.rst similarity index 87% rename from umn/source/charts/deploying_an_application_through_the_helm_v2_client.rst rename to umn/source/helm_chart/deploying_an_application_through_the_helm_v2_client.rst index 71207ce..0abed64 100644 --- a/umn/source/charts/deploying_an_application_through_the_helm_v2_client.rst +++ b/umn/source/helm_chart/deploying_an_application_through_the_helm_v2_client.rst @@ -13,7 +13,7 @@ The Kubernetes cluster created on CCE has been connected to kubectl. For details Installing Helm v2 ------------------ -This document uses Helm v2.17.0 as an example. +This section uses Helm v2.17.0 as an example. For other versions, visit https://github.com/helm/helm/releases. @@ -35,7 +35,7 @@ For other versions, visit https://github.com/helm/helm/releases. mv linux-amd64/helm /usr/local/bin/helm -#. RBAC is enabled on the Kubernetes API server. Therefore, you need to create the service account name **tiller** for the tiller and assign cluster-admin, a system ClusterRole, to the tiller. Create a tiller resource account as follows: +#. RBAC is enabled on the Kubernetes API server. Create the service account name **tiller** for the tiller and assign cluster-admin, a system ClusterRole, to the tiller. Create a tiller resource account as follows: **vim tiller-rbac.yaml** @@ -66,7 +66,7 @@ For other versions, visit https://github.com/helm/helm/releases. kubectl apply -f tiller-rbac.yaml -#. Initialize the Helm and deploy the pod of Tiller. +#. Initialize the Helm and deploy the pod of tiller. .. code-block:: @@ -78,7 +78,7 @@ For other versions, visit https://github.com/helm/helm/releases. kubectl get pod -n kube-system -l app=helm - Example command output: + Command output: .. code-block:: @@ -117,7 +117,7 @@ You can obtain the required chart in the **stable** directory on this `website < Common Issues ------------- -- The following error message is displayed after the **helm version** command is run: +- The following error message is displayed after the **Helm version** command is run: .. 
code-block:: @@ -136,15 +136,17 @@ Common Issues yum install socat -y -- When you run the **yum install socat -y** command on a node running EulerOS 2.9 and the following error message is displayed: +- When you run the **yum install socat -y** command on a node running EulerOS 2.9, if the following error message is displayed: No match for argument: socat Error: Unable to find a match: socat - Manually download the socat image and run the following command to install it: + The image does not contain socat. In this case, manually download the socat RPM package and run the following command to install it (replace the RPM package name with the actual one): - rpm -i socat-1.7.3.2-8.oe1.x86_64.rpm + .. code-block:: + + rpm -i socat-1.7.3.2-8.oe1.x86_64.rpm - When the socat has been installed and the following error message is displayed after the **helm version** command is run: diff --git a/umn/source/helm_chart/deploying_an_application_through_the_helm_v3_client.rst b/umn/source/helm_chart/deploying_an_application_through_the_helm_v3_client.rst new file mode 100644 index 0000000..7bd8195 --- /dev/null +++ b/umn/source/helm_chart/deploying_an_application_through_the_helm_v3_client.rst @@ -0,0 +1,165 @@ +:original_name: cce_10_0144.html + +.. _cce_10_0144: + +Deploying an Application Through the Helm v3 Client +=================================================== + +Prerequisites +------------- + +The Kubernetes cluster created on CCE has been connected to kubectl. For details, see :ref:`Using kubectl `. + +.. _cce_10_0144__en-us_topic_0226102212_en-us_topic_0179003017_section3719193213815: + +Installing Helm v3 +------------------ + +This section uses Helm v3.3.0 as an example. + +For other versions, visit https://github.com/helm/helm/releases. + +#. Download the Helm client from the VM connected to the cluster. + + .. code-block:: + + wget https://get.helm.sh/helm-v3.3.0-linux-amd64.tar.gz + +#. Decompress the Helm package. + + .. code-block:: + + tar -xzvf helm-v3.3.0-linux-amd64.tar.gz + +#. Copy Helm to the system path, for example, **/usr/local/bin/helm**. + + .. code-block:: + + mv linux-amd64/helm /usr/local/bin/helm + +#. Query the Helm version. + + .. code-block:: + + helm version + version.BuildInfo{Version:"v3.3.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"} + +Installing the Helm Chart +------------------------- + +You can use Helm to install a chart. Before using Helm, familiarize yourself with the following concepts: + +- Chart: contains resource definitions and a large number of configuration files of Kubernetes applications. +- Repository: stores shared charts. You can download charts from the repository to a local path for installation or install them online. +- Release: a running instance created after a chart is installed in a Kubernetes cluster using Helm. A chart can be installed multiple times in a cluster. A new release will be created for each installation. A MySQL chart is used as an example. To run two databases in a cluster, install the chart twice. Each database has its own release and release name. + +For more details, see `Using Helm `__. + +#. .. _cce_10_0144__li125132594918: + + Search for a chart from the `Artifact Hub `__ repository recommended by Helm and configure the Helm repository. + + .. code-block:: + + helm repo add {repo_name} {repo_addr} + + The following uses the `WordPress chart `__ as an example: + + .. code-block:: + + helm repo add bitnami https://charts.bitnami.com/bitnami + +#. 
Run the **helm install** command to install the chart. + + - Default installation: This is the simplest method, which requires only two parameters. + + .. code-block:: + + helm install {release_name} {chart_name} + + For example, to install WordPress, the WordPress chart added in :ref:`step 1 ` is **bitnami/wordpress**, and the release name is **my-wordpress**. + + .. code-block:: + + helm install my-wordpress bitnami/wordpress + + - Custom installation: The default installation uses the default settings in the chart. Use custom installation to customize the parameter settings. Run the **helm show values** *{chart_name}* command to view the configurable options of the chart. For example, to view the configurable items of WordPress, run the following command: + + .. code-block:: + + helm show values bitnami/wordpress + + Overwrite specified parameters by running the following command: + + .. code-block:: + + helm install my-wordpress bitnami/wordpress \ + --set mariadb.primary.persistence.enabled=true \ + --set mariadb.primary.persistence.storageClass=csi-disk \ + --set mariadb.primary.persistence.size=10Gi \ + --set persistence.enabled=false + +#. View the installed chart release. + + .. code-block:: + + helm list + +Common Issues +------------- + +- The following error message is displayed after the **helm version** command is run: + + .. code-block:: + + Client: + &version.Version{SemVer:"v3.3.0", + GitCommit:"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6", GitTreeState:"clean"} + E0718 11:46:10.132102 7023 portforward.go:332] an error occurred + forwarding 41458 -> 44134: error forwarding port 44134 to pod + d566b78f997eea6c4b1c0322b34ce8052c6c2001e8edff243647748464cd7919, uid : unable + to do port forwarding: socat not found. + Error: cannot connect to Tiller + + The preceding information is displayed because socat is not installed. Run the following command to install socat: + + .. code-block:: + + yum install socat -y + +- When you run the **yum install socat -y** command on a node running EulerOS 2.9, if the following error message is displayed: + + .. code-block:: + + No match for argument: socat + Error: Unable to find a match: socat + + The node image does not contain socat. In this case, manually download the socat RPM package and run the following command to install it (replace the RPM package name with the actual one): + + .. code-block:: + + rpm -i socat-1.7.3.2-8.oe1.x86_64.rpm + +- When socat has been installed and the following error message is displayed after the **helm version** command is run: + + .. code-block:: + + $ helm version + Client: &version.Version{SemVer:"v3.3.0", GitCommit:"021cb0ac1a1b2f888144ef5a67b8dab6c2d45be6", GitTreeState:"clean"} + Error: cannot connect to Tiller + + The Helm client reads the kubeconfig file in **.kube/config** to communicate with Kubernetes. The preceding error indicates that the kubectl configuration is incorrect. In this case, reconfigure kubectl to connect to the cluster. For details, see :ref:`Using kubectl `. + +- Storage fails to be created after you have connected to cloud storage services. + + This issue may be caused by the **annotation** field in the created PVC. Change the chart name and install the chart again. + +- If kubectl is not properly configured, the following error message is displayed after the **helm install** command is run: + + .. 
code-block:: + + # helm install prometheus/ --generate-name + WARNING: This chart is deprecated + Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version?timeout=32s": dial tcp [::1]:8080: connect: connection refused + + **Solution**: Configure kubeconfig for the node. For details, see :ref:`Using kubectl `. diff --git a/umn/source/charts/differences_between_helm_v2_and_helm_v3_and_adaptation_solutions.rst b/umn/source/helm_chart/differences_between_helm_v2_and_helm_v3_and_adaptation_solutions.rst similarity index 90% rename from umn/source/charts/differences_between_helm_v2_and_helm_v3_and_adaptation_solutions.rst rename to umn/source/helm_chart/differences_between_helm_v2_and_helm_v3_and_adaptation_solutions.rst index d8fd13e..3105684 100644 --- a/umn/source/charts/differences_between_helm_v2_and_helm_v3_and_adaptation_solutions.rst +++ b/umn/source/helm_chart/differences_between_helm_v2_and_helm_v3_and_adaptation_solutions.rst @@ -17,7 +17,7 @@ Changes since Helm v2: Helm v2 used a two-way strategic merge patch. During an upgrade, it compared the most recent chart's manifest against the proposed chart's manifest to determine what changes needed to be applied to the resources in Kubernetes. If changes were applied to the cluster out-of-band (such as during a kubectl edit), those changes were not considered. This resulted in resources being unable to roll back to its previous state. - Helm v3 uses a three-way strategic merge patch. Helm considers the old manifest, its live state, and the new manifest when generating a patch. Helm compares the current live state with that of the old manifest, checks whether the new manifest is modified, and automatically supplements the new manifest to generate the final update patch. + Helm v3 uses a three-way strategic merge patch. Helm considers the old manifest, its live state, and the new manifest when generating a patch. Helm compares the current live state with the live state of the old manifest, checks whether the new manifest is modified, and automatically supplements the new manifest to generate the final update patch. For details and examples, see https://v3.helm.sh/docs/faq/changes_since_helm2. @@ -29,7 +29,7 @@ Changes since Helm v2: In Helm v2, the information about each release was stored in the same namespace as Tiller. In practice, this meant that once a name was used by a release, no other release could use that same name, even if it was deployed in a different namespace. In Helm v3, information about a particular release is now stored in the same namespace as the release itself. This means that the release name can be used in different namespaces. The namespace of the application is the same as that of the release. -#. **Verification mode** +#. **Verification mode change** Helm v3 verifies the chart format more strictly. For example, Helm v3 bumps the apiVersion in Chart.yaml from v1 to v2. For the Chart.yaml of v2, apiVersion must be set to v1. After installing the Helm v3 client, you can run the **helm lint** command to check whether the chart format complies with the Helm v3 specifications. @@ -43,7 +43,7 @@ Changes since Helm v2: #. **Resources that are not created using Helm are not forcibly updated. Releases are not forcibly upgraded by default.** - The forcible upgrade logic of Helm v3 is changed. After the upgrade fails, the system does not delete and rebuild the Helm v3. Instead, the system directly uses the **put** logic. Therefore, the CCE release upgrade uses the non-forcible update logic by default. 
Resources that cannot be updated through patches will make the release unable to be upgraded. If a resource with the same name exists in the environment and does not have the Helm v3 ownership tag **app.kubernetes.io/managed-by: Helm**, a resource conflict message is displayed. + The forcible upgrade logic of Helm v3 is changed. After the upgrade fails, the system does not delete and rebuild the Helm v3. Instead, the system directly uses the **put** logic. Therefore, the CCE release upgrade uses the non-forcible update logic by default. Resources that cannot be updated through patches will make the release unable to be upgraded. If a release with the same name exists in the environment and does not have the home tag **app.kubernetes.io/managed-by: Helm** of Helm v3, a conflict message is displayed. **Adaptation solution**: Delete related resources and create them using Helm. diff --git a/umn/source/charts/index.rst b/umn/source/helm_chart/index.rst similarity index 97% rename from umn/source/charts/index.rst rename to umn/source/helm_chart/index.rst index 6eed788..dc413a2 100644 --- a/umn/source/charts/index.rst +++ b/umn/source/helm_chart/index.rst @@ -2,8 +2,8 @@ .. _cce_10_0019: -Charts -====== +Helm Chart +========== - :ref:`Overview ` - :ref:`Deploying an Application from a Chart ` diff --git a/umn/source/charts/overview.rst b/umn/source/helm_chart/overview.rst similarity index 96% rename from umn/source/charts/overview.rst rename to umn/source/helm_chart/overview.rst index 3e359af..3636854 100644 --- a/umn/source/charts/overview.rst +++ b/umn/source/helm_chart/overview.rst @@ -33,4 +33,4 @@ Helm can help application orchestration for Kubernetes: - Controls phases in a deployment cycle. - Tests and verifies the released version. -.. |image1| image:: /_static/images/en-us_image_0000001518062492.png +.. |image1| image:: /_static/images/en-us_image_0000001695736889.png diff --git a/umn/source/high-risk_operations_and_solutions.rst b/umn/source/high-risk_operations_and_solutions.rst index 44a5f1d..3dcf41b 100644 --- a/umn/source/high-risk_operations_and_solutions.rst +++ b/umn/source/high-risk_operations_and_solutions.rst @@ -12,117 +12,142 @@ Clusters and Nodes .. table:: **Table 1** High-risk operations and solutions - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | Category | Operation | Impact | Solution | - +=================+=======================================================================================================+======================================================================================================================================================================================================================================================================================+===================================================================================================================================================+ - | Master node | Modifying the security group of a node in a cluster | The master node may be unavailable. 
| Restore the security group by referring to the security group of the new cluster and allow traffic from the security group to pass through. | - | | | | | - | | | .. note:: | | - | | | | | - | | | Naming rule of a master node: *Cluster name*\ ``-``\ **cce-control**\ ``-``\ *Random number* | | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Letting the node expire or destroying the node | The master node will be unavailable. | This operation cannot be undone. | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Reinstalling the OS | Components on the master node will be deleted. | This operation cannot be undone. | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Upgrading components on the master or etcd node | The cluster may be unavailable. | Roll back to the original version. | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Deleting or formatting core directory data such as **/etc/kubernetes** on the node | The master node will be unavailable. | This operation cannot be undone. 
| - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Changing the node IP address | The master node will be unavailable. | Change the IP address back to the original one. | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Modifying parameters of core components (such as etcd, kube-apiserver, and docker) | The master node may be unavailable. | Restore the parameter settings to the recommended values. For details, see :ref:`Cluster Configuration Management `. | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Replacing the master or etcd certificate | The cluster may become unavailable. | This operation cannot be undone. | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | Worker node | Modifying the security group of a node in a cluster | The node may be unavailable. | Restore the security group by referring to :ref:`Creating a CCE Cluster ` and allow traffic from the security group to pass through. | - | | | | | - | | | .. 
note:: | | - | | | | | - | | | Naming rule of a worker node: *Cluster name*\ ``-``\ **cce-node**\ ``-``\ *Random number* | | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Deleting the node | The node will become unavailable. | This operation cannot be undone. | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Reinstalling the OS | Node components are deleted, and the node becomes unavailable. | Reset the node. For details, see :ref:`Resetting a Node `. | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Upgrading the node kernel | The node may be unavailable or the network may be abnormal. | For details, see :ref:`Resetting a Node `. | - | | | | | - | | | .. note:: | | - | | | | | - | | | Node running depends on the system kernel version. Do not use the **yum update** command to update or reinstall the operating system kernel of a node unless necessary. (Reinstalling the operating system kernel using the original image or other images is a risky operation.) | | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Changing the node IP address | The node will become unavailable. | Change the IP address back to the original one. 
| - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Modifying parameters of core components (such as kubelet and kube-proxy) | The node may become unavailable, and components may be insecure if security-related configurations are modified. | Restore the parameter settings to the recommended values. For details, see :ref:`Configuring a Node Pool `. | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Modifying OS configuration | The node may be unavailable. | Restore the configuration items or reset the node. For details, see :ref:`Resetting a Node `. | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Deleting or modifying the **/opt/cloud/cce** and **/var/paas** directories, and delete the data disk. | The node will become unready. | You can reset the node. For details, see :ref:`Resetting a Node `. | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Modifying the node directory permission and the container directory permission | The permissions will be abnormal. | You are not advised to modify the permissions. Restore the permissions if they are modified. 
| - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Formatting or partitioning system disks, Docker disks, and kubelet disks on nodes. | The node may be unavailable. | You can reset the node. For details, see :ref:`Resetting a Node `. | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Installing other software on nodes | This may cause exceptions on Kubernetes components installed on the node, and make the node unavailable. | Uninstall the software that has been installed and restore or reset the node. For details, see :ref:`Resetting a Node `. | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Modifying NetworkManager configurations | The node will become unavailable. | Reset the node. For details, see :ref:`Resetting a Node `. | - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Delete system images such as **cfe-pause** from the node. | Containers cannot be created and system images cannot be pulled. | Copy the image from another normal node for restoration. 
| - +-----------------+-------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | Category | Operation | Impact | Solution | + +=================+========================================================================================================+======================================================================================================================================================================================================================================================================================+=======================================================================================================================================+ + | Master node | Modifying the security group of a node in a cluster | The master node may be unavailable. | Restore the security group by referring to "Creating a Cluster" and allow traffic from the security group to pass through. | + | | | | | + | | | .. note:: | | + | | | | | + | | | Naming rule of a master node: *Cluster name*\ ``-``\ **cce-control**\ ``-``\ *Random number* | | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Letting the node expire or destroying the node | The master node will be unavailable. | This operation cannot be undone. | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Reinstalling the OS | Components on the master node will be deleted. | This operation cannot be undone. 
| + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Upgrading components on the master or etcd node | The cluster may be unavailable. | Roll back to the original version. | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Deleting or formatting core directory data such as **/etc/kubernetes** on the node | The master node will be unavailable. | This operation cannot be undone. | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Changing the node IP address | The master node will be unavailable. | Change the IP address back to the original one. | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Modifying parameters of core components (such as etcd, kube-apiserver, and docker) | The master node may be unavailable. | Restore the parameter settings to the recommended values. For details, see :ref:`Cluster Configuration Management `. | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Replacing the master or etcd certificate | The cluster may be unavailable. | This operation cannot be undone. 
| + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | Worker node | Modifying the security group of a node in a cluster | The node may be unavailable. | Restore the security group and allow traffic from the security group to pass through. | + | | | | | + | | | .. note:: | | + | | | | | + | | | Naming rule of a worker node: *Cluster name*\ ``-``\ **cce-node**\ ``-``\ *Random number* | | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Deleting the node | The node will become unavailable. | This operation cannot be undone. | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Reinstalling the OS | Node components are deleted, and the node becomes unavailable. | Reset the node. For details, see :ref:`Resetting a Node `. | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Upgrading the node kernel | The node may be unavailable or the network may be abnormal. | For details, see :ref:`Resetting a Node `. | + | | | | | + | | | .. note:: | | + | | | | | + | | | Node running depends on the system kernel version. Do not use the **yum update** command to update or reinstall the operating system kernel of a node unless necessary. (Reinstalling the operating system kernel using the original image or other images is a risky operation.) 
| | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Changing the node IP address | The node will become unavailable. | Change the IP address back to the original one. | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Modifying parameters of core components (such as kubelet and kube-proxy) | The node may become unavailable, and components may be insecure if security-related configurations are modified. | Restore the parameter settings to the recommended values. For details, see :ref:`Configuring a Node Pool `. | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Modifying OS configuration | The node may be unavailable. | Restore the configuration items or reset the node. For details, see :ref:`Resetting a Node `. | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Deleting or modifying the **/opt/cloud/cce** and **/var/paas** directories, and deleting the data disk | The node will become unready. | Reset the node. For details, see :ref:`Resetting a Node `. 
| + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Modifying the node directory permission and the container directory permission | The permissions will be abnormal. | You are not advised to modify the permissions. Restore the permissions if they are modified. | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Formatting or partitioning system disks, Docker disks, and kubelet disks on nodes. | The node may be unavailable. | Reset the node. For details, see :ref:`Resetting a Node `. | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Installing other software on nodes | This may cause exceptions on Kubernetes components installed on the node, and make the node unavailable. | Uninstall the software that has been installed and restore or reset the node. For details, see :ref:`Resetting a Node `. | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Modifying NetworkManager configurations | The node will become unavailable. | Reset the node. For details, see :ref:`Resetting a Node `. 
| + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ + | | Delete system images such as **cce-pause** from the node. | Containers cannot be created and system images cannot be pulled. | Copy the image from another normal node for restoration. | + +-----------------+--------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+ -Networking and Load Balancing ------------------------------ +Networking +---------- .. table:: **Table 2** High-risk operations and solutions - +-------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | Operation | Impact | How to Avoid/Fix | - +===================================================================================================================+============================================================================+===================================================================================================================================================+ - | Changing the value of the kernel parameter **net.ipv4.ip_forward** to **0** | The network becomes inaccessible. | Change the value to **1**. | - +-------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | Changing the value of the kernel parameter **net.ipv4.tcp_tw_recycle** to **1** | The NAT service becomes abnormal. | Change the value to **0**. | - +-------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | Changing the value of the kernel parameter **net.ipv4.tcp_tw_reuse** to **1** | The network becomes abnormal. | Change the value to **0**. 
| - +-------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | Not configuring the node security group to allow UDP packets to pass through port 53 of the container CIDR block | The DNS in the cluster cannot work properly. | Restore the security group by referring to :ref:`Creating a CCE Cluster ` and allow traffic from the security group to pass through. | - +-------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | Creating a custom listener on the ELB console for the load balancer managed by CCE | The modified items are reset by CCE or the ingress is faulty. | Use the YAML file of the Service to automatically create a listener. | - +-------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | Binding a user-defined backend on the ELB console to the load balancer managed by CCE. | | Do not manually bind any backend. | - +-------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | Changing the ELB certificate on the ELB console for the load balancer managed by CCE. | | Use the YAML file of the ingress to automatically manage certificates. | - +-------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | Changing the listener name on the ELB console for the ELB listener managed by CCE. | | Do not change the name of the ELB listener managed by CCE. | - +-------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | Changing the description of load balancers, listeners, and forwarding policies managed by CCE on the ELB console. | | Do not modify the description of load balancers, listeners, or forwarding policies managed by CCE. 
| - +-------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ - | Delete CRD resources of network-attachment-definitions of default-network. | The container network is disconnected, or the cluster fails to be deleted. | If the resources are deleted by mistake, use the correct configurations to create the default-network resources. | - +-------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+ + +------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ + | Operation | Impact | Solution | + +==================================================================================================================+============================================================================+===============================================================================================================================================+ + | Changing the value of the kernel parameter **net.ipv4.ip_forward** to **0** | The network becomes inaccessible. | Change the value to **1**. | + +------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ + | Changing the value of the kernel parameter **net.ipv4.tcp_tw_recycle** to **1** | The NAT service becomes abnormal. | Change the value to **0**. | + +------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ + | Changing the value of the kernel parameter **net.ipv4.tcp_tw_reuse** to **1** | The network becomes abnormal. | Change the value to **0**. | + +------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ + | Not configuring the node security group to allow UDP packets to pass through port 53 of the container CIDR block | The DNS in the cluster cannot work properly. | Restore the security group by referring to :ref:`Creating a Cluster ` and allow traffic from the security group to pass through. 
| + +------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ + | Delete CRD resources of network-attachment-definitions of default-network. | The container network is disconnected, or the cluster fails to be deleted. | If the resources are deleted by mistake, use the correct configurations to create the default-network resources. | + +------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ + +Load Balancing +-------------- + +.. table:: **Table 3** Service ELB + + +--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+ + | Operation | Impact | Solution | + +==============================================================================================================================================================+==========================================================================================================================================================================================================================================================+=================================================================================================================================================+ + | Changing the private IPv4 address of a load balancer on the ELB console | - The network traffic forwarded using the private IPv4 addresses will be interrupted. | You are not advised to modify the permissions. Restore the permissions if they are modified. | + | | - The IP address in the **status** field of the Service/ingress YAML file is changed. | | + +--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+ + | Unbinding the IPv4 EIP from a load balancer on the ELB console | After the EIP is unbound from the load balancer, the load balancer will not be able to forward Internet traffic. | Restore the EIP binding. 
| + +--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+ + | Creating a custom listener on the ELB console for the load balancer managed by CCE | If a load balancer is automatically created when a Service or an ingress is created, the custom listener of the load balancer cannot be deleted when the Service or ingress is deleted. In this case, the load balancer cannot be automatically deleted. | Use the listener automatically created through a Service or an ingress. If a custom listener is used, manually delete the target load balancer. | + +--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+ + | Deleting a listener automatically created by CCE on the ELB console | - Service/Ingress access fails. | Re-create or update the Service or ingress. | + | | - After the master nodes are restarted, for example, due to a cluster upgrade, all your modifications will be reset by CCE. | | + +--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+ + | Modifying the basic configurations such as the name, access control, timeout, or description of a listener created by CCE on the ELB console | After the master nodes are restarted, for example, due to a cluster upgrade, all your modifications will be reset by CCE if the listener is deleted. | You are not advised to modify the permissions. Restore the permissions if they are modified. 
| + +--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+ + | Modifying the backend server group of a listener created by CCE on the ELB console, including adding or deleting backend servers to or from the server group | - Service/Ingress access fails. | Re-create or update the Service or ingress. | + | | - After the master nodes are restarted, for example, due to a cluster upgrade, all your modifications will be reset by CCE. | | + | | | | + | | - The deleted backend server will be restored. | | + | | - The added backend server will be removed. | | + +--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+ + | Replacing the backend server group of a listener created by CCE on the ELB console | - Service/Ingress access fails. | Re-create or update the Service or ingress. | + | | - After the master nodes are restarted, for example, due to a cluster upgrade, all servers in the backend server group will be reset by CCE. | | + +--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+ + | Modifying the forwarding policy of a listener created by CCE on the ELB console, including adding or deleting a forwarding rule | - Service/Ingress access fails. | You are not advised to modify the permissions. Restore the permissions if they are modified. | + | | - After the master nodes are restarted, for example, due to a cluster upgrade, all your modifications will be reset by CCE if the forwarding rule is added by the ingress. 
| | + +--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+ + | Changing the ELB certificate on the ELB console for the load balancer managed by CCE | After the master nodes are restarted, for example, due to a cluster upgrade, all servers in the backend server group will be reset by CCE. | Use the YAML file of the ingress to automatically manage certificates. | + +--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+ Logs ---- -.. table:: **Table 3** High-risk operations and solutions +.. table:: **Table 4** High-risk operations and solutions +------------------------------------------------------------------------------+--------------------------------+----------+ | Operation | Impact | Solution | +==============================================================================+================================+==========+ | Deleting the **/tmp/ccs-log-collector/pos** directory on the host machine | Logs are collected repeatedly. | None | +------------------------------------------------------------------------------+--------------------------------+----------+ - | Deleting the **/tmp/ccs-log-collector/buffer** directory of the host machine | Logs are lost. | None | + | Deleting the **/tmp/ccs-log-collector/buffer** directory on the host machine | Logs are lost. | None | +------------------------------------------------------------------------------+--------------------------------+----------+ EVS Disks --------- -.. table:: **Table 4** High-risk operations and solutions +.. table:: **Table 5** High-risk operations and solutions - +------------------------------------------------+----------------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------------------+ - | Operation | Impact | Solution | Remarks | - +================================================+============================================================================+=================================================================+===========================================================================+ - | Manually unmounting an EVS disk on the console | An I/O error is reported when the pod data is being written into the disk. | Delete the mount path from the node and schedule the pod again. | The file in the pod records the location where files are to be collected. 
| - +------------------------------------------------+----------------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------------------+ - | Unmounting the disk mount path on the node | Pod data is written into a local disk. | Remount the corresponding path to the pod. | The buffer contains log cache files to be consumed. | - +------------------------------------------------+----------------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------------------+ - | Operating EVS disks on the node | Pod data is written into a local disk. | None | None | - +------------------------------------------------+----------------------------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------------------+ + +------------------------------------------------+------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------------------+ + | Operation | Impact | Solution | Remarks | + +================================================+======================================================+=================================================================+===========================================================================+ + | Manually unmounting an EVS disk on the console | An I/O error occurs when data is written into a pod. | Delete the mount path from the node and schedule the pod again. | The file in the pod records the location where files are to be collected. | + +------------------------------------------------+------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------------------+ + | Unmounting the disk mount path on the node | Pod data is written into a local disk. | Remount the corresponding path to the pod. | The buffer contains log cache files to be consumed. | + +------------------------------------------------+------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------------------+ + | Operating EVS disks on the node | Pod data is written into a local disk. 
| None | None | + +------------------------------------------------+------------------------------------------------------+-----------------------------------------------------------------+---------------------------------------------------------------------------+ diff --git a/umn/source/index.rst b/umn/source/index.rst index d28d39d..d573b3f 100644 --- a/umn/source/index.rst +++ b/umn/source/index.rst @@ -13,18 +13,16 @@ Cloud Container Engine - User Guide nodes/index node_pools/index workloads/index - networking/index + scheduling/index + network/index storage/index - monitoring_and_alarm/index - logging/index + observability/index namespaces/index configmaps_and_secrets/index auto_scaling/index add-ons/index - charts/index - permissions_management/index - cloud_trace_service_cts/index - storage_management_flexvolume_deprecated/index + helm_chart/index + permissions/index best_practice/index faqs/index migrating_data_from_cce_1.0_to_cce_2.0/index diff --git a/umn/source/monitoring_and_alarm/custom_monitoring.rst b/umn/source/monitoring_and_alarm/custom_monitoring.rst deleted file mode 100644 index 483eadb..0000000 --- a/umn/source/monitoring_and_alarm/custom_monitoring.rst +++ /dev/null @@ -1,202 +0,0 @@ -:original_name: cce_10_0201.html - -.. _cce_10_0201: - -Custom Monitoring -================= - -CCE allows you to upload custom metrics to AOM. The ICAgent on a node periodically calls the metric monitoring API configured on a workload to read monitoring data and then uploads the data to AOM. - -|image1| - -The custom metric API of a workload can be configured when the workload is created. This section uses an Nginx application as an example to describe how to report custom metrics to AOM. - -Constraints ------------ - -- The ICAgent is compatible with the monitoring data specifications of `Prometheus `__. The custom metrics provided by pods can be collected by the ICAgent only when they meet the monitoring data specifications of Prometheus. -- The ICAgent supports only `Gauge `__ metrics. -- The interval for the ICAgent to call the custom metric API is 1 minute, which cannot be changed. - -Prometheus Monitoring Data Collection -------------------------------------- - -Prometheus periodically calls the metric monitoring API (**/metrics** by default) of an application to obtain monitoring data. The application needs to provide the metric monitoring API for Prometheus to call, and the monitoring data must meet the following specifications of Prometheus: - -.. code-block:: - - # TYPE nginx_connections_active gauge - nginx_connections_active 2 - # TYPE nginx_connections_reading gauge - nginx_connections_reading 0 - -Prometheus provides clients in various languages. For details about the clients, see `Prometheus CLIENT LIBRARIES `__. For details about how to develop an exporter, see `WRITING EXPORTERS `__. The Prometheus community provides various third-party exporters that can be directly used. For details, see `EXPORTERS AND INTEGRATIONS `__. - -Preparing an Application ------------------------- - -Nginx has a module named **ngx_http_stub_status_module**, which provides basic monitoring functions. You can configure the **nginx.conf** file to provide an API for external systems to access Nginx monitoring data. As shown in the following figure, after the server configuration is added to **http**, Nginx can provide an API for external systems to access Nginx monitoring data. - -.. 
code-block:: - - user nginx; - worker_processes auto; - - error_log /var/log/nginx/error.log warn; - pid /var/run/nginx.pid; - - events { - worker_connections 1024; - } - - http { - include /etc/nginx/mime.types; - default_type application/octet-stream; - log_format main '$remote_addr - $remote_user [$time_local] "$request" ' - '$status $body_bytes_sent "$http_referer" ' - '"$http_user_agent" "$http_x_forwarded_for"'; - - access_log /var/log/nginx/access.log main; - sendfile on; - #tcp_nopush on; - keepalive_timeout 65; - #gzip on; - include /etc/nginx/conf.d/*.conf; - - server { - listen 8080; - server_name localhost; - location /stub_status { - stub_status on; - access_log off; - } - } - } - -Save the preceding configuration to the **nginx.conf** file and use the configuration to create a new image. The Dockerfile file is as follows: - -.. code-block:: - - FROM nginx:1.21.5-alpine - ADD nginx.conf /etc/nginx/nginx.conf - EXPOSE 80 - CMD ["nginx", "-g", "daemon off;"] - -Use the preceding Dockerfile file to build an image and upload it to SWR. The image name is **nginx:exporter**. - -**docker build -t nginx:exporter .** - -**docker tag nginx:exporter {swr-address}/{group}/nginx:exporter** - -**docker push {swr-address}/{group}/nginx:exporter** - -After running a container with image **nginx:exporter**, you can obtain Nginx monitoring data by calling http://**:8080/stub_status. *< ip_address >* indicates the IP address of the container. The monitoring data is as follows: - -.. code-block:: - - # curl http://127.0.0.1:8080/stub_status - Active connections: 3 - server accepts handled requests - 146269 146269 212 - Reading: 0 Writing: 1 Waiting: 2 - -Deploying an Application ------------------------- - -The data format of the monitoring data provided by **nginx:exporter** does not meet the requirements of Prometheus. You need to convert the data format to the format required by Prometheus. To convert the format of Nginx metrics, use `nginx-prometheus-exporter `__, as shown in the following figure. - -|image2| - -Deploy **nginx:exporter** and **nginx-prometheus-exporter** in the same pod. - -.. code-block:: - - kind: Deployment - apiVersion: apps/v1 - metadata: - name: nginx-exporter - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: nginx-exporter - template: - metadata: - labels: - app: nginx-exporter - annotations: - metrics.alpha.kubernetes.io/custom-endpoints: '[{"api":"prometheus","path":"/metrics","port":"9113","names":""}]' - spec: - containers: - - name: container-0 - image: 'nginx:exporter' # Replace it with the address of the image you uploaded to SWR. - resources: - limits: - cpu: 250m - memory: 512Mi - requests: - cpu: 250m - memory: 512Mi - - name: container-1 - image: 'nginx/nginx-prometheus-exporter:0.9.0' - command: - - nginx-prometheus-exporter - args: - - '-nginx.scrape-uri=http://127.0.0.1:8080/stub_status' - imagePullSecrets: - - name: default-secret - -.. note:: - - The nginx/nginx-prometheus-exporter:0.9.0 image needs to be pulled from the public network. Therefore, each node in the cluster must have a public IP address. - -nginx-prometheus-exporter requires a startup command. **nginx-prometheus-exporter -nginx.scrape-uri=http://127.0.0.1:8080/stub_status** is used to obtain Nginx monitoring data. - -In addition, you need to add an annotation **metrics.alpha.kubernetes.io/custom-endpoints: '[{"api":"prometheus","path":"/metrics","port":"9113","names":""}]'** to the pod. 
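For reference, once both containers are running, the exporter sidecar serves the converted metrics in the Prometheus text format on port 9113, which is the endpoint that the annotation above points the ICAgent to. The following output is only an illustrative sketch; the exact metric names and values depend on the nginx-prometheus-exporter version:

.. code-block::

   / # curl http://127.0.0.1:9113/metrics
   # TYPE nginx_connections_active gauge
   nginx_connections_active 2
   # TYPE nginx_connections_reading gauge
   nginx_connections_reading 0
   # TYPE nginx_connections_waiting gauge
   nginx_connections_waiting 1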
- -Verification ------------- - -After an application is deployed, you can access Nginx to construct some access data and check whether the corresponding monitoring data can be obtained in AOM. - -.. code-block:: - - $ kubectl get pod - NAME READY STATUS RESTARTS AGE - nginx-exporter-78859765db-6j8sw 2/2 Running 0 4m - $ kubectl exec -it nginx-exporter-78859765db-6j8sw -- /bin/sh - Defaulting container name to container-0. - Use 'kubectl describe pod/nginx-exporter-78859765db-6j8sw -n default' to see all of the containers in this pod. - / # curl http://localhost - - - - Welcome to nginx! - - - -

- Welcome to nginx!
- If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
- For online documentation and support please refer to nginx.org.
- Commercial support is available at nginx.com.
- Thank you for using nginx.

- - - / # - -You can see that Nginx has been accessed once. - -Log in to AOM. In the navigation pane, choose **Monitoring** > **Metric Monitoring**. You can view Nginx-related metrics, for example, **nginx_connections_active**. - -.. |image1| image:: /_static/images/en-us_image_0000001517743384.png -.. |image2| image:: /_static/images/en-us_image_0000001568822693.png diff --git a/umn/source/monitoring_and_alarm/monitoring_overview.rst b/umn/source/monitoring_and_alarm/monitoring_overview.rst deleted file mode 100644 index 2578d8d..0000000 --- a/umn/source/monitoring_and_alarm/monitoring_overview.rst +++ /dev/null @@ -1,98 +0,0 @@ -:original_name: cce_10_0182.html - -.. _cce_10_0182: - -Monitoring Overview -=================== - -CCE works with AOM to comprehensively monitor clusters. When a node is created, the ICAgent (the DaemonSet named **icagent** in the kube-system namespace of the cluster) of AOM is installed by default. The ICAgent collects monitoring data of underlying resources and workloads running on the cluster. It also collects monitoring data of custom metrics of the workload. - -- Resource metrics - - Basic resource monitoring includes CPU, memory, and disk monitoring. For details, see :ref:`Resource Metrics `. You can view these metrics of clusters, nodes, and workloads on the CCE or AOM console. - -- Custom metrics - - The ICAgent collects custom metrics of applications and uploads them to AOM. For details, see :ref:`Custom Monitoring `. - -.. _cce_10_0182__section205486212251: - -Resource Metrics ----------------- - -On the CCE console, you can view the following metrics. - -.. table:: **Table 1** Resource metrics - - +------------------------+------------------------------------------------------------------------------+ - | Metric | Description | - +========================+==============================================================================+ - | CPU Allocation Rate | Indicates the percentage of CPUs allocated to workloads. | - +------------------------+------------------------------------------------------------------------------+ - | Memory Allocation Rate | Indicates the percentage of memory allocated to workloads. | - +------------------------+------------------------------------------------------------------------------+ - | CPU Usage | Indicates the CPU usage. | - +------------------------+------------------------------------------------------------------------------+ - | Memory Usage | Indicates the memory usage. | - +------------------------+------------------------------------------------------------------------------+ - | Disk Usage | Indicates the disk usage. | - +------------------------+------------------------------------------------------------------------------+ - | Down | Indicates the speed at which data is downloaded to a node. The unit is KB/s. | - +------------------------+------------------------------------------------------------------------------+ - | Up | Indicates the speed at which data is uploaded from a node. The unit is KB/s. | - +------------------------+------------------------------------------------------------------------------+ - | Disk Read Rate | Indicates the data volume read from a disk per second. The unit is KB/s. | - +------------------------+------------------------------------------------------------------------------+ - | Disk Write Rate | Indicates the data volume written to a disk per second. The unit is KB/s. 
| - +------------------------+------------------------------------------------------------------------------+ - -On the AOM console, you can view host metrics and container metrics. - -Viewing Cluster Monitoring Data -------------------------------- - -Click the cluster name and access the cluster console. In the navigation pane, choose **Cluster Information**. In the right pane, you can view the CPU and memory usage of all nodes (excluding master nodes) in the cluster in the last hour. - -**Explanation of monitoring metrics:** - -- CPU allocation rate = Sum of CPU quotas requested by pods in the cluster/Sum of CPU quotas that can be allocated of all nodes (excluding master nodes) in the cluster -- Memory allocation rate = Sum of memory quotas requested by pods in the cluster/Sum of memory quotas that can be allocated of all nodes (excluding master nodes) in the cluster -- CPU usage: Average CPU usage of all nodes (excluding master nodes) in a cluster -- Memory usage: Average memory usage of all nodes (excluding master nodes) in a cluster - -.. note:: - - Allocatable node resources (CPU or memory) = Total amount - Reserved amount - Eviction thresholds. For details, see :ref:`Formula for Calculating the Reserved Resources of a Node `. - -CCE provides the status, availability zone (AZ), CPU usage, and memory usage of master nodes. - -Viewing Monitoring Data of Worker Nodes ---------------------------------------- - -In addition to viewing monitoring data of all nodes, you can also view monitoring data of a single node. Click the cluster name and access the cluster console. Choose **Nodes** in the navigation pane and click **Monitor** in the **Operation** column of the target node. - -Monitoring data comes from AOM. You can view the monitoring data of a node, including the CPU, memory, disk, network, and GPU. - -Viewing Workload Monitoring Data --------------------------------- - -You can view monitoring data of a workload on the **Monitoring** tab page of the workload details page. Click the cluster name and access the cluster console. Choose **Workloads** in the navigation pane and click **Monitor** in the **Operation** column of the target workload. - -Monitoring data comes from AOM. You can view the monitoring data of a workload, including the CPU, memory, network, and GPU, on the AOM console. - -**Explanation of monitoring metrics:** - -- Workload CPU usage = Maximum CPU usage in each pod of the workload -- Workload memory usage = Maximum memory usage in each pod of the workload - -You can also click **View More** to go to the AOM console and view monitoring data of the workload. - -Viewing Pod Monitoring Data ---------------------------- - -You can view monitoring data of a pod on the **Pods** tab page of the workload details page. - -**Explanation of monitoring metrics:** - -- Pod CPU usage = The used CPU cores/The sum of all CPU limits of the pods (If not specified, all node CPU cores are used.) -- Pod memory usage = The used physical memory/The sum of all memory limits of pods (If not specified, all node memory is used.) diff --git a/umn/source/namespaces/creating_a_namespace.rst b/umn/source/namespaces/creating_a_namespace.rst index 74a688d..0f6a432 100644 --- a/umn/source/namespaces/creating_a_namespace.rst +++ b/umn/source/namespaces/creating_a_namespace.rst @@ -17,8 +17,8 @@ Prerequisites At least one cluster has been created. -Notes and Constraints ---------------------- +Constraints +----------- A maximum of 6,000 Services can be created in each namespace. 
The Services mentioned here indicate the Kubernetes Service resources added for workloads. @@ -40,7 +40,7 @@ Namespaces can be created in either of the following ways: Creating a Namespace -------------------- -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. Choose **Namespaces** in the navigation pane and click **Create Namespace** in the upper right corner. @@ -71,7 +71,7 @@ Creating a Namespace | | If you want to limit the CPU or memory quota, you must specify the CPU or memory request value when creating a workload. | +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -#. When the configuration is complete, click **OK**. +#. After the configuration is complete, click **OK**. Using kubectl to Create a Namespace ----------------------------------- diff --git a/umn/source/namespaces/managing_namespaces.rst b/umn/source/namespaces/managing_namespaces.rst index e4e1ec8..76f2fdd 100644 --- a/umn/source/namespaces/managing_namespaces.rst +++ b/umn/source/namespaces/managing_namespaces.rst @@ -29,7 +29,7 @@ Isolating Namespaces The following figure shows namespaces created for the development, joint debugging, and testing environments, respectively. - .. figure:: /_static/images/en-us_image_0000001569182513.png + .. figure:: /_static/images/en-us_image_0000001647417256.png :alt: **Figure 1** One namespace for one environment **Figure 1** One namespace for one environment @@ -39,11 +39,26 @@ Isolating Namespaces You are advised to use this method if a large number of workloads are deployed in the same environment. For example, in the following figure, different namespaces (APP1 and APP2) are created to logically manage workloads as different groups. Workloads in the same namespace access each other using the Service name, and workloads in different namespaces access each other using the Service name or namespace name. - .. figure:: /_static/images/en-us_image_0000001569022797.png + .. figure:: /_static/images/en-us_image_0000001695896197.png :alt: **Figure 2** Grouping workloads into different namespaces **Figure 2** Grouping workloads into different namespaces +Managing Namespace Labels +------------------------- + +#. Log in to the CCE console and click the cluster name to access the cluster console. In the navigation pane, choose **Namespaces**. +#. Locate the row containing the target namespace and choose **More** > **Manage Label** in the **Operation** column. +#. In the dialog box that is displayed, the existing labels of the namespace are displayed. Modify the labels as needed. + + - Adding a label: Click the add icon, enter the key and value of the label to be added, and click **OK**. + + For example, the key is **project** and the value is **cicd**, indicating that the namespace is used to deploy CICD. + + - Deleting a label: Click |image1| next the label to be deleted and then **OK**. + +#. Switch to the **Manage Label** dialog box again and check the modified labels. + Deleting a Namespace -------------------- @@ -51,6 +66,8 @@ If a namespace is deleted, all resources (such as workloads, jobs, and ConfigMap #. Log in to the CCE console and access the cluster console. -#. 
In the navigation pane, choose **Namespaces**, select the target namespace, and choose **More** > **Delete**. +#. Choose **Namespaces** in the navigation pane. On the displayed page, click **More** in the row of the target namespace and choose **Delete**. Follow the prompts to delete the namespace. The default namespaces cannot be deleted. + +.. |image1| image:: /_static/images/en-us_image_0000001695736909.png diff --git a/umn/source/namespaces/setting_a_resource_quota.rst b/umn/source/namespaces/setting_a_resource_quota.rst index 47acdd1..cf66921 100644 --- a/umn/source/namespaces/setting_a_resource_quota.rst +++ b/umn/source/namespaces/setting_a_resource_quota.rst @@ -42,19 +42,19 @@ Starting from clusters of v1.21 and later, the default `Resource Quotas ` shows the SNAT architecture. The SNAT function allows the container pods in a VPC to access the Internet without being bound to an EIP. SNAT supports a large number of concurrent connections, which makes it suitable for applications involving a large number of requests and connections. .. _cce_10_0400__en-us_topic_0261817696_en-us_topic_0241700138_en-us_topic_0144420145_fig34611314153619: -.. figure:: /_static/images/en-us_image_0000001569182781.png +.. figure:: /_static/images/en-us_image_0000001695896869.png :alt: **Figure 1** SNAT **Figure 1** SNAT @@ -26,9 +26,9 @@ To enable a container pod to access the Internet, perform the following steps: a. Log in to the management console. b. Click |image1| in the upper left corner of the management console and select a region and a project. - c. Click |image2| in the upper left corner and choose **Networking** > **Elastic IP** in the expanded list. + c. Click |image2| at the upper left corner and choose **Networking** > **Elastic IP** in the expanded list. d. On the **EIPs** page, click **Create** **EIP**. - e. Set parameters as required. + e. Configure parameters as required. .. note:: @@ -38,9 +38,9 @@ To enable a container pod to access the Internet, perform the following steps: a. Log in to the management console. b. Click |image3| in the upper left corner of the management console and select a region and a project. - c. Click |image4| in the upper left corner and choose **Networking** > **NAT Gateway** in the expanded list. - d. On the displayed page, click **Create Public NAT Gateway** in the upper right corner. - e. Set parameters as required. + c. Click |image4| at the upper left corner and choose **Networking** > **NAT Gateway** in the expanded list. + d. On the displayed page, click **Create** **Public NAT Gateway** in the upper right corner. + e. Configure parameters as required. .. note:: @@ -50,7 +50,7 @@ To enable a container pod to access the Internet, perform the following steps: a. Log in to the management console. b. Click |image5| in the upper left corner of the management console and select a region and a project. - c. Click |image6| in the upper left corner and choose **Networking** > **NAT Gateway** in the expanded list. + c. Click |image6| at the upper left corner and choose **Networking** > **NAT Gateway** in the expanded list. d. On the page displayed, click the name of the NAT gateway for which you want to add the SNAT rule. e. On the **SNAT Rules** tab page, click **Add SNAT Rule**. f. Set parameters as required. @@ -60,14 +60,15 @@ To enable a container pod to access the Internet, perform the following steps: SNAT rules take effect by CIDR block. 
As different container network models use different communication modes, the subnet needs to be selected according to the following rules: - Tunnel network and VPC network: Select the subnet where the node is located, that is, the subnet selected during node creation. + - Cloud Native Network 2.0: Select the subnet where the container is located, that is, the container subnet selected during cluster creation. - If there are multiple CIDR blocks, you can create multiple SNAT rules or customize a CIDR block as long as the CIDR block contains the node subnet. + If there are multiple CIDR blocks, you can create multiple SNAT rules or customize a CIDR block as long as the CIDR block contains the container subnet (Cloud Native 2.0 Network) or the node subnet. After the SNAT rule is configured, workloads can access public networks from the container. Public networks can be pinged from the container. -.. |image1| image:: /_static/images/en-us_image_0000001568822961.png -.. |image2| image:: /_static/images/en-us_image_0000001518062796.png -.. |image3| image:: /_static/images/en-us_image_0000001517743652.png -.. |image4| image:: /_static/images/en-us_image_0000001568902689.png -.. |image5| image:: /_static/images/en-us_image_0000001569023069.png -.. |image6| image:: /_static/images/en-us_image_0000001568822957.png +.. |image1| image:: /_static/images/en-us_image_0000001647577200.png +.. |image2| image:: /_static/images/en-us_image_0000001695737597.png +.. |image3| image:: /_static/images/en-us_image_0000001695737589.png +.. |image4| image:: /_static/images/en-us_image_0000001695737593.png +.. |image5| image:: /_static/images/en-us_image_0000001647417936.png +.. |image6| image:: /_static/images/en-us_image_0000001647417932.png diff --git a/umn/source/network/cluster_network_settings/adding_a_container_cidr_block_for_a_cluster.rst b/umn/source/network/cluster_network_settings/adding_a_container_cidr_block_for_a_cluster.rst new file mode 100644 index 0000000..af0bd43 --- /dev/null +++ b/umn/source/network/cluster_network_settings/adding_a_container_cidr_block_for_a_cluster.rst @@ -0,0 +1,40 @@ +:original_name: cce_10_0680.html + +.. _cce_10_0680: + +Adding a Container CIDR Block for a Cluster +=========================================== + +Scenario +-------- + +If the container CIDR block (container subnet in a CCE Turbo cluster) set during CCE cluster creation is insufficient, you can add a container CIDR block for the cluster. + +Constraints +----------- + +- This function applies to CCE clusters and CCE Turbo clusters of v1.19 or later, but not to clusters using container tunnel networking. +- The container CIDR block or container subnet cannot be deleted after being added. Exercise caution when performing this operation. + +Adding a Container CIDR Block for a CCE Cluster +----------------------------------------------- + +#. Log in to the CCE console and click the cluster name to access the cluster console. +#. On the **Cluster Information** page, click **Add Container CIDR Block** in the **Networking Configuration** area. +#. Configure the container CIDR block to be added. You can click |image1| to add multiple container CIDR blocks at a time. + + .. note:: + + New container CIDR blocks cannot conflict with service CIDR blocks, VPC CIDR blocks, and existing container CIDR blocks. + +#. Click **OK**. + +Adding a Container Subnet for a CCE Turbo Cluster +------------------------------------------------- + +#. Log in to the CCE console and access the CCE Turbo cluster console. +#. 
On the **Cluster Information** page, locate the **Networking Configuration** area and click **Add Pod Subnet**. +#. Select a container subnet in the same VPC. You can add multiple container subnets at a time. If no other container subnet is available, go to the VPC console to create one. +#. Click **OK**. + +.. |image1| image:: /_static/images/en-us_image_0000001647417744.png diff --git a/umn/source/network/cluster_network_settings/index.rst b/umn/source/network/cluster_network_settings/index.rst new file mode 100644 index 0000000..807a140 --- /dev/null +++ b/umn/source/network/cluster_network_settings/index.rst @@ -0,0 +1,16 @@ +:original_name: cce_10_0679.html + +.. _cce_10_0679: + +Cluster Network Settings +======================== + +- :ref:`Switching a Node Subnet ` +- :ref:`Adding a Container CIDR Block for a Cluster ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + switching_a_node_subnet + adding_a_container_cidr_block_for_a_cluster diff --git a/umn/source/network/cluster_network_settings/switching_a_node_subnet.rst b/umn/source/network/cluster_network_settings/switching_a_node_subnet.rst new file mode 100644 index 0000000..2485e1c --- /dev/null +++ b/umn/source/network/cluster_network_settings/switching_a_node_subnet.rst @@ -0,0 +1,31 @@ +:original_name: cce_10_0464.html + +.. _cce_10_0464: + +Switching a Node Subnet +======================= + +Scenario +-------- + +This section describes how to switch subnets for nodes in a cluster. + +Constraints +----------- + +- Only subnets in the same VPC as the cluster can be switched. The security group of the node cannot be switched. + +Procedure +--------- + +#. Log in to the ECS console. +#. Click **More > Manage Network > Change VPC** in the **Operation** column of the target ECS. +#. Set parameters for changing the VPC. + + - **VPC**: Select the same VPC as that of the cluster. + - **Subnet**: Select the target subnet to be switched. + - **Private IP Address**: Select **Assign new** or **Use existing** as required. + - **Security Group**: Select the security group of the cluster node. Otherwise, the node is unavailable. + +#. Click **OK**. +#. Go to the CCE console and reset the node. You can use the default parameter settings. For details, see :ref:`Resetting a Node `. diff --git a/umn/source/networking/configuring_intra-vpc_access.rst b/umn/source/network/configuring_intra-vpc_access.rst similarity index 100% rename from umn/source/networking/configuring_intra-vpc_access.rst rename to umn/source/network/configuring_intra-vpc_access.rst diff --git a/umn/source/networking/container_network_models/cloud_native_network_2.0.rst b/umn/source/network/container_network_models/cloud_native_network_2.0.rst similarity index 90% rename from umn/source/networking/container_network_models/cloud_native_network_2.0.rst rename to umn/source/network/container_network_models/cloud_native_network_2.0.rst index 0845f00..6cfcc37 100644 --- a/umn/source/networking/container_network_models/cloud_native_network_2.0.rst +++ b/umn/source/network/container_network_models/cloud_native_network_2.0.rst @@ -11,7 +11,7 @@ Model Definition Developed by CCE, Cloud Native Network 2.0 deeply integrates Elastic Network Interfaces (ENIs) and sub-ENIs of Virtual Private Cloud (VPC). Container IP addresses are allocated from the VPC CIDR block. ELB passthrough networking is supported to direct access requests to containers. Security groups and elastic IPs (EIPs) are bound to deliver high performance. -.. figure:: /_static/images/en-us_image_0000001568822717.png +.. 
figure:: /_static/images/en-us_image_0000001695737033.png :alt: **Figure 1** Cloud Native Network 2.0 **Figure 1** Cloud Native Network 2.0 @@ -29,7 +29,7 @@ Advantages and Disadvantages - As the container network directly uses VPC, it is easy to locate network problems and provide the highest performance. - External networks in a VPC can be directly connected to container IP addresses. -- The load balancing, security group, and EIP capabilities provided by VPC can be used directly. +- The load balancing, security group, and EIP capabilities provided by VPC can be directly used by pods. **Disadvantages** @@ -38,7 +38,7 @@ The container network directly uses VPC, which occupies the VPC address space. T Application Scenarios --------------------- -- High performance requirements and use of other VPC network capabilities: Cloud Native Network 2.0 directly uses VPC, which delivers almost the same performance as the VPC network. Therefore, it is applicable to scenarios that have high requirements on bandwidth and latency, such as online live broadcast and e-commerce seckill. +- High performance requirements and use of other VPC network capabilities: Cloud Native Network 2.0 directly uses VPC, which delivers almost the same performance as the VPC network. Therefore, it applies to scenarios that have high requirements on bandwidth and latency. - Large-scale networking: Cloud Native Network 2.0 supports a maximum of 2000 ECS nodes and 100,000 containers. Recommendation for CIDR Block Planning @@ -57,7 +57,7 @@ In the Cloud Native Network 2.0 model, the container CIDR block and node CIDR bl In addition, a subnet can be added to the container CIDR block after a cluster is created to increase the number of available IP addresses. In this case, ensure that the added subnet does not conflict with other subnets in the container CIDR block. -.. figure:: /_static/images/en-us_image_0000001569182549.png +.. figure:: /_static/images/en-us_image_0000001695737041.png :alt: **Figure 2** Configuring CIDR blocks **Figure 2** Configuring CIDR blocks @@ -67,7 +67,7 @@ Example of Cloud Native Network 2.0 Access Create a CCE Turbo cluster, which contains three ECS nodes. -Access the details page of one node. You can see that the node has one primary NIC and one extended NIC, and both of them are ENIs. The extended NIC belongs to the container CIDR block and is used to mount a sub-ENI to the pod. +Access the details page of one node. You can see that the node has one primary ENI and one extended ENI, and both of them are ENIs. The extended ENI belongs to the container CIDR block and is used to mount a sub-ENI to the pod. Create a Deployment in the cluster. @@ -114,8 +114,8 @@ View the created pod. example-5bdc5699b7-s9fts 1/1 Running 0 7s 10.1.16.89 10.1.0.144 example-5bdc5699b7-swq6q 1/1 Running 0 7s 10.1.17.111 10.1.0.167 -The IP addresses of all pods are sub-ENIs, which are mounted to the ENI (extended NIC) of the node. +The IP addresses of all pods are sub-ENIs, which are mounted to the ENI (extended ENI) of the node. -For example, the extended NIC of node 10.1.0.167 is 10.1.17.172. On the **Network Interfaces** page of the Network Console, you can see that three sub-ENIs are mounted to the extended NIC 10.1.17.172, which is the IP address of the pod. +For example, the extended ENI of node 10.1.0.167 is 10.1.17.172. On the **Network Interfaces** page of the Network Console, you can see that three sub-ENIs are mounted to the extended ENI 10.1.17.172, which is the IP address of the pod. 
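As a quick check, you can verify that a pod IP is directly reachable from an ECS in the same VPC. This is only an illustrative sketch: the pod IP 10.1.16.89 is taken from the example output above, and the example Deployment is assumed to run a web server listening on port 80.

.. code-block::

   # Run on an ECS in the same VPC as the cluster (not a cluster node).
   $ curl -I 10.1.16.89:80
   HTTP/1.1 200 OK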
In the VPC, the IP address of the pod can be successfully accessed. diff --git a/umn/source/networking/container_network_models/container_tunnel_network.rst b/umn/source/network/container_network_models/container_tunnel_network.rst similarity index 94% rename from umn/source/networking/container_network_models/container_tunnel_network.rst rename to umn/source/network/container_network_models/container_tunnel_network.rst index 438af1f..1fa60a7 100644 --- a/umn/source/networking/container_network_models/container_tunnel_network.rst +++ b/umn/source/network/container_network_models/container_tunnel_network.rst @@ -11,7 +11,7 @@ Container Tunnel Network Model The container tunnel network is constructed on but independent of the node network through tunnel encapsulation. This network model uses VXLAN to encapsulate Ethernet packets into UDP packets and transmits them in tunnels. Open vSwitch serves as the backend virtual switch. Though at some costs of performance, packet encapsulation and tunnel transmission enable higher interoperability and compatibility with advanced features (such as network policy-based isolation) for most common scenarios. -.. figure:: /_static/images/en-us_image_0000001518222740.png +.. figure:: /_static/images/en-us_image_0000001695737509.png :alt: **Figure 1** Container tunnel network **Figure 1** Container tunnel network @@ -34,13 +34,13 @@ Advantages and Disadvantages **Disadvantages** - High encapsulation overhead, complex networking, and low performance -- Failure to use the load balancing and security group capabilities provided by the VPC +- Pods cannot directly use functionalities such as EIPs and security groups. - External networks cannot be directly connected to container IP addresses. Applicable Scenarios -------------------- -- Low requirements on performance: As the container tunnel network requires additional VXLAN tunnel encapsulation, it has about 5% to 15% of performance loss when compared with the other two container network models. Therefore, the container tunnel network is applicable to the scenarios that do not have high performance requirements, such as web applications, and middle-end and back-end services with a small number of access requests. +- Low requirements on performance: As the container tunnel network requires additional VXLAN tunnel encapsulation, it has about 5% to 15% of performance loss when compared with the other two container network models. Therefore, the container tunnel network applies to the scenarios that do not have high performance requirements, such as web applications, and middle-end and back-end services with a small number of access requests. - Large-scale networking: Different from the VPC network that is limited by the VPC route quota, the container tunnel network does not have any restriction on the infrastructure. In addition, the container tunnel network controls the broadcast domain to the node level. The container tunnel network supports a maximum of 2000 nodes. Container IP Address Management @@ -55,7 +55,7 @@ The container tunnel network allocates container IP addresses according to the f - Pods scheduled to a node are cyclically allocated IP addresses from one or more CIDR blocks allocated to the node. -.. figure:: /_static/images/en-us_image_0000001569182773.png +.. 
figure:: /_static/images/en-us_image_0000001647577116.png :alt: **Figure 2** IP address allocation of the container tunnel network **Figure 2** IP address allocation of the container tunnel network diff --git a/umn/source/networking/container_network_models/index.rst b/umn/source/network/container_network_models/index.rst similarity index 100% rename from umn/source/networking/container_network_models/index.rst rename to umn/source/network/container_network_models/index.rst diff --git a/umn/source/networking/container_network_models/overview.rst b/umn/source/network/container_network_models/overview.rst similarity index 96% rename from umn/source/networking/container_network_models/overview.rst rename to umn/source/network/container_network_models/overview.rst index 8f1bf08..5b29bbd 100644 --- a/umn/source/networking/container_network_models/overview.rst +++ b/umn/source/network/container_network_models/overview.rst @@ -7,7 +7,7 @@ Overview The container network assigns IP addresses to pods in a cluster and provides networking services. In CCE, you can select the following network models for your cluster: -- :ref:`Container tunnel network ` +- :ref:`Tunnel network ` - :ref:`VPC network ` - :ref:`Cloud Native Network 2.0 ` @@ -25,7 +25,7 @@ Network Model Comparison .. table:: **Table 1** Network model comparison +------------------------+-----------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+ - | Dimension | Container Tunnel Network | VPC Network | Cloud Native Network 2.0 | + | Dimension | Tunnel Network | VPC Network | Cloud Native Network 2.0 | +========================+===================================================================================================================================+======================================================================================================================================================+============================================================================================================+ | Application scenarios | - Common container service scenarios | - Scenarios that have high requirements on network latency and bandwidth | - Scenarios that have high requirements on network latency, bandwidth, and performance | | | - Scenarios that do not have high requirements on network latency and bandwidth | - Containers can communicate with VMs using a microservice registration framework, such as Dubbo and CSE. | - Containers can communicate with VMs using a microservice registration framework, such as Dubbo and CSE. | @@ -43,14 +43,14 @@ Network Model Comparison +------------------------+-----------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+ | Network performance | Performance loss due to VXLAN encapsulation | No tunnel encapsulation. Cross-node packets are forwarded through VPC routers, delivering performance equivalent to that of the host network. 
| The container network is integrated with the VPC network, eliminating performance loss. | +------------------------+-----------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+ - | Networking scale | A maximum of 2,000 nodes are supported. | By default, 200 nodes are supported. | A maximum of 2,000 nodes are supported. | + | Networking scale | A maximum of 2,000 nodes are supported. | A maximum of 2000 nodes are supported, which is restricted by the VPC routing capability. | A maximum of 2,000 nodes are supported. | | | | | | | | | Each time a node is added to the cluster, a route is added to the VPC route tables. Therefore, the cluster scale is limited by the VPC route tables. | | +------------------------+-----------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+ .. important:: - #. The scale of a cluster that uses the VPC network model is limited by the custom routes of the VPC. Therefore, you need to estimate the number of required nodes before creating a cluster. + #. The scale of a cluster that uses the VPC network model is limited by the custom routes of the VPC. Therefore, estimate the number of required nodes before creating a cluster. #. The scale of a cluster that uses the Cloud Native Network 2.0 model depends on the size of the VPC subnet CIDR block selected for the network attachment definition. Before creating a cluster, evaluate the scale of your cluster. #. By default, VPC routing network supports direct communication between containers and hosts in the same VPC. If a peering connection policy is configured between the VPC and another VPC, the containers can directly communicate with hosts on the peer VPC. In addition, in hybrid networking scenarios such as Direct Connect and VPN, communication between containers and hosts on the peer end can also be achieved with proper planning. #. Do not change the mask of the primary CIDR block on the VPC after a cluster is created. Otherwise, the network will be abnormal. diff --git a/umn/source/networking/container_network_models/vpc_network.rst b/umn/source/network/container_network_models/vpc_network.rst similarity index 88% rename from umn/source/networking/container_network_models/vpc_network.rst rename to umn/source/network/container_network_models/vpc_network.rst index a2ab121..b24e15e 100644 --- a/umn/source/networking/container_network_models/vpc_network.rst +++ b/umn/source/network/container_network_models/vpc_network.rst @@ -8,17 +8,17 @@ VPC Network Model Definition ---------------- -The VPC network uses VPC routing to integrate with the underlying network. This network model is suitable for performance-intensive scenarios. The maximum number of nodes allowed in a cluster depends on the VPC route quota. Each node is assigned a CIDR block of a fixed size. 
This networking model is free from tunnel encapsulation overhead and outperforms the container tunnel network model. In addition, as VPC routing includes routes to node IP addresses and the container CIDR block, container pods in a cluster can be directly accessed from outside the cluster. +The VPC network uses VPC routing to integrate with the underlying network. This network model is suitable for performance-intensive scenarios. The maximum number of nodes allowed in a cluster depends on the VPC route quota. Each node is assigned a CIDR block of a fixed size. This networking model is free from tunnel encapsulation overhead and outperforms the container tunnel network model. In addition, as VPC routing includes routes to node IP addresses and the container CIDR block, container pods in a cluster can be directly accessed from ECSs in the same VPC outside the cluster. -.. figure:: /_static/images/en-us_image_0000001568822773.png +.. figure:: /_static/images/en-us_image_0000001647417536.png :alt: **Figure 1** VPC network model **Figure 1** VPC network model **Pod-to-pod communication** -- On the same node: Packets are directly forwarded through IPVlan. +- On the same node: Packets are directly forwarded through IPvlan. - Across nodes: Packets are forwarded to the default gateway through default routes, and then to the peer node via the VPC routes. Advantages and Disadvantages @@ -27,7 +27,7 @@ Advantages and Disadvantages **Advantages** - No tunnel encapsulation is required, so network problems are easy to locate and the performance is high. -- External networks in a VPC can be directly connected to container IP addresses. +- In the same VPC, the external network of the cluster can be directly connected to the container IP address. **Disadvantages** @@ -38,7 +38,7 @@ Advantages and Disadvantages Applicable Scenarios -------------------- -- High performance requirements: As no tunnel encapsulation is required, the VPC network model delivers the performance close to that of a VPC network when compared with the container tunnel network model. Therefore, the VPC network model is applicable to scenarios that have high requirements on performance, such as AI computing and big data computing. +- High performance requirements: As no tunnel encapsulation is required, the VPC network model delivers the performance close to that of a VPC network when compared with the container tunnel network model. Therefore, the VPC network model applies to scenarios that have high requirements on performance, such as AI computing and big data computing. - Small- and medium-scale networking: The VPC network is limited by the VPC route quota. Currently, a maximum of 200 nodes are supported by default. If there are large-scale networking requirements, you can increase the VPC route quota. .. _cce_10_0283__section1574982552114: @@ -54,7 +54,7 @@ The VPC network allocates container IP addresses according to the following rule - Pods scheduled to a node are cyclically allocated IP addresses from CIDR blocks allocated to the node. -.. figure:: /_static/images/en-us_image_0000001569022889.png +.. figure:: /_static/images/en-us_image_0000001695737193.png :alt: **Figure 2** IP address management of the VPC network **Figure 2** IP address management of the VPC network @@ -129,9 +129,9 @@ Check the pod. 
example-86b9779494-x8kl5 1/1 Running 0 14s 172.16.0.5 192.168.0.99 example-86b9779494-zt627 1/1 Running 0 14s 172.16.0.8 192.168.0.99 -In this case, the IP address of the pod can be directly accessed from a node outside the cluster in the same VPC. This is a feature of the VPC network feature. +In this case, if you access the IP address of the pod from an ECS (outside the cluster) in the same VPC, the access is successful. This is a feature of VPC networking. Pods can be directly accessed from any node locating outside of the cluster and in the same VPC as that of the pods using the pods' IP addresses. -The pod can also be accessed from a node in the same cluster or in the pod. As shown in the following figure, the pod can be accessed directly from the container. +Pods can be accessed from nodes or pods in the same cluster. In the following example, you can directly access the pods in the container. .. code-block:: diff --git a/umn/source/network/container_network_settings/cloud_native_network_2.0_settings/index.rst b/umn/source/network/container_network_settings/cloud_native_network_2.0_settings/index.rst new file mode 100644 index 0000000..3ba1cb6 --- /dev/null +++ b/umn/source/network/container_network_settings/cloud_native_network_2.0_settings/index.rst @@ -0,0 +1,14 @@ +:original_name: cce_10_0678.html + +.. _cce_10_0678: + +Cloud Native Network 2.0 Settings +================================= + +- :ref:`Security Group Policies ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + security_group_policies diff --git a/umn/source/workloads/security_group_policies.rst b/umn/source/network/container_network_settings/cloud_native_network_2.0_settings/security_group_policies.rst similarity index 93% rename from umn/source/workloads/security_group_policies.rst rename to umn/source/network/container_network_settings/cloud_native_network_2.0_settings/security_group_policies.rst index 127a1df..b6803a4 100644 --- a/umn/source/workloads/security_group_policies.rst +++ b/umn/source/network/container_network_settings/cloud_native_network_2.0_settings/security_group_policies.rst @@ -5,10 +5,10 @@ Security Group Policies ======================= -When the Cloud Native Network 2.0 model is used, pods use VPC ENIs or sub-ENIs for networking. You can directly bind security groups and EIPs to pods. CCE provides a custom resource object named **SecurityGroup** for you to associate security groups with pods in CCE. You can customize workloads with specific security isolation requirements using SecurityGroups. +In Cloud Native Network 2.0, pods use VPC ENIs or sub-ENIs for networking. You can directly bind security groups and EIPs to pods. To bind CCE pods with security groups, CCE provides a custom resource object named **SecurityGroup**. Using this resource object, you can customize security isolation for workloads. -Notes and Constraints ---------------------- +Constraints +----------- - This function is supported for CCE Turbo clusters of v1.19 and later. Upgrade your CCE Turbo clusters if their versions are earlier than v1.19. - A workload can be bound to a maximum of five security groups. @@ -16,11 +16,11 @@ Notes and Constraints Using the Console ----------------- -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. -#. In the navigation pane, choose **Workloads**. On the displayed page, click the name of the target workload. +#. In the navigation pane, choose **Workloads**. 
On the displayed page, click the desired workload name. -#. Switch to the **Security Group Policy** tab page and click **Create**. +#. Switch to the **SecurityGroups** tab page and click **Create**. #. Set the parameters as described in :ref:`Table 1 `. @@ -43,7 +43,7 @@ Using the Console | | | | | | NOTICE: | | | | | | - | | - A maximum of 5 security groups can be selected. | | + | | - A maximum of five security groups can be selected. | | | | - Hover the cursor on next to the security group name, and you can view details about the security group. | | +----------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+ diff --git a/umn/source/network/container_network_settings/configuring_qos_rate_limiting_for_inter-pod_access.rst b/umn/source/network/container_network_settings/configuring_qos_rate_limiting_for_inter-pod_access.rst new file mode 100644 index 0000000..a9009ca --- /dev/null +++ b/umn/source/network/container_network_settings/configuring_qos_rate_limiting_for_inter-pod_access.rst @@ -0,0 +1,85 @@ +:original_name: cce_10_0382.html + +.. _cce_10_0382: + +Configuring QoS Rate Limiting for Inter-Pod Access +================================================== + +Scenario +-------- + +Bandwidth preemption occurs between different containers deployed on the same node, which may cause service jitter. You can configure QoS rate limiting for inter-pod access to prevent this problem. + +Constraints +----------- + +The following shows constraints on setting the rate limiting for inter-pod access: + ++-------------------------+-----------------------------------------------------------------------+-----------------------------------------------------------------------+----------------------------------------------------------------------------------+ +| Constraint Type | Tunnel network model | VPC network model | Cloud Native 2.0 Network Model | ++=========================+=======================================================================+=======================================================================+==================================================================================+ +| Supported versions | All versions | Clusters of v1.19.10 and later | Clusters of v1.19.10 and later | ++-------------------------+-----------------------------------------------------------------------+-----------------------------------------------------------------------+----------------------------------------------------------------------------------+ +| Supported runtime types | Only common containers (runC as the container runtime) are supported. | Only common containers (runC as the container runtime) are supported. | Only common containers (runC as the container runtime) are supported. | +| | | | | +| | Secure containers are not supported. | Secure containers (Kata as the container runtime) are not supported. | Secure containers (Kata as the container runtime) are not supported. 
| ++-------------------------+-----------------------------------------------------------------------+-----------------------------------------------------------------------+----------------------------------------------------------------------------------+ +| Supported pod types | Only non-HostNetwork pods | | | ++-------------------------+-----------------------------------------------------------------------+-----------------------------------------------------------------------+----------------------------------------------------------------------------------+ +| Supported scenarios | Inter-pod access, pods accessing nodes, and pods accessing services | | | ++-------------------------+-----------------------------------------------------------------------+-----------------------------------------------------------------------+----------------------------------------------------------------------------------+ +| Constraints | None | None | - Pods access external cloud service CIDR blocks 100.64.0.0/10 and 214.0.0.0/8. | +| | | | - Traffic rate limiting of health check | ++-------------------------+-----------------------------------------------------------------------+-----------------------------------------------------------------------+----------------------------------------------------------------------------------+ +| Upper rate limit | Minimum value between the upper bandwidth limit and 34 Gbit/s | Minimum value between the upper bandwidth limit and 4.3 Gbit/s | Minimum value between the upper bandwidth limit and 4.3 Gbit/s | ++-------------------------+-----------------------------------------------------------------------+-----------------------------------------------------------------------+----------------------------------------------------------------------------------+ +| Lower rate limit | Only the rate limit of Kbit/s or higher is supported. | Currently, only the rate limit of Mbit/s or higher is supported. | | ++-------------------------+-----------------------------------------------------------------------+-----------------------------------------------------------------------+----------------------------------------------------------------------------------+ + +Using the CCE Console +--------------------- + +When creating a workload on the console, you can set pod ingress and egress bandwidth limits on the **Advanced Settings > Network Configuration** area. + +Using kubectl +------------- + +You can add annotations to a workload to specify its egress and ingress bandwidth. + +.. code-block:: + + apiVersion: apps/v1 + kind: Deployment + metadata: + name: test + namespace: default + labels: + app: test + spec: + replicas: 2 + selector: + matchLabels: + app: test + template: + metadata: + labels: + app: test + annotations: + kubernetes.io/ingress-bandwidth: 100M + kubernetes.io/egress-bandwidth: 100M + spec: + containers: + - name: container-1 + image: nginx:alpine + imagePullPolicy: IfNotPresent + imagePullSecrets: + - name: default-secret + +- **kubernetes.io/ingress-bandwidth**: ingress bandwidth of the pod +- **kubernetes.io/egress-bandwidth**: egress bandwidth of the pod + +If these two parameters are not specified, the bandwidth is not limited. + +.. note:: + + After modifying the ingress or egress bandwidth limit of a pod, restart the container for the modification to take effect. After annotations are modified in a pod not managed by workloads, the container will not be restarted, so the bandwidth limits do not take effect. 
You can create a pod again or manually restart the container. diff --git a/umn/source/network/container_network_settings/container_tunnel_network_settings/index.rst b/umn/source/network/container_network_settings/container_tunnel_network_settings/index.rst new file mode 100644 index 0000000..24a17a3 --- /dev/null +++ b/umn/source/network/container_network_settings/container_tunnel_network_settings/index.rst @@ -0,0 +1,14 @@ +:original_name: cce_10_0677.html + +.. _cce_10_0677: + +Container Tunnel Network Settings +================================= + +- :ref:`Network Policies ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + network_policies diff --git a/umn/source/networking/network_policies.rst b/umn/source/network/container_network_settings/container_tunnel_network_settings/network_policies.rst similarity index 84% rename from umn/source/networking/network_policies.rst rename to umn/source/network/container_network_settings/container_tunnel_network_settings/network_policies.rst index f770dbc..1a436fb 100644 --- a/umn/source/networking/network_policies.rst +++ b/umn/source/network/container_network_settings/container_tunnel_network_settings/network_policies.rst @@ -9,32 +9,23 @@ Network policies are designed by Kubernetes to restrict pod access. It is equiva By default, if a namespace does not have any policy, pods in the namespace accept traffic from any source and send traffic to any destination. -Network policy rules are classified into the following types: +Network policies are classified into the following types: - **namespaceSelector**: selects particular namespaces for which all pods should be allowed as ingress sources or egress destinations. - **podSelector**: selects particular pods in the same namespace as the network policy which should be allowed as ingress sources or egress destinations. - **ipBlock**: selects particular IP blocks to allow as ingress sources or egress destinations. (Only egress rules support IP blocks.) -Notes and Constraints ---------------------- +Constraints +----------- - Only clusters that use the tunnel network model support network policies. Network policies are classified into the following types: - Ingress: All versions support this type. - - - Egress: Only clusters of v1.23 or later support egress rules. - - Egress rules are supported only in the following OSs: + - Egress: Only the following OSs and cluster versions support egress rules. +-----------------------+-----------------------+-------------------------------------------+ | OS | Cluster Version | Verified Kernel Version | +=======================+=======================+===========================================+ - | CentOS | v1.23 or later | 3.10.0-1062.18.1.el7.x86_64 | - | | | | - | | | 3.10.0-1127.19.1.el7.x86_64 | - | | | | - | | | 3.10.0-1160.25.1.el7.x86_64 | - +-----------------------+-----------------------+-------------------------------------------+ | EulerOS 2.5 | v1.23 or later | 3.10.0-862.14.1.5.h591.eulerosv2r7.x86_64 | | | | | | | | 3.10.0-862.14.1.5.h687.eulerosv2r7.x86_64 | @@ -45,7 +36,7 @@ Notes and Constraints +-----------------------+-----------------------+-------------------------------------------+ - Network isolation is not supported for IPv6 addresses. -- If a cluster is upgraded to v1.23 in in-place mode, you cannot use egress rules because the node OS is not upgraded. In this case, reset the node. +- If upgrade to a cluster version that supports egress rules is performed in in-place mode, you cannot use egress rules because the node OS is not upgraded. 
In this case, reset the node. Using Ingress Rules ------------------- @@ -63,19 +54,19 @@ Using Ingress Rules podSelector: # The rule takes effect for pods with the role=db label. matchLabels: role: db - ingress: #This is an ingress rule. + ingress: # This is an ingress rule. - from: - - podSelector: #Only traffic from the pods with the "role=frontend" label is allowed. + - podSelector: # Only traffic from the pods with the "role=frontend" label is allowed. matchLabels: role: frontend - ports: #Only TCP can be used to access port 6379. + ports: # Only TCP can be used to access port 6379. - protocol: TCP port: 6379 - See the following figure. + The following figure shows how podSelector works. - .. figure:: /_static/images/en-us_image_0000001518062636.png + .. figure:: /_static/images/en-us_image_0000001695896529.png :alt: **Figure 1** podSelector **Figure 1** podSelector @@ -92,19 +83,19 @@ Using Ingress Rules podSelector: # The rule takes effect for pods with the role=db label. matchLabels: role: db - ingress: #This is an ingress rule. + ingress: # This is an ingress rule. - from: - namespaceSelector: # Only traffic from the pods in the namespace with the "project=myproject" label is allowed. matchLabels: project: myproject - ports: #Only TCP can be used to access port 6379. + ports: # Only TCP can be used to access port 6379. - protocol: TCP port: 6379 - See the following figure. + The following figure shows how namespaceSelector works. - .. figure:: /_static/images/en-us_image_0000001518222592.png + .. figure:: /_static/images/en-us_image_0000001695737257.png :alt: **Figure 2** namespaceSelector **Figure 2** namespaceSelector @@ -116,7 +107,7 @@ Egress supports not only podSelector and namespaceSelector, but also ipBlock. .. note:: - Only clusters of version 1.23 or later support egress rules. Currently, only EulerOS 2.5, EulerOS 2.9, and CentOS 7.X nodes are supported. + Only clusters of version 1.23 or later support egress rules. Currently, nodes running EulerOS 2.5, EulerOS 2.9 are supported. .. code-block:: @@ -138,10 +129,10 @@ Egress supports not only podSelector and namespaceSelector, but also ipBlock. except: - 172.16.0.40/32 # This CIDR block cannot be accessed. This value must fall within the range specified by cidr. -The following figure shows how to use ingress and egress together. +The following figure shows how ipBlock works. -.. figure:: /_static/images/en-us_image_0000001517743496.png +.. figure:: /_static/images/en-us_image_0000001647576864.png :alt: **Figure 3** ipBlock **Figure 3** ipBlock @@ -162,12 +153,12 @@ You can define ingress and egress in the same rule. podSelector: # The rule takes effect for pods with the role=db label. matchLabels: role: db - ingress: # Ingress rule + ingress: # This is an ingress rule. - from: - - podSelector: #Only traffic from the pods with the "role=frontend" label is allowed. + - podSelector: # Only traffic from the pods with the "role=frontend" label is allowed. matchLabels: role: frontend - ports: #Only TCP can be used to access port 6379. + ports: # Only TCP can be used to access port 6379. - protocol: TCP port: 6379 egress: # Egress rule @@ -179,7 +170,7 @@ You can define ingress and egress in the same rule. The following figure shows how to use ingress and egress together. -.. figure:: /_static/images/en-us_image_0000001568902533.png +.. 
figure:: /_static/images/en-us_image_0000001695896533.png :alt: **Figure 4** Using both ingress and egress **Figure 4** Using both ingress and egress @@ -187,7 +178,7 @@ The following figure shows how to use ingress and egress together. Creating a Network Policy on the Console ---------------------------------------- -#. Log in to the CCE console and access the cluster console. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. Choose **Networking** in the navigation pane, click the **Network Policies** tab, and click **Create Network Policy** in the upper right corner. - **Policy Name**: Specify a network policy name. @@ -234,7 +225,7 @@ Creating a Network Policy on the Console #. Click **OK**. -.. |image1| image:: /_static/images/en-us_image_0000001568822793.png -.. |image2| image:: /_static/images/en-us_image_0000001569022905.png -.. |image3| image:: /_static/images/en-us_image_0000001517903064.png -.. |image4| image:: /_static/images/en-us_image_0000001517903068.png +.. |image1| image:: /_static/images/en-us_image_0000001647417596.png +.. |image2| image:: /_static/images/en-us_image_0000001647417588.png +.. |image3| image:: /_static/images/en-us_image_0000001695737253.png +.. |image4| image:: /_static/images/en-us_image_0000001647417600.png diff --git a/umn/source/networking/host_network.rst b/umn/source/network/container_network_settings/host_network.rst similarity index 93% rename from umn/source/networking/host_network.rst rename to umn/source/network/container_network_settings/host_network.rst index 5f083d6..c10f741 100644 --- a/umn/source/networking/host_network.rst +++ b/umn/source/network/container_network_settings/host_network.rst @@ -8,7 +8,7 @@ Host Network Scenario -------- -Kubernetes allows pods to directly use the host/node network. +Kubernetes allows pods to directly use the host/node network. When a pod is configured with **hostNetwork: true**, applications running in the pod can directly view the network interface of the host where the pod is located. Configuration ------------- @@ -51,7 +51,7 @@ Precautions If a pod uses the host network, it occupies a host port and the pod IP is the host IP. To use the host network, ensure that pods do not conflict with each other over the host ports they occupy. Do not use the host network unless you know exactly which host port is used by which pod. -When using the host network, you access the node to access a pod on it. Therefore, you need to **allow access from the security group port of the node**. Otherwise, the access fails. +When the host network is used, you access a pod through the node it runs on. Therefore, **allow access from the security group port of the node**. Otherwise, the access fails. In addition, using the host network requires you to reserve host ports for the pods. When using a Deployment to deploy pods of the hostNetwork type, ensure that **the number of pods does not exceed the number of nodes**. Otherwise, multiple pods will be scheduled onto the same node, and they will fail to start due to port conflicts. For example, in the preceding Nginx YAML example, if two pods (**replicas** set to **2**) are deployed in a cluster with only one node, one pod cannot be created. The pod logs will show that Nginx cannot start because the port is occupied.
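One way to keep hostNetwork pods from colliding on host ports is a required pod anti-affinity rule, so that the scheduler never places two replicas of the same Deployment on one node; excess replicas then stay in the **Pending** state instead of failing on a port conflict. The following is a minimal sketch only: the Deployment name, labels, and image are illustrative, and a DaemonSet is an alternative when exactly one pod per node is wanted.

.. code-block::

   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: nginx-hostnetwork          # illustrative name
   spec:
     replicas: 2                      # keep this at or below the number of nodes
     selector:
       matchLabels:
         app: nginx-hostnetwork
     template:
       metadata:
         labels:
           app: nginx-hostnetwork
       spec:
         hostNetwork: true            # the pod shares the node's network namespace and ports
         affinity:
           podAntiAffinity:           # never place two of these pods on the same node
             requiredDuringSchedulingIgnoredDuringExecution:
               - labelSelector:
                   matchLabels:
                     app: nginx-hostnetwork
                 topologyKey: kubernetes.io/hostname
         containers:
           - name: container-1
             image: nginx:alpine
             imagePullPolicy: IfNotPresent
         imagePullSecrets:
           - name: default-secret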
diff --git a/umn/source/network/container_network_settings/index.rst b/umn/source/network/container_network_settings/index.rst new file mode 100644 index 0000000..c989a38 --- /dev/null +++ b/umn/source/network/container_network_settings/index.rst @@ -0,0 +1,20 @@ +:original_name: cce_10_0675.html + +.. _cce_10_0675: + +Container Network Settings +========================== + +- :ref:`Host Network ` +- :ref:`Configuring QoS Rate Limiting for Inter-Pod Access ` +- :ref:`Container Tunnel Network Settings ` +- :ref:`Cloud Native Network 2.0 Settings ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + host_network + configuring_qos_rate_limiting_for_inter-pod_access + container_tunnel_network_settings/index + cloud_native_network_2.0_settings/index diff --git a/umn/source/networking/dns/dns_configuration.rst b/umn/source/network/dns/dns_configuration.rst similarity index 64% rename from umn/source/networking/dns/dns_configuration.rst rename to umn/source/network/dns/dns_configuration.rst index b6cb174..e294aa5 100644 --- a/umn/source/networking/dns/dns_configuration.rst +++ b/umn/source/network/dns/dns_configuration.rst @@ -7,7 +7,7 @@ DNS Configuration Every Kubernetes cluster has a built-in DNS add-on (Kube-DNS or CoreDNS) to provide domain name resolution for workloads in the cluster. When handling a high concurrency of DNS queries, Kube-DNS/CoreDNS may encounter a performance bottleneck, that is, it may fail occasionally to fulfill DNS queries. There are cases when Kubernetes workloads initiate unnecessary DNS queries. This makes DNS overloaded if there are many concurrent DNS queries. Tuning DNS configuration for workloads will reduce the risks of DNS query failures to some extent. -For more information about DNS, see :ref:`coredns (System Resource Add-On, Mandatory) `. +For more information about DNS, see :ref:`CoreDNS (System Resource Add-On, Mandatory) `. DNS Configuration Items ----------------------- @@ -36,6 +36,31 @@ Run the **cat /etc/resolv.conf** command on a Linux node or container to view th For more information about configuration options in the resolver configuration file used by Linux operating systems, visit http://man7.org/linux/man-pages/man5/resolv.conf.5.html. +Configuring DNS for a Workload Using the Console +------------------------------------------------ + +Kubernetes provides DNS-related configuration options for applications. The use of application's DNS configuration can effectively reduce unnecessary DNS queries in certain scenarios and improve service concurrency. The following procedure uses an Nginx application as an example to describe how to add DNS configurations for a workload on the console. + +#. Log in to the CCE console, access the cluster console, select **Workloads** in the navigation pane, and click **Create Workload** in the upper right corner. +#. Configure basic information about the workload. For details, see :ref:`Creating a Workload `. +#. In the **Advanced Settings** area, click the **DNS** tab and set the following parameters as required: + + - **DNS Policy**: The DNS policies provided on the console correspond to the **dnsPolicy** field in the YAML file. For details, see :ref:`Table 1 `. + + - **Supplement defaults**: corresponds to **dnsPolicy=ClusterFirst**. Containers can resolve both the cluster-internal domain names registered by a Service and the external domain names exposed to public networks. + - **Replace defaults**: corresponds to **dnsPolicy=None**. You must configure **IP Address** and **Search Domain**. 
Containers only use the user-defined IP address and search domain configurations for domain name resolution. + - **Inherit defaults**: corresponds to **dnsPolicy=Default**. Containers use the domain name resolution configuration from the node that pods run on and cannot resolve the cluster-internal domain names. + + - **Optional Objects**: The options parameters in the :ref:`dnsConfig field `. Each object may have a name property (required) and a value property (optional). After setting the properties, click **confirm to add**. + + - **timeout**: Timeout interval, in seconds. + - **ndots**: Number of dots (.) that must be present in a domain name. If a domain name has dots fewer than this value, the operating system will look up the name in the search domain. If not, the name is a fully qualified domain name (FQDN) and will be tried first as an absolute name. + + - **IP Address**: **nameservers** in the :ref:`dnsConfig `. You can configure the domain name server for the custom domain name. The value is one or a group of DNS IP addresses. + - **Search Domain**: **searches** in the :ref:`dnsConfig `. A list of DNS search domains for hostname lookup in the pod. This property is optional. When specified, the provided list will be merged into the search domain names generated from the chosen DNS policy in **dnsPolicy**. Duplicate domain names are removed. + +#. Click **Create Workload**. + Configuring DNS Using the Workload YAML --------------------------------------- @@ -76,84 +101,59 @@ When creating a workload using a YAML file, you can configure the DNS settings i searches: - my.dns.search.suffix -**dnsPolicy** +- **dnsPolicy** -The **dnsPolicy** field is used to configure a DNS policy for an application. The default value is **ClusterFirst**. The DNS parameters in **dnsConfig** will be merged to the DNS file generated according to **dnsPolicy**. The merge rules are later explained in :ref:`Table 2 `. Currently, **dnsPolicy** supports the following four values: + The **dnsPolicy** field is used to configure a DNS policy for an application. The default value is **ClusterFirst**. The following table lists **dnsPolicy** configurations. -.. _cce_10_0365__table144443315261: + .. _cce_10_0365__table144443315261: -.. table:: **Table 1** dnsPolicy + .. 
table:: **Table 1** dnsPolicy - +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+=======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ - | ClusterFirst (default value) | CCE cluster's CoreDNS, which is cascaded with the cloud DNS by default, is used for workloads. Containers can resolve both the cluster-internal domain names registered by a Service and the external domain names exposed to public networks. The search list (**search** option) and **ndots: 5** are present in the DNS configuration file. Therefore, when accessing an external domain name and a long cluster-internal domain name (for example, kubernetes.default.svc.cluster.local), the search list will usually be traversed first, resulting in at least six invalid DNS queries. The issue of invalid DNS queries disappears only when a short cluster-internal domain name (for example, kubernetes) is being accessed. | - +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | ClusterFirstWithHostNet | By default, the DNS configuration file that the **--resolv-conf** flag points to is configured for workloads running with **hostNetwork=true**, that is, a cloud DNS is used for CCE clusters. If workloads need to use Kube-DNS/CoreDNS of the cluster, set **dnsPolicy** to **ClusterFirstWithHostNet** and container's DNS configuration file is the same as ClusterFirst, in which invalid DNS queries still exist. | - | | | - | | .. code-block:: | - | | | - | | ... 
| - | | spec: | - | | containers: | - | | - image: nginx:latest | - | | imagePullPolicy: IfNotPresent | - | | name: container-1 | - | | restartPolicy: Always | - | | hostNetwork: true | - | | dnsPolicy: ClusterFirstWithHostNet | - +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Default | Container's DNS configuration file is the DNS configuration file that the kubelet's **--resolv-conf** flag points to. In this case, a cloud DNS is used for CCE clusters. Both **search** and **options** fields are left unspecified. This configuration can only resolve the external domain names registered with the Internet, and not cluster-internal domain names. This configuration is free from the issue of invalid DNS queries. | - +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | None | If **dnsPolicy** is set to **None**, the **dnsConfig** field must be specified because all DNS settings are supposed to be provided using the **dnsConfig** field. 
| - +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | ClusterFirst (default value) | Custom DNS configuration added to the default DNS configuration. By default, the application connects to CoreDNS (CoreDNS of the CCE cluster connects to the DNS on the cloud by default). The custom dnsConfig will be added to the default DNS parameters. Containers can resolve both the cluster-internal domain names registered by a Service and the external domain names exposed to public networks. The search list (**search** option) and **ndots: 5** are present in the DNS configuration file. Therefore, when accessing an external domain name and a long cluster-internal domain name (for example, kubernetes.default.svc.cluster.local), the search list will usually be traversed first, resulting in at least six invalid DNS queries. The issue of invalid DNS queries disappears only when a short cluster-internal domain name (for example, kubernetes) is being accessed. 
| + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | ClusterFirstWithHostNet | By default, the applications configured with the :ref:`host network ` are interconnected with the DNS configuration of the node where the pod is located. The DNS configuration is specified in the DNS file that the kubelet **--resolv-conf** parameter points to. In this case, the CCE cluster uses the DNS on the cloud. If workloads need to use Kube-DNS/CoreDNS of the cluster, set **dnsPolicy** to **ClusterFirstWithHostNet** and container's DNS configuration file is the same as ClusterFirst, in which invalid DNS queries still exist. | + | | | + | | .. code-block:: | + | | | + | | ... | + | | spec: | + | | containers: | + | | - image: nginx:latest | + | | imagePullPolicy: IfNotPresent | + | | name: container-1 | + | | restartPolicy: Always | + | | hostNetwork: true | + | | dnsPolicy: ClusterFirstWithHostNet | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Default | The DNS configuration of the node where the pod is located is inherited, and the custom DNS configuration is added to the inherited configuration. Container's DNS configuration file is the DNS configuration file that the kubelet's **--resolv-conf** flag points to. In this case, a cloud DNS is used for CCE clusters. Both **search** and **options** fields are left unspecified. This configuration can only resolve the external domain names registered with the Internet, and not cluster-internal domain names. This configuration is free from the issue of invalid DNS queries. 
| + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | None | The default DNS configuration is replaced by the custom DNS configuration, and only the custom DNS configuration is used. If **dnsPolicy** is set to **None**, the **dnsConfig** field must be specified because all DNS settings are supposed to be provided using the **dnsConfig** field. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -.. note:: + .. note:: - If the **dnsPolicy** field is not specified, the default value is **ClusterFirst** instead of **Default**. + If the **dnsPolicy** field is not specified, the default value is **ClusterFirst** instead of **Default**. -**dnsConfig** +- **dnsConfig** -The **dnsConfig** field is used to configure DNS parameters for workloads. The configured parameters are merged to the DNS configuration file generated according to **dnsPolicy**. If **dnsPolicy** is set to **None**, the workload's DNS configuration file is specified by the **dnsConfig** field. If **dnsPolicy** is not set to **None**, the DNS parameters configured in **dnsConfig** are added to the DNS configuration file generated according to **dnsPolicy**. + The **dnsConfig** field is used to configure DNS parameters for workloads. The configured parameters are merged to the DNS configuration file generated according to **dnsPolicy**. If **dnsPolicy** is set to **None**, the workload's DNS configuration file is specified by the **dnsConfig** field. If **dnsPolicy** is not set to **None**, the DNS parameters configured in **dnsConfig** are added to the DNS configuration file generated according to **dnsPolicy**. -.. _cce_10_0365__table16581121652515: + .. _cce_10_0365__table16581121652515: -.. table:: **Table 2** dnsConfig + .. 
table:: **Table 2** dnsConfig - +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +=============+================================================================================================================================================================================================================================================================================================================================================+ - | options | An optional list of objects where each object may have a name property (required) and a value property (optional). The contents in this property will be merged to the options generated from the specified DNS policy in **dnsPolicy**. Duplicate entries are removed. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | nameservers | A list of IP addresses that will be used as DNS servers. If workload's **dnsPolicy** is set to **None**, the list must contain at least one IP address, otherwise this property is optional. The servers listed will be combined to the nameservers generated from the specified DNS policy in **dnsPolicy** with duplicate addresses removed. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | searches | A list of DNS search domains for hostname lookup in the Pod. This property is optional. When specified, the provided list will be merged into the search domain names generated from the chosen DNS policy in **dnsPolicy**. Duplicate domain names are removed. Kubernetes allows for at most 6 search domains. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -Configuring DNS for a Workload Using the Console ------------------------------------------------- - -Kubernetes provides DNS-related configuration options for applications. The use of application's DNS configuration can effectively reduce unnecessary DNS queries in certain scenarios and improve service concurrency. The following procedure uses an Nginx application as an example to describe how to add DNS configurations for a workload on the console. - -#. Log in to the CCE console, access the cluster console, select **Workloads** in the navigation pane, and click **Create Workload** in the upper right corner. -#. Configure basic information about the workload. For details, see :ref:`Creating a Deployment `. -#. 
In the **Advanced Settings** area, click the **DNS** tab and set the following parameters as required: - - - **DNS Policy**: The DNS policies provided on the console correspond to the **dnsPolicy** field in the YAML file. For details, see :ref:`Table 1 `. - - - **Supplement defaults**: corresponds to **dnsPolicy=ClusterFirst**. Containers can resolve both the cluster-internal domain names registered by a Service and the external domain names exposed to public networks. - - **Replace defaults**: corresponds to **dnsPolicy=None**. You must configure **IP Address** and **Search Domain**. Containers only use the user-defined IP address and search domain configurations for domain name resolution. - - **Inherit defaults**: corresponds to **dnsPolicy=Default**. Containers use the domain name resolution configuration from the node that pods run on and cannot resolve the cluster-internal domain names. - - - **Optional Objects**: The options parameters in the :ref:`dnsConfig field `. Each object may have a name property (required) and a value property (optional). After setting the properties, click **confirm to add**. - - - **timeout**: Timeout interval, in seconds. - - **ndots**: Number of dots (.) that must be present in a domain name. If a domain name has dots fewer than this value, the operating system will look up the name in the search domain. If not, the name is a fully qualified domain name (FQDN) and will be tried first as an absolute name. - - - **IP Address**: **nameservers** in the :ref:`dnsConfig field `. You can configure the domain name server for the custom domain name. The value is one or a group of DNS IP addresses. - - **Search Domain**: **searches** in the :ref:`dnsConfig field `. A list of DNS search domains for hostname lookup in the pod. This property is optional. When specified, the provided list will be merged into the search domain names generated from the chosen DNS policy in **dnsPolicy**. Duplicate domain names are removed. - -#. Click **Create Workload**. + +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +=============+================================================================================================================================================================================================================================================================================================================================================+ + | options | An optional list of objects where each object may have a name property (required) and a value property (optional). The contents in this property will be merged to the options generated from the specified DNS policy in **dnsPolicy**. Duplicate entries are removed. | + +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | nameservers | A list of IP addresses that will be used as DNS servers. 
If workload's **dnsPolicy** is set to **None**, the list must contain at least one IP address, otherwise this property is optional. The servers listed will be combined to the nameservers generated from the specified DNS policy in **dnsPolicy** with duplicate addresses removed. | + +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | searches | A list of DNS search domains for hostname lookup in the Pod. This property is optional. When specified, the provided list will be merged into the search domain names generated from the chosen DNS policy in **dnsPolicy**. Duplicate domain names are removed. Kubernetes allows for at most 6 search domains. | + +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ Configuration Examples ---------------------- @@ -164,7 +164,7 @@ The following example describes how to configure DNS for workloads. **Scenario** - Kubernetes in-cluster Kube-DNS/CoreDNS is applicable to resolving only cluster-internal domain names or cluster-internal domain names + external domain names. This is the default DNS for workloads. + Kubernetes in-cluster Kube-DNS/CoreDNS applies to resolving only cluster-internal domain names or cluster-internal domain names + external domain names. This is the default DNS for workloads. **Example:** @@ -180,6 +180,8 @@ The following example describes how to configure DNS for workloads. - name: test image: nginx:alpine dnsPolicy: ClusterFirst + imagePullSecrets: + - name: default-secret Container's DNS configuration file: @@ -193,7 +195,7 @@ The following example describes how to configure DNS for workloads. **Scenario** - A DNS cannot resolve cluster-internal domain names and therefore is applicable to the scenario where workloads access only external domain names registered with the Internet. + A DNS cannot resolve cluster-internal domain names and therefore applies to the scenario where workloads access only external domain names registered with the Internet. **Example:** @@ -208,7 +210,9 @@ The following example describes how to configure DNS for workloads. containers: - name: test image: nginx:alpine - dnsPolicy: Default//The DNS configuration file that the kubelet's --resolv-conf flag points to is used. In this case, a DNS is used for CCE clusters. + dnsPolicy: Default # The DNS configuration file that the kubelet --resolv-conf parameter points to is used. In this case, the CCE cluster uses the DNS on the cloud. + imagePullSecrets: + - name: default-secret Container's DNS configuration file: @@ -238,6 +242,8 @@ The following example describes how to configure DNS for workloads. image: nginx:alpine ports: - containerPort: 80 + imagePullSecrets: + - name: default-secret Container's DNS configuration file: @@ -271,7 +277,7 @@ The following example describes how to configure DNS for workloads. 
dnsPolicy: "None" dnsConfig: nameservers: - - 10.2.3.4 //IP address of your on-premises DNS + - 10.2.3.4 # IP address of your on-premises DNS searches: - ns1.svc.cluster.local - my.dns.search.suffix @@ -280,6 +286,8 @@ The following example describes how to configure DNS for workloads. value: "2" - name: timeout value: "3" + imagePullSecrets: + - name: default-secret Container's DNS configuration file: @@ -308,7 +316,9 @@ The following example describes how to configure DNS for workloads. dnsConfig: options: - name: ndots - value: "2" //Changes the ndots:5 option in the DNS configuration file generated based on the ClusterFirst policy to ndots:2. + value: "2" # The ndots:5 option in the DNS configuration file generated based on the ClusterFirst policy is changed to ndots:2. + imagePullSecrets: + - name: default-secret Container's DNS configuration file: @@ -317,3 +327,31 @@ The following example describes how to configure DNS for workloads. nameserver 10.247.3.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:2 + + **Example 3: Using Multiple DNSs in Serial Sequence** + + .. code-block:: + + apiVersion: v1 + kind: Pod + metadata: + namespace: default + name: dns-example + spec: + containers: + - name: test + image: nginx:alpine + dnsPolicy: ClusterFirst # Added DNS configuration. The cluster connects to CoreDNS by default. + dnsConfig: + nameservers: + - 10.2.3.4 # IP address of your on-premises DNS + imagePullSecrets: + - name: default-secret + + Container's DNS configuration file: + + .. code-block:: + + nameserver 10.247.3.10 10.2.3.4 + search default.svc.cluster.local svc.cluster.local cluster.local + options ndots:5 diff --git a/umn/source/networking/dns/index.rst b/umn/source/network/dns/index.rst similarity index 100% rename from umn/source/networking/dns/index.rst rename to umn/source/network/dns/index.rst diff --git a/umn/source/network/dns/overview.rst b/umn/source/network/dns/overview.rst new file mode 100644 index 0000000..5f6a8a4 --- /dev/null +++ b/umn/source/network/dns/overview.rst @@ -0,0 +1,94 @@ +:original_name: cce_10_0360.html + +.. _cce_10_0360: + +Overview +======== + +Introduction to CoreDNS +----------------------- + +When you create a cluster, the :ref:`CoreDNS add-on ` is installed to resolve domain names in the cluster. + +You can view the pod of the CoreDNS add-on in the kube-system namespace. + +.. code-block:: + + $ kubectl get po --namespace=kube-system + NAME READY STATUS RESTARTS AGE + coredns-7689f8bdf-295rk 1/1 Running 0 9m11s + coredns-7689f8bdf-h7n68 1/1 Running 0 11m + +After CoreDNS is installed, it becomes a DNS. After the Service is created, CoreDNS records the Service name and IP address. In this way, the pod can obtain the Service IP address by querying the Service name from CoreDNS. + +**nginx..svc.cluster.local** is used to access the Service. **nginx** is the Service name, **** is the namespace, and **svc.cluster.local** is the domain name suffix. In actual use, you can omit **.svc.cluster.local** in the same namespace and use the ServiceName. + +An advantage of using ServiceName is that you can write ServiceName into the program when developing the application. In this way, you do not need to know the IP address of a specific Service. + +After CoreDNS is installed, there is also a Service in the kube-system namespace, as shown below. + +.. 
code-block:: + + $ kubectl get svc -n kube-system + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + coredns ClusterIP 10.247.3.10 53/UDP,53/TCP,8080/TCP 13d + +By default, after other pods are created, the address of the CoreDNS Service is written as the address of the domain name resolution server in the **/etc/resolv.conf** file of the pod. Create a pod and view the **/etc/resolv.conf** file as follows: + +.. code-block:: + + $ kubectl exec test01-6cbbf97b78-krj6h -it -- /bin/sh + / # cat /etc/resolv.conf + nameserver 10.247.3.10 + search default.svc.cluster.local svc.cluster.local cluster.local + options ndots:5 timeout single-request-reopen + +When a user accesses the *Service name:Port* of the Nginx pod, the IP address of the Nginx Service is resolved from CoreDNS, and then the IP address of the Nginx Service is accessed. In this way, the user can access the backend Nginx pod. + + +.. figure:: /_static/images/en-us_image_0000001695896713.png + :alt: **Figure 1** Example of domain name resolution in a cluster + + **Figure 1** Example of domain name resolution in a cluster + +How Does Domain Name Resolution Work in Kubernetes? +--------------------------------------------------- + +DNS policies can be set on a per-pod basis. Currently, Kubernetes supports four types of DNS policies: **Default**, **ClusterFirst**, **ClusterFirstWithHostNet**, and **None**. For details, see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/. These policies are specified in the **dnsPolicy** field in the pod-specific. + +- **Default**: Pods inherit the name resolution configuration from the node that the pods run on. The custom upstream DNS server and the stub domain cannot be used together with this policy. +- **ClusterFirst**: Any DNS query that does not match the configured cluster domain suffix, such as **www.kubernetes.io**, is forwarded to the upstream name server inherited from the node. Cluster administrators may have extra stub domains and upstream DNS servers configured. +- **ClusterFirstWithHostNet**: For pods running with hostNetwork, set its DNS policy **ClusterFirstWithHostNet**. +- **None**: It allows a pod to ignore DNS settings from the Kubernetes environment. All DNS settings are supposed to be provided using the **dnsPolicy** field in the pod-specific. + +.. note:: + + - Clusters of Kubernetes v1.10 and later support **Default**, **ClusterFirst**, **ClusterFirstWithHostNet**, and **None**. Clusters earlier than Kubernetes v1.10 support only **Default**, **ClusterFirst**, and **ClusterFirstWithHostNet**. + - **Default** is not the default DNS policy. If **dnsPolicy** is not explicitly specified, **ClusterFirst** is used. + +**Routing** + +**Without stub domain configurations**: Any query that does not match the configured cluster domain suffix, such as **www.kubernetes.io**, is forwarded to the upstream DNS server inherited from the node. + +**With stub domain configurations**: If stub domains and upstream DNS servers are configured, DNS queries are routed according to the following flow: + +#. The query is first sent to the DNS caching layer in CoreDNS. +#. From the caching layer, the suffix of the request is examined and then the request is forwarded to the corresponding DNS: + + - Names with the cluster suffix, for example, **.cluster.local**: The request is sent to CoreDNS. + + - Names with the stub domain suffix, for example, **.acme.local**: The request is sent to the configured custom DNS resolver that listens, for example, on 1.2.3.4. 
+ - Names that do not match the suffix (for example, **widget.com**): The request is forwarded to the upstream DNS. + + +.. figure:: /_static/images/en-us_image_0000001647576960.png + :alt: **Figure 2** Routing + + **Figure 2** Routing + +Related Operations +------------------ + +You can also configure DNS in a workload. For details, see :ref:`DNS Configuration `. + +You can also use CoreDNS to implement user-defined domain name resolution. For details, see :ref:`Using CoreDNS for Custom Domain Name Resolution `. diff --git a/umn/source/network/dns/using_coredns_for_custom_domain_name_resolution.rst b/umn/source/network/dns/using_coredns_for_custom_domain_name_resolution.rst new file mode 100644 index 0000000..39b6315 --- /dev/null +++ b/umn/source/network/dns/using_coredns_for_custom_domain_name_resolution.rst @@ -0,0 +1,226 @@ +:original_name: cce_10_0361.html + +.. _cce_10_0361: + +Using CoreDNS for Custom Domain Name Resolution +=============================================== + +Challenges +---------- + +When using CCE, you may need to resolve custom internal domain names in the following scenarios: + +- In the legacy code, a fixed domain name is configured for calling other internal services. If the system decides to use Kubernetes Services, the code refactoring workload could be heavy. +- A service is created outside the cluster. Data in the cluster needs to be sent to the service through a fixed domain name. + +Solution +-------- + +There are several CoreDNS-based solutions for custom domain name resolution: + +- :ref:`Configuring the Stub Domain for CoreDNS `: You can add it on the console, which is easy to operate. +- :ref:`Using the CoreDNS Hosts plug-in to configure resolution for any domain name `: You can add any record set, which is similar to adding a record set in the local **/etc/hosts** file. +- :ref:`Using the CoreDNS Rewrite plug-in to point a domain name to a service in the cluster `: A nickname is assigned to the Kubernetes Service. You do not need to know the IP address of the resolution record in advance. +- :ref:`Using the CoreDNS Forward plug-in to set the self-built DNS as the upstream DNS `: The self-built DNS can manage a large number of resolution records. You do not need to modify the CoreDNS configuration when adding or deleting records. + +Precautions +----------- + +Improper modification on CoreDNS configuration may cause domain name resolution failures in the cluster. Perform tests before and after the modification. + +.. _cce_10_0361__section5202157467: + +Configuring the Stub Domain for CoreDNS +--------------------------------------- + +Cluster administrators can modify the ConfigMap for the CoreDNS Corefile to change how service discovery works. + +Assume that a cluster administrator has a Consul DNS server located at 10.150.0.1 and all Consul domain names have the suffix **.consul.local**. + +#. Log in to the CCE console and click the cluster name to access the cluster console. + +#. In the navigation pane, choose **Add-ons**. On the displayed page, click **Edit** under **CoreDNS**. + +#. Add a stub domain in the **Parameters** area. The format is a key-value pair. The key is a DNS suffix domain name, and the value is a DNS IP address or a group of DNS IP addresses, for example, **consul.local --10.XXX.XXX.XXX**. + +#. Click **OK**. + +#. Choose **ConfigMaps and Secrets** in the navigation pane, select the **kube-system** namespace, and view the ConfigMap data of CoreDNS to check whether the update is successful. 
+ + The corresponding Corefile content is as follows: + + .. code-block:: + + .:5353 { + bind {$POD_IP} + cache 30 + errors + health {$POD_IP}:8080 + kubernetes cluster.local in-addr.arpa ip6.arpa { + pods insecure + fallthrough in-addr.arpa ip6.arpa + } + loadbalance round_robin + prometheus {$POD_IP}:9153 + forward . /etc/resolv.conf { + policy random + } + reload + ready {$POD_IP}:8081 + } + consul.local:5353 { + bind {$POD_IP} + errors + cache 30 + forward . 10.150.0.1 + } + +.. _cce_10_0361__section106211954135311: + +Modifying the CoreDNS Hosts Configuration File +---------------------------------------------- + +After modifying the hosts file in CoreDNS, you do not need to configure the hosts file in each pod to add resolution records. + +#. Log in to the CCE console and click the cluster name to access the cluster console. + +#. In the navigation pane, choose **Add-ons**. On the displayed page, click **Edit** under **CoreDNS**. + +#. Edit the advanced configuration under **Parameters** and add the following content to the **plugins** field: + + .. code-block:: + + { + "configBlock": "192.168.1.1 www.example.com\nfallthrough", + "name": "hosts" + } + + .. important:: + + The **fallthrough** field must be configured. **fallthrough** indicates that when the domain name to be resolved cannot be found in the hosts file, the resolution task is transferred to the next CoreDNS plug-in. If **fallthrough** is not specified, the task ends and the domain name resolution stops. As a result, the domain name resolution in the cluster fails. + + For details about how to configure the hosts file, visit https://coredns.io/plugins/hosts/. + +#. Click **OK**. + +#. Choose **ConfigMaps and Secrets** in the navigation pane, select the **kube-system** namespace, and view the ConfigMap data of CoreDNS to check whether the update is successful. + + The corresponding Corefile content is as follows: + + .. code-block:: + + .:5353 { + bind {$POD_IP} + hosts { + 192.168.1.1 www.example.com + fallthrough + } + cache 30 + errors + health {$POD_IP}:8080 + kubernetes cluster.local in-addr.arpa ip6.arpa { + pods insecure + fallthrough in-addr.arpa ip6.arpa + } + loadbalance round_robin + prometheus {$POD_IP}:9153 + forward . /etc/resolv.conf { + policy random + } + reload + ready {$POD_IP}:8081 + } + +.. _cce_10_0361__section2213823544: + +Adding the CoreDNS Rewrite Configuration to Point the Domain Name to Services in the Cluster +-------------------------------------------------------------------------------------------- + +Use the Rewrite plug-in of CoreDNS to resolve a specified domain name to the domain name of a Service. For example, the request for accessing the example.com domain name is redirected to the example.default.svc.cluster.local domain name, that is, the example service in the default namespace. + +#. Log in to the CCE console and click the cluster name to access the cluster console. + +#. In the navigation pane, choose **Add-ons**. On the displayed page, click **Edit** under **CoreDNS**. + +#. Edit the advanced configuration under **Parameters** and add the following content to the **plugins** field: + + .. code-block:: + + { + "name": "rewrite", + "parameters": "name example.com example.default.svc.cluster.local" + } + +#. Click **OK**. + +#. Choose **ConfigMaps and Secrets** in the navigation pane, select the **kube-system** namespace, and view the ConfigMap data of CoreDNS to check whether the update is successful. + + Corresponding Corefile content: + + .. 
code-block:: + + .:5353 { + bind {$POD_IP} + rewrite name example.com example.default.svc.cluster.local + cache 30 + errors + health {$POD_IP}:8080 + kubernetes cluster.local in-addr.arpa ip6.arpa { + pods insecure + fallthrough in-addr.arpa ip6.arpa + } + loadbalance round_robin + prometheus {$POD_IP}:9153 + forward . /etc/resolv.conf { + policy random + } + reload + ready {$POD_IP}:8081 + } + +.. _cce_10_0361__section677819913541: + +Using CoreDNS to Cascade Self-Built DNS +--------------------------------------- + +By default, CoreDNS uses the **/etc/resolv.conf** file of the node for resolution. You can also change the resolution address to that of the external DNS. + +#. Log in to the CCE console and click the cluster name to access the cluster console. + +#. In the navigation pane, choose **Add-ons**. On the displayed page, click **Edit** under **CoreDNS**. + +#. Edit the advanced configuration under **Parameters** and modify the following content in the **plugins** field: + + .. code-block:: + + { + "configBlock": "policy random", + "name": "forward", + "parameters": ". 192.168.1.1" + } + +#. Click **OK**. + +#. Choose **ConfigMaps and Secrets** in the navigation pane, select the **kube-system** namespace, and view the ConfigMap data of CoreDNS to check whether the update is successful. + + The corresponding Corefile content is as follows: + + .. code-block:: + + .:5353 { + bind {$POD_IP} + cache 30 + errors + health {$POD_IP}:8080 + kubernetes cluster.local in-addr.arpa ip6.arpa { + pods insecure + fallthrough in-addr.arpa ip6.arpa + } + loadbalance round_robin + prometheus {$POD_IP}:9153 + forward . 192.168.1.1 { + policy random + } + reload + ready {$POD_IP}:8081 + } diff --git a/umn/source/networking/index.rst b/umn/source/network/index.rst similarity index 68% rename from umn/source/networking/index.rst rename to umn/source/network/index.rst index b8697dc..dfe79e1 100644 --- a/umn/source/networking/index.rst +++ b/umn/source/network/index.rst @@ -2,18 +2,18 @@ .. _cce_10_0020: -Networking -========== +Network +======= - :ref:`Overview ` - :ref:`Container Network Models ` -- :ref:`Services ` +- :ref:`Service ` - :ref:`Ingresses ` - :ref:`DNS ` +- :ref:`Container Network Settings ` +- :ref:`Cluster Network Settings ` - :ref:`Configuring Intra-VPC Access ` - :ref:`Accessing Public Networks from a Container ` -- :ref:`Network Policies ` -- :ref:`Host Network ` .. toctree:: :maxdepth: 1 @@ -21,10 +21,10 @@ Networking overview container_network_models/index - services/index + service/index ingresses/index dns/index + container_network_settings/index + cluster_network_settings/index configuring_intra-vpc_access accessing_public_networks_from_a_container - network_policies - host_network diff --git a/umn/source/network/ingresses/elb_ingresses/configuring_elb_ingresses_using_annotations.rst b/umn/source/network/ingresses/elb_ingresses/configuring_elb_ingresses_using_annotations.rst new file mode 100644 index 0000000..838a673 --- /dev/null +++ b/umn/source/network/ingresses/elb_ingresses/configuring_elb_ingresses_using_annotations.rst @@ -0,0 +1,194 @@ +:original_name: cce_10_0695.html + +.. _cce_10_0695: + +Configuring ELB Ingresses Using Annotations +=========================================== + +By adding annotations to a YAML file, you can implement more advanced ingress functions. This section describes the annotations that can be used when you create an ingress of the ELB type. 
+ +- :ref:`Interconnecting with ELB ` +- :ref:`Using HTTP/2 ` +- :ref:`Interconnecting with HTTPS Backend Services ` + +.. _cce_10_0695__section7819047102916: + +Interconnecting with ELB +------------------------ + +.. table:: **Table 1** Annotations for interconnecting with ELB + + +------------------------------+-----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------+ + | Parameter | Type | Description | Supported Cluster Version | + +==============================+===========================================================+=========================================================================================================================================================================================================+================================================+ + | kubernetes.io/elb.class | String | Select a proper load balancer type. | v1.9 or later | + | | | | | + | | | The value can be: | | + | | | | | + | | | - **union**: shared load balancer | | + | | | - **performance**: dedicated load balancer, which can be used only in clusters of v1.17 and later. | | + +------------------------------+-----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------+ + | kubernetes.io/ingress.class | String | - **cce**: The self-developed ELB ingress is used. | Only clusters of v1.21 or earlier | + | | | - **nginx**: Nginx ingress is used. | | + | | | | | + | | | This parameter is mandatory when an ingress is created by calling the API. | | + | | | | | + | | | For clusters of v1.23 or later, use the parameter **ingressClassName**. For details, see :ref:`Using kubectl to Create an ELB Ingress `. | | + +------------------------------+-----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------+ + | kubernetes.io/elb.port | Integer | This parameter indicates the external port registered with the address of the LoadBalancer Service. | v1.9 or later | + | | | | | + | | | Supported range: 1 to 65535 | | + | | | | | + | | | .. note:: | | + | | | | | + | | | Some ports are high-risk ports and are blocked by default, for example, port 21. | | + +------------------------------+-----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------+ + | kubernetes.io/elb.id | String | Mandatory **when an existing load balancer is to be interconnected**. | v1.9 or later | + | | | | | + | | | ID of a load balancer. | | + | | | | | + | | | **How to obtain**: | | + | | | | | + | | | On the management console, click **Service List**, and choose **Networking** > **Elastic Load Balance**. Click the name of the target load balancer. 
On the **Summary** tab page, find and copy the ID. | | + +------------------------------+-----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------+ + | kubernetes.io/elb.ip | String | Mandatory **when an existing load balancer is to be interconnected**. | v1.9 or later | + | | | | | + | | | This parameter indicates the service address of a load balancer. The value can be the public IP address of a public network load balancer or the private IP address of a private network load balancer. | | + +------------------------------+-----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------+ + | kubernetes.io/elb.autocreate | :ref:`Table 4 ` Object | Mandatory **when load balancers are automatically created**. | v1.9 or later | + | | | | | + | | | **Example** | | + | | | | | + | | | - If a public network load balancer will be automatically created, set this parameter to the following value: | | + | | | | | + | | | '{"type":"public","bandwidth_name":"cce-bandwidth-1551163379627","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"}' | | + | | | | | + | | | - If a private network load balancer will be automatically created, set this parameter to the following value: | | + | | | | | + | | | {"type":"inner","name":"A-location-d-test"} | | + +------------------------------+-----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------+ + | kubernetes.io/elb.subnet-id | String | Optional **when load balancers are automatically created**. | Mandatory for clusters earlier than v1.11.7-r0 | + | | | | | + | | | ID of the subnet where the cluster is located. The value can contain 1 to 100 characters. | Discarded in clusters later than v1.11.7-r0 | + | | | | | + | | | - Mandatory when a cluster of v1.11.7-r0 or earlier is to be automatically created. | | + | | | - Optional for clusters later than v1.11.7-r0. | | + +------------------------------+-----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------+ + +To use the preceding annotations, perform the following steps: + +- See :ref:`Creating an Ingress - Interconnecting with an Existing Load Balancer ` to interconnect an existing load balancer. +- See :ref:`Creating an Ingress - Automatically Creating a Load Balancer ` to automatically create a load balancer. + +.. _cce_10_0695__section17893312104519: + +Using HTTP/2 +------------ + +.. 
table:: **Table 2** Annotations of using HTTP/2 + + +--------------------------------+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------+ + | Parameter | Type | Description | Supported Cluster Version | + +================================+=================+=========================================================================================================================================================================================================================================================================================================================+===================================+ + | kubernetes.io/elb.http2-enable | String | Whether HTTP/2 is enabled. Request forwarding using HTTP/2 improves the access performance between your application and the load balancer. However, the load balancer still uses HTTP 1.X to forward requests to the backend server. **This parameter is supported in clusters of v1.19.16-r0, v1.21.3-r0, and later.** | v1.19.16-r0, v1.21.3-r0, or later | + | | | | | + | | | Options: | | + | | | | | + | | | - **true**: enabled | | + | | | - **false**: disabled (default value) | | + | | | | | + | | | Note: **HTTP/2 can be enabled or disabled only when the listener uses HTTPS.** This parameter is invalid and defaults to **false** when the listener protocol is HTTP. | | + +--------------------------------+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------+ + +For details about the application scenarios, see :ref:`ELB Ingresses Using HTTP/2 `. + +.. _cce_10_0695__section03391948464: + +Interconnecting with HTTPS Backend Services +------------------------------------------- + +.. table:: **Table 3** Annotations for interconnecting with HTTPS backend services + + +---------------------------------+--------+-------------------------------------------------------------------------------+----------------------------+ + | Parameter | Type | Description | Supported Cluster Version | + +=================================+========+===============================================================================+============================+ + | kubernetes.io/elb.pool-protocol | String | To interconnect with HTTPS backend services, set this parameter to **https**. | v1.23.8, v1.25.3, or later | + +---------------------------------+--------+-------------------------------------------------------------------------------+----------------------------+ + +For details about the application scenarios, see :ref:`Interconnecting ELB Ingresses with HTTPS Backend Services `. + +Data Structure +-------------- + +.. _cce_10_0695__table148341447193017: + +.. 
table:: **Table 4** Data structure of the **elb.autocreate** field + + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Type | Description | + +======================+=======================================+==================+==================================================================================================================================================================================================================================================================================================================================================================================+ + | name | No | String | Name of the automatically created load balancer. | + | | | | | + | | | | The value can contain 1 to 64 characters. Only letters, digits, underscores (_), hyphens (-), and periods (.) are allowed. | + | | | | | + | | | | Default: **cce-lb+service.UID** | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | type | No | String | Network type of the load balancer. | + | | | | | + | | | | - **public**: public network load balancer | + | | | | - **inner**: private network load balancer | + | | | | | + | | | | Default: **inner** | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_name | Yes for public network load balancers | String | Bandwidth name. The default value is **cce-bandwidth-*****\***. | + | | | | | + | | | | The value can contain 1 to 64 characters. Only letters, digits, underscores (_), hyphens (-), and periods (.) are allowed. | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_chargemode | No | String | Bandwidth mode. 
| + | | | | | + | | | | - **bandwidth**: billed by bandwidth | + | | | | - **traffic**: billed by traffic | + | | | | | + | | | | Default: **bandwidth** | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_size | Yes for public network load balancers | Integer | Bandwidth size. The default value is 1 to 2000 Mbit/s. Configure this parameter based on the bandwidth range allowed in your region. | + | | | | | + | | | | The minimum increment for bandwidth adjustment varies depending on the bandwidth range. | + | | | | | + | | | | - The minimum increment is 1 Mbit/s if the allowed bandwidth does not exceed 300 Mbit/s. | + | | | | - The minimum increment is 50 Mbit/s if the allowed bandwidth ranges from 300 Mbit/s to 1000 Mbit/s. | + | | | | - The minimum increment is 500 Mbit/s if the allowed bandwidth exceeds 1000 Mbit/s. | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_sharetype | Yes for public network load balancers | String | Bandwidth sharing mode. | + | | | | | + | | | | - **PER**: dedicated bandwidth | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | eip_type | Yes for public network load balancers | String | EIP type. | + | | | | | + | | | | - **5_bgp**: dynamic BGP | + | | | | - **5_sbgp**: static BGP | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | available_zone | Yes | Array of strings | AZ where the load balancer is located. | + | | | | | + | | | | This parameter is available only for dedicated load balancers. 
| + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | l4_flavor_name | Yes | String | Flavor name of the layer-4 load balancer. | + | | | | | + | | | | This parameter is available only for dedicated load balancers. | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | l7_flavor_name | No | String | Flavor name of the layer-7 load balancer. | + | | | | | + | | | | This parameter is available only for dedicated load balancers. The value of this parameter must be the same as that of **l4_flavor_name**, that is, both are elastic specifications or fixed specifications. | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | elb_virsubnet_ids | No | Array of strings | Subnet where the backend server of the load balancer is located. If this parameter is left blank, the default cluster subnet is used. Load balancers occupy different number of subnet IP addresses based on their specifications. Therefore, you are not advised to use the subnet CIDR blocks of other resources (such as clusters and nodes) as the load balancer CIDR block. | + | | | | | + | | | | This parameter is available only for dedicated load balancers. | + | | | | | + | | | | Example: | + | | | | | + | | | | .. code-block:: | + | | | | | + | | | | "elb_virsubnet_ids": [ | + | | | | "14567f27-8ae4-42b8-ae47-9f847a4690dd" | + | | | | ] | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/network/ingresses/elb_ingresses/configuring_https_certificates_for_elb_ingresses.rst b/umn/source/network/ingresses/elb_ingresses/configuring_https_certificates_for_elb_ingresses.rst new file mode 100644 index 0000000..040dd61 --- /dev/null +++ b/umn/source/network/ingresses/elb_ingresses/configuring_https_certificates_for_elb_ingresses.rst @@ -0,0 +1,231 @@ +:original_name: cce_10_0687.html + +.. 
_cce_10_0687: + +Configuring HTTPS Certificates for ELB Ingresses +================================================ + +Ingress supports TLS certificate configuration and secures your Services with HTTPS. + +Currently, you can use the TLS secret certificate configured in the cluster and the ELB certificate. + +.. note:: + + If HTTPS is enabled for the same port of the same load balancer of multiple ingresses, you must select the same certificate. + +Using a TLS Secret Certificate +------------------------------ + +#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. + +#. Ingress supports two TLS secret types: kubernetes.io/tls and IngressTLS. IngressTLS is used as an example. For details, see :ref:`Creating a Secret `. For details about examples of the kubernetes.io/tls secret and its description, see `TLS Secret `__. + + Run the following command to create a YAML file named **ingress-test-secret.yaml** (the file name can be customized): + + **vi ingress-test-secret.yaml** + + **The YAML file is configured as follows:** + + .. code-block:: + + apiVersion: v1 + data: + tls.crt: LS0******tLS0tCg== + tls.key: LS0tL******0tLS0K + kind: Secret + metadata: + annotations: + description: test for ingressTLS secrets + name: ingress-test-secret + namespace: default + type: IngressTLS + + .. note:: + + In the preceding information, **tls.crt** and **tls.key** are only examples. Replace them with the actual files. The values of **tls.crt** and **tls.key** are Base64-encoded. + +#. Create a secret. + + **kubectl create -f ingress-test-secret.yaml** + + If information similar to the following is displayed, the secret is being created: + + .. code-block:: + + secret/ingress-test-secret created + + View the created secret. + + **kubectl get secrets** + + If information similar to the following is displayed, the secret has been created: + + .. code-block:: + + NAME TYPE DATA AGE + ingress-test-secret IngressTLS 2 13s + +#. Create a YAML file named **ingress-test.yaml**. The file name can be customized. + + **vi ingress-test.yaml** + + .. note:: + + Default security policy (kubernetes.io/elb.tls-ciphers-policy) is supported only in clusters of v1.17.17 or later. + + **The following uses the automatically created load balancer as an example. The YAML file is configured as follows:** + + **For clusters of v1.21 or earlier:** + + .. code-block:: + + apiVersion: networking.k8s.io/v1beta1 + kind: Ingress + metadata: + name: ingress-test + annotations: + kubernetes.io/elb.class: performance + kubernetes.io/ingress.class: cce + kubernetes.io/elb.port: '443' + kubernetes.io/elb.autocreate: + '{ + "type": "public", + "bandwidth_name": "cce-bandwidth-******", + "bandwidth_chargemode": "bandwidth", + "bandwidth_size": 5, + "bandwidth_sharetype": "PER", + "eip_type": "5_bgp", + "available_zone": [ + "eu-de-01" + ], + "elb_virsubnet_ids":["b4bf8152-6c36-4c3b-9f74-2229f8e640c9"], + "l7_flavor_name": "L7_flavor.elb.s1.small" + }' + kubernetes.io/elb.tls-ciphers-policy: tls-1-2 + spec: + tls: + - secretName: ingress-test-secret + rules: + - host: foo.bar.com + http: + paths: + - path: '/' + backend: + serviceName: # Replace it with the name of your target Service. + servicePort: 80 + property: + ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH + + **For clusters of v1.23 or later:** + + .. 
code-block:: + + apiVersion: networking.k8s.io/v1 + kind: Ingress + metadata: + name: ingress-test + annotations: + kubernetes.io/elb.class: performance + kubernetes.io/elb.port: '443' + kubernetes.io/elb.autocreate: + '{ + "type": "public", + "bandwidth_name": "cce-bandwidth-******", + "bandwidth_chargemode": "bandwidth", + "bandwidth_size": 5, + "bandwidth_sharetype": "PER", + "eip_type": "5_bgp", + "available_zone": [ + "eu-de-01" + ], + "elb_virsubnet_ids":["b4bf8152-6c36-4c3b-9f74-2229f8e640c9"], + "l7_flavor_name": "L7_flavor.elb.s1.small" + }' + kubernetes.io/elb.tls-ciphers-policy: tls-1-2 + spec: + tls: + - secretName: ingress-test-secret + rules: + - host: foo.bar.com + http: + paths: + - path: '/' + backend: + service: + name: # Replace it with the name of your target Service. + port: + number: 8080 # Replace 8080 with the port number of your target Service. + property: + ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH + pathType: ImplementationSpecific + ingressClassName: cce + + .. table:: **Table 1** Key parameters + + +--------------------------------------+-----------------+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Type | Description | + +======================================+=================+==================+========================================================================================================================================================================+ + | kubernetes.io/elb.tls-ciphers-policy | No | String | The default value is **tls-1-2**, which is the default security policy used by the listener and takes effect only when HTTPS is used. | + | | | | | + | | | | Options: | + | | | | | + | | | | - tls-1-0 | + | | | | - tls-1-1 | + | | | | - tls-1-2 | + | | | | - tls-1-2-strict | + | | | | | + | | | | For details of cipher suites for each security policy, see :ref:`Table 2 `. | + +--------------------------------------+-----------------+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | tls | No | Array of strings | When HTTPS is used, this parameter must be added to specify the secret certificate. | + | | | | | + | | | | Multiple independent domain names and certificates can be added. For details, see :ref:`Configuring the Server Name Indication (SNI) for ELB Ingresses `. | + +--------------------------------------+-----------------+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | secretName | No | String | This parameter is mandatory if HTTPS is used. Set this parameter to the name of the created secret. | + +--------------------------------------+-----------------+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + .. _cce_10_0687__table9419191416246: + + .. 
table:: **Table 2** **tls_ciphers_policy** parameter description + + +-----------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Security Policy | TLS Version | Cipher Suite | + +=======================+=======================+=======================================================================================================================================================================================================================================================================================================================================================================================================+ + | tls-1-0 | TLS 1.2 | ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES128-SHA256:AES256-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-SHA:AES256-SHA | + | | | | + | | TLS 1.1 | | + | | | | + | | TLS 1.0 | | + +-----------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | tls-1-1 | TLS 1.2 | | + | | | | + | | TLS 1.1 | | + +-----------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | tls-1-2 | TLS 1.2 | | + +-----------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | tls-1-2-strict | TLS 1.2 | ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES128-SHA256:AES256-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384 | + 
+-----------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. Create an ingress. + + **kubectl create -f ingress-test.yaml** + + If information similar to the following is displayed, the ingress has been created. + + .. code-block:: + + ingress/ingress-test created + + View the created ingress. + + **kubectl get ingress** + + If information similar to the following is displayed, the ingress has been created and the workload is accessible. + + .. code-block:: + + NAME HOSTS ADDRESS PORTS AGE + ingress-test * 121.**.**.** 80 10s + +#. Enter **https://121.**.**.*\*:443** in the address box of the browser to access the workload (for example, :ref:`Nginx workload `). + + **121.**.**.*\*** indicates the IP address of the unified load balancer. diff --git a/umn/source/network/ingresses/elb_ingresses/configuring_the_server_name_indication_sni_for_elb_ingresses.rst b/umn/source/network/ingresses/elb_ingresses/configuring_the_server_name_indication_sni_for_elb_ingresses.rst new file mode 100644 index 0000000..bcca3ef --- /dev/null +++ b/umn/source/network/ingresses/elb_ingresses/configuring_the_server_name_indication_sni_for_elb_ingresses.rst @@ -0,0 +1,115 @@ +:original_name: cce_10_0688.html + +.. _cce_10_0688: + +Configuring the Server Name Indication (SNI) for ELB Ingresses +============================================================== + +SNI allows multiple TLS-based access domain names to be provided for external systems using the same IP address and port number. Different domain names can use different security certificates. + +.. note:: + + - This function is supported only in clusters of v1.15.11 and later. + - The **SNI** option is available only when HTTPS is used. + + - Only one domain name can be specified for each SNI certificate. Wildcard-domain certificates are supported. + - Security policy (kubernetes.io/elb.tls-ciphers-policy) is supported only in clusters of v1.17.11 or later. + +You can enable SNI when the preceding conditions are met. The following uses the automatic creation of a load balancer as an example. In this example, **sni-test-secret-1** and **sni-test-secret-2** are SNI certificates. The domain names specified by the certificates must be the same as those in the certificates. + +**For clusters of v1.21 or earlier:** + +.. 
code-block:: + + apiVersion: networking.k8s.io/v1beta1 + kind: Ingress + metadata: + name: ingress-test + annotations: + kubernetes.io/elb.class: performance + kubernetes.io/ingress.class: cce + kubernetes.io/elb.port: '443' + kubernetes.io/elb.autocreate: + '{ + "type": "public", + "bandwidth_name": "cce-bandwidth-******", + "bandwidth_chargemode": "bandwidth", + "bandwidth_size": 5, + "bandwidth_sharetype": "PER", + "eip_type": "5_bgp", + "available_zone": [ + "eu-de-01" + ], + "elb_virsubnet_ids":["b4bf8152-6c36-4c3b-9f74-2229f8e640c9"], + "l7_flavor_name": "L7_flavor.elb.s1.small" + }' + kubernetes.io/elb.tls-ciphers-policy: tls-1-2 + spec: + tls: + - secretName: ingress-test-secret + - hosts: + - example.top # Domain name specified when a certificate is issued + secretName: sni-test-secret-1 + - hosts: + - example.com # Domain name specified when a certificate is issued + secretName: sni-test-secret-2 + rules: + - host: example.com + http: + paths: + - path: '/' + backend: + serviceName: # Replace it with the name of your target Service. + servicePort: 80 + property: + ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH + +**For clusters of v1.23 or later:** + +.. code-block:: + + apiVersion: networking.k8s.io/v1 + kind: Ingress + metadata: + name: ingress-test + annotations: + kubernetes.io/elb.class: performance + kubernetes.io/elb.port: '443' + kubernetes.io/elb.autocreate: + '{ + "type": "public", + "bandwidth_name": "cce-bandwidth-******", + "bandwidth_chargemode": "bandwidth", + "bandwidth_size": 5, + "bandwidth_sharetype": "PER", + "eip_type": "5_bgp", + "available_zone": [ + "eu-de-01" + ], + "elb_virsubnet_ids":["b4bf8152-6c36-4c3b-9f74-2229f8e640c9"], + "l7_flavor_name": "L7_flavor.elb.s1.small" + }' + kubernetes.io/elb.tls-ciphers-policy: tls-1-2 + spec: + tls: + - secretName: ingress-test-secret + - hosts: + - example.top # Domain name specified when a certificate is issued + secretName: sni-test-secret-1 + - hosts: + - example.com # Domain name specified when a certificate is issued + secretName: sni-test-secret-2 + rules: + - host: example.com + http: + paths: + - path: '/' + backend: + service: + name: # Replace it with the name of your target Service. + port: + number: 8080 # Replace 8080 with the port number of your target Service. + property: + ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH + pathType: ImplementationSpecific + ingressClassName: cce diff --git a/umn/source/network/ingresses/elb_ingresses/creating_an_elb_ingress_on_the_console.rst b/umn/source/network/ingresses/elb_ingresses/creating_an_elb_ingress_on_the_console.rst new file mode 100644 index 0000000..0a4505a --- /dev/null +++ b/umn/source/network/ingresses/elb_ingresses/creating_an_elb_ingress_on_the_console.rst @@ -0,0 +1,165 @@ +:original_name: cce_10_0251.html + +.. _cce_10_0251: + +Creating an ELB Ingress on the Console +====================================== + +Prerequisites +------------- + +- An ingress provides network access for backend workloads. Ensure that a workload is available in a cluster. If no workload is available, deploy a workload by referring to :ref:`Creating a Deployment `, :ref:`Creating a StatefulSet `, or :ref:`Creating a DaemonSet `. +- :ref:`Services Supported by Ingresses ` lists the Service types supported by ELB ingresses. + +Precautions +----------- + +- It is recommended that other resources not use the load balancer automatically created by an ingress. 
Otherwise, the load balancer will still be occupied when the ingress is deleted, resulting in residual resources.
+- After an ingress is created, upgrade and maintain the configuration of the selected load balancers on the CCE console. Do not modify the configuration on the ELB console. Otherwise, the ingress service may be abnormal.
+- The URL registered in an ingress forwarding policy must be the same as the URL used to access the backend Service. Otherwise, a 404 error will be returned.
+- In a cluster using the IPVS proxy mode, if the ingress and Service use the same ELB load balancer, the ingress cannot be accessed from the nodes and containers in the cluster because kube-proxy mounts the LoadBalancer Service address to the ipvs-0 bridge. This bridge intercepts the traffic of the load balancer connected to the ingress. You are advised to use different ELB load balancers for the ingress and Service.
+- Dedicated load balancers must be of the application type (HTTP/HTTPS) and support private networks (with a private IP address).
+- If multiple ingresses are used to connect to the same ELB port in the same cluster, the listener configuration items (such as the certificate associated with the listener and the HTTP/2 attribute of the listener) are subject to the configuration of the first ingress.
+
+Adding an ELB Ingress
+---------------------
+
+This section uses an Nginx workload as an example to describe how to add an ELB ingress.
+
+#. Log in to the CCE console and click the cluster name to access the cluster console.
+
+#. Choose **Networking** in the navigation pane, click the **Ingresses** tab, and click **Create Ingress** in the upper right corner.
+
+#. Configure ingress parameters.
+
+   - **Name**: specifies the name of the ingress, for example, **ingress-demo**.
+
+   - **Load Balancer**
+
+     Select the load balancer to interconnect. Only load balancers in the same VPC as the cluster are supported. If no load balancer is available, click **Create Load Balancer** to create one on the ELB console.
+
+     Dedicated load balancers must support HTTP or HTTPS, and their network type must support private networks.
+
+     The CCE console supports automatic creation of load balancers. Select **Auto create** from the drop-down list box and configure the following parameters:
+
+     - **Instance Name**: Enter a load balancer name.
+     - **Public Access**: If enabled, an EIP with 5 Mbit/s bandwidth will be created.
+     - **Subnet**, **AZ**, and **Specifications** (available only for dedicated load balancers): Configure the subnet, AZ, and specifications. Only HTTP- or HTTPS-compliant dedicated load balancers can be automatically created.
+
+   - **Listener**: Ingress configures a listener for the load balancer, which listens for requests sent to the load balancer and distributes traffic. After the configuration is complete, a listener is created on the load balancer. The default listener name is in the format of *k8s_<Protocol>_<Port>*, for example, *k8s_HTTP_80*.
+
+     - **External Protocol**: **HTTP** and **HTTPS** are available.
+
+     - **External Port**: Port number that is open to the ELB service address. Any port number can be used.
+
+     - **Server Certificate**: When an HTTPS listener is created for a load balancer, bind a certificate to the load balancer to support encrypted authentication for HTTPS data transmission.
+
+       .. note::
+
+          If there is already an HTTPS ingress for the chosen port on the load balancer, the certificate of the new HTTPS ingress must be the same as the certificate of the existing ingress.
This means that a listener has only one certificate. If two certificates, each with a different ingress, are added to the same listener of the same load balancer, only the certificate added earliest takes effect on the load balancer. + + - **SNI**: Server Name Indication (SNI) is an extended protocol of TLS. It allows multiple TLS-based access domain names to be provided for external systems using the same IP address and port. Different domain names can use different security certificates. After SNI is enabled, the client is allowed to submit the requested domain name when initiating a TLS handshake request. After receiving the TLS request, the load balancer searches for the certificate based on the domain name in the request. If the certificate corresponding to the domain name is found, the load balancer returns the certificate for authorization. Otherwise, the default certificate (server certificate) is returned for authorization. + + .. note:: + + - The **SNI** option is available only when **HTTPS** is selected. + + - This function is supported only for clusters of v1.15.11 and later. + - Specify the domain name for the SNI certificate. Only one domain name can be specified for each certificate. Wildcard-domain certificates are supported. + + - **Security Policy**: combinations of different TLS versions and supported cipher suites available to HTTPS listeners. + + For details about security policies, see ELB User Guide. + + .. note:: + + - **Security Policy** is available only when **HTTPS** is selected. + - This function is supported only for clusters of v1.17.9 and later. + + - **Forwarding Policy**: When the access address of a request matches the forwarding policy (a forwarding policy consists of a domain name and URL, for example, *10.XXX.XXX.XXX:80/helloworld*), the request is forwarded to the corresponding Service for processing. You can click |image1| to add multiple forwarding policies. + + - **Domain Name**: actual domain name. Ensure that the domain name has been registered and archived. Once a domain name rule is configured, you must use the domain name for access. + - URL Matching Rule + + - **Prefix match**: If the URL is set to **/healthz**, the URL that meets the prefix can be accessed. For example, **/healthz/v1** and **/healthz/v2**. + - **Exact match**: The URL can be accessed only when it is fully matched. For example, if the URL is set to **/healthz**, only /healthz can be accessed. + - **Regular expression**: The URL is matched based on the regular expression. For example, if the regular expression is **/[A-Za-z0-9_.-]+/test**, all URLs that comply with this rule can be accessed, for example, **/abcA9/test** and **/v1-Ab/test**. Two regular expression standards are supported: POSIX and Perl. + + - **URL**: access path to be registered, for example, **/healthz**. + + .. note:: + + The access path added here must exist in the backend application. Otherwise, the forwarding fails. + + For example, the default access URL of the Nginx application is **/usr/share/nginx/html**. When adding **/test** to the ingress forwarding policy, ensure that your Nginx application contains the same URL, that is, **/usr/share/nginx/html/test**, otherwise, 404 is returned. + + - **Destination Service**: Select an existing Service or create a Service. Services that do not meet search criteria are automatically filtered out. + - **Destination Service Port**: Select the access port of the destination Service. + - **Set ELB**: + + - .. 
_cce_10_0251__li8170555132211: + + **Algorithm**: Three algorithms are available: weighted round robin, weighted least connections algorithm, or source IP hash. + + .. note:: + + - **Weighted round robin**: Requests are forwarded to different servers based on their weights, which indicate server processing performance. Backend servers with higher weights receive proportionately more requests, whereas equal-weighted servers receive the same number of requests. This algorithm is often used for short connections, such as HTTP services. + - **Weighted least connections**: In addition to the weight assigned to each server, the number of connections processed by each backend server is also considered. Requests are forwarded to the server with the lowest connections-to-weight ratio. Building on **least connections**, the **weighted least connections** algorithm assigns a weight to each server based on their processing capability. This algorithm is often used for persistent connections, such as database connections. + - **Source IP hash**: The source IP address of each request is calculated using the hash algorithm to obtain a unique hash key, and all backend servers are numbered. The generated key allocates the client to a particular server. This enables requests from different clients to be distributed in load balancing mode and ensures that requests from the same client are forwarded to the same server. This algorithm applies to TCP connections without cookies. + + - **Sticky Session**: This function is disabled by default. Options are as follows: + + - **Load balancer cookie**: Enter the **Stickiness Duration** , which ranges from 1 to 1,440 minutes. + - **Application cookie**: This parameter is available only for shared load balancers. In addition, enter **Cookie Name**, which ranges from 1 to 64 characters. + + .. note:: + + When the :ref:`distribution policy ` uses the source IP hash, sticky session cannot be set. + + - **Health Check**: Set the health check configuration of the load balancer. If this function is enabled, the following configurations are supported: + + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+===========================================================================================================================================================================================================================================================+ + | Protocol | When the protocol of the target service port is set to TCP, TCP and HTTP are supported. When it is set to UDP, only UDP is supported. | + | | | + | | - **Check Path** (supported only by the HTTP health check protocol): specifies the health check URL. The check path must start with a slash (/) and contain 1 to 80 characters. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Port | By default, the service port (Node Port and container port of the Service) is used for health check. You can also specify another port for health check. 
After the port is specified, a service port named **cce-healthz** will be added for the Service. | + | | | + | | - **Node Port**: If a shared load balancer is used or no ENI instance is associated, the node port is used as the health check port. If this parameter is not specified, a random port is used. The value ranges from 30000 to 32767. | + | | - **Container Port**: When a dedicated load balancer is associated with an ENI instance, the container port is used for health check. The value ranges from 1 to 65535. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Check Period (s) | Specifies the maximum interval between health checks. The value ranges from 1 to 50. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Timeout (s) | Specifies the maximum timeout duration for each health check. The value ranges from 1 to 50. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Max. Retries | Specifies the maximum number of health check retries. The value ranges from 1 to 10. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + - **Operation**: Click **Delete** to delete the configuration. + + - **Annotation**: Ingresses provide some advanced CCE functions, which are implemented by annotations. When you use kubectl to create a container, annotations will be used. For details, see :ref:`Creating an Ingress - Automatically Creating a Load Balancer ` and :ref:`Creating an Ingress - Interconnecting with an Existing Load Balancer `. + +#. After the configuration is complete, click **OK**. After the ingress is created, it is displayed in the ingress list. + + On the ELB console, you can view the ELB automatically created through CCE. The default name is **cce-lb-ingress.UID**. Click the ELB name to access its details page. On the **Listeners** tab page, view the route settings of the ingress, including the URL, listener port, and backend server group port. + + .. important:: + + After the ingress is created, upgrade and maintain the selected load balancer on the CCE console. Do not maintain the load balancer on the ELB console. Otherwise, the ingress service may be abnormal. + +#. Access the /healthz interface of the workload, for example, workload **defaultbackend**. + + a. Obtain the access address of the **/healthz** interface of the workload. The access address consists of the load balancer IP address, external port, and mapping URL, for example, 10.**.**.**:80/healthz. + + b. 
Enter the URL of the /healthz interface, for example, http://10.**.**.**:80/healthz, in the address box of the browser to access the workload, as shown in :ref:`Figure 1 `. + + .. _cce_10_0251__fig17115192714367: + + .. figure:: /_static/images/en-us_image_0000001695737201.png + :alt: **Figure 1** Accessing the /healthz interface of defaultbackend + + **Figure 1** Accessing the /healthz interface of defaultbackend + +.. |image1| image:: /_static/images/en-us_image_0000001647417544.png diff --git a/umn/source/network/ingresses/elb_ingresses/elb_ingresses_routing_to_multiple_services.rst b/umn/source/network/ingresses/elb_ingresses/elb_ingresses_routing_to_multiple_services.rst new file mode 100644 index 0000000..4d9bd7f --- /dev/null +++ b/umn/source/network/ingresses/elb_ingresses/elb_ingresses_routing_to_multiple_services.rst @@ -0,0 +1,42 @@ +:original_name: cce_10_0689.html + +.. _cce_10_0689: + +ELB Ingresses Routing to Multiple Services +========================================== + +Ingresses can route to multiple backend Services based on different matching policies. The **spec** field in the YAML file is set as below. You can access **www.example.com/foo**, **www.example.com/bar**, and **foo.example.com/** to route to three different backend Services. + +.. important:: + + The URL registered in an ingress forwarding policy must be the same as the URL used to access the backend Service. Otherwise, a 404 error will be returned. + +.. code-block:: + + ... + spec: + rules: + - host: 'www.example.com' + http: + paths: + - path: '/foo' + backend: + serviceName: # Replace it with the name of your target Service. + servicePort: 80 + property: + ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH + - path: '/bar' + backend: + serviceName: # Replace it with the name of your target Service. + servicePort: 80 + property: + ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH + - host: 'foo.example.com' + http: + paths: + - path: '/' + backend: + serviceName: # Replace it with the name of your target Service. + servicePort: 80 + property: + ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH diff --git a/umn/source/network/ingresses/elb_ingresses/elb_ingresses_using_http_2.rst b/umn/source/network/ingresses/elb_ingresses/elb_ingresses_using_http_2.rst new file mode 100644 index 0000000..ae2db71 --- /dev/null +++ b/umn/source/network/ingresses/elb_ingresses/elb_ingresses_using_http_2.rst @@ -0,0 +1,88 @@ +:original_name: cce_10_0694.html + +.. _cce_10_0694: + +ELB Ingresses Using HTTP/2 +========================== + +Ingresses can use HTTP/2 to expose Services. Connections from the load balancer to your application use HTTP/1.X by default. If your application is capable of receiving HTTP2 requests, you can add the following field to the ingress annotation to enable the use of HTTP/2: + +.. code-block:: + + kubernetes.io/elb.http2-enable: 'true' + +The following shows the YAML file for associating with an existing load balancer: + +**For clusters of v1.21 or earlier:** + +.. code-block:: + + apiVersion: networking.k8s.io/v1beta1 + kind: Ingress + metadata: + name: ingress-test + annotations: + kubernetes.io/elb.id: # Replace it with the ID of your existing load balancer. + kubernetes.io/elb.ip: # Replace it with the IP of your existing load balancer. + kubernetes.io/elb.port: '443' + kubernetes.io/ingress.class: cce + kubernetes.io/elb.http2-enable: 'true' # Enable HTTP/2. 
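+    # HTTP/2 takes effect only when the listener uses HTTPS, which is why kubernetes.io/elb.port is set to '443' and a TLS secret is configured under spec.tls below.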
+ spec: + tls: + - secretName: ingress-test-secret + rules: + - host: '' + http: + paths: + - path: '/' + backend: + serviceName: # Replace it with the name of your target Service. + servicePort: 80 # Replace it with the port number of your target Service. + property: + ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH + +**For clusters of v1.23 or later:** + +.. code-block:: + + apiVersion: networking.k8s.io/v1 + kind: Ingress + metadata: + name: ingress-test + annotations: + kubernetes.io/elb.id: # Replace it with the ID of your existing load balancer. + kubernetes.io/elb.ip: # Replace it with the IP of your existing load balancer. + kubernetes.io/elb.port: '443' + kubernetes.io/elb.http2-enable: 'true' # Enable HTTP/2. + spec: + tls: + - secretName: ingress-test-secret + rules: + - host: '' + http: + paths: + - path: '/' + backend: + service: + name: # Replace it with the name of your target Service. + port: + number: 8080 # Replace 8080 with the port number of your target Service. + property: + ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH + pathType: ImplementationSpecific + ingressClassName: cce + +Table 6 HTTP/2 parameters + ++--------------------------------+-----------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Parameter | Mandatory | Type | Description | ++================================+=================+=================+==================================================================================================================================================================================================================================================================================================================================+ +| kubernetes.io/elb.http2-enable | No | Bool | Whether HTTP/2 is enabled. Request forwarding using HTTP/2 improves the access performance between your application and the load balancer. However, the load balancer still uses HTTP 1.X to forward requests to the backend server. **This parameter is supported in clusters of v1.19.16-r0, v1.21.3-r0, and later versions.** | +| | | | | +| | | | Options: | +| | | | | +| | | | - **true**: enabled | +| | | | - **false**: disabled (default value) | +| | | | | +| | | | Note: **HTTP/2 can be enabled or disabled only when the listener uses HTTPS.** This parameter is invalid when the listener protocol is HTTP, and defaults to **false**. | ++--------------------------------+-----------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/network/ingresses/elb_ingresses/index.rst b/umn/source/network/ingresses/elb_ingresses/index.rst new file mode 100644 index 0000000..ef00253 --- /dev/null +++ b/umn/source/network/ingresses/elb_ingresses/index.rst @@ -0,0 +1,28 @@ +:original_name: cce_10_0686.html + +.. 
_cce_10_0686: + +ELB Ingresses +============= + +- :ref:`Creating an ELB Ingress on the Console ` +- :ref:`Using kubectl to Create an ELB Ingress ` +- :ref:`Configuring ELB Ingresses Using Annotations ` +- :ref:`Configuring HTTPS Certificates for ELB Ingresses ` +- :ref:`Configuring the Server Name Indication (SNI) for ELB Ingresses ` +- :ref:`ELB Ingresses Routing to Multiple Services ` +- :ref:`ELB Ingresses Using HTTP/2 ` +- :ref:`Interconnecting ELB Ingresses with HTTPS Backend Services ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + creating_an_elb_ingress_on_the_console + using_kubectl_to_create_an_elb_ingress + configuring_elb_ingresses_using_annotations + configuring_https_certificates_for_elb_ingresses + configuring_the_server_name_indication_sni_for_elb_ingresses + elb_ingresses_routing_to_multiple_services + elb_ingresses_using_http_2 + interconnecting_elb_ingresses_with_https_backend_services diff --git a/umn/source/network/ingresses/elb_ingresses/interconnecting_elb_ingresses_with_https_backend_services.rst b/umn/source/network/ingresses/elb_ingresses/interconnecting_elb_ingresses_with_https_backend_services.rst new file mode 100644 index 0000000..2cf22ba --- /dev/null +++ b/umn/source/network/ingresses/elb_ingresses/interconnecting_elb_ingresses_with_https_backend_services.rst @@ -0,0 +1,55 @@ +:original_name: cce_10_0691.html + +.. _cce_10_0691: + +Interconnecting ELB Ingresses with HTTPS Backend Services +========================================================= + +Ingress can interconnect with backend services of different protocols. By default, the backend proxy channel of an ingress is an HTTP channel. To create an HTTPS channel, add the following configuration to the **annotations** field: + +.. code-block:: text + + kubernetes.io/elb.pool-protocol: https + +Constraints +----------- + +- This feature only applies to clusters of v1.23.8, v1.25.3, and later. +- Ingress can interconnect with HTTPS backend services only when dedicated load balancers are used. +- When interconnecting with HTTPS backend services, set **Client Protocol** of ingress to **HTTPS**. + +Interconnecting with HTTPS Backend Services +------------------------------------------- + +An ingress configuration example: + +.. code-block:: + + apiVersion: networking.k8s.io/v1 + kind: Ingress + metadata: + name: ingress-test + namespace: default + annotations: + kubernetes.io/elb.port: '443' + kubernetes.io/elb.id: # In this example, an existing dedicated load balancer is used. Replace its ID with the ID of your dedicated load balancer. + kubernetes.io/elb.class: performance + kubernetes.io/elb.pool-protocol: https # Interconnected HTTPS backend service + kubernetes.io/elb.tls-ciphers-policy: tls-1-2 + spec: + tls: + - secretName: ingress-test-secret + rules: + - host: '' + http: + paths: + - path: '/' + backend: + service: + name: # Replace it with the name of your target Service. 
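+                # The load balancer forwards requests to this backend Service over HTTPS because of the kubernetes.io/elb.pool-protocol annotation above.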
+ port: + number: 80 + property: + ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH + pathType: ImplementationSpecific + ingressClassName: cce diff --git a/umn/source/networking/ingresses/using_kubectl_to_create_an_elb_ingress.rst b/umn/source/network/ingresses/elb_ingresses/using_kubectl_to_create_an_elb_ingress.rst similarity index 58% rename from umn/source/networking/ingresses/using_kubectl_to_create_an_elb_ingress.rst rename to umn/source/network/ingresses/elb_ingresses/using_kubectl_to_create_an_elb_ingress.rst index 283691c..1677447 100644 --- a/umn/source/networking/ingresses/using_kubectl_to_create_an_elb_ingress.rst +++ b/umn/source/network/ingresses/elb_ingresses/using_kubectl_to_create_an_elb_ingress.rst @@ -17,7 +17,7 @@ Prerequisites ------------- - An ingress provides network access for backend workloads. Ensure that a workload is available in a cluster. If no workload is available, deploy a sample Nginx workload by referring to :ref:`Creating a Deployment `, :ref:`Creating a StatefulSet `, or :ref:`Creating a DaemonSet `. -- A NodePort Service has been configured for the workload. For details about how to configure the Service, see :ref:`NodePort `. +- :ref:`Services Supported by Ingresses ` lists the Service types supported by ELB ingresses. - Dedicated load balancers must be the application type (HTTP/HTTPS) supporting private networks (with a private IP). .. _cce_10_0252__section084115985013: @@ -25,7 +25,7 @@ Prerequisites Ingress Description of networking.k8s.io/v1 ------------------------------------------- -In CCE clusters of v1.23 or later, the ingress version is switched to networking.k8s.io/v1. +In CCE clusters of v1.23 or later, the ingress version is switched to **networking.k8s.io/v1**. Compared with v1beta1, v1 has the following differences in parameters: @@ -86,7 +86,7 @@ The following describes how to run the kubectl command to automatically create a service: name: # Replace it with the name of your target Service. port: - number: 8080 # Replace 8080 with the port number of your target Service. + number: # Replace it with the port number of your target Service. property: ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH pathType: ImplementationSpecific @@ -121,7 +121,7 @@ The following describes how to run the kubectl command to automatically create a - path: '/' backend: serviceName: # Replace it with the name of your target Service. - servicePort: 80 + servicePort: # Replace it with the port number of your target Service. property: ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH @@ -148,6 +148,7 @@ The following describes how to run the kubectl command to automatically create a "available_zone": [ "eu-de-01" ], + "elb_virsubnet_ids":["b4bf8152-6c36-4c3b-9f74-2229f8e640c9"], "l7_flavor_name": "L7_flavor.elb.s1.small" }' spec: @@ -160,7 +161,7 @@ The following describes how to run the kubectl command to automatically create a service: name: # Replace it with the name of your target Service. port: - number: 8080 # Replace 8080 with the port number of your target Service. + number: # Replace it with the port number of your target Service. 
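+          # Optional: the property field below sets the route matching policy; STARTS_WITH (prefix match) is the default.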
property: ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH pathType: ImplementationSpecific @@ -190,6 +191,7 @@ The following describes how to run the kubectl command to automatically create a "available_zone": [ "eu-de-01" ], + "elb_virsubnet_ids":["b4bf8152-6c36-4c3b-9f74-2229f8e640c9"], "l7_flavor_name": "L7_flavor.elb.s1.small" }' spec: @@ -200,121 +202,159 @@ The following describes how to run the kubectl command to automatically create a - path: '/' backend: serviceName: # Replace it with the name of your target Service. - servicePort: 80 + servicePort: # Replace it with the port number of your target Service. property: ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH .. table:: **Table 1** Key parameters - +-------------------------------------------+-----------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Mandatory | Type | Description | - +===========================================+=========================================+=======================+=========================================================================================================================================================================================================================================+ - | kubernetes.io/elb.class | Yes | String | Select a proper load balancer type. | - | | | | | - | | | | The value can be: | - | | | | | - | | | | - **union**: shared load balancer | - | | | | - **performance**: dedicated load balancer.. | - | | | | | - | | | | Default: **union** | - +-------------------------------------------+-----------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/ingress.class | Yes | String | **cce**: The self-developed ELB ingress is used. | - | | | | | - | | (only for clusters of v1.21 or earlier) | | This parameter is mandatory when an ingress is created by calling the API. | - +-------------------------------------------+-----------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | ingressClassName | Yes | String | **cce**: The self-developed ELB ingress is used. | - | | | | | - | | (only for clusters of v1.23 or later) | | This parameter is mandatory when an ingress is created by calling the API. | - +-------------------------------------------+-----------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.port | Yes | Integer | This parameter indicates the external port registered with the address of the LoadBalancer Service. 
| - | | | | | - | | | | Supported range: 1 to 65535 | - +-------------------------------------------+-----------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.subnet-id | ``-`` | String | ID of the subnet where the cluster is located. The value can contain 1 to 100 characters. | - | | | | | - | | | | - Mandatory when a cluster of v1.11.7-r0 or earlier is to be automatically created. | - | | | | - Optional for clusters later than v1.11.7-r0. It is left blank by default. | - +-------------------------------------------+-----------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.autocreate | Yes | elb.autocreate object | Whether to automatically create a load balancer associated with an ingress. For details about the field description, see :ref:`Table 2 `. | - | | | | | - | | | | **Example** | - | | | | | - | | | | - If a public network load balancer will be automatically created, set this parameter to the following value: | - | | | | | - | | | | {"type":"public","bandwidth_name":"cce-bandwidth-``******``","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"} | - | | | | | - | | | | - If a private network load balancer will be automatically created, set this parameter to the following value: | - | | | | | - | | | | {"type":"inner","name":"A-location-d-test"} | - +-------------------------------------------+-----------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | host | No | String | Domain name for accessing the Service. By default, this parameter is left blank, and the domain name needs to be fully matched. | - +-------------------------------------------+-----------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | path | Yes | String | User-defined route path. All external access requests must match **host** and **path**. | - +-------------------------------------------+-----------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | serviceName | Yes | String | Name of the target Service bound to the ingress. 
| - +-------------------------------------------+-----------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | servicePort | Yes | Integer | Access port of the target Service. | - +-------------------------------------------+-----------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | ingress.beta.kubernetes.io/url-match-mode | No | String | Route matching policy. | - | | | | | - | | | | Default: **STARTS_WITH** (prefix match) | - | | | | | - | | | | Value range: | - | | | | | - | | | | - **EQUAL_TO**: exact match | - | | | | - **STARTS_WITH**: prefix match | - | | | | - **REGEX**: regular expression match | - +-------------------------------------------+-----------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | pathType | Yes | String | Path type. This field is supported only by clusters of v1.23 or later. | - | | | | | - | | | | - **ImplementationSpecific**: The matching method depends on Ingress Controller. The matching method defined by **ingress.beta.kubernetes.io/url-match-mode** is used in CCE. | - | | | | - **Exact**: exact matching of the URL, which is case-sensitive. | - | | | | - **Prefix**: matching based on the URL prefix separated by a slash (/). The match is case-sensitive, and elements in the path are matched one by one. A path element refers to a list of labels in the path separated by a slash (/). | - +-------------------------------------------+-----------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +-------------------------------------------+-----------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Type | Description | + +===========================================+=========================================+=======================+=======================================================================================================================================================================================================================================================================================+ + | kubernetes.io/elb.class | Yes | String | Select a proper load balancer type. | + | | | | | + | | | | - **union**: shared load balancer | + | | | | - **performance**: dedicated load balancer, which can be used only in clusters of v1.17 and later. 
| + +-------------------------------------------+-----------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kubernetes.io/ingress.class | Yes | String | **cce**: The self-developed ELB ingress is used. | + | | | | | + | | (only for clusters of v1.21 or earlier) | | This parameter is mandatory when an ingress is created by calling the API. | + +-------------------------------------------+-----------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | ingressClassName | Yes | String | **cce**: The self-developed ELB ingress is used. | + | | | | | + | | (only for clusters of v1.23 or later) | | This parameter is mandatory when an ingress is created by calling the API. | + +-------------------------------------------+-----------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kubernetes.io/elb.port | Yes | Integer | This parameter indicates the external port registered with the address of the LoadBalancer Service. | + | | | | | + | | | | Supported range: 1 to 65535 | + | | | | | + | | | | .. note:: | + | | | | | + | | | | Some ports are high-risk ports and are blocked by default, for example, port 21. | + +-------------------------------------------+-----------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kubernetes.io/elb.subnet-id | None | String | ID of the subnet where the cluster is located. The value can contain 1 to 100 characters. | + | | | | | + | | | | - Mandatory when a cluster of v1.11.7-r0 or earlier is to be automatically created. | + | | | | - Optional for clusters later than v1.11.7-r0. It is left blank by default. | + +-------------------------------------------+-----------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kubernetes.io/elb.autocreate | Yes | elb.autocreate object | Whether to automatically create a load balancer associated with an ingress. For details about the field description, see :ref:`Table 2 `. 
| + | | | | | + | | | | **Example** | + | | | | | + | | | | - If a public network load balancer will be automatically created, set this parameter to the following value: | + | | | | | + | | | | {"type":"public","bandwidth_name":"cce-bandwidth-``******``","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"} | + | | | | | + | | | | - If a private network load balancer will be automatically created, set this parameter to the following value: | + | | | | | + | | | | {"type":"inner","name":"A-location-d-test"} | + +-------------------------------------------+-----------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | host | No | String | Domain name for accessing the Service. By default, this parameter is left blank, and the domain name needs to be fully matched. Ensure that the domain name has been registered and archived. Once a domain name rule is configured, you must use the domain name for access. | + +-------------------------------------------+-----------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | path | Yes | String | User-defined route path. All external access requests must match **host** and **path**. | + | | | | | + | | | | .. note:: | + | | | | | + | | | | The access path added here must exist in the backend application. Otherwise, the forwarding fails. | + | | | | | + | | | | For example, the default access URL of the Nginx application is **/usr/share/nginx/html**. When adding **/test** to the ingress forwarding policy, ensure the access URL of your Nginx application contains **/usr/share/nginx/html/test**. Otherwise, error 404 will be returned. | + +-------------------------------------------+-----------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | ingress.beta.kubernetes.io/url-match-mode | No | String | Route matching policy. | + | | | | | + | | | | Default: **STARTS_WITH** (prefix match) | + | | | | | + | | | | Options: | + | | | | | + | | | | - **EQUAL_TO**: exact match | + | | | | - **STARTS_WITH**: prefix match | + | | | | - **REGEX**: regular expression match | + +-------------------------------------------+-----------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | pathType | Yes | String | Path type. This field is supported only by clusters of v1.23 or later. 
| + | | | | | + | | | | - **ImplementationSpecific**: The matching method depends on Ingress Controller. The matching method defined by **ingress.beta.kubernetes.io/url-match-mode** is used in CCE. | + | | | | - **Exact**: exact matching of the URL, which is case-sensitive. | + | | | | - **Prefix**: prefix matching, which is case-sensitive. With this method, the URL path is separated into multiple elements by slashes (/) and the elements are matched one by one. If each element in the URL matches the path, the subpaths of the URL can be routed normally. | + | | | | | + | | | | .. note:: | + | | | | | + | | | | - During prefix matching, each element must be exactly matched. If the last element of the URL is the substring of the last element in the request path, no matching is performed. For example, **/foo/bar** matches **/foo/bar/baz** but does not match **/foo/barbaz**. | + | | | | - When elements are separated by slashes (/), if the URL or request path ends with a slash (/), the slash (/) at the end is ignored. For example, **/foo/bar** matches **/foo/bar/**. | + | | | | | + | | | | See `examples `__ of ingress path matching. | + +-------------------------------------------+-----------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ .. _cce_10_0252__table268711532210: - .. table:: **Table 2** Data structure of the elb.autocreate field + .. table:: **Table 2** Data structure of the **elb.autocreate** field - +----------------------+---------------------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Mandatory | Type | Description | - +======================+=======================================+=================+===============================================================================================================================================================================================+ - | type | No | String | Network type of the load balancer. | - | | | | | - | | | | - **public**: public network load balancer | - | | | | - **inner**: private network load balancer | - | | | | | - | | | | Default: **inner** | - +----------------------+---------------------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_name | Yes for public network load balancers | String | Bandwidth name. The default value is **cce-bandwidth-*****\***. | - | | | | | - | | | | Value range: a string of 1 to 64 characters, including lowercase letters, digits, and underscores (_). The value must start with a lowercase letter and end with a lowercase letter or digit. | - +----------------------+---------------------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_chargemode | No | String | Bandwidth mode. 
| - +----------------------+---------------------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_size | Yes for public network load balancers | Integer | Bandwidth size. The value ranges from 1 Mbit/s to 2000 Mbit/s by default. The actual range varies depending on the configuration in each region. | - | | | | | - | | | | - The minimum increment for bandwidth adjustment varies depending on the bandwidth range. The details are as follows: | - | | | | | - | | | | - The minimum increment is 1 Mbit/s if the allowed bandwidth ranges from 0 Mbit/s to 300 Mbit/s (with 300 Mbit/s included). | - | | | | - The minimum increment is 50 Mbit/s if the allowed bandwidth ranges from 300 Mbit/s to 1000 Mbit/s. | - | | | | - The minimum increment is 500 Mbit/s if the allowed bandwidth is greater than 1000 Mbit/s. | - +----------------------+---------------------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_sharetype | Yes for public network load balancers | String | Bandwidth type. | - | | | | | - | | | | **PER**: dedicated bandwidth. | - +----------------------+---------------------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | eip_type | Yes for public network load balancers | String | EIP type. | - | | | | | - | | | | - **5_bgp**: dynamic BGP | - | | | | - **5_sbgp**: static BGP | - +----------------------+---------------------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | name | No | String | Name of the automatically created load balancer. | - | | | | | - | | | | Value range: a string of 1 to 64 characters, including lowercase letters, digits, and underscores (_). The value must start with a lowercase letter and end with a lowercase letter or digit. 
| - | | | | | - | | | | Default: **cce-lb+ingress.UID** | - +----------------------+---------------------------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Type | Description | + +======================+=======================================+==================+==================================================================================================================================================================================================================================================================================================================================================================================+ + | name | No | String | Name of the automatically created load balancer. | + | | | | | + | | | | The value can contain 1 to 64 characters. Only letters, digits, underscores (_), hyphens (-), and periods (.) are allowed. | + | | | | | + | | | | Default: **cce-lb+service.UID** | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | type | No | String | Network type of the load balancer. | + | | | | | + | | | | - **public**: public network load balancer | + | | | | - **inner**: private network load balancer | + | | | | | + | | | | Default: **inner** | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_name | Yes for public network load balancers | String | Bandwidth name. The default value is **cce-bandwidth-*****\***. | + | | | | | + | | | | The value can contain 1 to 64 characters. Only letters, digits, underscores (_), hyphens (-), and periods (.) are allowed. | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_chargemode | No | String | Bandwidth mode. 
| + | | | | | + | | | | - **bandwidth**: billed by bandwidth | + | | | | - **traffic**: billed by traffic | + | | | | | + | | | | Default: **bandwidth** | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_size | Yes for public network load balancers | Integer | Bandwidth size. The default value is 1 to 2000 Mbit/s. Configure this parameter based on the bandwidth range allowed in your region. | + | | | | | + | | | | The minimum increment for bandwidth adjustment varies depending on the bandwidth range. | + | | | | | + | | | | - The minimum increment is 1 Mbit/s if the allowed bandwidth does not exceed 300 Mbit/s. | + | | | | - The minimum increment is 50 Mbit/s if the allowed bandwidth ranges from 300 Mbit/s to 1000 Mbit/s. | + | | | | - The minimum increment is 500 Mbit/s if the allowed bandwidth exceeds 1000 Mbit/s. | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_sharetype | Yes for public network load balancers | String | Bandwidth sharing mode. | + | | | | | + | | | | - **PER**: dedicated bandwidth | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | eip_type | Yes for public network load balancers | String | EIP type. | + | | | | | + | | | | - **5_bgp**: dynamic BGP | + | | | | - **5_sbgp**: static BGP | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | available_zone | Yes | Array of strings | AZ where the load balancer is located. | + | | | | | + | | | | This parameter is available only for dedicated load balancers. 
| + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | l4_flavor_name | Yes | String | Flavor name of the layer-4 load balancer. | + | | | | | + | | | | This parameter is available only for dedicated load balancers. | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | l7_flavor_name | No | String | Flavor name of the layer-7 load balancer. | + | | | | | + | | | | This parameter is available only for dedicated load balancers. The value of this parameter must be the same as that of **l4_flavor_name**, that is, both are elastic specifications or fixed specifications. | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | elb_virsubnet_ids | No | Array of strings | Subnet where the backend server of the load balancer is located. If this parameter is left blank, the default cluster subnet is used. Load balancers occupy different number of subnet IP addresses based on their specifications. Therefore, you are not advised to use the subnet CIDR blocks of other resources (such as clusters and nodes) as the load balancer CIDR block. | + | | | | | + | | | | This parameter is available only for dedicated load balancers. | + | | | | | + | | | | Example: | + | | | | | + | | | | .. code-block:: | + | | | | | + | | | | "elb_virsubnet_ids": [ | + | | | | "14567f27-8ae4-42b8-ae47-9f847a4690dd" | + | | | | ] | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ #. Create an ingress. @@ -360,7 +400,8 @@ CCE allows you to connect to an existing load balancer when creating an ingress. name: ingress-test annotations: kubernetes.io/elb.id: # Replace it with the ID of your existing load balancer. - kubernetes.io/elb.ip: # Replace it with your existing load balancer IP. + kubernetes.io/elb.ip: # Replace it with the IP of your existing load balancer. 
+ kubernetes.io/elb.class: performance # Load balancer type kubernetes.io/elb.port: '80' spec: rules: @@ -372,7 +413,7 @@ CCE allows you to connect to an existing load balancer when creating an ingress. service: name: # Replace it with the name of your target Service. port: - number: 8080 # Replace 8080 with your target service port number. + number: 8080 # Replace 8080 with the port number of your target Service. property: ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH pathType: ImplementationSpecific @@ -388,7 +429,8 @@ CCE allows you to connect to an existing load balancer when creating an ingress. name: ingress-test annotations: kubernetes.io/elb.id: # Replace it with the ID of your existing load balancer. - kubernetes.io/elb.ip: # Replace it with your existing load balancer IP. + kubernetes.io/elb.ip: # Replace it with the IP of your existing load balancer. + kubernetes.io/elb.class: performance # Load balancer type kubernetes.io/elb.port: '80' kubernetes.io/ingress.class: cce spec: @@ -405,495 +447,25 @@ CCE allows you to connect to an existing load balancer when creating an ingress. .. table:: **Table 3** Key parameters - +----------------------+-----------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Mandatory | Type | Description | - +======================+=================+=================+=========================================================================================================================================================================================================+ - | kubernetes.io/elb.id | Yes | String | This parameter indicates the ID of a load balancer. The value can contain 1 to 100 characters. | - | | | | | - | | | | **How to obtain**: | - | | | | | - | | | | On the management console, click **Service List**, and choose **Networking** > **Elastic Load Balance**. Click the name of the target load balancer. On the **Summary** tab page, find and copy the ID. | - +----------------------+-----------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.ip | Yes | String | This parameter indicates the service address of a load balancer. The value can be the public IP address of a public network load balancer or the private IP address of a private network load balancer. | - +----------------------+-----------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -Configuring HTTPS Certificates ------------------------------- - -Ingress supports TLS certificate configuration and secures your Services with HTTPS. - -.. note:: - - If HTTPS is enabled for the same port of the same load balancer of multiple ingresses, you must select the same certificate. - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Run the following command to create a YAML file named **ingress-test-secret.yaml** (the file name can be customized): - - **vi ingress-test-secret.yaml** - - **The YAML file is configured as follows:** - - .. 
code-block:: - - apiVersion: v1 - data: - tls.crt: LS0******tLS0tCg== - tls.key: LS0tL******0tLS0K - kind: Secret - metadata: - annotations: - description: test for ingressTLS secrets - name: ingress-test-secret - namespace: default - type: IngressTLS - - .. note:: - - In the preceding information, **tls.crt** and **tls.key** are only examples. Replace them with the actual files. The values of **tls.crt** and **tls.key** are Base64-encoded. - -#. Create a secret. - - **kubectl create -f ingress-test-secret.yaml** - - If information similar to the following is displayed, the secret is being created: - - .. code-block:: - - secret/ingress-test-secret created - - View the created secrets. - - **kubectl get secrets** - - If information similar to the following is displayed, the secret has been created successfully: - - .. code-block:: - - NAME TYPE DATA AGE - ingress-test-secret IngressTLS 2 13s - -#. Create a YAML file named **ingress-test.yaml**. The file name can be customized. - - **vi ingress-test.yaml** - - .. note:: - - Default security policy (kubernetes.io/elb.tls-ciphers-policy) is supported only in clusters of v1.17.17 or later. - - **The following uses the automatically created load balancer as an example. The YAML file is configured as follows:** - - **For clusters of v1.21 or earlier:** - - .. code-block:: - - apiVersion: networking.k8s.io/v1beta1 - kind: Ingress - metadata: - name: ingress-test - annotations: - kubernetes.io/elb.class: union - kubernetes.io/ingress.class: cce - kubernetes.io/elb.port: '443' - kubernetes.io/elb.autocreate: - '{ - "type":"public", - "bandwidth_name":"cce-bandwidth-15511633796**", - "bandwidth_chargemode":"bandwidth", - "bandwidth_size":5, - "bandwidth_sharetype":"PER", - "eip_type":"5_bgp" - }' - kubernetes.io/elb.tls-ciphers-policy: tls-1-2 - spec: - tls: - - secretName: ingress-test-secret - rules: - - host: '' - http: - paths: - - path: '/' - backend: - serviceName: # Replace it with the name of your target Service. - servicePort: 80 - property: - ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH - - **For clusters of v1.23 or later:** - - .. code-block:: - - apiVersion: networking.k8s.io/v1 - kind: Ingress - metadata: - name: ingress-test - annotations: - kubernetes.io/elb.class: union - kubernetes.io/elb.port: '443' - kubernetes.io/elb.autocreate: - '{ - "type":"public", - "bandwidth_name":"cce-bandwidth-15511633796**", - "bandwidth_chargemode":"bandwidth", - "bandwidth_size":5, - "bandwidth_sharetype":"PER", - "eip_type":"5_bgp" - }' - kubernetes.io/elb.tls-ciphers-policy: tls-1-2 - spec: - tls: - - secretName: ingress-test-secret - rules: - - host: '' - http: - paths: - - path: '/' - backend: - service: - name: # Replace it with the name of your target Service. - port: - number: 8080 # Replace 8080 with the port number of your target Service. - property: - ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH - pathType: ImplementationSpecific - ingressClassName: cce - - .. 
table:: **Table 4** Key parameters - - +--------------------------------------+-----------------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Mandatory | Type | Description | - +======================================+=================+==================+============================================================================================================================================================================================================================================+ - | kubernetes.io/elb.tls-ciphers-policy | No | String | The default value is **tls-1-2**, which is the default security policy used by the listener and takes effect only when the HTTPS protocol is used. | - | | | | | - | | | | Value range: | - | | | | | - | | | | - tls-1-0 | - | | | | - tls-1-1 | - | | | | - tls-1-2 | - | | | | - tls-1-2-strict | - | | | | | - | | | | For details of cipher suites for each security policy, see :ref:`Table 5 `. | - +--------------------------------------+-----------------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | tls | No | Array of strings | This parameter is mandatory if HTTPS is used. Multiple independent domain names and certificates can be added to this parameter. For details, see :ref:`Configuring the Server Name Indication (SNI) `. | - +--------------------------------------+-----------------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | secretName | No | String | This parameter is mandatory if HTTPS is used. Set this parameter to the name of the created secret. | - +--------------------------------------+-----------------+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - - .. _cce_10_0252__table9419191416246: - - .. 
table:: **Table 5** tls_ciphers_policy parameter description - - +-----------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Security Policy | TLS Version | Cipher Suite | - +=======================+=======================+=======================================================================================================================================================================================================================================================================================================================================================================================================+ - | tls-1-0 | TLS 1.2 | ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES128-SHA256:AES256-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-SHA:AES256-SHA | - | | | | - | | TLS 1.1 | | - | | | | - | | TLS 1.0 | | - +-----------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | tls-1-1 | TLS 1.2 | | - | | | | - | | TLS 1.1 | | - +-----------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | tls-1-2 | TLS 1.2 | | - +-----------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | tls-1-2-strict | TLS 1.2 | ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES128-SHA256:AES256-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384 | - 
+-----------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -#. Create an ingress. - - **kubectl create -f ingress-test.yaml** - - If information similar to the following is displayed, the ingress has been created. - - .. code-block:: - - ingress/ingress-test created - - View the created ingress. - - **kubectl get ingress** - - If information similar to the following is displayed, the ingress has been created successfully and the workload is accessible. - - .. code-block:: - - NAME HOSTS ADDRESS PORTS AGE - ingress-test * 121.**.**.** 80 10s - -#. Enter **https://121.**.**.*\*:443** in the address box of the browser to access the workload (for example, :ref:`Nginx workload `). - - **121.**.**.*\*** indicates the IP address of the unified load balancer. - -Using HTTP/2 ------------- - -Ingresses can use HTTP/2 to expose services. Connections from the load balancer proxy to your applications use HTTP/1.X by default. If your application is capable of receiving HTTP/2 requests, you can add the following field to the ingress annotation to enable the use of HTTP/2: - -\`kubernetes.io/elb.http2-enable: 'true'\` - -The following shows the YAML file for associating with an existing load balancer: - -**For clusters of v1.21 or earlier:** - -.. code-block:: - - apiVersion: networking.k8s.io/v1beta1 - kind: Ingress - metadata: - name: ingress-test - annotations: - kubernetes.io/elb.id: # Replace it with the ID of your existing load balancer. - kubernetes.io/elb.ip: # Replace it with the IP of your existing load balancer. - kubernetes.io/elb.port: '443' - kubernetes.io/ingress.class: cce - kubernetes.io/elb.http2-enable: 'true' # Enable HTTP/2. - spec: - tls: - - secretName: ingress-test-secret - rules: - - host: '' - http: - paths: - - path: '/' - backend: - serviceName: # Replace it with the name of your target Service. - servicePort: 80 # Replace it with the port number of your target Service. - property: - ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH - -**For clusters of v1.23 or later:** - -.. code-block:: - - apiVersion: networking.k8s.io/v1 - kind: Ingress - metadata: - name: ingress-test - annotations: - kubernetes.io/elb.id: # Replace it with the ID of your existing load balancer. - kubernetes.io/elb.ip: # Replace it with the IP of your existing load balancer. - kubernetes.io/elb.port: '443' - kubernetes.io/elb.http2-enable: 'true' # Enable HTTP/2. - spec: - tls: - - secretName: ingress-test-secret - rules: - - host: '' - http: - paths: - - path: '/' - backend: - service: - name: # Replace it with the name of your target Service. - port: - number: 8080 # Replace 8080 with the port number of your target Service. 
- property: - ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH - pathType: ImplementationSpecific - ingressClassName: cce - -Table 6 HTTP/2 parameters - -+--------------------------------+-----------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Parameter | Mandatory | Type | Description | -+================================+=================+=================+==================================================================================================================================================================================================================================================================================================================================+ -| kubernetes.io/elb.http2-enable | No | Bool | Whether HTTP/2 is enabled. Request forwarding using HTTP/2 improves the access performance between your application and the load balancer. However, the load balancer still uses HTTP 1.X to forward requests to the backend server. **This parameter is supported in clusters of v1.19.16-r0, v1.21.3-r0, and later versions.** | -| | | | | -| | | | Value range: | -| | | | | -| | | | - **true**: enabled | -| | | | - **false**: disabled (default value) | -| | | | | -| | | | Note: **HTTP/2 can be enabled or disabled only when the listener uses HTTPS.** This parameter is invalid and defaults to **false** when the listener protocol is HTTP. | -+--------------------------------+-----------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -.. _cce_10_0252__section0555194782414: - -Configuring the Server Name Indication (SNI) --------------------------------------------- - -SNI allows multiple TLS-based access domain names to be provided for external systems using the same IP address and port number. Different domain names can use different security certificates. - -.. note:: - - - Only one domain name can be specified for each SNI certificate. Wildcard-domain certificates are supported. - - Security policy (kubernetes.io/elb.tls-ciphers-policy) is supported only in clusters of v1.17.11 or later. - -You can enable SNI when the preceding conditions are met. The following uses the automatic creation of a load balancer as an example. In this example, **sni-test-secret-1** and **sni-test-secret-2** are SNI certificates. The domain names specified by the certificates must be the same as those in the certificates. - -**For clusters of v1.21 or earlier:** - -.. 
code-block:: - - apiVersion: networking.k8s.io/v1beta1 - kind: Ingress - metadata: - name: ingress-test - annotations: - kubernetes.io/elb.class: union - kubernetes.io/ingress.class: cce - kubernetes.io/elb.port: '443' - kubernetes.io/elb.autocreate: - '{ - "type":"public", - "bandwidth_name":"cce-bandwidth-******", - "bandwidth_chargemode":"bandwidth", - "bandwidth_size":5, - "bandwidth_sharetype":"PER", - "eip_type":"5_bgp" - }' - kubernetes.io/elb.tls-ciphers-policy: tls-1-2 - spec: - tls: - - secretName: ingress-test-secret - - hosts: - - example.top # Domain name specified a certificate is issued - secretName: sni-test-secret-1 - - hosts: - - example.com # Domain name specified a certificate is issued - secretName: sni-test-secret-2 - rules: - - host: '' - http: - paths: - - path: '/' - backend: - serviceName: # Replace it with the name of your target Service. - servicePort: 80 - property: - ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH - -**For clusters of v1.23 or later:** - -.. code-block:: - - apiVersion: networking.k8s.io/v1 - kind: Ingress - metadata: - name: ingress-test - annotations: - kubernetes.io/elb.class: union - kubernetes.io/elb.port: '443' - kubernetes.io/elb.autocreate: - '{ - "type":"public", - "bandwidth_name":"cce-bandwidth-******", - "bandwidth_chargemode":"bandwidth", - "bandwidth_size":5, - "bandwidth_sharetype":"PER", - "eip_type":"5_bgp" - }' - kubernetes.io/elb.tls-ciphers-policy: tls-1-2 - spec: - tls: - - secretName: ingress-test-secret - - hosts: - - example.top # Domain name specified a certificate is issued - secretName: sni-test-secret-1 - - hosts: - - example.com # Domain name specified a certificate is issued - secretName: sni-test-secret-2 - rules: - - host: '' - http: - paths: - - path: '/' - backend: - service: - name: # Replace it with the name of your target Service. - port: - number: 8080 # Replace 8080 with the port number of your target Service. - property: - ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH - pathType: ImplementationSpecific - ingressClassName: cce - -Accessing Multiple Services ---------------------------- - -Ingresses can route requests to multiple backend Services based on different matching policies. The **spec** field in the YAML file is set as below. You can access **www.example.com/foo**, **www.example.com/bar**, and **foo.example.com/** to route to three different backend Services. - -.. important:: - - The URL registered in an ingress forwarding policy must be the same as the URL exposed by the backend Service. Otherwise, a 404 error will be returned. - -.. code-block:: - - spec: - rules: - - host: 'www.example.com' - http: - paths: - - path: '/foo' - backend: - serviceName: # Replace it with the name of your target Service. - servicePort: 80 - property: - ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH - - path: '/bar' - backend: - serviceName: # Replace it with the name of your target Service. - servicePort: 80 - property: - ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH - - host: 'foo.example.com' - http: - paths: - - path: '/' - backend: - serviceName: # Replace it with the name of your target Service. - servicePort: 80 - property: - ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH - -Interconnecting with HTTPS Backend Services -------------------------------------------- - -Ingress can interconnect with backend services of different protocols. By default, the backend proxy channel of an ingress is an HTTP channel. 
To create an HTTPS channel, add the following configuration to the **annotations** field: - -.. code-block:: text - - kubernetes.io/elb.pool-protocol: https - -.. important:: - - - This feature only applies to clusters of v1.23.8, v1.25.3, and later. - - Ingress can interconnect with HTTPS backend services only when dedicated load balancers are used. - - When interconnecting with HTTPS backend services, set **Client Protocol** of ingress to **HTTPS**. - -An ingress configuration example is as follows: - -.. code-block:: - - apiVersion: networking.k8s.io/v1 - kind: Ingress - metadata: - name: ingress-test - namespace: default - annotations: - kubernetes.io/elb.port: '443' - kubernetes.io/elb.id: # In this example, an existing dedicated load balancer is used. Replace its ID with the ID of your dedicated load balancer. - kubernetes.io/elb.class: performance - kubernetes.io/elb.pool-protocol: https # Interconnected HTTPS backend service - kubernetes.io/elb.tls-ciphers-policy: tls-1-2 - spec: - tls: - - secretName: ingress-test-secret - rules: - - host: '' - http: - paths: - - path: '/' - backend: - service: - name: # Replace it with the name of your target Service. - port: - number: 80 - property: - ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH - pathType: ImplementationSpecific - ingressClassName: cce - -.. |image1| image:: /_static/images/en-us_image_0000001569022977.png + +-------------------------+-----------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Type | Description | + +=========================+=================+=================+=========================================================================================================================================================================================================+ + | kubernetes.io/elb.id | Yes | String | ID of a load balancer. The value can contain 1 to 100 characters. | + | | | | | + | | | | **How to obtain**: | + | | | | | + | | | | On the management console, click **Service List**, and choose **Networking** > **Elastic Load Balance**. Click the name of the target load balancer. On the **Summary** tab page, find and copy the ID. | + +-------------------------+-----------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kubernetes.io/elb.ip | No | String | Service address of a load balancer. The value can be the public IP address of a public network load balancer or the private IP address of a private network load balancer. | + +-------------------------+-----------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kubernetes.io/elb.class | Yes | String | Load balancer type. | + | | | | | + | | | | - **union**: shared load balancer | + | | | | - **performance**: dedicated load balancer, which can be used only in clusters of v1.17 and later. | + | | | | | + | | | | .. 
note:: | + | | | | | + | | | | If an ELB Ingress accesses an existing dedicated load balancer, the dedicated load balancer must be of the application load balancing (HTTP/HTTPS) type. | + +-------------------------+-----------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +.. |image1| image:: /_static/images/en-us_image_0000001695737505.png diff --git a/umn/source/network/ingresses/index.rst b/umn/source/network/ingresses/index.rst new file mode 100644 index 0000000..282c8c1 --- /dev/null +++ b/umn/source/network/ingresses/index.rst @@ -0,0 +1,16 @@ +:original_name: cce_10_0248.html + +.. _cce_10_0248: + +Ingresses +========= + +- :ref:`Overview ` +- :ref:`ELB Ingresses ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + overview + elb_ingresses/index diff --git a/umn/source/network/ingresses/overview.rst b/umn/source/network/ingresses/overview.rst new file mode 100644 index 0000000..fdbb2b2 --- /dev/null +++ b/umn/source/network/ingresses/overview.rst @@ -0,0 +1,66 @@ +:original_name: cce_10_0094.html + +.. _cce_10_0094: + +Overview +======== + +Why We Need Ingresses +--------------------- + +A Service is generally used to forward access requests based on TCP and UDP and provide layer-4 load balancing for clusters. However, in actual scenarios, if there is a large number of HTTP/HTTPS access requests on the application layer, the Service cannot meet the forwarding requirements. Therefore, the Kubernetes cluster provides an HTTP-based access mode, ingress. + +An ingress is an independent resource in the Kubernetes cluster and defines rules for forwarding external access traffic. As shown in :ref:`Figure 1 `, you can customize forwarding rules based on domain names and URLs to implement fine-grained distribution of access traffic. + +.. _cce_10_0094__fig18155819416: + +.. figure:: /_static/images/en-us_image_0000001695896861.png + :alt: **Figure 1** Ingress diagram + + **Figure 1** Ingress diagram + +The following describes the ingress-related definitions: + +- Ingress object: a set of access rules that forward requests to specified Services based on domain names or URLs. It can be added, deleted, modified, and queried by calling APIs. +- Ingress Controller: an executor for request forwarding. It monitors the changes of resource objects such as ingresses, Services, endpoints, secrets (mainly TLS certificates and keys), nodes, and ConfigMaps in real time, parses rules defined by ingresses, and forwards requests to the corresponding backend Services. + +Working Principle of ELB Ingress Controller +------------------------------------------- + +ELB Ingress Controller developed by CCE implements layer-7 network access for the internet and intranet (in the same VPC) based on ELB and distributes access traffic to the corresponding Services using different URLs. + +ELB Ingress Controller is deployed on the master node and bound to the load balancer in the VPC where the cluster resides. Different domain names, ports, and forwarding policies can be configured for the same load balancer (with the same IP address). :ref:`Figure 2 ` shows the working principle of ELB Ingress Controller. + +#. A user creates an ingress object and configures a traffic access rule in the ingress, including the load balancer, URL, SSL, and backend service port. +#. 
When Ingress Controller detects that the ingress object changes, it reconfigures the listener and backend server route on the ELB side according to the traffic access rule. +#. When a user accesses a workload, the traffic is forwarded to the corresponding backend service port based on the forwarding policy configured on ELB, and then forwarded to each associated workload through the Service. + +.. _cce_10_0094__fig122542486129: + +.. figure:: /_static/images/en-us_image_0000001647577184.png + :alt: **Figure 2** Working principle of ELB Ingress Controller + + **Figure 2** Working principle of ELB Ingress Controller + +.. _cce_10_0094__section3565202819276: + +Services Supported by Ingresses +------------------------------- + +:ref:`Table 1 ` lists the services supported by ELB Ingresses. + +.. _cce_10_0094__table143264518141: + +.. table:: **Table 1** Services supported by ELB Ingresses + + +-------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | Cluster Type | ELB Type | ClusterIP | NodePort | + +===================+=========================+=======================================================================================================================================+============================================================================================================================================+ + | CCE cluster | Shared load balancer | Not supported | Supported | + +-------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | | Dedicated load balancer | Not supported (Failed to access the dedicated load balancers because no ENI is bound to the associated pod of the ClusterIP Service.) | Supported | + +-------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | CCE Turbo cluster | Shared load balancer | Not supported | Supported | + +-------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | | Dedicated load balancer | Supported | Not supported (Failed to access the dedicated load balancers because an ENI has been bound to the associated pod of the NodePort Service.) 
| + +-------------------+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/networking/overview.rst b/umn/source/network/overview.rst similarity index 65% rename from umn/source/networking/overview.rst rename to umn/source/network/overview.rst index 64d7c3d..3411b7d 100644 --- a/umn/source/networking/overview.rst +++ b/umn/source/network/overview.rst @@ -21,7 +21,7 @@ All nodes in the cluster are located in a VPC and use the VPC network. The conta - **Node Network** - A node network assigns IP addresses to hosts (nodes in the figure above) in a cluster. You need to select a VPC subnet as the node network of the CCE cluster. The number of available IP addresses in a subnet determines the maximum number of nodes (including master nodes and worker nodes) that can be created in a cluster. This quantity is also affected by the container network. For details, see the container network model. + A node network assigns IP addresses to hosts (nodes in the figure above) in a cluster. Select a VPC subnet as the node network of the CCE cluster. The number of available IP addresses in a subnet determines the maximum number of nodes (including master nodes and worker nodes) that can be created in a cluster. This quantity is also affected by the container network. For details, see the container network model. - **Container Network** @@ -30,24 +30,24 @@ All nodes in the cluster are located in a VPC and use the VPC network. The conta Currently, CCE supports the following container network models: - Container tunnel network: The container tunnel network is constructed on but independent of the node network through tunnel encapsulation. This network model uses VXLAN to encapsulate Ethernet packets into UDP packets and transmits them in tunnels. Open vSwitch serves as the backend virtual switch. - - VPC network: The VPC network uses VPC routing to integrate with the underlying network. This network model is applicable to performance-intensive scenarios. The maximum number of nodes allowed in a cluster depends on the route quota in a VPC network. Each node is assigned a CIDR block of a fixed size. This networking model is free from tunnel encapsulation overhead and outperforms the container tunnel network model. In addition, as VPC routing includes routes to node IP addresses and the container CIDR block, container pods in a cluster can be directly accessed from outside the cluster. + - VPC network: The VPC network uses VPC routing to integrate with the underlying network. This network model applies to performance-intensive scenarios. The maximum number of nodes allowed in a cluster depends on the route quota in a VPC network. Each node is assigned a CIDR block of a fixed size. This networking model is free from tunnel encapsulation overhead and outperforms the container tunnel network model. In addition, as VPC routing includes routes to node IP addresses and the container CIDR block, container pods in a cluster can be directly accessed from outside the cluster. - Developed by CCE, Cloud Native Network 2.0 deeply integrates Elastic Network Interfaces (ENIs) and Sub Network Interfaces (sub-ENIs) of VPC. Container IP addresses are allocated from the VPC CIDR block. 
ELB passthrough networking is supported to direct access requests to containers. Security groups and elastic IPs (EIPs) are bound to deliver high performance. The performance, networking scale, and application scenarios of a container network vary according to the container network model. For details about the functions and features of different container network models, see :ref:`Overview `. - **Service Network** - Service is also a Kubernetes object. Each Service has a fixed IP address. When creating a cluster on CCE, you can specify the Service CIDR block. The Service CIDR block cannot overlap with the node or container CIDR block. The Service CIDR block can be used only within a cluster. + Service is also a Kubernetes object. Each Service has a static IP address. When creating a cluster on CCE, you can specify the Service CIDR block. The Service CIDR block cannot overlap with the node or container CIDR block. The Service CIDR block can be used only within a cluster. .. _cce_10_0010__section1860619221134: Service ------- -A Service is used for pod access. With a fixed IP address, a Service forwards access traffic to pods and performs load balancing for these pods. +A Service is used for pod access. With a static IP address, a Service forwards access traffic to pods and performs load balancing for these pods. -.. figure:: /_static/images/en-us_image_0000001517743432.png +.. figure:: /_static/images/en-us_image_0000001695896373.png :alt: **Figure 1** Accessing pods through a Service **Figure 1** Accessing pods through a Service @@ -58,7 +58,7 @@ You can configure the following types of Services: - NodePort: used for access from outside a cluster. A NodePort Service is accessed through the port on the node. - LoadBalancer: used for access from outside a cluster. It is an extension of NodePort, to which a load balancer routes, and external systems only need to access the load balancer. -For details about the Service, see :ref:`Service Overview `. +For details about the Service, see :ref:`Overview `. .. _cce_10_0010__section1248852094313: @@ -68,12 +68,12 @@ Ingress Services forward requests using layer-4 TCP and UDP protocols. Ingresses forward requests using layer-7 HTTP and HTTPS protocols. Domain names and paths can be used to achieve finer granularities. -.. figure:: /_static/images/en-us_image_0000001517903016.png +.. figure:: /_static/images/en-us_image_0000001647417440.png :alt: **Figure 2** Ingress-Service **Figure 2** Ingress-Service -For details about the ingress, see :ref:`Ingress Overview `. +For details about the ingress, see :ref:`Overview `. .. _cce_10_0010__section1286493159: @@ -85,18 +85,18 @@ Workload access scenarios can be categorized as follows: - Intra-cluster access: A ClusterIP Service is used for workloads in the same cluster to access each other. - Access from outside a cluster: A Service (NodePort or LoadBalancer type) or an ingress is recommended for a workload outside a cluster to access workloads in the cluster. - - Access through a public network requires an EIP to be bound the node or load balancer. - - Access through an intranet requires the workload to be accessed through the internal IP address of the node or load balancer. If workloads are located in different VPCs, a peering connection is required to enable communication between different VPCs. + - Access through the public network: An EIP should be bound to the node or load balancer. 
+ - Access through the private network: The workload can be accessed through the internal IP address of the node or load balancer. If workloads are located in different VPCs, a peering connection is required to enable communication between different VPCs. -- The workload accesses the external network. +- The workload can access the external network as follows: - Accessing an intranet: The workload accesses the intranet address, but the implementation method varies depending on container network models. Ensure that the peer security group allows the access requests from the container CIDR block. - - Accessing a public network: You need to assign an EIP to the node where the workload runs (when the VPC network or tunnel network model is used), bind an EIP to the pod IP address (when the Cloud Native Network 2.0 model is used), or configure SNAT rules through the NAT gateway. For details, see :ref:`Accessing Public Networks from a Container `. + - Accessing a public network: Assign an EIP to the node where the workload runs (when the VPC network or tunnel network model is used), bind an EIP to the pod IP address (when the Cloud Native Network 2.0 model is used), or configure SNAT rules through the NAT gateway. For details, see :ref:`Accessing Public Networks from a Container `. -.. figure:: /_static/images/en-us_image_0000001568822741.png +.. figure:: /_static/images/en-us_image_0000001647576708.png :alt: **Figure 3** Network access diagram **Figure 3** Network access diagram -.. |image1| image:: /_static/images/en-us_image_0000001518222536.png +.. |image1| image:: /_static/images/en-us_image_0000001647576700.png diff --git a/umn/source/networking/services/intra-cluster_access_clusterip.rst b/umn/source/network/service/clusterip.rst similarity index 96% rename from umn/source/networking/services/intra-cluster_access_clusterip.rst rename to umn/source/network/service/clusterip.rst index 264409b..0a344ff 100644 --- a/umn/source/networking/services/intra-cluster_access_clusterip.rst +++ b/umn/source/network/service/clusterip.rst @@ -2,8 +2,8 @@ .. _cce_10_0011: -Intra-Cluster Access (ClusterIP) -================================ +ClusterIP +========= Scenario -------- @@ -16,7 +16,7 @@ The cluster-internal domain name format is **.\ *`. diff --git a/umn/source/networking/services/headless_service.rst b/umn/source/network/service/headless_service.rst similarity index 100% rename from umn/source/networking/services/headless_service.rst rename to umn/source/network/service/headless_service.rst diff --git a/umn/source/network/service/index.rst b/umn/source/network/service/index.rst new file mode 100644 index 0000000..2f41d05 --- /dev/null +++ b/umn/source/network/service/index.rst @@ -0,0 +1,22 @@ +:original_name: cce_10_0247.html + +.. _cce_10_0247: + +Service +======= + +- :ref:`Overview ` +- :ref:`ClusterIP ` +- :ref:`NodePort ` +- :ref:`LoadBalancer ` +- :ref:`Headless Service ` + +.. 
toctree:: + :maxdepth: 1 + :hidden: + + overview + clusterip + nodeport + loadbalancer/index + headless_service diff --git a/umn/source/networking/services/configuring_health_check_for_multiple_ports.rst b/umn/source/network/service/loadbalancer/configuring_health_check_for_multiple_ports.rst similarity index 93% rename from umn/source/networking/services/configuring_health_check_for_multiple_ports.rst rename to umn/source/network/service/loadbalancer/configuring_health_check_for_multiple_ports.rst index 43bfd16..99a87cc 100644 --- a/umn/source/networking/services/configuring_health_check_for_multiple_ports.rst +++ b/umn/source/network/service/loadbalancer/configuring_health_check_for_multiple_ports.rst @@ -47,15 +47,15 @@ The following is an example of using the **kubernetes.io/elb.health-check-option "delay": "5", "timeout": "10", "max_retries": "3", - "target_service_port": "TCP:1", // (Mandatory) Port for health check specified by spec.ports. The value consists of the protocol and port number, for example, TCP:80. - "monitor_port": "22" // (Optional) Re-specified port for health check. If this parameter is not specified, the service port is used by default. Ensure that the port is in the listening state on the node where the pod is located. Otherwise, the health check result will be affected. + "target_service_port": "TCP:1", + "monitor_port": "22" }, { "protocol": "HTTP", "delay": "5", "timeout": "10", "max_retries": "3", - "path": "/", // Health check URL. This parameter needs to be configured when HTTP is used. + "path": "/", "target_service_port": "TCP:2", "monitor_port": "22" } @@ -110,9 +110,9 @@ The following is an example of using the **kubernetes.io/elb.health-check-option | | | | | | | | | Value options: TCP, UDP, or HTTP | +---------------------+-----------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------+ - | path | No | String | Health check URL. This parameter needs to be configured when the protocol is HTTP. | + | path | No | String | Health check URL. This parameter needs to be configured when the protocol is **HTTP**. | | | | | | | | | | Default value: **/** | | | | | | - | | | | The value can contain 1 to 10000 characters. | + | | | | The value can contain 1 to 10,000 characters. | +---------------------+-----------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/networking/services/loadbalancer.rst b/umn/source/network/service/loadbalancer/creating_a_loadbalancer_service.rst similarity index 77% rename from umn/source/networking/services/loadbalancer.rst rename to umn/source/network/service/loadbalancer/creating_a_loadbalancer_service.rst index b367ec7..1c9b043 100644 --- a/umn/source/networking/services/loadbalancer.rst +++ b/umn/source/network/service/loadbalancer/creating_a_loadbalancer_service.rst @@ -1,23 +1,19 @@ -:original_name: cce_10_0014.html +:original_name: cce_10_0681.html -.. _cce_10_0014: +.. _cce_10_0681: -LoadBalancer -============ - -.. _cce_10_0014__section19854101411508: +Creating a LoadBalancer Service +=============================== Scenario -------- -A workload can be accessed from public networks through a load balancer, which is more secure and reliable than EIP. - -The LoadBalancer access address is in the format of :, for example, **10.117.117.117:80**. 
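For orientation, the following is a minimal sketch of a LoadBalancer Service that reuses an existing shared load balancer. The Service name, selector, and port values are illustrative only; the complete, annotated examples appear later in this section.

.. code-block::

   apiVersion: v1
   kind: Service
   metadata:
     name: nginx
     annotations:
       kubernetes.io/elb.id:           # ELB ID. Replace it with the actual value.
       kubernetes.io/elb.class: union  # Shared load balancer
   spec:
     selector:
       app: nginx
     ports:
     - name: service0
       port: 80           # Listener port on the load balancer
       targetPort: 80     # Container port of the workload
       protocol: TCP
     type: LoadBalancer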
+LoadBalancer Services can access workloads from the public network through ELB, which is more reliable than EIP-based access. The LoadBalancer access address is in the format of *IP address of public network load balancer*:*Access port*, for example, **10.117.117.117:80**. In this access mode, requests are transmitted through an ELB load balancer to a node and then forwarded to the destination pod through the Service. -.. figure:: /_static/images/en-us_image_0000001569022961.png +.. figure:: /_static/images/en-us_image_0000001695736989.png :alt: **Figure 1** LoadBalancer **Figure 1** LoadBalancer @@ -27,7 +23,7 @@ When **CCE Turbo clusters and dedicated load balancers** are used, passthrough n External access requests are directly forwarded from a load balancer to pods. Internal access requests can be forwarded to a pod through a Service. -.. figure:: /_static/images/en-us_image_0000001517903124.png +.. figure:: /_static/images/en-us_image_0000001647417328.png :alt: **Figure 2** Passthrough networking **Figure 2** Passthrough networking @@ -35,35 +31,35 @@ External access requests are directly forwarded from a load balancer to pods. In Constraints ----------- -- LoadBalancer Services allow workloads to be accessed from public networks through **ELB**. This access mode has the following restrictions: +- LoadBalancer Services allow workloads to be accessed from public networks through ELB. This access mode has the following restrictions: - - It is recommended that automatically created load balancers not be used by other resources. Otherwise, these load balancers cannot be completely deleted, causing residual resources. + - Automatically created load balancers should not be used by other resources. Otherwise, these load balancers cannot be completely deleted. - Do not change the listener name for the load balancer in clusters of v1.15 and earlier. Otherwise, the load balancer cannot be accessed. -- After a Service is created, if the affinity setting is switched from the cluster level to the node level, the connection tracing table will not be cleared. You are advised not to modify the Service affinity setting after the Service is created. If you need to modify it, create a Service again. -- If the service affinity is set to the node level (that is, **externalTrafficPolicy** is set to **Local**), the cluster may fail to access the Service by using the ELB address. For details, see :ref:`Why a Cluster Fails to Access Services by Using the ELB Address `. +- After a Service is created, if the affinity setting is switched from the cluster level to the node level, the connection tracing table will not be cleared. You are advised not to modify the Service affinity setting after the Service is created. To modify it, create a Service again. +- If the service affinity is set to the node level (that is, :ref:`externalTrafficPolicy ` is set to **Local**), the cluster may fail to access the Service by using the ELB address. For details, see :ref:`Why a Service Fail to Be Accessed from Within the Cluster `. - CCE Turbo clusters support only cluster-level service affinity. - Dedicated ELB load balancers can be used only in clusters of v1.17 and later. -- Dedicated load balancers must be the network type (TCP/UDP) supporting private networks (with a private IP). If the Service needs to support HTTP, the specifications of dedicated load balancers must use HTTP/HTTPS (application load balancing) in addition to TCP/UDP (network load balancing). 
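The node-level affinity mentioned above corresponds to the **externalTrafficPolicy** field of the Service. The following sketch (illustrative values only, not tied to a specific load balancer) shows where this field is set. With **Local**, traffic is forwarded only to pods on the node that receives it and the client source IP address can be obtained; with **Cluster** (the default), traffic can be forwarded across nodes.

.. code-block::

   apiVersion: v1
   kind: Service
   metadata:
     name: nginx
   spec:
     selector:
       app: nginx
     ports:
     - port: 80
       targetPort: 80
       protocol: TCP
     type: LoadBalancer
     externalTrafficPolicy: Local    # Node-level affinity. Omit this field or set it to Cluster for cluster-level affinity.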
-- If you create a LoadBalancer Service on the CCE console, a random node port is automatically generated. If you use kubectl to create a LoadBalancer Service, a random node port is generated unless you specify one. -- In a CCE cluster, if the cluster-level affinity is configured for a LoadBalancer Service, requests are distributed to the node ports of each node using SNAT when entering the cluster. The number of node ports cannot exceed the number of available node ports on the node. If the Service affinity is at the node level (local), there is no such constraint. In a CCE Turbo cluster, this constraint applies to shared ELB load balancers, but not dedicated ones. You are advised to use dedicated ELB load balancers in CCE Turbo clusters. +- Dedicated load balancers must be of the network type (TCP/UDP) supporting private networks (with a private IP). If the Service needs to support HTTP, the specifications of dedicated load balancers must use HTTP/HTTPS (application load balancing) in addition to TCP/UDP (network load balancing). +- In a CCE cluster, if the cluster-level affinity is configured for a LoadBalancer Service, requests are distributed to the node ports of each node using SNAT when entering the cluster. The number of node ports cannot exceed the number of available node ports on the node. If the service affinity is at the node level (Local), there is no such constraint. In a CCE Turbo cluster, this constraint applies to shared ELB load balancers, but not dedicated ones. Use dedicated ELB load balancers in CCE Turbo clusters. - When the cluster service forwarding (proxy) mode is IPVS, the node IP cannot be configured as the external IP of the Service. Otherwise, the node is unavailable. - In a cluster using the IPVS proxy mode, if the ingress and Service use the same ELB load balancer, the ingress cannot be accessed from the nodes and containers in the cluster because kube-proxy mounts the LoadBalancer Service address to the ipvs-0 bridge. This bridge intercepts the traffic of the load balancer connected to the ingress. You are advised to use different ELB load balancers for the ingress and Service. + Creating a LoadBalancer Service ------------------------------- #. Log in to the CCE console and click the cluster name to access the cluster console. #. Choose **Networking** in the navigation pane and click **Create Service** in the upper right corner. -#. Set parameters. +#. Configure parameters. - **Service Name**: Specify a Service name, which can be the same as the workload name. - - **Access Type**: Select **LoadBalancer**. + - **Service Type**: Select **LoadBalancer**. - **Namespace**: Namespace to which the workload belongs. - - **Service Affinity**: For details, see :ref:`externalTrafficPolicy (Service Affinity) `. + - **Service Affinity**: For details, see :ref:`externalTrafficPolicy (Service Affinity) `. - **Cluster level**: The IP addresses and access ports of all nodes in a cluster can be used to access the workload associated with the Service. Service access will cause performance loss due to route redirection, and the source IP address of the client cannot be obtained. - **Node level**: Only the IP address and access port of the node where the workload is located can access the workload associated with the Service. Service access will not cause performance loss due to route redirection, and the source IP address of the client can be obtained. @@ -74,42 +70,87 @@ Creating a LoadBalancer Service Select the load balancer to interconnect. 
Only load balancers in the same VPC as the cluster are supported. If no load balancer is available, click **Create Load Balancer** to create one on the ELB console. - You can click the edit icon in the row of **Set ELB** to configure load balancer parameters. + The CCE console supports automatic creation of load balancers. Select **Auto create** from the drop-down list box and set the following parameters: - - **Distribution Policy**: Three algorithms are available: weighted round robin, weighted least connections algorithm, or source IP hash. + - **Instance Name**: Enter a load balancer name. + - **Public Access**: If enabled, an EIP with 5 Mbit/s bandwidth will be created. + - **Subnet**, **AZ**, and **Specifications** (available only for dedicated load balancers): Configure the subnet, AZ, and specifications. Currently, only dedicated load balancers of the network type (TCP/UDP) can be automatically created. + + You can click **Edit** in the **Set ELB** area and configure load balancer parameters in the **Set ELB** dialog box. + + - .. _cce_10_0681__li8170555132211: + + **Algorithm**: Three algorithms are available: weighted round robin, weighted least connections algorithm, or source IP hash. .. note:: - **Weighted round robin**: Requests are forwarded to different servers based on their weights, which indicate server processing performance. Backend servers with higher weights receive proportionately more requests, whereas equal-weighted servers receive the same number of requests. This algorithm is often used for short connections, such as HTTP services. - - **Weighted least connections**: In addition to the weight assigned to each server, the number of connections processed by each backend server is also considered. Requests are forwarded to the server with the lowest connections-to-weight ratio. Building on **least connections**, the **weighted least connections** algorithm assigns a weight to each server based on their processing performance. This algorithm is often used for persistent connections, such as database connections. + - **Weighted least connections**: In addition to the weight assigned to each server, the number of connections processed by each backend server is also considered. Requests are forwarded to the server with the lowest connections-to-weight ratio. Building on **least connections**, the **weighted least connections** algorithm assigns a weight to each server based on their processing capability. This algorithm is often used for persistent connections, such as database connections. - **Source IP hash**: The source IP address of each request is calculated using the hash algorithm to obtain a unique hash key, and all backend servers are numbered. The generated key allocates the client to a particular server. This enables requests from different clients to be distributed in load balancing mode and ensures that requests from the same client are forwarded to the same server. This algorithm applies to TCP connections without cookies. - - **Type**: This function is disabled by default. You can select **Source IP address**. Listeners ensure session stickiness based on IP addresses. Requests from the same IP address will be forwarded to the same backend server. + - **Type**: This function is disabled by default. You can select **Source IP address**. Source IP address-based sticky session means that access requests from the same IP address are forwarded to the same backend server. - - **Health Check**: configured for the load balancer. 
When TCP is selected during the :ref:`port settings `, you can choose either TCP or HTTP. When UDP is selected during the :ref:`port settings `, only UDP is supported. By default, the service port (Node Port and container port of the Service) is used for health check. You can also specify another port for health check. After the port is specified, a service port named **cce-healthz** will be added for the Service. + .. note:: - - .. _cce_10_0014__li388800117144: + When the :ref:`distribution policy ` uses the source IP address algorithm, sticky session cannot be set. - **Port Settings** + - .. _cce_10_0681__li15274642132013: + + **Health Check**: Configure health check for the load balancer. + + - **Global health check**: applies only to ports using the same protocol. You are advised to select **Custom health check**. + - **Custom health check**: applies to :ref:`ports ` using different protocols. For details about the YAML definition for custom health check, see :ref:`Configuring Health Check for Multiple Ports `. + + .. _cce_10_0681__table11219123154614: + + .. table:: **Table 1** Health check parameters + + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+===========================================================================================================================================================================================================================================================+ + | Protocol | When the protocol of :ref:`Port ` is set to TCP, the TCP and HTTP are supported. When the protocol of :ref:`Port ` is set to UDP, the UDP is supported. | + | | | + | | - **Check Path** (supported only by the HTTP): specifies the health check URL. The check path must start with a slash (/) and contain 1 to 80 characters. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Port | By default, the service port (Node Port and container port of the Service) is used for health check. You can also specify another port for health check. After the port is specified, a service port named **cce-healthz** will be added for the Service. | + | | | + | | - **Node Port**: If a shared load balancer is used or no ENI instance is associated, the node port is used as the health check port. If this parameter is not specified, a random port is used. The value ranges from 30000 to 32767. | + | | - **Container Port**: When a dedicated load balancer is associated with an ENI instance, the container port is used for health check. The value ranges from 1 to 65535. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Check Period (s) | Specifies the maximum interval between health checks. The value ranges from 1 to 50. 
| + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Timeout (s) | Specifies the maximum timeout duration for each health check. The value ranges from 1 to 50. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Max. Retries | Specifies the maximum number of health check retries. The value ranges from 1 to 10. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + - .. _cce_10_0681__li388800117144: + + **Port** - **Protocol**: protocol used by the Service. - **Service Port**: port used by the Service. The port number ranges from 1 to 65535. - **Container Port**: port on which the workload listens. For example, Nginx uses port 80 by default. + - **Health Check**: If :ref:`Health Check ` is set to **Custom health check**, you can configure health check for ports using different protocols. For details, see :ref:`Table 1 `. - - **Annotation**: The LoadBalancer Service has some advanced CCE functions, which are implemented by annotations. For details, see :ref:`Service Annotations `. When you use kubectl to create a container, annotations will be used. For details, see :ref:`Using kubectl to Create a Service (Using an Existing Load Balancer) ` and :ref:`Using kubectl to Create a Service (Automatically Creating a Load Balancer) `. + .. note:: + + When a LoadBalancer Service is created, a random node port number (NodePort) is automatically generated. + + - **Annotation**: The LoadBalancer Service has some advanced CCE functions, which are implemented by annotations. For details, see :ref:`Using Annotations to Configure Load Balancing `. #. Click **OK**. -.. _cce_10_0014__section1984211714368: +.. _cce_10_0681__section74196215320: Using kubectl to Create a Service (Using an Existing Load Balancer) ------------------------------------------------------------------- -You can set the access type when creating a workload using kubectl. This section uses an Nginx workload as an example to describe how to add a LoadBalancer Service using kubectl. +You can set the Service when creating a workload using kubectl. This section uses an Nginx workload as an example to describe how to add a LoadBalancer Service using kubectl. #. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. -#. Create and edit the **nginx-deployment.yaml** and **nginx-elb-svc.yaml** files. +#. Create the files named **nginx-deployment.yaml** and **nginx-elb-svc.yaml** and edit them. The file names are user-defined. **nginx-deployment.yaml** and **nginx-elb-svc.yaml** are merely example file names. @@ -151,81 +192,101 @@ You can set the access type when creating a workload using kubectl. This section apiVersion: v1 kind: Service metadata: - annotations: - kubernetes.io/elb.id: 5083f225-9bf8-48fa-9c8b-67bd9693c4c0 # ELB ID. Replace it with the actual value. 
- kubernetes.io/elb.class: union # Load balancer type name: nginx + annotations: + kubernetes.io/elb.id: # ELB ID. Replace it with the actual value. + kubernetes.io/elb.class: union # Load balancer type + kubernetes.io/elb.lb-algorithm: ROUND_ROBIN # Load balancer algorithm + kubernetes.io/elb.session-affinity-mode: SOURCE_IP # The sticky session type is source IP address. + kubernetes.io/elb.session-affinity-option: '{"persistence_timeout": "30"}' # Stickiness duration (min) + kubernetes.io/elb.health-check-flag: 'on' # Enable the ELB health check function. + kubernetes.io/elb.health-check-option: '{ + "protocol":"TCP", + "delay":"5", + "timeout":"10", + "max_retries":"3" + }' spec: + selector: + app: nginx ports: - name: service0 port: 80 # Port for accessing the Service, which is also the listener port on the load balancer. protocol: TCP targetPort: 80 # Port used by a Service to access the target container. This port is closely related to the applications running in a container. - selector: - app: nginx + nodePort: 31128 # Port number of the node. If this parameter is not specified, a random port number ranging from 30000 to 32767 is generated. type: LoadBalancer - .. table:: **Table 1** Key parameters + The preceding example uses annotations to implement some advanced functions of load balancing, such as sticky session and health check. For details, see :ref:`Table 2 `. + + In addition to the functions in this example, for more annotations and examples related to advanced functions, see :ref:`Using Annotations to Configure Load Balancing `. + + .. _cce_10_0681__table5352104717398: + + .. table:: **Table 2** annotations parameters +-------------------------------------------+-----------------+----------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Parameter | Mandatory | Type | Description | +===========================================+=================+==========================================================+========================================================================================================================================================================================================================================================================================================+ - | kubernetes.io/elb.class | Yes | String | Select a proper load balancer type as required. | - | | | | | - | | | | The value can be: | - | | | | | - | | | | - **union**: shared load balancer | - | | | | - **performance**: dedicated load balancer, which can be used only in clusters of v1.17 and later. | - +-------------------------------------------+-----------------+----------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.session-affinity-mode | No | String | Listeners ensure session stickiness based on IP addresses. Requests from the same IP address will be forwarded to the same backend server. 
| - | | | | | - | | | | - Disabling sticky session: Do not set this parameter. | - | | | | - Enabling sticky session: Set this parameter to **SOURCE_IP**, indicating that the sticky session is based on the source IP address. | - +-------------------------------------------+-----------------+----------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.session-affinity-option | No | :ref:`Table 2 ` Object | This parameter specifies the sticky session timeout. | - +-------------------------------------------+-----------------+----------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.id | Yes | String | This parameter indicates the ID of a load balancer. The value can contain 1 to 100 characters. | + | kubernetes.io/elb.id | Yes | String | ID of an enhanced load balancer. | | | | | | | | | | Mandatory when an existing load balancer is to be associated. | | | | | | - | | | | **Obtaining the load balancer ID:** | + | | | | **How to obtain**: | | | | | | | | | | On the management console, click **Service List**, and choose **Networking** > **Elastic Load Balance**. Click the name of the target load balancer. On the **Summary** tab page, find and copy the ID. | | | | | | | | | | .. note:: | | | | | | - | | | | The system preferentially interconnects with the load balancer based on the **kubernetes.io/elb.id** field. If this field is not specified, the **spec.loadBalancerIP** field is used (optional and available only in 1.23 and earlier versions). | + | | | | The system preferentially connects to the load balancer based on the **kubernetes.io/elb.id** field. If this field is not specified, the **spec.loadBalancerIP** field is used (optional and available only in 1.23 and earlier versions). | | | | | | | | | | Do not use the **spec.loadBalancerIP** field to connect to the load balancer. This field will be discarded by Kubernetes. For details, see `Deprecation `__. | +-------------------------------------------+-----------------+----------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.subnet-id | None | String | This parameter indicates the ID of the subnet where the cluster is located. The value can contain 1 to 100 characters. | + | kubernetes.io/elb.class | Yes | String | Select a proper load balancer type. | | | | | | - | | | | - Mandatory when a cluster of v1.11.7-r0 or earlier is to be automatically created. | - | | | | - Optional for clusters later than v1.11.7-r0. | + | | | | - **union**: shared load balancer | + | | | | - **performance**: dedicated load balancer, which can be used only in clusters of v1.17 and later. | + | | | | | + | | | | .. 
note:: | + | | | | | + | | | | If a LoadBalancer Service accesses an existing dedicated load balancer, the dedicated load balancer must support TCP/UDP networking. | +-------------------------------------------+-----------------+----------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.lb-algorithm | No | String | This parameter indicates the load balancing algorithm of the backend server group. The default value is **ROUND_ROBIN**. | + | kubernetes.io/elb.lb-algorithm | No | String | Specifies the load balancing algorithm of the backend server group. The default value is **ROUND_ROBIN**. | | | | | | - | | | | Value range: | + | | | | Options: | | | | | | | | | | - **ROUND_ROBIN**: weighted round robin algorithm | | | | | - **LEAST_CONNECTIONS**: weighted least connections algorithm | | | | | - **SOURCE_IP**: source IP hash algorithm | | | | | | - | | | | When the value is **SOURCE_IP**, the weights of backend servers in the server group are invalid. | + | | | | .. note:: | + | | | | | + | | | | If this parameter is set to **SOURCE_IP**, the weight setting (**weight** field) of backend servers bound to the backend server group is invalid, and sticky session cannot be enabled. | + +-------------------------------------------+-----------------+----------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kubernetes.io/elb.session-affinity-mode | No | String | Source IP address-based sticky session is supported. That is, access requests from the same IP address are forwarded to the same backend server. | + | | | | | + | | | | - Disabling sticky session: Do not configure this parameter. | + | | | | - Enabling sticky session: Set this parameter to **SOURCE_IP**, indicating that the sticky session is based on the source IP address. | + | | | | | + | | | | .. note:: | + | | | | | + | | | | When **kubernetes.io/elb.lb-algorithm** is set to **SOURCE_IP** (source IP address algorithm), sticky session cannot be enabled. | + +-------------------------------------------+-----------------+----------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kubernetes.io/elb.session-affinity-option | No | :ref:`Table 3 ` object | Sticky session timeout. 
| +-------------------------------------------+-----------------+----------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | kubernetes.io/elb.health-check-flag | No | String | Whether to enable the ELB health check. | | | | | | | | | | - Enabling health check: Leave blank this parameter or set it to **on**. | | | | | - Disabling health check: Set this parameter to **off**. | | | | | | - | | | | If this parameter is enabled, the :ref:`kubernetes.io/elb.health-check-option ` field must also be specified at the same time. | + | | | | If this parameter is enabled, the :ref:`kubernetes.io/elb.health-check-option ` field must also be specified at the same time. | +-------------------------------------------+-----------------+----------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.health-check-option | No | :ref:`Table 3 ` Object | ELB health check configuration items. | + | kubernetes.io/elb.health-check-option | No | :ref:`Table 4 ` object | ELB health check configuration items. | +-------------------------------------------+-----------------+----------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - .. _cce_10_0014__table43592047133910: + .. _cce_10_0681__table43592047133910: - .. table:: **Table 2** Data structure of the elb.session-affinity-option field + .. table:: **Table 3** Data structure of the **elb.session-affinity-option** field +---------------------+-----------------+-----------------+------------------------------------------------------------------------------------------------------------------------------+ | Parameter | Mandatory | Type | Description | @@ -235,9 +296,9 @@ You can set the access type when creating a workload using kubectl. This section | | | | Value range: 1 to 60. Default value: **60** | +---------------------+-----------------+-----------------+------------------------------------------------------------------------------------------------------------------------------+ - .. _cce_10_0014__table236017471397: + .. _cce_10_0681__table236017471397: - .. table:: **Table 3** Data structure description of the **elb.health-check-option** field + .. table:: **Table 4** Data structure description of the **elb.health-check-option** field +-----------------+-----------------+-----------------+------------------------------------------------------------------------------------+ | Parameter | Mandatory | Type | Description | @@ -256,9 +317,7 @@ You can set the access type when creating a workload using kubectl. 
This section +-----------------+-----------------+-----------------+------------------------------------------------------------------------------------+ | protocol | No | String | Health check protocol. | | | | | | - | | | | Default value: protocol of the associated Service | - | | | | | - | | | | Value options: TCP, UDP, or HTTP | + | | | | Value options: TCP or HTTP | +-----------------+-----------------+-----------------+------------------------------------------------------------------------------------+ | path | No | String | Health check URL. This parameter needs to be configured when the protocol is HTTP. | | | | | | @@ -298,7 +357,7 @@ You can set the access type when creating a workload using kubectl. This section **kubectl get svc** - If information similar to the following is displayed, the access type has been set successfully, and the workload is accessible. + If information similar to the following is displayed, the access type has been set, and the workload is accessible. .. code-block:: @@ -311,21 +370,21 @@ You can set the access type when creating a workload using kubectl. This section The Nginx is accessible. - .. figure:: /_static/images/en-us_image_0000001569182677.png + .. figure:: /_static/images/en-us_image_0000001695736993.png :alt: **Figure 3** Accessing Nginx through the LoadBalancer Service **Figure 3** Accessing Nginx through the LoadBalancer Service -.. _cce_10_0014__section12168131904611: +.. _cce_10_0681__section6422152185311: Using kubectl to Create a Service (Automatically Creating a Load Balancer) -------------------------------------------------------------------------- -You can add a Service when creating a workload using kubectl. This section uses an Nginx workload as an example to describe how to add a LoadBalancer Service using kubectl. +You can set the Service when creating a workload using kubectl. This section uses an Nginx workload as an example to describe how to add a LoadBalancer Service using kubectl. #. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. -#. Create and edit the **nginx-deployment.yaml** and **nginx-elb-svc.yaml** files. +#. Create the files named **nginx-deployment.yaml** and **nginx-elb-svc.yaml** and edit them. The file names are user-defined. **nginx-deployment.yaml** and **nginx-elb-svc.yaml** are merely example file names. @@ -362,7 +421,7 @@ You can add a Service when creating a workload using kubectl. This section uses - The workload protocol is TCP. - Anti-affinity has been configured between pods of the workload. That is, all pods of the workload are deployed on different nodes. For details, see :ref:`Scheduling Policy (Affinity/Anti-affinity) `. - Example of a Service using a shared, public network load balancer: + Example of a Service using a public network shared load balancer: .. code-block:: @@ -371,15 +430,25 @@ You can add a Service when creating a workload using kubectl. 
This section uses metadata: annotations: kubernetes.io/elb.class: union - kubernetes.io/elb.autocreate: - '{ - "type": "public", - "bandwidth_name": "cce-bandwidth-1551163379627", - "bandwidth_chargemode": "bandwidth", - "bandwidth_size": 5, - "bandwidth_sharetype": "PER", - "eip_type": "5_bgp" - }' + kubernetes.io/elb.autocreate: '{ + "type": "public", + "bandwidth_name": "cce-bandwidth-1551163379627", + "bandwidth_chargemode": "bandwidth", + "bandwidth_size": 5, + "bandwidth_sharetype": "PER", + "eip_type": "5_bgp" + }' + kubernetes.io/elb.enterpriseID: '0' # ID of the enterprise project to which the load balancer belongs + kubernetes.io/elb.lb-algorithm: ROUND_ROBIN # Load balancer algorithm + kubernetes.io/elb.session-affinity-mode: SOURCE_IP # The sticky session type is source IP address. + kubernetes.io/elb.session-affinity-option: '{"persistence_timeout": "30"}' # Stickiness duration (min) + kubernetes.io/elb.health-check-flag: 'on' # Enable the ELB health check function. + kubernetes.io/elb.health-check-option: '{ + "protocol":"TCP", + "delay":"5", + "timeout":"10", + "max_retries":"3" + }' labels: app: nginx name: nginx @@ -393,7 +462,7 @@ You can add a Service when creating a workload using kubectl. This section uses app: nginx type: LoadBalancer - Example Service using a public network dedicated load balancer (for clusters of v1.17 and later only): + Example Service using a public network dedicated load balancer (only for clusters of v1.17 and later): .. code-block:: @@ -406,19 +475,29 @@ You can add a Service when creating a workload using kubectl. This section uses namespace: default annotations: kubernetes.io/elb.class: performance - kubernetes.io/elb.autocreate: - '{ - "type": "public", - "bandwidth_name": "cce-bandwidth-1626694478577", - "bandwidth_chargemode": "bandwidth", - "bandwidth_size": 5, - "bandwidth_sharetype": "PER", - "eip_type": "5_bgp", - "available_zone": [ - "" - ], - "l4_flavor_name": "L4_flavor.elb.s1.small" - }' + kubernetes.io/elb.autocreate: '{ + "type": "public", + "bandwidth_name": "cce-bandwidth-1626694478577", + "bandwidth_chargemode": "bandwidth", + "bandwidth_size": 5, + "bandwidth_sharetype": "PER", + "eip_type": "5_bgp", + "available_zone": [ + "" + ], + "l4_flavor_name": "L4_flavor.elb.s1.small" + }' + kubernetes.io/elb.enterpriseID: '0' # ID of the enterprise project to which the load balancer belongs + kubernetes.io/elb.lb-algorithm: ROUND_ROBIN # Load balancer algorithm + kubernetes.io/elb.session-affinity-mode: SOURCE_IP # The sticky session type is source IP address. + kubernetes.io/elb.session-affinity-option: '{"persistence_timeout": "30"}' # Stickiness duration (min) + kubernetes.io/elb.health-check-flag: 'on' # Enable the ELB health check function. + kubernetes.io/elb.health-check-option: '{ + "protocol":"TCP", + "delay":"5", + "timeout":"10", + "max_retries":"3" + }' spec: selector: app: nginx @@ -430,80 +509,82 @@ You can add a Service when creating a workload using kubectl. This section uses protocol: TCP type: LoadBalancer - .. table:: **Table 4** Key parameters + The preceding example uses annotations to implement some advanced functions of load balancing, such as sticky session and health check. For details, see :ref:`Table 5 `. 
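After the Service is created, you can quickly confirm which of these annotations were actually applied to it. The following command is only a sketch and assumes the Service name **nginx** and the **default** namespace from the example above:

.. code-block::

   # Print the annotations recorded on the Service, including the ELB health check and sticky session settings.
   kubectl get service nginx -n default -o jsonpath='{.metadata.annotations}'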
- +-------------------------------------------+-----------------+---------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Mandatory | Type | Description | - +===========================================+=================+===============================================================+=======================================================================================================================================================================================================================================================================================+ - | kubernetes.io/elb.class | Yes | String | Select a proper load balancer type as required. | - | | | | | - | | | | The value can be: | - | | | | | - | | | | - **union**: shared load balancer | - | | | | - **performance**: dedicated load balancer, which can be used only in clusters of v1.17 and later. | - +-------------------------------------------+-----------------+---------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.subnet-id | N/A | String | This parameter indicates the ID of the subnet where the cluster is located. The value can contain 1 to 100 characters. | - | | | | | - | | | | - Mandatory when a cluster of v1.11.7-r0 or earlier is to be automatically created. | - | | | | - Optional for clusters later than v1.11.7-r0. | - +-------------------------------------------+-----------------+---------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.session-affinity-option | No | :ref:`Table 2 ` Object | Sticky session timeout. | - +-------------------------------------------+-----------------+---------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.autocreate | Yes | :ref:`elb.autocreate ` object | Whether to automatically create a load balancer associated with the Service. 
| - | | | | | - | | | | **Example:** | - | | | | | - | | | | - If a public network load balancer will be automatically created, set this parameter to the following value: | - | | | | | - | | | | {"type":"public","bandwidth_name":"cce-bandwidth-1551163379627","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"} | - | | | | | - | | | | - If a private network load balancer will be automatically created, set this parameter to the following value: | - | | | | | - | | | | {"type":"inner","name":"A-location-d-test"} | - +-------------------------------------------+-----------------+---------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.lb-algorithm | No | String | This parameter indicates the load balancing algorithm of the backend server group. The default value is **ROUND_ROBIN**. | - | | | | | - | | | | Value range: | - | | | | | - | | | | - **ROUND_ROBIN**: weighted round robin algorithm | - | | | | - **LEAST_CONNECTIONS**: weighted least connections algorithm | - | | | | - **SOURCE_IP**: source IP hash algorithm | - | | | | | - | | | | When the value is **SOURCE_IP**, the weights of backend servers in the server group are invalid. | - +-------------------------------------------+-----------------+---------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.health-check-flag | No | String | Whether to enable the ELB health check. | - | | | | | - | | | | - Enabling health check: Leave blank this parameter or set it to **on**. | - | | | | - Disabling health check: Set this parameter to **off**. | - | | | | | - | | | | If this parameter is enabled, the :ref:`kubernetes.io/elb.health-check-option ` field must also be specified at the same time. | - +-------------------------------------------+-----------------+---------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.health-check-option | No | :ref:`Table 3 ` Object | ELB health check configuration items. | - +-------------------------------------------+-----------------+---------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.session-affinity-mode | No | String | Listeners ensure session stickiness based on IP addresses. Requests from the same IP address will be forwarded to the same backend server. 
| - | | | | | - | | | | - Disabling sticky session: Do not set this parameter. | - | | | | - Enabling sticky session: Set this parameter to **SOURCE_IP**, indicating that the sticky session is based on the source IP address. | - +-------------------------------------------+-----------------+---------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/elb.session-affinity-option | No | :ref:`Table 2 ` Object | Sticky session timeout. | - +-------------------------------------------+-----------------+---------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kubernetes.io/hws-hostNetwork | No | String | This parameter indicates whether the workload Services use the host network. Setting this parameter to **true** will enable the ELB load balancer to forward requests to the host network. | - | | | | | - | | | | The host network is not used by default. The value can be **true** or **false**. | - +-------------------------------------------+-----------------+---------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | externalTrafficPolicy | No | String | If sticky session is enabled, add this parameter so that requests are transferred to a fixed node. If a LoadBalancer Service with this parameter set to **Local** is created, a client can access the target backend only if the client is installed on the same node as the backend. | - +-------------------------------------------+-----------------+---------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + In addition to the functions in this example, for more annotations and examples related to advanced functions, see :ref:`Using Annotations to Configure Load Balancing `. - .. _cce_10_0014__table939522754617: + .. _cce_10_0681__table133089105019: - .. table:: **Table 5** Data structure of the elb.autocreate field + .. 
table:: **Table 5** annotations parameters + + +-------------------------------------------+-----------------+---------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Type | Description | + +===========================================+=================+===============================================================+============================================================================================================================================================================================+ + | kubernetes.io/elb.class | Yes | String | Select a proper load balancer type. | + | | | | | + | | | | - **union**: shared load balancer | + | | | | - **performance**: dedicated load balancer, which can be used only in clusters of v1.17 and later. | + +-------------------------------------------+-----------------+---------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kubernetes.io/elb.autocreate | Yes | :ref:`elb.autocreate ` object | Whether to automatically create a load balancer associated with the Service. | + | | | | | + | | | | **Example** | + | | | | | + | | | | - If a public network load balancer will be automatically created, set this parameter to the following value: | + | | | | | + | | | | {"type":"public","bandwidth_name":"cce-bandwidth-1551163379627","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"} | + | | | | | + | | | | - If a private network load balancer will be automatically created, set this parameter to the following value: | + | | | | | + | | | | {"type":"inner","name":"A-location-d-test"} | + +-------------------------------------------+-----------------+---------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kubernetes.io/elb.subnet-id | None | String | ID of the subnet where the cluster is located. The value can contain 1 to 100 characters. | + | | | | | + | | | | - Mandatory when a cluster of v1.11.7-r0 or earlier is to be automatically created. | + | | | | - Optional for clusters later than v1.11.7-r0. | + +-------------------------------------------+-----------------+---------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kubernetes.io/elb.lb-algorithm | No | String | Specifies the load balancing algorithm of the backend server group. The default value is **ROUND_ROBIN**. | + | | | | | + | | | | Options: | + | | | | | + | | | | - **ROUND_ROBIN**: weighted round robin algorithm | + | | | | - **LEAST_CONNECTIONS**: weighted least connections algorithm | + | | | | - **SOURCE_IP**: source IP hash algorithm | + | | | | | + | | | | .. 
note:: | + | | | | | + | | | | If this parameter is set to **SOURCE_IP**, the weight setting (**weight** field) of backend servers bound to the backend server group is invalid, and sticky session cannot be enabled. | + +-------------------------------------------+-----------------+---------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kubernetes.io/elb.session-affinity-mode | No | String | Source IP address-based sticky session is supported. That is, access requests from the same IP address are forwarded to the same backend server. | + | | | | | + | | | | - Disabling sticky session: Do not configure this parameter. | + | | | | - Enabling sticky session: Set this parameter to **SOURCE_IP**, indicating that the sticky session is based on the source IP address. | + | | | | | + | | | | .. note:: | + | | | | | + | | | | When **kubernetes.io/elb.lb-algorithm** is set to **SOURCE_IP** (source IP address algorithm), sticky session cannot be enabled. | + +-------------------------------------------+-----------------+---------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kubernetes.io/elb.session-affinity-option | No | :ref:`Table 3 ` object | Sticky session timeout. | + +-------------------------------------------+-----------------+---------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kubernetes.io/elb.health-check-flag | No | String | Whether to enable the ELB health check. | + | | | | | + | | | | - Enabling health check: Leave blank this parameter or set it to **on**. | + | | | | - Disabling health check: Set this parameter to **off**. | + | | | | | + | | | | If this parameter is enabled, the :ref:`kubernetes.io/elb.health-check-option ` field must also be specified at the same time. | + +-------------------------------------------+-----------------+---------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kubernetes.io/elb.health-check-option | No | :ref:`Table 4 ` object | ELB health check configuration items. | + +-------------------------------------------+-----------------+---------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + .. _cce_10_0681__table939522754617: + + .. 
table:: **Table 6** Data structure of the **elb.autocreate** field +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Parameter | Mandatory | Type | Description | +======================+=======================================+==================+==================================================================================================================================================================================================================================================================================================================================================================================+ - | name | No | String | Name of the load balancer that is automatically created. | + | name | No | String | Name of the automatically created load balancer. | | | | | | - | | | | Value range: 1 to 64 characters, including lowercase letters, digits, and underscores (_). The value must start with a lowercase letter and end with a lowercase letter or digit. | + | | | | The value can contain 1 to 64 characters. Only letters, digits, underscores (_), hyphens (-), and periods (.) are allowed. | | | | | | | | | | Default: **cce-lb+service.UID** | +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -516,11 +597,22 @@ You can add a Service when creating a workload using kubectl. This section uses +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | bandwidth_name | Yes for public network load balancers | String | Bandwidth name. The default value is **cce-bandwidth-*****\***. | | | | | | - | | | | Value range: 1 to 64 characters, including lowercase letters, digits, and underscores (_). The value must start with a lowercase letter and end with a lowercase letter or digit. | + | | | | The value can contain 1 to 64 characters. Only letters, digits, underscores (_), hyphens (-), and periods (.) are allowed. 
| +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | bandwidth_chargemode | No | String | Bandwidth mode. | + | | | | | + | | | | - **bandwidth**: billed by bandwidth | + | | | | - **traffic**: billed by traffic | + | | | | | + | | | | Default: **bandwidth** | +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_size | Yes for public network load balancers | Integer | Bandwidth size. The default value is 1 to 2000 Mbit/s. Set this parameter based on the bandwidth range allowed in your region. | + | bandwidth_size | Yes for public network load balancers | Integer | Bandwidth size. The default value is 1 to 2000 Mbit/s. Configure this parameter based on the bandwidth range allowed in your region. | + | | | | | + | | | | The minimum increment for bandwidth adjustment varies depending on the bandwidth range. | + | | | | | + | | | | - The minimum increment is 1 Mbit/s if the allowed bandwidth does not exceed 300 Mbit/s. | + | | | | - The minimum increment is 50 Mbit/s if the allowed bandwidth ranges from 300 Mbit/s to 1000 Mbit/s. | + | | | | - The minimum increment is 500 Mbit/s if the allowed bandwidth exceeds 1000 Mbit/s. | +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | bandwidth_sharetype | Yes for public network load balancers | String | Bandwidth sharing mode. | | | | | | @@ -541,7 +633,7 @@ You can add a Service when creating a workload using kubectl. This section uses +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | l7_flavor_name | No | String | Flavor name of the layer-7 load balancer. | | | | | | - | | | | This parameter is available only for dedicated load balancers. | + | | | | This parameter is available only for dedicated load balancers. The value of this parameter must be the same as that of **l4_flavor_name**, that is, both are elastic specifications or fixed specifications. 
| +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | elb_virsubnet_ids | No | Array of strings | Subnet where the backend server of the load balancer is located. If this parameter is left blank, the default cluster subnet is used. Load balancers occupy different number of subnet IP addresses based on their specifications. Therefore, you are not advised to use the subnet CIDR blocks of other resources (such as clusters and nodes) as the load balancer CIDR block. | | | | | | @@ -566,7 +658,7 @@ You can add a Service when creating a workload using kubectl. This section uses deployment/nginx created - **kubectl get po** + **kubectl get pod** If information similar to the following is displayed, the workload is running. @@ -587,7 +679,7 @@ You can add a Service when creating a workload using kubectl. This section uses **kubectl get svc** - If information similar to the following is displayed, the access type has been set successfully, and the workload is accessible. + If information similar to the following is displayed, the access type has been set, and the workload is accessible. .. code-block:: @@ -595,86 +687,12 @@ You can add a Service when creating a workload using kubectl. This section uses kubernetes ClusterIP 10.247.0.1 443/TCP 3d nginx LoadBalancer 10.247.130.196 10.78.42.242 80:31540/TCP 51s -#. Enter the URL in the address box of the browser, for example, **10.78.42.242:80**. **10.78.42.242** indicates the IP address of the load balancer, and **80** indicates the access port displayed on the CCE console. +#. Enter the URL in the address box of the browser, for example, **10.**\ *XXX.XXX.XXX*\ **:80**. **10.**\ *XXX.XXX.XXX* indicates the IP address of the load balancer, and **80** indicates the access port displayed on the CCE console. The Nginx is accessible. - .. figure:: /_static/images/en-us_image_0000001517743552.png + .. figure:: /_static/images/en-us_image_0000001647576596.png :alt: **Figure 4** Accessing Nginx through the LoadBalancer Service **Figure 4** Accessing Nginx through the LoadBalancer Service - -ELB Forwarding --------------- - -After a Service of the LoadBalancer type is created, you can view the listener forwarding rules of the load balancer on the ELB console. - -You can find that a listener is created for the load balancer. Its backend server is the node where the pod is located, and the backend server port is the NodePort (node port) of the Service. When traffic passes through ELB, it is forwarded to *IP address of the node where the pod is located:Node port*. That is, the Service is accessed and then the pod is accessed, which is the same as that described in :ref:`Scenario `. - -In the passthrough networking scenario (CCE Turbo + dedicated load balancer), after a LoadBalancer Service is created, you can view the listener forwarding rules of the load balancer on the ELB console. - -You can find that a listener is created for the load balancer. The backend server address is the IP address of the pod, and the service port is the container port. This is because the pod uses an ENI or sub-ENI. 
When traffic passes through the load balancer, it directly forwards the traffic to the pod. This is the same as that described in :ref:`Scenario `. - -.. _cce_10_0014__section52631714117: - -Why a Cluster Fails to Access Services by Using the ELB Address ---------------------------------------------------------------- - -If the service affinity of a LoadBalancer Service is set to the node level, that is, the value of **externalTrafficPolicy** is **Local**, the ELB address may fail to be accessed from the cluster (specifically, nodes or containers). Information similar to the following is displayed: - -.. code-block:: - - upstream connect error or disconnect/reset before headers. reset reason: connection failure - -This is because when the LoadBalancer Service is created, kube-proxy adds the ELB access address as the external IP to iptables or IPVS. If a client initiates a request to access the ELB address from inside the cluster, the address is considered as the external IP address of the service and is directly forwarded by kube-proxy without passing through the ELB outside the cluster. - -When the value of **externalTrafficPolicy** is **Local**, the situation varies according to the container network model and service forwarding mode. The details are as follows: - -+---------------------------------------------------------------------------+-----------------------------+---------------------------------------------------------------------+-------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+ -| Server | Client | Container Tunnel Network Cluster (IPVS) | VPC Network Cluster (IPVS) | Container Tunnel Network Cluster (iptables) | VPC Network Cluster (iptables) | -+---------------------------------------------------------------------------+-----------------------------+---------------------------------------------------------------------+-------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+ -| NodePort Service | Same node | OK. The node where the pod runs is accessible, not any other nodes. | OK. The node where the pod runs is accessible. | OK. The node where the pod runs is accessible. | OK. The node where the pod runs is accessible. | -+---------------------------------------------------------------------------+-----------------------------+---------------------------------------------------------------------+-------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+ -| | Cross-node | OK. The node where the pod runs is accessible, not any other nodes. | OK. The node where the pod runs is accessible. | OK. The node where the pod runs is accessible by visiting the node IP + port, not by any other ways. | OK. The node where the pod runs is accessible by visiting the node IP + port, not by any other ways. 
| -+---------------------------------------------------------------------------+-----------------------------+---------------------------------------------------------------------+-------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+ -| | Containers on the same node | OK. The node where the pod runs is accessible, not any other nodes. | OK. The node where the pod runs is not accessible. | OK. The node where the pod runs is accessible. | OK. The node where the pod runs is not accessible. | -+---------------------------------------------------------------------------+-----------------------------+---------------------------------------------------------------------+-------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+ -| | Containers across nodes | OK. The node where the pod runs is accessible, not any other nodes. | OK. The node where the pod runs is accessible. | OK. The node where the pod runs is accessible. | OK. The node where the pod runs is accessible. | -+---------------------------------------------------------------------------+-----------------------------+---------------------------------------------------------------------+-------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+ -| LoadBalancer Service using a dedicated load balancer | Same node | Accessible for public networks, not private networks. | Accessible for public networks, not private networks. | Accessible for public networks, not private networks. | Accessible for public networks, not private networks. | -+---------------------------------------------------------------------------+-----------------------------+---------------------------------------------------------------------+-------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+ -| | Containers on the same node | Accessible for public networks, not private networks. | Accessible for public networks, not private networks. | Accessible for public networks, not private networks. | Accessible for public networks, not private networks. | -+---------------------------------------------------------------------------+-----------------------------+---------------------------------------------------------------------+-------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+ -| Local Service of the nginx-ingress add-on using a dedicated load balancer | Same node | Accessible for public networks, not private networks. | Accessible for public networks, not private networks. | Accessible for public networks, not private networks. 
| Accessible for public networks, not private networks. | -+---------------------------------------------------------------------------+-----------------------------+---------------------------------------------------------------------+-------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+ -| | Containers on the same node | Accessible for public networks, not private networks. | Accessible for public networks, not private networks. | Accessible for public networks, not private networks. | Accessible for public networks, not private networks. | -+---------------------------------------------------------------------------+-----------------------------+---------------------------------------------------------------------+-------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+ - -The following methods can be used to solve this problem: - -- (**Recommended**) In the cluster, use the ClusterIP Service or service domain name for access. - -- Set **externalTrafficPolicy** of the Service to **Cluster**, which means cluster-level service affinity. Note that this affects source address persistence. - - .. code-block:: - - apiVersion: v1 - kind: Service - metadata: - annotations: - kubernetes.io/elb.class: union - kubernetes.io/elb.autocreate: '{"type":"public","bandwidth_name":"cce-bandwidth","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"}' - labels: - app: nginx - name: nginx - spec: - externalTrafficPolicy: Cluster - ports: - - name: service0 - port: 80 - protocol: TCP - targetPort: 80 - selector: - app: nginx - type: LoadBalancer diff --git a/umn/source/workloads/configuring_a_container/enabling_icmp_security_group_rules.rst b/umn/source/network/service/loadbalancer/enabling_icmp_security_group_rules.rst similarity index 100% rename from umn/source/workloads/configuring_a_container/enabling_icmp_security_group_rules.rst rename to umn/source/network/service/loadbalancer/enabling_icmp_security_group_rules.rst diff --git a/umn/source/network/service/loadbalancer/enabling_passthrough_networking_for_loadbalancer_services.rst b/umn/source/network/service/loadbalancer/enabling_passthrough_networking_for_loadbalancer_services.rst new file mode 100644 index 0000000..da32f7f --- /dev/null +++ b/umn/source/network/service/loadbalancer/enabling_passthrough_networking_for_loadbalancer_services.rst @@ -0,0 +1,156 @@ +:original_name: cce_10_0355.html + +.. _cce_10_0355: + +Enabling Passthrough Networking for LoadBalancer Services +========================================================= + +Challenges +---------- + +A Kubernetes cluster can publish applications running on a group of pods as Services, which provide unified layer-4 access entries. For a Loadbalancer Service, kube-proxy configures the LoadbalanceIP in **status** of the Service to the local forwarding rule of the node by default. When a pod accesses the load balancer from within the cluster, the traffic is forwarded within the cluster instead of being forwarded by the load balancer. + +kube-proxy is responsible for intra-cluster forwarding. 
kube-proxy has two forwarding modes: iptables and IPVS. iptables is a simple polling forwarding mode. IPVS has multiple forwarding modes but it requires modifying the startup parameters of kube-proxy. Compared with iptables and IPVS, load balancers provide more flexible forwarding policies as well as health check capabilities. + +Solution +-------- + +CCE supports passthrough networking. You can configure the **annotation** of **kubernetes.io/elb.pass-through** for the Loadbalancer Service. Intra-cluster access to the Service load balancer address is then forwarded to backend pods by the load balancer. + + +.. figure:: /_static/images/en-us_image_0000001695736965.png + :alt: **Figure 1** Passthrough networking illustration + + **Figure 1** Passthrough networking illustration + +- CCE clusters + + When a LoadBalancer Service is accessed within the cluster, the access is forwarded to the backend pods using iptables/IPVS by default. + + When a LoadBalancer Service (configured with elb.pass-through) is accessed within the cluster, the access is first forwarded to the load balancer, then the nodes, and finally to the backend pods using iptables/IPVS. + +- CCE Turbo clusters + + When a LoadBalancer Service is accessed within the cluster, the access is forwarded to the backend pods using iptables/IPVS by default. + + When a LoadBalancer Service (configured with elb.pass-through) is accessed within the cluster, the access is first forwarded to the load balancer, and then to the pods. + +Notes and Constraints +--------------------- + +- After passthrough networking is configured for a dedicated load balancer, containers on the node where the workload runs cannot be accessed through the Service. +- Passthrough networking is not supported for clusters of v1.15 or earlier. +- In IPVS network mode, the pass-through settings of Service connected to the same ELB must be the same. + +Procedure +--------- + +This section describes how to create a Deployment using an Nginx image and create a Service with passthrough networking enabled. + +#. Use the Nginx image to create a Deployment. + + .. code-block:: + + apiVersion: apps/v1 + kind: Deployment + metadata: + name: nginx + spec: + replicas: 2 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - image: nginx:latest + name: container-0 + resources: + limits: + cpu: 100m + memory: 200Mi + requests: + cpu: 100m + memory: 200Mi + imagePullSecrets: + - name: default-secret + +#. Create a LoadBalancer Service and configure **kubernetes.io/elb.pass-through** to **true**. + + .. code-block:: + + apiVersion: v1 + kind: Service + metadata: + annotations: + kubernetes.io/elb.pass-through: "true" + kubernetes.io/elb.class: union + kubernetes.io/elb.autocreate: '{"type":"public","bandwidth_name":"cce-bandwidth","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"}' + labels: + app: nginx + name: nginx + spec: + externalTrafficPolicy: Local + ports: + - name: service0 + port: 80 + protocol: TCP + targetPort: 80 + selector: + app: nginx + type: LoadBalancer + + A shared load balancer named **james** is automatically created. Use **kubernetes.io/elb.subnet-id** to specify the VPC subnet where the load balancer is located. The load balancer and the cluster must be in the same VPC. + +Verification +------------ + +Check the ELB load balancer corresponding to the created Service. The load balancer name is **james**. 
The number of ELB connections is **0**, as shown in the following figure. + +|image1| + +Use kubectl to connect to the cluster, go to an Nginx container, and access the ELB address. The access is successful. + +.. code-block:: + + # kubectl get pod + NAME READY STATUS RESTARTS AGE + nginx-7c4c5cc6b5-vpncx 1/1 Running 0 9m47s + nginx-7c4c5cc6b5-xj5wl 1/1 Running 0 9m47s + # kubectl exec -it nginx-7c4c5cc6b5-vpncx -- /bin/sh + # curl 120.46.141.192 + + + + Welcome to nginx! + + + +

+    <body>
+    <h1>Welcome to nginx!</h1>
+    <p>If you see this page, the nginx web server is successfully installed and
+    working. Further configuration is required.</p>
+
+    <p>For online documentation and support please refer to
+    <a href="http://nginx.org/">nginx.org</a>.<br/>
+    Commercial support is available at
+    <a href="http://nginx.com/">nginx.com</a>.</p>
+
+    <p><em>Thank you for using nginx.</em></p>

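Before checking the ELB monitoring data, you can optionally repeat the in-cluster access test from a temporary client pod rather than an existing Nginx pod. The following command is only a sketch: the pod name, the **curlimages/curl** image, and the load balancer address **120.46.141.192** taken from the example above are illustrative and should be replaced with values from your environment.

.. code-block::

   # Start a one-off pod, request the load balancer address, and print only the HTTP status code.
   kubectl run curl-test --image=curlimages/curl --rm -it --restart=Never --command -- \
     curl -s -o /dev/null -w "%{http_code}\n" http://120.46.141.192

An HTTP **200** response indicates that in-cluster access to the load balancer address still works after passthrough networking is enabled.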
+ + + +Wait for a period of time and view the ELB monitoring data. A new access connection is created for the ELB, indicating that the access passes through the ELB load balancer as expected. + +|image2| + +.. |image1| image:: /_static/images/en-us_image_0000001647576552.png +.. |image2| image:: /_static/images/en-us_image_0000001647417300.png diff --git a/umn/source/network/service/loadbalancer/index.rst b/umn/source/network/service/loadbalancer/index.rst new file mode 100644 index 0000000..b954685 --- /dev/null +++ b/umn/source/network/service/loadbalancer/index.rst @@ -0,0 +1,24 @@ +:original_name: cce_10_0014.html + +.. _cce_10_0014: + +LoadBalancer +============ + +- :ref:`Creating a LoadBalancer Service ` +- :ref:`Using Annotations to Configure Load Balancing ` +- :ref:`Service Using HTTP ` +- :ref:`Configuring Health Check for Multiple Ports ` +- :ref:`Enabling Passthrough Networking for LoadBalancer Services ` +- :ref:`Enabling ICMP Security Group Rules ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + creating_a_loadbalancer_service + using_annotations_to_configure_load_balancing + service_using_http + configuring_health_check_for_multiple_ports + enabling_passthrough_networking_for_loadbalancer_services + enabling_icmp_security_group_rules diff --git a/umn/source/network/service/loadbalancer/service_using_http.rst b/umn/source/network/service/loadbalancer/service_using_http.rst new file mode 100644 index 0000000..55b86da --- /dev/null +++ b/umn/source/network/service/loadbalancer/service_using_http.rst @@ -0,0 +1,82 @@ +:original_name: cce_10_0683.html + +.. _cce_10_0683: + +Service Using HTTP +================== + +Constraints +----------- + +- Only clusters of v1.19.16 or later support HTTP. + +- Do not connect the ingress and Service that uses HTTP to the same listener of the same load balancer. Otherwise, a port conflict occurs. + +- Layer-7 routing of ELB can be enabled for Services. Both shared and dedicated ELB load balancers can be interconnected. + + Restrictions on dedicated ELB load balancers are as follows: + + - To interconnect with an existing dedicated load balancer, the load balancer flavor **must support both the layer-4 and layer-7 routing**. Otherwise, the load balancer will not work as expected. + - If you use an automatically created load balancer, you cannot use the CCE console to automatically create a layer-7 dedicated load balancer. Instead, you can use YAML to create a layer-7 dedicated load balancer, use both the layer-4 and layer-7 capabilities of the exclusive ELB instance (that is, specify the layer-4 and layer-7 flavors in the annotation of kubernetes.io/elb.autocreate). + + +Service Using HTTP +------------------ + +The following annotations need to be added: + +- **kubernetes.io/elb.protocol-port**: "https:443,http:80" + + The value of **protocol-port** must be the same as the port in the **spec.ports** field of the Service. The format is *Protocol:Port*. The port matches the one in the **service.spec.ports** field and is released as the corresponding protocol. + +- **kubernetes.io/elb.cert-id**: "17e3b4f4bc40471c86741dc3aa211379" + + **cert-id** indicates the certificate ID in ELB certificate management. When **https** is configured for **protocol-port**, the certificate of the ELB listener will be set to the **cert-id** certificate. When multiple HTTPS services are released, the same certificate is used. + +The following is a configuration example. The two ports in **spec.ports** correspond to those in **kubernetes.io/elb.protocol-port**. 
Ports 443 and 80 are enabled for HTTPS and HTTP requests, respectively. + +.. code-block:: + + apiVersion: v1 + kind: Service + metadata: + annotations: + # When an ELB load balancer is automatically created, both layer-4 and layer-7 flavors need to be specified. + kubernetes.io/elb.autocreate: ' + { + "type": "public", + "bandwidth_name": "cce-bandwidth-1634816602057", + "bandwidth_chargemode": "bandwidth", + "bandwidth_size": 5, + "bandwidth_sharetype": "PER", + "eip_type": "5_bgp", + "available_zone": [ + "" + ], + "l7_flavor_name": "L7_flavor.elb.s2.small" + }' + kubernetes.io/elb.class: performance + kubernetes.io/elb.protocol-port: "https:443,http:80" + kubernetes.io/elb.cert-id: "17e3b4f4bc40471c86741dc3aa211379" + labels: + app: nginx + name: test + name: test + namespace: default + spec: + ports: + - name: cce-service-0 + port: 443 + protocol: TCP + targetPort: 80 + - name: cce-service-1 + port: 80 + protocol: TCP + targetPort: 80 + selector: + app: nginx + version: v1 + sessionAffinity: None + type: LoadBalancer + +Use the preceding example configurations to create a Service. In the new ELB load balancer, you can see that the listeners on ports 443 and 80 are created. diff --git a/umn/source/network/service/loadbalancer/using_annotations_to_configure_load_balancing.rst b/umn/source/network/service/loadbalancer/using_annotations_to_configure_load_balancing.rst new file mode 100644 index 0000000..2d73ec0 --- /dev/null +++ b/umn/source/network/service/loadbalancer/using_annotations_to_configure_load_balancing.rst @@ -0,0 +1,601 @@ +:original_name: cce_10_0385.html + +.. _cce_10_0385: + +Using Annotations to Configure Load Balancing +============================================= + +You can add annotations to a YAML file to use some CCE advanced functions. This section describes the available annotations when a LoadBalancer service is created. + +- :ref:`Interconnection with ELB ` +- :ref:`Sticky Session ` +- :ref:`Health Check ` +- :ref:`HTTP Protocol ` +- :ref:`Dynamic Adjustment of the Weight of the Backend ECS ` +- :ref:`Pass-through Capability ` +- :ref:`Whitelist ` +- :ref:`Host Network ` + +.. _cce_10_0385__section584135019388: + +Interconnection with ELB +------------------------ + +.. table:: **Table 1** Annotations for interconnecting with ELB + + +--------------------------------+----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------+ + | Parameter | Type | Description | Supported Cluster Version | + +================================+====================================================+========================================================================================================================================================================================================================================================================================================+================================================+ + | kubernetes.io/elb.class | String | Select a proper load balancer type. | v1.9 or later | + | | | | | + | | | - **union**: shared load balancer | | + | | | - **performance**: dedicated load balancer, which can be used only in clusters of v1.17 and later. 
| | + +--------------------------------+----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------+ + | kubernetes.io/elb.id | String | Mandatory **when an existing load balancer is to be associated**. | v1.9 or later | + | | | | | + | | | ID of a load balancer. | | + | | | | | + | | | **How to obtain**: | | + | | | | | + | | | On the management console, click **Service List**, and choose **Networking** > **Elastic Load Balance**. Click the name of the target load balancer. On the **Summary** tab page, find and copy the ID. | | + | | | | | + | | | .. note:: | | + | | | | | + | | | The system preferentially connects to the load balancer based on the **kubernetes.io/elb.id** field. If this field is not specified, the **spec.loadBalancerIP** field is used (optional and available only in 1.23 and earlier versions). | | + | | | | | + | | | Do not use the **spec.loadBalancerIP** field to connect to the load balancer. This field will be discarded by Kubernetes. For details, see `Deprecation `__. | | + +--------------------------------+----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------+ + | kubernetes.io/elb.autocreate | :ref:`Table 9 ` | Mandatory **when load balancers are automatically created**. | v1.9 or later | + | | | | | + | | | **Example:** | | + | | | | | + | | | - If a public network load balancer will be automatically created, set this parameter to the following value: | | + | | | | | + | | | {"type":"public","bandwidth_name":"cce-bandwidth-1551163379627","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"} | | + | | | | | + | | | - If a private network load balancer will be automatically created, set this parameter to the following value: | | + | | | | | + | | | {"type":"inner","name":"A-location-d-test"} | | + +--------------------------------+----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------+ + | kubernetes.io/elb.subnet-id | String | Optional **when load balancers are automatically created**. | Mandatory for versions earlier than v1.11.7-r0 | + | | | | | + | | | ID of the subnet where the cluster is located. The value can contain 1 to 100 characters. | Discarded in versions later than v1.11.7-r0 | + | | | | | + | | | - Mandatory when a cluster of v1.11.7-r0 or earlier is to be automatically created. | | + | | | - Optional for clusters later than v1.11.7-r0. 
| | + +--------------------------------+----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------+ + | kubernetes.io/elb.lb-algorithm | String | Specifies the load balancing algorithm of the backend server group. The default value is **ROUND_ROBIN**. | v1.9 or later | + | | | | | + | | | Options: | | + | | | | | + | | | - **ROUND_ROBIN**: weighted round robin algorithm | | + | | | - **LEAST_CONNECTIONS**: weighted least connections algorithm | | + | | | - **SOURCE_IP**: source IP hash algorithm | | + | | | | | + | | | .. note:: | | + | | | | | + | | | If this parameter is set to **SOURCE_IP**, the weight setting (**weight** field) of backend servers bound to the backend server group is invalid, and sticky session cannot be enabled. | | + +--------------------------------+----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------+ + +The following shows how to use the preceding annotations: + +- Associating an existing load balancer. For details, see :ref:`Using kubectl to Create a Service (Using an Existing Load Balancer) `. + + .. code-block:: + + apiVersion: v1 + kind: Service + metadata: + name: nginx + annotations: + kubernetes.io/elb.id: # ELB ID. Replace it with the actual value. + kubernetes.io/elb.class: performance # Load balancer type + kubernetes.io/elb.lb-algorithm: ROUND_ROBIN # Load balancer algorithm + spec: + selector: + app: nginx + ports: + - name: service0 + port: 80 + protocol: TCP + targetPort: 80 + type: LoadBalancer + +- Automatically creating a load balancer. For details, see :ref:`Using kubectl to Create a Service (Automatically Creating a Load Balancer) `. + + Shared load balancer: + + .. code-block:: + + apiVersion: v1 + kind: Service + metadata: + annotations: + kubernetes.io/elb.class: union + kubernetes.io/elb.autocreate: '{ + "type": "public", + "bandwidth_name": "cce-bandwidth-1551163379627", + "bandwidth_chargemode": "bandwidth", + "bandwidth_size": 5, + "bandwidth_sharetype": "PER", + "eip_type": "5_bgp" + }' + kubernetes.io/elb.enterpriseID: '0' # ID of the enterprise project to which the load balancer belongs + kubernetes.io/elb.lb-algorithm: ROUND_ROBIN # Load balancer algorithm + labels: + app: nginx + name: nginx + spec: + ports: + - name: service0 + port: 80 + protocol: TCP + targetPort: 80 + selector: + app: nginx + type: LoadBalancer + + Dedicated load balancer: + + .. 
code-block:: + + apiVersion: v1 + kind: Service + metadata: + name: nginx + labels: + app: nginx + namespace: default + annotations: + kubernetes.io/elb.class: performance + kubernetes.io/elb.autocreate: '{ + "type": "public", + "bandwidth_name": "cce-bandwidth-1626694478577", + "bandwidth_chargemode": "bandwidth", + "bandwidth_size": 5, + "bandwidth_sharetype": "PER", + "eip_type": "5_bgp", + "available_zone": [ + "" + ], + "l4_flavor_name": "L4_flavor.elb.s1.small" + }' + kubernetes.io/elb.enterpriseID: '0' # ID of the enterprise project to which the load balancer belongs + kubernetes.io/elb.lb-algorithm: ROUND_ROBIN # Load balancer algorithm + spec: + selector: + app: nginx + ports: + - name: cce-service-0 + targetPort: 80 + nodePort: 0 + port: 80 + protocol: TCP + type: LoadBalancer + +.. _cce_10_0385__section1370139104012: + +Sticky Session +-------------- + +.. table:: **Table 2** Annotations for sticky session + + +-------------------------------------------+---------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------+ + | Parameter | Type | Description | Supported Cluster Version | + +===========================================+===================================================+==================================================================================================================================================+===========================+ + | kubernetes.io/elb.session-affinity-mode | String | Source IP address-based sticky session is supported. That is, access requests from the same IP address are forwarded to the same backend server. | v1.9 or later | + | | | | | + | | | - Disabling sticky session: Do not configure this parameter. | | + | | | - Enabling sticky session: Set this parameter to **SOURCE_IP**, indicating that the sticky session is based on the source IP address. | | + | | | | | + | | | .. note:: | | + | | | | | + | | | When **kubernetes.io/elb.lb-algorithm** is set to **SOURCE_IP** (source IP address algorithm), sticky session cannot be enabled. | | + +-------------------------------------------+---------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------+ + | kubernetes.io/elb.session-affinity-option | :ref:`Table 12 ` | Sticky session timeout. | v1.9 or later | + +-------------------------------------------+---------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------+ + +The following shows how to use the preceding annotations: + +.. code-block:: + + apiVersion: v1 + kind: Service + metadata: + name: nginx + annotations: + kubernetes.io/elb.id: # ELB ID. Replace it with the actual value. + kubernetes.io/elb.class: union # Load balancer type + kubernetes.io/elb.session-affinity-mode: SOURCE_IP # The sticky session type is source IP address. + kubernetes.io/elb.session-affinity-option: '{"persistence_timeout": "30"}' # Stickiness duration (min) + spec: + selector: + app: nginx + ports: + - name: service0 + port: 80 + protocol: TCP + targetPort: 80 + type: LoadBalancer + +.. _cce_10_0385__section1327831714595: + +Health Check +------------ + +.. 
table:: **Table 3** Annotations for health check + + +----------------------------------------+----------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------+ + | Parameter | Type | Description | Supported Cluster Version | + +========================================+====================================================+==================================================================================================================================================================+===========================+ + | kubernetes.io/elb.health-check-flag | String | Whether to enable the ELB health check. | v1.9 or later | + | | | | | + | | | - Enabling health check: Leave blank this parameter or set it to **on**. | | + | | | - Disabling health check: Set this parameter to **off**. | | + | | | | | + | | | If this parameter is enabled, the :ref:`kubernetes.io/elb.health-check-option ` field must also be specified at the same time. | | + +----------------------------------------+----------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------+ + | kubernetes.io/elb.health-check-option | :ref:`Table 10 ` | ELB health check configuration items. | v1.9 or later | + +----------------------------------------+----------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------+ + | kubernetes.io/elb.health-check-options | :ref:`Table 11 ` | ELB health check configuration item. Each Service port can be configured separately, and you can configure only some ports. | v1.19.16-r5 or later | + | | | | | + | | | .. note:: | v1.21.8-r0 or later | + | | | | | + | | | **kubernetes.io/elb.health-check-option** and **kubernetes.io/elb.health-check-options** cannot be configured at the same time. | v1.23.6-r0 or later | + | | | | | + | | | | v1.25.2-r0 or later | + +----------------------------------------+----------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------+ + +- The following shows how to use **kubernetes.io/elb.health-check-option**: + + .. code-block:: + + apiVersion: v1 + kind: Service + metadata: + name: nginx + annotations: + kubernetes.io/elb.id: # ELB ID. Replace it with the actual value. + kubernetes.io/elb.class: union # Load balancer type + kubernetes.io/elb.health-check-flag: 'on' # Enable the ELB health check function. + kubernetes.io/elb.health-check-option: '{ + "protocol":"TCP", + "delay":"5", + "timeout":"10", + "max_retries":"3" + }' + spec: + selector: + app: nginx + ports: + - name: service0 + port: 80 + protocol: TCP + targetPort: 80 + type: LoadBalancer + +- For details about how to use **kubernetes.io/elb.health-check-options**, see :ref:`Configuring Health Check for Multiple Ports `. + +.. _cce_10_0385__section12416195865818: + +HTTP Protocol +------------- + +.. 
table:: **Table 4** Annotations for using HTTP protocols + + +---------------------------------+--------+--------------------------------------------------------------+---------------------------+ + | Parameter | Type | Description | Supported Cluster Version | + +=================================+========+==============================================================+===========================+ + | kubernetes.io/elb.protocol-port | String | Layer-7 forwarding configuration port used by the Service. | v1.19.16 or later | + +---------------------------------+--------+--------------------------------------------------------------+---------------------------+ + | kubernetes.io/elb.cert-id | String | HTTP certificate used by the Service for Layer-7 forwarding. | v1.19.16 or later | + +---------------------------------+--------+--------------------------------------------------------------+---------------------------+ + +For details about the application scenarios, see :ref:`Service Using HTTP `. + +.. _cce_10_0385__section3712956145815: + +Dynamic Adjustment of the Weight of the Backend ECS +--------------------------------------------------- + +.. table:: **Table 5** Annotations for dynamically adjusting the weight of the backend ECS + + +-----------------------------------+-----------------+-------------------------------------------------------------------------------------------------------------------------------------+---------------------------+ + | Parameter | Type | Description | Supported Cluster Version | + +===================================+=================+=====================================================================================================================================+===========================+ + | kubernetes.io/elb.adaptive-weight | String | Dynamically adjusts the weight of the load balancer backend ECS based on pods. The requests received by each pod are more balanced. | v1.21 or later | + | | | | | + | | | - **true**: enabled | | + | | | - **false**: disabled | | + | | | | | + | | | This parameter applies only to clusters of v1.21 or later and is invalid in passthrough networking. | | + +-----------------------------------+-----------------+-------------------------------------------------------------------------------------------------------------------------------------+---------------------------+ + +The following shows how to use the preceding annotations: + +.. code-block:: + + apiVersion: v1 + kind: Service + metadata: + name: nginx + annotations: + kubernetes.io/elb.id: # ELB ID. Replace it with the actual value. + kubernetes.io/elb.class: union # Load balancer type + kubernetes.io/elb.adaptive-weight: 'true' # Enable dynamic adjustment of the weight of the backend ECS. + spec: + selector: + app: nginx + ports: + - name: service0 + port: 80 + protocol: TCP + targetPort: 80 + type: LoadBalancer + +.. _cce_10_0385__section5456195255814: + +Pass-through Capability +----------------------- + +.. 
table:: **Table 6** Annotations for pass-through capability + + +--------------------------------+--------+--------------------------------------------------------------------------------------------------------+---------------------------+ + | Parameter | Type | Description | Supported Cluster Version | + +================================+========+========================================================================================================+===========================+ + | kubernetes.io/elb.pass-through | String | Whether the access requests from within the cluster to the Service pass through the ELB load balancer. | v1.19 or later | + +--------------------------------+--------+--------------------------------------------------------------------------------------------------------+---------------------------+ + +For details about the application scenarios, see :ref:`Enabling Passthrough Networking for LoadBalancer Services `. + +.. _cce_10_0385__section79480421873: + +Whitelist +--------- + +.. table:: **Table 7** Annotations for ELB access list + + +------------------------------+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+ + | Parameter | Type | Description | Supported Cluster Version | + +==============================+=================+===============================================================================================================================================================================================================================+=============================+ + | kubernetes.io/elb.acl-id | String | This parameter is mandatory when you configure an IP address whitelist for a load balancer. The value of this parameter is the IP address group ID of the load balancer.. | v1.19.16, v1.21.4, or later | + | | | | | + | | | **This parameter takes effect only for dedicated load balancers and takes effect only when a Service is created or a new service port (listener) is specified.** | | + | | | | | + | | | **How to obtain**: | | + | | | | | + | | | Log in to the console. In the **Service List**, choose **Networking > Elastic Load Balance**. On the Network Console, choose **Elastic Load Balance > IP Address Groups** and copy the **ID** of the target IP address group. | | + +------------------------------+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+ + | kubernetes.io/elb.acl-status | String | This parameter is mandatory when you set an IP address whitelist for a load balancer. The value is **on**, indicating that access control is enabled. 
| v1.19.16, v1.21.4, or later | + | | | | | + | | | **This parameter takes effect only for dedicated load balancers and takes effect only when a Service is created or a new service port (listener) is specified.** | | + +------------------------------+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+ + | kubernetes.io/elb.acl-type | String | This parameter is mandatory when you set the IP address whitelist for a load balancer. | v1.19.16, v1.21.4, or later | + | | | | | + | | | - black: indicates the blacklist. The selected IP address group cannot access the ELB address. | | + | | | - white: indicates the whitelist. Only the selected IP address group can access the ELB address. | | + | | | | | + | | | **This parameter takes effect only for dedicated load balancers and takes effect only when a Service is created or a new service port (listener) is specified.** | | + +------------------------------+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+ + +The following shows how to use the preceding annotations: + +.. code-block:: + + apiVersion: v1 + kind: Service + metadata: + name: nginx + annotations: + kubernetes.io/elb.id: # ELB ID. Replace it with the actual value. + kubernetes.io/elb.class: performance # Load balancer type + kubernetes.io/elb.acl-id: # ELB IP address group ID + kubernetes.io/elb.acl-status: 'on' # Enable access control. + kubernetes.io/elb.acl-type: 'white' # Whitelist control + spec: + selector: + app: nginx + ports: + - name: service0 + port: 80 + protocol: TCP + targetPort: 80 + type: LoadBalancer + +.. _cce_10_0385__section952710224812: + +Host Network +------------ + +.. table:: **Table 8** Annotations for host network + + +-------------------------------+-----------------+------------------------------------------------------------------------------------------------------------------+---------------------------+ + | Parameter | Type | Description | Supported Cluster Version | + +===============================+=================+==================================================================================================================+===========================+ + | kubernetes.io/hws-hostNetwork | String | If the pod uses **hostNetwork**, the ELB forwards the request to the host network after this annotation is used. | v1.9 or later | + | | | | | + | | | Options: | | + | | | | | + | | | - **true**: enabled | | + | | | - **false** (default): disabled | | + +-------------------------------+-----------------+------------------------------------------------------------------------------------------------------------------+---------------------------+ + +The following shows how to use the preceding annotations: + +.. code-block:: + + apiVersion: v1 + kind: Service + metadata: + name: nginx + annotations: + kubernetes.io/elb.id: # ELB ID. Replace it with the actual value. + kubernetes.io/elb.class: union # Load balancer type + kubernetes.io/hws-hostNetwork: 'true' # The load balancer forwards the request to the host network. 
+ spec: + selector: + app: nginx + ports: + - name: service0 + port: 80 + protocol: TCP + targetPort: 80 + type: LoadBalancer + +Data Structure +-------------- + +.. _cce_10_0385__table148341447193017: + +.. table:: **Table 9** Data structure of the **elb.autocreate** field + + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Type | Description | + +======================+=======================================+==================+==================================================================================================================================================================================================================================================================================================================================================================================+ + | name | No | String | Name of the automatically created load balancer. | + | | | | | + | | | | The value can contain 1 to 64 characters. Only letters, digits, underscores (_), hyphens (-), and periods (.) are allowed. | + | | | | | + | | | | Default: **cce-lb+service.UID** | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | type | No | String | Network type of the load balancer. | + | | | | | + | | | | - **public**: public network load balancer | + | | | | - **inner**: private network load balancer | + | | | | | + | | | | Default: **inner** | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_name | Yes for public network load balancers | String | Bandwidth name. The default value is **cce-bandwidth-*****\***. | + | | | | | + | | | | The value can contain 1 to 64 characters. Only letters, digits, underscores (_), hyphens (-), and periods (.) are allowed. | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_chargemode | No | String | Bandwidth mode. 
| + | | | | | + | | | | - **bandwidth**: billed by bandwidth | + | | | | - **traffic**: billed by traffic | + | | | | | + | | | | Default: **bandwidth** | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_size | Yes for public network load balancers | Integer | Bandwidth size. The default value is 1 to 2000 Mbit/s. Configure this parameter based on the bandwidth range allowed in your region. | + | | | | | + | | | | The minimum increment for bandwidth adjustment varies depending on the bandwidth range. | + | | | | | + | | | | - The minimum increment is 1 Mbit/s if the allowed bandwidth does not exceed 300 Mbit/s. | + | | | | - The minimum increment is 50 Mbit/s if the allowed bandwidth ranges from 300 Mbit/s to 1000 Mbit/s. | + | | | | - The minimum increment is 500 Mbit/s if the allowed bandwidth exceeds 1000 Mbit/s. | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | bandwidth_sharetype | Yes for public network load balancers | String | Bandwidth sharing mode. | + | | | | | + | | | | - **PER**: dedicated bandwidth | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | eip_type | Yes for public network load balancers | String | EIP type. | + | | | | | + | | | | - **5_bgp**: dynamic BGP | + | | | | - **5_sbgp**: static BGP | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | available_zone | Yes | Array of strings | AZ where the load balancer is located. | + | | | | | + | | | | This parameter is available only for dedicated load balancers. 
| + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | l4_flavor_name | Yes | String | Flavor name of the layer-4 load balancer. | + | | | | | + | | | | This parameter is available only for dedicated load balancers. | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | l7_flavor_name | No | String | Flavor name of the layer-7 load balancer. | + | | | | | + | | | | This parameter is available only for dedicated load balancers. The value of this parameter must be the same as that of **l4_flavor_name**, that is, both are elastic specifications or fixed specifications. | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | elb_virsubnet_ids | No | Array of strings | Subnet where the backend server of the load balancer is located. If this parameter is left blank, the default cluster subnet is used. Load balancers occupy different number of subnet IP addresses based on their specifications. Therefore, you are not advised to use the subnet CIDR blocks of other resources (such as clusters and nodes) as the load balancer CIDR block. | + | | | | | + | | | | This parameter is available only for dedicated load balancers. | + | | | | | + | | | | Example: | + | | | | | + | | | | .. code-block:: | + | | | | | + | | | | "elb_virsubnet_ids": [ | + | | | | "14567f27-8ae4-42b8-ae47-9f847a4690dd" | + | | | | ] | + +----------------------+---------------------------------------+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +.. _cce_10_0385__table19192143412319: + +.. 
table:: **Table 10** Data structure description of the **elb.health-check-option** field + + +-----------------+-----------------+-----------------+------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Type | Description | + +=================+=================+=================+====================================================================================+ + | delay | No | String | Initial waiting time (in seconds) for starting the health check. | + | | | | | + | | | | Value range: 1 to 50. Default value: **5** | + +-----------------+-----------------+-----------------+------------------------------------------------------------------------------------+ + | timeout | No | String | Health check timeout, in seconds. | + | | | | | + | | | | Value range: 1 to 50. Default value: **10** | + +-----------------+-----------------+-----------------+------------------------------------------------------------------------------------+ + | max_retries | No | String | Maximum number of health check retries. | + | | | | | + | | | | Value range: 1 to 10. Default value: **3** | + +-----------------+-----------------+-----------------+------------------------------------------------------------------------------------+ + | protocol | No | String | Health check protocol. | + | | | | | + | | | | Value options: TCP or HTTP | + +-----------------+-----------------+-----------------+------------------------------------------------------------------------------------+ + | path | No | String | Health check URL. This parameter needs to be configured when the protocol is HTTP. | + | | | | | + | | | | Default value: **/** | + | | | | | + | | | | The value can contain 1 to 10,000 characters. | + +-----------------+-----------------+-----------------+------------------------------------------------------------------------------------+ + +.. _cce_10_0385__table33328411456: + +.. table:: **Table 11** Data structure description of the **elb.health-check-options** field + + +---------------------+-----------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Type | Description | + +=====================+=================+=================+==============================================================================================================================================+ + | target_service_port | Yes | String | Port for health check specified by spec.ports. The value consists of the protocol and port number, for example, TCP:80. | + +---------------------+-----------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------+ + | monitor_port | No | String | Re-specified port for health check. If this parameter is not specified, the service port is used by default. | + | | | | | + | | | | .. note:: | + | | | | | + | | | | Ensure that the port is in the listening state on the node where the pod is located. Otherwise, the health check result will be affected. | + +---------------------+-----------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------+ + | delay | No | String | Initial waiting time (in seconds) for starting the health check. | + | | | | | + | | | | Value range: 1 to 50. 
Default value: **5** | + +---------------------+-----------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------+ + | timeout | No | String | Health check timeout, in seconds. | + | | | | | + | | | | Value range: 1 to 50. Default value: **10** | + +---------------------+-----------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------+ + | max_retries | No | String | Maximum number of health check retries. | + | | | | | + | | | | Value range: 1 to 10. Default value: **3** | + +---------------------+-----------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------+ + | protocol | No | String | Health check protocol. | + | | | | | + | | | | Default value: protocol of the associated Service | + | | | | | + | | | | Value options: TCP, UDP, or HTTP | + +---------------------+-----------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------+ + | path | No | String | Health check URL. This parameter needs to be configured when the protocol is **HTTP**. | + | | | | | + | | | | Default value: **/** | + | | | | | + | | | | The value can contain 1 to 10,000 characters. | + +---------------------+-----------------+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------+ + +.. _cce_10_0385__table3340195463412: + +.. table:: **Table 12** Data structure of the **elb.session-affinity-option** field + + +---------------------+-----------------+-----------------+------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Type | Description | + +=====================+=================+=================+==============================================================================================================================+ + | persistence_timeout | Yes | String | Sticky session timeout, in minutes. This parameter is valid only when **elb.session-affinity-mode** is set to **SOURCE_IP**. | + | | | | | + | | | | Value range: 1 to 60. Default value: **60** | + +---------------------+-----------------+-----------------+------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/networking/services/nodeport.rst b/umn/source/network/service/nodeport.rst similarity index 72% rename from umn/source/networking/services/nodeport.rst rename to umn/source/network/service/nodeport.rst index 2561bf8..8ea8bbd 100644 --- a/umn/source/networking/services/nodeport.rst +++ b/umn/source/network/service/nodeport.rst @@ -8,22 +8,22 @@ NodePort Scenario -------- -A Service is exposed on each node's IP address at a static port (NodePort). A ClusterIP Service, to which the NodePort Service will route, is automatically created. By requesting :, you can access a NodePort Service from outside the cluster. +A Service is exposed on each node's IP address at a static port (NodePort). When you create a NodePort Service, Kubernetes automatically allocates an internal IP address (ClusterIP) of the cluster. 
When clients outside the cluster access :, the traffic will be forwarded to the target pod through the ClusterIP of the NodePort Service. -.. figure:: /_static/images/en-us_image_0000001517743380.png +.. figure:: /_static/images/en-us_image_0000001647417292.png :alt: **Figure 1** NodePort access **Figure 1** NodePort access -Notes and Constraints ---------------------- +Constraints +----------- -- By default, a NodePort Service is accessed within a VPC. If you need to use an EIP to access a NodePort Service through public networks, bind an EIP to the node in the cluster in advance. -- After a Service is created, if the affinity setting is switched from the cluster level to the node level, the connection tracing table will not be cleared. You are advised not to modify the Service affinity setting after the Service is created. If you need to modify it, create a Service again. +- By default, a NodePort Service is accessed within a VPC. To use an EIP to access a NodePort Service through public networks, bind an EIP to the node in the cluster in advance. +- After a Service is created, if the affinity setting is switched from the cluster level to the node level, the connection tracing table will not be cleared. Do not modify the Service affinity setting after the Service is created. To modify it, create a Service again. - CCE Turbo clusters support only cluster-level service affinity. - In VPC network mode, when container A is published through a NodePort service and the service affinity is set to the node level (that is, **externalTrafficPolicy** is set to **local**), container B deployed on the same node cannot access container A through the node IP address and NodePort service. -- When a NodePort service is created in a cluster of v1.21.7 or later, the port on the node is not displayed using **netstat** by default. If the cluster forwarding mode is **iptables**, run the **iptables -t nat -L** command to view the port. If the cluster forwarding mode is **ipvs**, run the **ipvsadm -nL** command to view the port. +- When a NodePort service is created in a cluster of v1.21.7 or later, the port on the node is not displayed using **netstat** by default. If the cluster forwarding mode is **iptables**, run the **iptables -t nat -L** command to view the port. If the cluster forwarding mode is **IPVS**, run the **ipvsadm -Ln** command to view the port. Creating a NodePort Service --------------------------- @@ -35,13 +35,13 @@ Creating a NodePort Service - **Service Name**: Specify a Service name, which can be the same as the workload name. - **Service Type**: Select **NodePort**. - **Namespace**: Namespace to which the workload belongs. - - **Service Affinity**: For details, see :ref:`externalTrafficPolicy (Service Affinity) `. + - **Service Affinity**: For details, see :ref:`externalTrafficPolicy (Service Affinity) `. - **Cluster level**: The IP addresses and access ports of all nodes in a cluster can access the workload associated with the Service. Service access will cause performance loss due to route redirection, and the source IP address of the client cannot be obtained. - **Node level**: Only the IP address and access port of the node where the workload is located can access the workload associated with the Service. Service access will not cause performance loss due to route redirection, and the source IP address of the client can be obtained. - **Selector**: Add a label and click **Add**. A Service selects a pod based on the added label. 
You can also click **Reference Workload Label** to reference the label of an existing workload. In the dialog box that is displayed, select a workload and click **OK**. - - **Port Settings** + - **Port** - **Protocol**: protocol used by the Service. - **Service Port**: port used by the Service. The port number ranges from 1 to 65535. @@ -53,7 +53,7 @@ Creating a NodePort Service Using kubectl ------------- -You can run kubectl commands to set the access type. This section uses a Nginx workload as an example to describe how to set a NodePort Service using kubectl. +You can run kubectl commands to set the access type. This section uses an Nginx workload as an example to describe how to set a NodePort Service using kubectl. #. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. @@ -187,41 +187,3 @@ You can run kubectl commands to set the access type. This section uses a Nginx w / # - -.. _cce_10_0142__section18134208069: - -externalTrafficPolicy (Service Affinity) ----------------------------------------- - -For a NodePort Service, requests are first sent to the node port, then the Service, and finally the pod backing the Service. The backing pod may be not located in the node receiving the requests. By default, the backend workload can be accessed from any node IP address and service port. If the pod is not on the node that receives the request, the request will be redirected to the node where the pod is located, which may cause performance loss. - -**externalTrafficPolicy** is a configuration parameter of the Service. - -.. code-block:: - - apiVersion: v1 - kind: Service - metadata: - name: nginx-nodeport - spec: - externalTrafficPolicy: local - ports: - - name: service - nodePort: 30000 - port: 80 - protocol: TCP - targetPort: 80 - selector: - app: nginx - type: NodePort - -If the value of **externalTrafficPolicy** is **local**, requests sent from *Node IP address:Service port* will be forwarded only to the pod on the local node. If the node does not have a pod, the requests are suspended. - -The other value of **externalTrafficPolicy** is **cluster** (default value), which indicates that requests are forwarded in a cluster. - -You can set this parameter when creating a Service of the NodePort type on the CCE console. - -The values of **externalTrafficPolicy** are as follows: - -- **cluster**: The IP addresses and access ports of all nodes in a cluster can access the workload associated with the Service. Service access will cause performance loss due to route redirection, and the source IP address of the client cannot be obtained. -- **local**: Only the IP address and access port of the node where the workload is located can access the workload associated with the Service. Service access will not cause performance loss due to route redirection, and the source IP address of the client can be obtained. diff --git a/umn/source/network/service/overview.rst b/umn/source/network/service/overview.rst new file mode 100644 index 0000000..ceda9cc --- /dev/null +++ b/umn/source/network/service/overview.rst @@ -0,0 +1,175 @@ +:original_name: cce_10_0249.html + +.. _cce_10_0249: + +Overview +======== + +Direct Access to a Pod +---------------------- + +After a pod is created, the following problems may occur if you directly access to the pod: + +- The pod can be deleted and created again at any time by a controller such as a Deployment, and the result of accessing the pod becomes unpredictable. 
+- The IP address of the pod is allocated only after the pod is started. Before the pod is started, the IP address of the pod is unknown. +- An application is usually composed of multiple pods that run the same image. Accessing pods one by one is not efficient. + +For example, an application uses Deployments to create the frontend and backend. The frontend calls the backend for computing, as shown in :ref:`Figure 1 `. Three pods are running in the backend, which are independent and replaceable. When a backend pod is created again, the new pod is assigned a new IP address, of which the frontend pod is unaware. + +.. _cce_10_0249__en-us_topic_0249851121_fig2173165051811: + +.. figure:: /_static/images/en-us_image_0000001647417852.png + :alt: **Figure 1** Inter-pod access + + **Figure 1** Inter-pod access + +Using Services for Pod Access +----------------------------- + +Kubernetes Services are used to solve the preceding pod access problems. A Service has a fixed IP address. (When a CCE cluster is created, a Service CIDR block is set, which is used to allocate IP addresses to Services.) A Service forwards requests accessing the Service to pods based on labels, and at the same time, performs load balancing for these pods. + +In the preceding example, a Service is added for the frontend pod to access the backend pods. In this way, the frontend pod does not need to be aware of the changes on backend pods, as shown in :ref:`Figure 2 `. + +.. _cce_10_0249__en-us_topic_0249851121_fig163156154816: + +.. figure:: /_static/images/en-us_image_0000001695896373.png + :alt: **Figure 2** Accessing pods through a Service + + **Figure 2** Accessing pods through a Service + +Service Types +------------- + +Kubernetes allows you to specify a Service of the required type. The different Service types behave as follows: + +- :ref:`ClusterIP ` + + A ClusterIP Service allows workloads in the same cluster to use their cluster-internal domain names to access each other. + +- :ref:`NodePort ` + + A Service is exposed on each node's IP address at a static port (NodePort). A ClusterIP Service, to which the NodePort Service will route, is automatically created. By requesting :, you can access a NodePort Service from outside the cluster. + +- :ref:`LoadBalancer ` + + A LoadBalancer Service allows workloads to be accessed from the public network through a load balancer, which is more reliable than EIP-based access. LoadBalancer Services are recommended for accessing workloads from outside the cluster. + +.. _cce_10_0249__section18134208069: + +externalTrafficPolicy (Service Affinity) +---------------------------------------- + +For NodePort and LoadBalancer Services, requests are first sent to the node port, then the Service, and finally the pod backing the Service. The backing pod may not be located on the node that receives the requests. By default, the backend workload can be accessed from any node IP address and service port. If the pod is not on the node that receives the request, the request will be redirected to the node where the pod is located, which may cause performance loss. + +**externalTrafficPolicy** is a configuration parameter of the Service. + +.. 
code-block:: + + apiVersion: v1 + kind: Service + metadata: + name: nginx-nodeport + spec: + externalTrafficPolicy: Local + ports: + - name: service + nodePort: 30000 + port: 80 + protocol: TCP + targetPort: 80 + selector: + app: nginx + type: NodePort + +If the value of **externalTrafficPolicy** is **Local**, requests sent from *Node IP address:Service port* will be forwarded only to the pod on the local node. If the node does not run any pod of the workload, the requests fail. + +If the value of **externalTrafficPolicy** is **Cluster**, requests are forwarded within the cluster and the backend workload can be accessed from any node IP address and service port. + +If **externalTrafficPolicy** is not set, the default value **Cluster** will be used. + +You can set this parameter when creating a Service of the NodePort type on the CCE console. + +The values of **externalTrafficPolicy** are as follows: + +- **Cluster**: The IP addresses and access ports of all nodes in a cluster can access the workload associated with the Service. Service access will cause performance loss due to route redirection, and the source IP address of the client cannot be obtained. +- **Local**: Only the IP address and access port of the node where the workload is located can access the workload associated with the Service. Service access will not cause performance loss due to route redirection, and the source IP address of the client can be obtained. In this scenario, Services may fail to be accessed from within the cluster. For details, see :ref:`Why a Service Fails to Be Accessed from Within the Cluster `. + +.. _cce_10_0249__section52631714117: + +Why a Service Fails to Be Accessed from Within the Cluster +------------------------------------------------------------ + +If the service affinity of a Service is set to the node level, that is, the value of **externalTrafficPolicy** is **Local**, the Service may fail to be accessed from within the cluster (specifically, from nodes or containers). Information similar to the following is displayed: + +.. code-block:: + + upstream connect error or disconnect/reset before headers. reset reason: connection failure + Or + curl: (7) Failed to connect to 192.168.10.36 port 900: Connection refused + +It is common for a load balancer to be inaccessible from within its own cluster. The reason is as follows: When Kubernetes creates a Service, kube-proxy adds the access address of the load balancer as an external IP address (External-IP, as shown in the following command output) to iptables or IPVS. If a client inside the cluster initiates a request to access the load balancer, the address is treated as the external IP address of the Service, and the request is directly forwarded by kube-proxy without passing through the load balancer outside the cluster. + +.. code-block:: + + # kubectl get svc nginx + NAME    TYPE           CLUSTER-IP      EXTERNAL-IP                  PORT(S)        AGE + nginx   LoadBalancer   10.247.76.156   123.**.**.**,192.168.0.133   80:32146/TCP   37s + +When the value of **externalTrafficPolicy** is **Local**, the access failures in different container network models and service forwarding modes are as follows: + +.. note:: + + - For a multi-pod workload, ensure that all pods are accessible. Otherwise, access to the workload may fail. + - CCE Turbo clusters using Cloud Native 2.0 networking do not support node-level service affinity. 
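+For example, before reading the table below, you can check how the Service is configured. The following is a minimal sketch that assumes a Service named **nginx**; replace the name with that of your own Service.
+
+.. code-block::
+
+    # Check the configured service affinity (externalTrafficPolicy).
+    kubectl get svc nginx -o jsonpath='{.spec.externalTrafficPolicy}'
+
+    # Check the external IP address (load balancer address) recorded for the Service.
+    kubectl get svc nginx -o wide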
+ ++------------------------------------------------------+------------------------+----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+ +| Service Type Released on the Server | Access Type | Request Initiation Location on the Client | Tunnel Network Cluster (IPVS) | VPC Network Cluster (IPVS) | Tunnel Network Cluster (iptables) | VPC Network Cluster (iptables) | ++======================================================+========================+==========================================================+=========================================================================================================================+==================================================================================================================+==================================================================================================================+==================================================================================================================+ +| NodePort Service | Public/Private network | Same node as the service pod | Access the IP address and NodePort on the node where the server is located: The access is successful. | Access the IP address and NodePort on the node where the server is located: The access is successful. | Access the IP address and NodePort on the node where the server is located: The access is successful. | Access the IP address and NodePort on the node where the server is located: The access is successful. | +| | | | | | | | +| | | | Access the IP address and NodePort on a node other than the node where the server is located: The access failed. | Access the IP address and NodePort on a node other than the node where the server is located: The access failed. | Access the IP address and NodePort on a node other than the node where the server is located: The access failed. | Access the IP address and NodePort on a node other than the node where the server is located: The access failed. | ++------------------------------------------------------+------------------------+----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+ +| | | Different nodes from the service pod | Access the IP address and NodePort on the node where the server is located: The access is successful. | Access the IP address and NodePort on the node where the server is located: The access is successful. | The access is successful. | The access is successful. | +| | | | | | | | +| | | | Access the IP address and NodePort on a node other than the node where the server is located: The access failed. 
| Access the IP address and NodePort on a node other than the node where the server is located: The access failed. | | | ++------------------------------------------------------+------------------------+----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+ +| | | Other containers on the same node as the service pod | Access the IP address and NodePort on the node where the server is located: The access is successful. | The access failed. | Access the IP address and NodePort on the node where the server is located: The access is successful. | The access failed. | +| | | | | | | | +| | | | Access the IP address and NodePort on a node other than the node where the server is located: The access is successful. | | Access the IP address and NodePort on a node other than the node where the server is located: The access failed. | | ++------------------------------------------------------+------------------------+----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+ +| | | Other containers on different nodes from the service pod | Access the IP address and NodePort on the node where the server is located: The access is successful. | Access the IP address and NodePort on the node where the server is located: The access is successful. | Access the IP address and NodePort on the node where the server is located: The access is successful. | Access the IP address and NodePort on the node where the server is located: The access is successful. | +| | | | | | | | +| | | | Access the IP address and NodePort on a node other than the node where the server is located: The access failed. | Access the IP address and NodePort on a node other than the node where the server is located: The access failed. | Access the IP address and NodePort on a node other than the node where the server is located: The access failed. | Access the IP address and NodePort on a node other than the node where the server is located: The access failed. 
| ++------------------------------------------------------+------------------------+----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+ +| LoadBalancer Service using a dedicated load balancer | Private network | Same node as the service pod | The access failed. | The access failed. | The access failed. | The access failed. | ++------------------------------------------------------+------------------------+----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+ +| | | Other containers on the same node as the service pod | The access failed. | The access failed. | The access failed. | The access failed. | ++------------------------------------------------------+------------------------+----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+ + +The following methods can be used to solve this problem: + +- (**Recommended**) In the cluster, use the ClusterIP Service or service domain name for access. + +- Set **externalTrafficPolicy** of the Service to **Cluster**, which means cluster-level service affinity. Note that this affects source address persistence. + + .. code-block:: + + apiVersion: v1 + kind: Service + metadata: + annotations: + kubernetes.io/elb.class: union + kubernetes.io/elb.autocreate: '{"type":"public","bandwidth_name":"cce-bandwidth","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"}' + labels: + app: nginx + name: nginx + spec: + externalTrafficPolicy: Cluster + ports: + - name: service0 + port: 80 + protocol: TCP + targetPort: 80 + selector: + app: nginx + type: LoadBalancer diff --git a/umn/source/networking/dns/overview.rst b/umn/source/networking/dns/overview.rst deleted file mode 100644 index 0e8c019..0000000 --- a/umn/source/networking/dns/overview.rst +++ /dev/null @@ -1,59 +0,0 @@ -:original_name: cce_10_0360.html - -.. _cce_10_0360: - -Overview -======== - -Introduction to CoreDNS ------------------------ - -When you create a cluster, the :ref:`coredns add-on ` is installed to resolve domain names in the cluster. - -You can view the pod of the coredns add-on in the kube-system namespace. - -.. 
code-block:: - - $ kubectl get po --namespace=kube-system - NAME READY STATUS RESTARTS AGE - coredns-7689f8bdf-295rk 1/1 Running 0 9m11s - coredns-7689f8bdf-h7n68 1/1 Running 0 11m - -After coredns is installed, it becomes a DNS. After the Service is created, coredns records the Service name and IP address. In this way, the pod can obtain the Service IP address by querying the Service name from coredns. - -**nginx..svc.cluster.local** is used to access the Service. **nginx** is the Service name, **** is the namespace, and **svc.cluster.local** is the domain name suffix. In actual use, you can omit **.svc.cluster.local** in the same namespace and use the ServiceName. - -An advantage of using ServiceName is that you can write ServiceName into the program when developing the application. In this way, you do not need to know the IP address of a specific Service. - -After the coredns add-on is installed, there is also a Service in the kube-system namespace, as shown below. - -.. code-block:: - - $ kubectl get svc -n kube-system - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - coredns ClusterIP 10.247.3.10 53/UDP,53/TCP,8080/TCP 13d - -By default, after other pods are created, the address of the coredns Service is written as the address of the domain name resolution server in the **/etc/resolv.conf** file of the pod. Create a pod and view the **/etc/resolv.conf** file as follows: - -.. code-block:: - - $ kubectl exec test01-6cbbf97b78-krj6h -it -- /bin/sh - / # cat /etc/resolv.conf - nameserver 10.247.3.10 - search default.svc.cluster.local svc.cluster.local cluster.local - options ndots:5 timeout single-request-reopen - -When a user accesses the *Service name:Port* of the Nginx pod, the IP address of the Nginx Service is resolved from CoreDNS, and then the IP address of the Nginx Service is accessed. In this way, the user can access the backend Nginx pod. - - -.. figure:: /_static/images/en-us_image_0000001568822905.png - :alt: **Figure 1** Example of domain name resolution in a cluster - - **Figure 1** Example of domain name resolution in a cluster - -Related Operations ------------------- - -You can also configure DNS in a workload. For details, see :ref:`DNS Configuration `. - -You can also use coredns to implement user-defined domain name resolution. For details, see :ref:`Using CoreDNS for Custom Domain Name Resolution `. diff --git a/umn/source/networking/dns/using_coredns_for_custom_domain_name_resolution.rst b/umn/source/networking/dns/using_coredns_for_custom_domain_name_resolution.rst deleted file mode 100644 index 6632e80..0000000 --- a/umn/source/networking/dns/using_coredns_for_custom_domain_name_resolution.rst +++ /dev/null @@ -1,257 +0,0 @@ -:original_name: cce_10_0361.html - -.. _cce_10_0361: - -Using CoreDNS for Custom Domain Name Resolution -=============================================== - -Challenges ----------- - -When using CCE, you may need to resolve custom internal domain names in the following scenarios: - -- In the legacy code, a fixed domain name is configured for calling other internal services. If the system decides to use Kubernetes Services, the code refactoring workload could be heavy. -- A service is created outside the cluster. Data in the cluster needs to be sent to the service through a fixed domain name. - -Solution --------- - -There are several CoreDNS-based solutions for custom domain name resolution: - -- :ref:`Configuring the Stub Domain for CoreDNS `: You can add it on the console, which is easy to operate. 
-- :ref:`Using the CoreDNS Hosts plug-in to configure resolution for any domain name `: You can add any record set, which is similar to adding a record set in the local **/etc/hosts** file. -- :ref:`Using the CoreDNS Rewrite plug-in to point a domain name to a service in the cluster `: A nickname is assigned to the Kubernetes Service. You do not need to know the IP address of the resolution record in advance. -- :ref:`Using the CoreDNS Forward plug-in to set the self-built DNS as the upstream DNS `: The self-built DNS can manage a large number of resolution records. You do not need to modify the CoreDNS configuration when adding or deleting records. - -Precautions ------------ - -Improper modification on CoreDNS configuration may cause domain name resolution failures in the cluster. Perform tests before and after the modification. - -.. _cce_10_0361__section5202157467: - -Configuring the Stub Domain for CoreDNS ---------------------------------------- - -Cluster administrators can modify the ConfigMap for the CoreDNS Corefile to change how service discovery works. - -Assume that a cluster administrator has a Consul DNS server located at 10.150.0.1 and all Consul domain names have the suffix **.consul.local**. - -#. Log in to the CCE console and access the cluster console. - -#. In the navigation pane, choose **Add-ons**. On the displayed page, click **Edit** under CoreDNS. - -#. Add a stub domain in the **Parameters** area. - - Modify the **stub_domains** parameter in the format of a key-value pair. The key is a DNS suffix domain name, and the value is a DNS IP address or a group of DNS IP addresses. - - .. code-block:: - - { - "stub_domains": { - "consul.local": [ - "10.150.0.1" - ] - }, - "upstream_nameservers": [] - } - -#. Click **OK**. - -You can also modify the ConfigMap as follows: - -.. important:: - - The parameter values in red in the example can only be modified and cannot be deleted. - -.. code-block:: - - $ kubectl edit configmap coredns -n kube-system - apiVersion: v1 - data: - Corefile: |- - .:5353 { - bind {$POD_IP} - cache 30 - errors - health {$POD_IP}:8080 - kubernetes cluster.local in-addr.arpa ip6.arpa { - pods insecure - fallthrough in-addr.arpa ip6.arpa - } - loadbalance round_robin - prometheus {$POD_IP}:9153 - forward . /etc/resolv.conf { - policy random - } - reload - } - - consul.local:5353 { - bind {$POD_IP} - errors - cache 30 - forward . 10.150.0.1 - } - kind: ConfigMap - metadata: - creationTimestamp: "2022-05-04T04:42:24Z" - labels: - app: coredns - k8s-app: coredns - kubernetes.io/cluster-service: "true" - kubernetes.io/name: CoreDNS - release: cceaddon-coredns - name: coredns - namespace: kube-system - resourceVersion: "8663493" - uid: bba87142-9f8d-4056-b8a6-94c3887e9e1d - -.. _cce_10_0361__section106211954135311: - -Modifying the CoreDNS Hosts Configuration File ----------------------------------------------- - -#. Use kubectl to connect to the cluster. - -#. Modify the CoreDNS configuration file and add the custom domain name to the hosts file. - - Point **www.example.com** to **192.168.1.1**. When CoreDNS resolves **www.example.com**, **192.168.1.1** is returned. - - .. important:: - - The fallthrough field must be configured. **fallthrough** indicates that when the domain name to be resolved cannot be found in the hosts file, the resolution task is transferred to the next CoreDNS plug-in. If **fallthrough** is not specified, the task ends and the domain name resolution stops. As a result, the domain name resolution in the cluster fails. 
- - For details about how to configure the hosts file, visit https://coredns.io/plugins/hosts/. - - .. code-block:: - - $ kubectl edit configmap coredns -n kube-system - apiVersion: v1 - data: - Corefile: |- - .:5353 { - bind {$POD_IP} - cache 30 - errors - health {$POD_IP}:8080 - kubernetes cluster.local in-addr.arpa ip6.arpa { - pods insecure - fallthrough in-addr.arpa ip6.arpa - } - hosts { - 192.168.1.1 www.example.com - fallthrough - } - loadbalance round_robin - prometheus {$POD_IP}:9153 - forward . /etc/resolv.conf - reload - } - kind: ConfigMap - metadata: - creationTimestamp: "2021-08-23T13:27:28Z" - labels: - app: coredns - k8s-app: coredns - kubernetes.io/cluster-service: "true" - kubernetes.io/name: CoreDNS - release: cceaddon-coredns - name: coredns - namespace: kube-system - resourceVersion: "460" - selfLink: /api/v1/namespaces/kube-system/configmaps/coredns - uid: be64aaad-1629-441f-8a40-a3efc0db9fa9 - - After modifying the hosts file in CoreDNS, you do not need to configure the hosts file in each pod. - -.. _cce_10_0361__section2213823544: - -Adding the CoreDNS Rewrite Configuration to Point the Domain Name to Services in the Cluster --------------------------------------------------------------------------------------------- - -Use the Rewrite plug-in of CoreDNS to resolve a specified domain name to the domain name of a Service. - -#. Use kubectl to connect to the cluster. - -#. Modify the CoreDNS configuration file to point **example.com** to the **example** service in the **default** namespace. - - .. code-block:: - - $ kubectl edit configmap coredns -n kube-system - apiVersion: v1 - data: - Corefile: |- - .:5353 { - bind {$POD_IP} - cache 30 - errors - health {$POD_IP}:8080 - kubernetes cluster.local in-addr.arpa ip6.arpa { - pods insecure - fallthrough in-addr.arpa ip6.arpa - } - rewrite name example.com example.default.svc.cluster.local - loadbalance round_robin - prometheus {$POD_IP}:9153 - forward . /etc/resolv.conf - reload - } - kind: ConfigMap - metadata: - creationTimestamp: "2021-08-23T13:27:28Z" - labels: - app: coredns - k8s-app: coredns - kubernetes.io/cluster-service: "true" - kubernetes.io/name: CoreDNS - release: cceaddon-coredns - name: coredns - namespace: kube-system - resourceVersion: "460" - selfLink: /api/v1/namespaces/kube-system/configmaps/coredns - uid: be64aaad-1629-441f-8a40-a3efc0db9fa9 - -.. _cce_10_0361__section677819913541: - -Using CoreDNS to Cascade Self-Built DNS ---------------------------------------- - -#. Use kubectl to connect to the cluster. - -#. Modify the CoreDNS configuration file and change **/etc/resolv.conf** following **forward** to the IP address of the external DNS server. - - .. code-block:: - - $ kubectl edit configmap coredns -n kube-system - apiVersion: v1 - data: - Corefile: |- - .:5353 { - bind {$POD_IP} - cache 30 - errors - health {$POD_IP}:8080 - kubernetes cluster.local in-addr.arpa ip6.arpa { - pods insecure - fallthrough in-addr.arpa ip6.arpa - } - loadbalance round_robin - prometheus {$POD_IP}:9153 - forward . 
192.168.1.1 - reload - } - kind: ConfigMap - metadata: - creationTimestamp: "2021-08-23T13:27:28Z" - labels: - app: coredns - k8s-app: coredns - kubernetes.io/cluster-service: "true" - kubernetes.io/name: CoreDNS - release: cceaddon-coredns - name: coredns - namespace: kube-system - resourceVersion: "460" - selfLink: /api/v1/namespaces/kube-system/configmaps/coredns - uid: be64aaad-1629-441f-8a40-a3efc0db9fa9 diff --git a/umn/source/networking/ingresses/index.rst b/umn/source/networking/ingresses/index.rst deleted file mode 100644 index 36f8d50..0000000 --- a/umn/source/networking/ingresses/index.rst +++ /dev/null @@ -1,18 +0,0 @@ -:original_name: cce_10_0248.html - -.. _cce_10_0248: - -Ingresses -========= - -- :ref:`Ingress Overview ` -- :ref:`Using ELB Ingresses on the Console ` -- :ref:`Using kubectl to Create an ELB Ingress ` - -.. toctree:: - :maxdepth: 1 - :hidden: - - ingress_overview - using_elb_ingresses_on_the_console - using_kubectl_to_create_an_elb_ingress diff --git a/umn/source/networking/ingresses/ingress_overview.rst b/umn/source/networking/ingresses/ingress_overview.rst deleted file mode 100644 index 2cd2f45..0000000 --- a/umn/source/networking/ingresses/ingress_overview.rst +++ /dev/null @@ -1,43 +0,0 @@ -:original_name: cce_10_0094.html - -.. _cce_10_0094: - -Ingress Overview -================ - -Why We Need Ingresses ---------------------- - -A Service is generally used to forward access requests based on TCP and UDP and provide layer-4 load balancing for clusters. However, in actual scenarios, if there is a large number of HTTP/HTTPS access requests on the application layer, the Service cannot meet the forwarding requirements. Therefore, the Kubernetes cluster provides an HTTP-based access mode, that is, ingress. - -An ingress is an independent resource in the Kubernetes cluster and defines rules for forwarding external access traffic. As shown in :ref:`Figure 1 `, you can customize forwarding rules based on domain names and URLs to implement fine-grained distribution of access traffic. - -.. _cce_10_0094__fig18155819416: - -.. figure:: /_static/images/en-us_image_0000001517903200.png - :alt: **Figure 1** Ingress diagram - - **Figure 1** Ingress diagram - -The following describes the ingress-related definitions: - -- Ingress object: a set of access rules that forward requests to specified Services based on domain names or URLs. It can be added, deleted, modified, and queried by calling APIs. -- Ingress Controller: an executor for request forwarding. It monitors the changes of resource objects such as ingresses, Services, endpoints, secrets (mainly TLS certificates and keys), nodes, and ConfigMaps in real time, parses rules defined by ingresses, and forwards requests to the corresponding backend Services. - -Working Principle of ELB Ingress Controller -------------------------------------------- - -ELB Ingress Controller developed by CCE implements layer-7 network access for the internet and intranet (in the same VPC) based on ELB and distributes access traffic to the corresponding Services using different URLs. - -ELB Ingress Controller is deployed on the master node and bound to the load balancer in the VPC where the cluster resides. Different domain names, ports, and forwarding policies can be configured for the same load balancer (with the same IP address). :ref:`Figure 2 ` shows the working principle of ELB Ingress Controller. - -#. 
A user creates an ingress object and configures a traffic access rule in the ingress, including the load balancer, URL, SSL, and backend service port. -#. When Ingress Controller detects that the ingress object changes, it reconfigures the listener and backend server route on the ELB side according to the traffic access rule. -#. When a user accesses a workload, the traffic is forwarded to the corresponding backend service port based on the forwarding policy configured on ELB, and then forwarded to each associated workload through the Service. - -.. _cce_10_0094__fig122542486129: - -.. figure:: /_static/images/en-us_image_0000001568822925.png - :alt: **Figure 2** Working principle of ELB Ingress Controller - - **Figure 2** Working principle of ELB Ingress Controller diff --git a/umn/source/networking/ingresses/using_elb_ingresses_on_the_console.rst b/umn/source/networking/ingresses/using_elb_ingresses_on_the_console.rst deleted file mode 100644 index f0b2f63..0000000 --- a/umn/source/networking/ingresses/using_elb_ingresses_on_the_console.rst +++ /dev/null @@ -1,135 +0,0 @@ -:original_name: cce_10_0251.html - -.. _cce_10_0251: - -Using ELB Ingresses on the Console -================================== - -Prerequisites -------------- - -- An ingress provides network access for backend workloads. Ensure that a workload is available in a cluster. If no workload is available, deploy a workload by referring to :ref:`Creating a Deployment `, :ref:`Creating a StatefulSet `, or :ref:`Creating a DaemonSet `. -- A NodePort Service has been configured for the workload. For details about how to configure the Service, see :ref:`NodePort `. -- Dedicated load balancers must be the application type (HTTP/HTTPS) supporting private networks (with a private IP). -- In ELB passthrough networking (CCE Turbo cluster + dedicated load balancer), ELB Ingress supports ClusterIP Services. In other scenarios, ELB Ingress supports NodePort Services. - -Notes ------ - -- It is recommended that other resources not use the load balancer automatically created by an ingress. Otherwise, the load balancer will be occupied when the ingress is deleted, resulting in residual resources. -- After an ingress is created, upgrade and maintain the configuration of the selected load balancers on the CCE console. Do not modify the configuration on the ELB console. Otherwise, the ingress service may be abnormal. -- The URL registered in an ingress forwarding policy must be the same as the URL exposed by the backend Service. Otherwise, a 404 error will be returned. -- In a cluster using the IPVS proxy mode, if the ingress and Service use the same ELB load balancer, the ingress cannot be accessed from the nodes and containers in the cluster because kube-proxy mounts the LoadBalancer Service address to the ipvs-0 bridge. This bridge intercepts the traffic of the load balancer connected to the ingress. You are advised to use different ELB load balancers for the ingress and Service. - -Adding an ELB Ingress ---------------------- - -This section uses an Nginx workload as an example to describe how to add an ELB ingress. - -#. Log in to the CCE console and click the cluster name to access the cluster console. - -#. Choose **Networking** in the navigation pane, click the **Ingresses** tab, and click **Create Service** in the upper right corner. - -#. Set ingress parameters. - - - **Name**: Specify a name of an ingress, for example, **ingress-demo**. - - - **Load Balancer** - - Select the load balancer to interconnect. 
Only load balancers in the same VPC as the cluster are supported. If no load balancer is available, click **Create Load Balancer** to create one on the ELB console. - - Dedicated load balancers must support HTTP and the network type must support private networks. - - - **Listener**: Ingress configures a listener for the load balancer, which listens to requests from the load balancer and distributes traffic. After the configuration is complete, a listener is created on the load balancer. The default listener name is *k8s___*, for example, *k8s_HTTP_80*. - - - **Front-End Protocol**: **HTTP** and **HTTPS** are available. - - - **External Port**: Port number that is open to the ELB service address. The port number can be specified randomly. - - - **Server Certificate**: When an HTTPS listener is created for a load balancer, you need to bind a certificate to the load balancer to support encrypted authentication for HTTPS data transmission. - - .. note:: - - If there is already an HTTPS ingress for the chosen port on the load balancer, the certificate of the new HTTPS ingress must be the same as the certificate of the existing ingress. This means that a listener has only one certificate. If two certificates, each with a different ingress, are added to the same listener of the same load balancer, only the certificate added earliest takes effect on the load balancer. - - - **SNI**: Server Name Indication (SNI) is an extended protocol of TLS. It allows multiple TLS-based access domain names to be provided for external systems using the same IP address and port. Different domain names can use different security certificates. After SNI is enabled, the client is allowed to submit the requested domain name when initiating a TLS handshake request. After receiving the TLS request, the load balancer searches for the certificate based on the domain name in the request. If the certificate corresponding to the domain name is found, the load balancer returns the certificate for authorization. Otherwise, the default certificate (server certificate) is returned for authorization. - - .. note:: - - - The **SNI** option is available only when **HTTPS** is selected. - - - This function is supported only for clusters of v1.15.11 and later. - - Specify the domain name for the SNI certificate. Only one domain name can be specified for each certificate. Wildcard-domain certificates are supported. - - - **Security Policy**: combinations of different TLS versions and supported cipher suites available to HTTPS listeners. - - For details about security policies, see *Elastic Load Balance User Guide*. - - .. note:: - - - **Security Policy** is available only when **HTTPS** is selected. - - This function is supported only for clusters of v1.17.9 and later. - - - **Forwarding Policies**: When the access address of a request matches the forwarding policy (a forwarding policy consists of a domain name and URL, for example, 10.117.117.117:80/helloworld), the request is forwarded to the corresponding target Service for processing. Click |image1| to add multiple forwarding policies. - - - **Domain Name**: actual domain name. Ensure that the domain name has been registered and archived. Once a domain name rule is configured, you must use the domain name for access. - - - **URL Matching Rule**: - - - **Prefix match**: If the URL is set to **/healthz**, the URL that meets the prefix can be accessed. For example, **/healthz/v1** and **/healthz/v2**. - - **Exact match**: The URL can be accessed only when it is fully matched. 
For example, if the URL is set to **/healthz**, only /healthz can be accessed. - - **Regular expression**: The URL is matched based on the regular expression. For example, if the regular expression is **/[A-Za-z0-9_.-]+/test**, all URLs that comply with this rule can be accessed, for example, **/abcA9/test** and **/v1-Ab/test**. Two regular expression standards are supported: POSIX and Perl. - - - **URL**: access path to be registered, for example, **/healthz**. - - .. note:: - - The URL added here must exist in the backend application. Otherwise, the forwarding fails. - - For example, the default access URL of the Nginx application is **/usr/share/nginx/html**. When adding **/test** to the ingress forwarding policy, ensure that your Nginx application contains the same URL, that is, **/usr/share/nginx/html/test**, otherwise, 404 is returned. - - - **Destination Service**: Select an existing Service or create a Service. Services that do not meet search criteria are automatically filtered out. - - - .. _cce_10_0251__li118614181492: - - **Destination Service Port**: Select the access port of the destination Service. - - - **Set ELB**: - - - **Distribution Policy**: Three algorithms are available: weighted round robin, weighted least connections algorithm, or source IP hash. - - .. note:: - - - **Weighted round robin**: Requests are forwarded to different servers based on their weights, which indicate server processing performance. Backend servers with higher weights receive proportionately more requests, whereas equal-weighted servers receive the same number of requests. This algorithm is often used for short connections, such as HTTP services. - - **Weighted least connections**: In addition to the weight assigned to each server, the number of connections processed by each backend server is also considered. Requests are forwarded to the server with the lowest connections-to-weight ratio. Building on **least connections**, the **weighted least connections** algorithm assigns a weight to each server based on their processing performance. This algorithm is often used for persistent connections, such as database connections. - - **Source IP hash**: The source IP address of each request is calculated using the hash algorithm to obtain a unique hash key, and all backend servers are numbered. The generated key allocates the client to a particular server. This allows requests from different clients to be routed based on source IP addresses and ensures that a client is directed to the same server as always. This algorithm applies to TCP connections without cookies. - - - **Type**: This function is disabled by default. You can select **Load balancer cookie**. - - **Health Check**: configured for the load balancer. When TCP is selected during the :ref:`port settings `, you can choose either TCP or HTTP. Currently, UDP is not supported. By default, the service port (Node Port and container port of the Service) is used for health check. You can also specify another port for health check. After the port is specified, a service port named **cce-healthz** will be added for the Service. - - - **Operation**: Click **Delete** to delete the configuration. - - - **Annotation**: Ingresses provide some advanced CCE functions, which are implemented by annotations. When you use kubectl to create a container, annotations will be used. For details, see :ref:`Creating an Ingress - Automatically Creating a Load Balancer ` and :ref:`Creating an Ingress - Interconnecting with an Existing Load Balancer `. - -#. 
After the configuration is complete, click **OK**. After the ingress is created, it is displayed in the ingress list. - - On the ELB console, you can view the ELB automatically created through CCE. The default name is **cce-lb-ingress.UID**. Click the ELB name to access its details page. On the **Listeners** tab page, view the route settings of the ingress, including the URL, listener port, and backend server group port. - - .. important:: - - After the ingress is created, upgrade and maintain the selected load balancer on the CCE console. Do not maintain the load balancer on the ELB console. Otherwise, the ingress service may be abnormal. - -#. Access the /healthz interface of the workload, for example, workload **defaultbackend**. - - a. Obtain the access address of the **/healthz** interface of the workload. The access address consists of the load balancer IP address, external port, and mapping URL, for example, 10.**.**.**:80/healthz. - - b. Enter the URL of the /healthz interface, for example, http://10.**.**.**:80/healthz, in the address box of the browser to access the workload, as shown in :ref:`Figure 1 `. - - .. _cce_10_0251__fig17115192714367: - - .. figure:: /_static/images/en-us_image_0000001518062672.png - :alt: **Figure 1** Accessing the /healthz interface of defaultbackend - - **Figure 1** Accessing the /healthz interface of defaultbackend - -.. |image1| image:: /_static/images/en-us_image_0000001568822825.png diff --git a/umn/source/networking/services/index.rst b/umn/source/networking/services/index.rst deleted file mode 100644 index 590a08b..0000000 --- a/umn/source/networking/services/index.rst +++ /dev/null @@ -1,26 +0,0 @@ -:original_name: cce_10_0247.html - -.. _cce_10_0247: - -Services -======== - -- :ref:`Service Overview ` -- :ref:`Intra-Cluster Access (ClusterIP) ` -- :ref:`NodePort ` -- :ref:`LoadBalancer ` -- :ref:`Headless Service ` -- :ref:`Service Annotations ` -- :ref:`Configuring Health Check for Multiple Ports ` - -.. toctree:: - :maxdepth: 1 - :hidden: - - service_overview - intra-cluster_access_clusterip - nodeport - loadbalancer - headless_service - service_annotations - configuring_health_check_for_multiple_ports diff --git a/umn/source/networking/services/service_annotations.rst b/umn/source/networking/services/service_annotations.rst deleted file mode 100644 index 487712d..0000000 --- a/umn/source/networking/services/service_annotations.rst +++ /dev/null @@ -1,186 +0,0 @@ -:original_name: cce_10_0385.html - -.. _cce_10_0385: - -Service Annotations -=================== - -CCE allows you to add annotations to a YAML file to realize some advanced Service functions. The following table describes the annotations you can add. - -The annotations of a Service are the parameters that need to be specified for connecting to a load balancer. For details about how to use the annotations, see :ref:`Using kubectl to Create a Service (Automatically Creating a Load Balancer) `. - -.. 
table:: **Table 1** Service annotations - - +-------------------------------------------+----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+------------------------------------------------+ - | Parameter | Type | Description | Default Value on the Console | Supported Cluster Version | - +===========================================+====================================================+=========================================================================================================================================================================================================+==============================+================================================+ - | kubernetes.io/elb.class | String | Select a proper load balancer type. | performance | v1.9 or later | - | | | | | | - | | | The value can be: | | | - | | | | | | - | | | - **union**: shared load balancer | | | - | | | - **performance**: dedicated load balancer, which can be used only in clusters of v1.17 and later. | | | - +-------------------------------------------+----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+------------------------------------------------+ - | kubernetes.io/elb.id | String | ID of a load balancer. The value can contain 1 to 100 characters. | None | v1.9 or later | - | | | | | | - | | | Mandatory when an existing load balancer is to be associated. | | | - | | | | | | - | | | **How to obtain**: | | | - | | | | | | - | | | On the management console, click **Service List**, and choose **Networking** > **Elastic Load Balance**. Click the name of the target load balancer. On the **Summary** tab page, find and copy the ID. | | | - +-------------------------------------------+----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+------------------------------------------------+ - | kubernetes.io/elb.subnet-id | String | ID of the subnet where the cluster is located. The value can contain 1 to 100 characters. | None | Mandatory for versions earlier than v1.11.7-r0 | - | | | | | | - | | | - Mandatory when a cluster of v1.11.7-r0 or earlier is to be automatically created. | | Discarded in versions later than v1.11.7-r0 | - | | | - Optional for clusters later than v1.11.7-r0. | | | - +-------------------------------------------+----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+------------------------------------------------+ - | kubernetes.io/elb.autocreate | :ref:`Table 2 ` | Whether to automatically create a load balancer associated with the Service. 
| None | v1.9 or later | - | | | | | | - | | | **Example:** | | | - | | | | | | - | | | - If a public network load balancer will be automatically created, set this parameter to the following value: | | | - | | | | | | - | | | {"type":"public","bandwidth_name":"cce-bandwidth-1551163379627","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","name":"james"} | | | - | | | | | | - | | | - If a private network load balancer will be automatically created, set this parameter to the following value: | | | - | | | | | | - | | | {"type":"inner","name":"A-location-d-test"} | | | - +-------------------------------------------+----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+------------------------------------------------+ - | kubernetes.io/elb.lb-algorithm | String | Specifies the load balancing algorithm of the backend server group. | ROUND_ROBIN | v1.9 or later | - | | | | | | - | | | Value range: | | | - | | | | | | - | | | - **ROUND_ROBIN**: weighted round robin algorithm | | | - | | | - **LEAST_CONNECTIONS**: weighted least connections algorithm | | | - | | | - **SOURCE_IP**: source IP hash algorithm | | | - | | | | | | - | | | When the value is **SOURCE_IP**, the weights of backend servers in the server group are invalid. | | | - +-------------------------------------------+----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+------------------------------------------------+ - | kubernetes.io/elb.health-check-flag | String | Whether to enable the ELB health check. | off | v1.9 or later | - | | | | | | - | | | - Enabling health check: Leave blank this parameter or set it to **on**. | | | - | | | - Disabling health check: Set this parameter to **off**. | | | - | | | | | | - | | | If this parameter is enabled, the :ref:`kubernetes.io/elb.health-check-option ` field must also be specified at the same time. | | | - +-------------------------------------------+----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+------------------------------------------------+ - | kubernetes.io/elb.health-check-option | :ref:`Table 3 ` | ELB health check configuration items. | None | v1.9 or later | - +-------------------------------------------+----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+------------------------------------------------+ - | kubernetes.io/elb.session-affinity-mode | String | Listeners ensure session stickiness based on IP addresses. Requests from the same IP address will be forwarded to the same backend server. | None | v1.9 or later | - | | | | | | - | | | - Disabling sticky session: Do not set this parameter. 
| | | - | | | - Enabling sticky session: Set this parameter to **SOURCE_IP**, indicating that the sticky session is based on the source IP address. | | | - +-------------------------------------------+----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+------------------------------------------------+ - | kubernetes.io/elb.session-affinity-option | :ref:`Table 4 ` | Sticky session timeout. | None | v1.9 or later | - +-------------------------------------------+----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+------------------------------------------------+ - | kubernetes.io/hws-hostNetwork | Boolean | Whether the workload Services use the host network. Setting this parameter to **true** will enable the load balancer to forward requests to the host network. | None | v1.9 or later | - | | | | | | - | | | The value is **true** or **false**. | | | - | | | | | | - | | | The default value is **false**, indicating that the host network is not used. | | | - +-------------------------------------------+----------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------+------------------------------------------------+ - -.. _cce_10_0385__table148341447193017: - -.. table:: **Table 2** Data structure of the elb.autocreate field - - +----------------------+---------------------------------------+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Mandatory | Type | Description | - +======================+=======================================+==================+================================================================================================================================================================================================================================================================================================================================================+ - | name | No | String | Name of the automatically created load balancer. | - | | | | | - | | | | Value range: 1 to 64 characters, including lowercase letters, digits, and underscores (_). The value must start with a lowercase letter and end with a lowercase letter or digit. 
| - | | | | | - | | | | Default: **cce-lb+service.UID** | - +----------------------+---------------------------------------+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | type | No | String | Network type of the load balancer. | - | | | | | - | | | | - **public**: public network load balancer | - | | | | - **inner**: private network load balancer | - | | | | | - | | | | Default: **inner** | - +----------------------+---------------------------------------+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_name | Yes for public network load balancers | String | Bandwidth name. The default value is **cce-bandwidth-*****\***. | - | | | | | - | | | | Value range: 1 to 64 characters, including lowercase letters, digits, and underscores (_). The value must start with a lowercase letter and end with a lowercase letter or digit. | - +----------------------+---------------------------------------+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_chargemode | No | String | Bandwidth mode. | - +----------------------+---------------------------------------+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_size | Yes for public network load balancers | Integer | Bandwidth size. The default value is 1 to 2000 Mbit/s. Configure this parameter based on the bandwidth range allowed in your region. | - +----------------------+---------------------------------------+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bandwidth_sharetype | Yes for public network load balancers | String | Bandwidth sharing mode. | - | | | | | - | | | | - **PER**: dedicated bandwidth. 
| - +----------------------+---------------------------------------+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | eip_type | Yes for public network load balancers | String | EIP type. | - | | | | | - | | | | - **5_bgp**: dynamic BGP | - | | | | - **5_sbgp**: static BGP | - +----------------------+---------------------------------------+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | available_zone | Yes | Array of strings | AZ where the load balancer is located. | - | | | | | - | | | | This parameter is available only for dedicated load balancers. | - +----------------------+---------------------------------------+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | l4_flavor_name | Yes | String | Flavor name of the layer-4 load balancer. | - | | | | | - | | | | This parameter is available only for dedicated load balancers. | - +----------------------+---------------------------------------+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | l7_flavor_name | No | String | Flavor name of the layer-7 load balancer. | - | | | | | - | | | | This parameter is available only for dedicated load balancers. | - +----------------------+---------------------------------------+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | elb_virsubnet_ids | No | Array of strings | Subnet where the backend server of the load balancer is located. If this parameter is left blank, the default cluster subnet is used. Load balancers occupy different number of subnet IP addresses based on their specifications. Do not use the CIDR blocks of other resources (such as clusters and nodes) as the load balancer CIDR block. | - | | | | | - | | | | This parameter is available only for dedicated load balancers. | - | | | | | - | | | | Example: | - | | | | | - | | | | .. 
code-block:: | - | | | | | - | | | | "elb_virsubnet_ids": [ | - | | | | "14567f27-8ae4-42b8-ae47-9f847a4690dd" | - | | | | ] | - +----------------------+---------------------------------------+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -.. _cce_10_0385__table19192143412319: - -.. table:: **Table 3** Data structure description of the elb.health-check-option field - - +-----------------+-----------------+-----------------+----------------------------------------------------------------------------+ - | Parameter | Mandatory | Type | Description | - +=================+=================+=================+============================================================================+ - | delay | No | String | Initial waiting time (in seconds) for starting the health check. | - | | | | | - | | | | Value range: 1 to 50. Default value: **5** | - +-----------------+-----------------+-----------------+----------------------------------------------------------------------------+ - | timeout | No | String | Health check timeout, in seconds. | - | | | | | - | | | | Value range: 1 to 50. Default value: **10** | - +-----------------+-----------------+-----------------+----------------------------------------------------------------------------+ - | max_retries | No | String | Maximum number of health check retries. | - | | | | | - | | | | Value range: 1 to 10. Default value: **3** | - +-----------------+-----------------+-----------------+----------------------------------------------------------------------------+ - | protocol | No | String | Health check protocol. | - | | | | | - | | | | Default value: protocol of the associated Service | - | | | | | - | | | | Value options: **TCP**, **UDP**, or **HTTP** | - +-----------------+-----------------+-----------------+----------------------------------------------------------------------------+ - | path | No | String | Health check URL. This parameter needs to be configured when HTTP is used. | - | | | | | - | | | | Default value: **/** | - | | | | | - | | | | The value can contain 1 to 10,000 characters. | - +-----------------+-----------------+-----------------+----------------------------------------------------------------------------+ - -.. _cce_10_0385__table3340195463412: - -.. table:: **Table 4** Data structure of the elb.session-affinity-option field - - +---------------------+-----------------+-----------------+------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Mandatory | Type | Description | - +=====================+=================+=================+==============================================================================================================================+ - | persistence_timeout | Yes | String | Sticky session timeout, in minutes. This parameter is valid only when **elb.session-affinity-mode** is set to **SOURCE_IP**. | - | | | | | - | | | | Value range: 1 to 60. 
Default value: **60** | - +---------------------+-----------------+-----------------+------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/networking/services/service_overview.rst b/umn/source/networking/services/service_overview.rst deleted file mode 100644 index eb88125..0000000 --- a/umn/source/networking/services/service_overview.rst +++ /dev/null @@ -1,55 +0,0 @@ -:original_name: cce_10_0249.html - -.. _cce_10_0249: - -Service Overview -================ - -Direct Access to a Pod ----------------------- - -After a pod is created, the following problems may occur if you directly access to the pod: - -- The pod can be deleted and created again at any time by a controller such as a Deployment, and the result of accessing the pod becomes unpredictable. -- The IP address of the pod is allocated only after the pod is started. Before the pod is started, the IP address of the pod is unknown. -- An application is usually composed of multiple pods that run the same image. Accessing pods one by one is not efficient. - -For example, an application uses Deployments to create the frontend and backend. The frontend calls the backend for computing, as shown in :ref:`Figure 1 `. Three pods are running in the backend, which are independent and replaceable. When a backend pod is created again, the new pod is assigned with a new IP address, of which the frontend pod is unaware. - -.. _cce_10_0249__en-us_topic_0249851121_fig2173165051811: - -.. figure:: /_static/images/en-us_image_0000001517743624.png - :alt: **Figure 1** Inter-pod access - - **Figure 1** Inter-pod access - -Using Services for Pod Access ------------------------------ - -Kubernetes Services are used to solve the preceding pod access problems. A Service has a fixed IP address. (When a CCE cluster is created, a Service CIDR block is set, which is used to allocate IP addresses to Services.) A Service forwards requests accessing the Service to pods based on labels, and at the same time, performs load balancing for these pods. - -In the preceding example, a Service is added for the frontend pod to access the backend pods. In this way, the frontend pod does not need to be aware of the changes on backend pods, as shown in :ref:`Figure 2 `. - -.. _cce_10_0249__en-us_topic_0249851121_fig163156154816: - -.. figure:: /_static/images/en-us_image_0000001517743432.png - :alt: **Figure 2** Accessing pods through a Service - - **Figure 2** Accessing pods through a Service - -Service Types -------------- - -Kubernetes allows you to specify a Service of a required type. The values and actions of different types of Services are as follows: - -- :ref:`ClusterIP ` - - A ClusterIP Service allows workloads in the same cluster to use their cluster-internal domain names to access each other. - -- :ref:`NodePort ` - - A NodePort Service is exposed on each node's IP at a static port. A ClusterIP Service, to which the NodePort Service routes, is automatically created. By requesting <*NodeIP*>:<*NodePort*>, you can access a NodePort Service from outside the cluster. - -- :ref:`LoadBalancer ` - - A workload can be accessed from public networks through a load balancer, which is more secure and reliable than EIP. 
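The three Service types above share the same basic manifest structure and differ mainly in the **type** field and in how traffic reaches the backend pods. The following is a minimal sketch of a NodePort Service for a hypothetical **nginx** workload; the Service name, label selector, and port values are illustrative assumptions rather than values taken from this guide.

.. code-block::

   apiVersion: v1
   kind: Service
   metadata:
     name: nginx                # hypothetical Service name
   spec:
     type: NodePort             # expose the Service on a static port of each node
     selector:
       app: nginx               # must match the labels of the backend pods
     ports:
     - name: service0
       port: 80                 # cluster-internal (ClusterIP) port
       targetPort: 80           # container port of the backend pods
       protocol: TCP

After this Service is created (for example, with **kubectl create -f** on the saved manifest or on the console), it can be reached from inside the cluster through its cluster IP or Service name, and from outside the cluster through <NodeIP>:<NodePort>, where the node port is automatically allocated unless explicitly specified.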
diff --git a/umn/source/node_pools/creating_a_node_pool.rst b/umn/source/node_pools/creating_a_node_pool.rst index a60943e..27979fa 100644 --- a/umn/source/node_pools/creating_a_node_pool.rst +++ b/umn/source/node_pools/creating_a_node_pool.rst @@ -20,7 +20,7 @@ Procedure #. Log in to the CCE console. -#. Click the cluster name and access the cluster console. Choose **Nodes** in the navigation pane and click the **Node Pools** tab on the right. +#. Click the cluster name to access the cluster console. Choose **Nodes** in the navigation pane on the left and click the **Node Pools** tab on the right. #. In the upper right corner of the page, click **Create Node Pool**. @@ -60,15 +60,15 @@ Procedure | | | | | **Scale-in cooling interval configured in a node pool** | | | | - | | This interval indicates the period during which nodes added to the current node pool after a scale-out operation cannot be deleted. This interval takes effect at the node pool level. | + | | This interval indicates the period during which nodes added to the current node pool after a scale-out operation cannot be deleted. This setting takes effect in the entire node pool. | | | | | | **Scale-in cooling interval configured in the autoscaler add-on** | | | | - | | The interval after a scale-out indicates the period during which the entire cluster cannot be scaled in after the autoscaler add-on triggers scale-out (due to the unschedulable pods, metrics, and scaling policies). This interval takes effect at the cluster level. | + | | The interval after a scale-out indicates the period during which the entire cluster cannot be scaled in after the autoscaler add-on triggers scale-out (due to the unschedulable pods, metrics, and scaling policies). This setting takes effect in the entire cluster. | | | | - | | The interval after a node is deleted indicates the period during which the cluster cannot be scaled in after the autoscaler add-on triggers scale-in. This interval takes effect at the cluster level. | + | | The interval after a node is deleted indicates the period during which the cluster cannot be scaled in after the autoscaler add-on triggers scale-in. This setting takes effect in the entire cluster. | | | | - | | The interval after a failed scale-in indicates the period during which the cluster cannot be scaled in after the autoscaler add-on triggers scale-in. This interval takes effect at the cluster level. | + | | The interval after a failed scale-in indicates the period during which the cluster cannot be scaled in after the autoscaler add-on triggers scale-in. This setting takes effect in the entire cluster. | | | | | | .. note:: | | | | @@ -81,42 +81,36 @@ Procedure .. table:: **Table 2** Configuration parameters - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+====================================================================================================================================================================================================================================+ - | AZ | AZ where the node is located. Nodes in a cluster can be created in different AZs for higher reliability. The value cannot be changed after the node is created. 
| - | | | - | | You are advised to select **Random** to deploy your node in a random AZ based on the selected node flavor. | - | | | - | | An AZ is a physical region where resources use independent power supply and networks. AZs are physically isolated but interconnected through an internal network. To enhance workload availability, create nodes in different AZs. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Node Type | CCE cluster: | - | | | - | | - ECS (VM): Containers run on ECSs. | - | | | - | | CCE Turbo Cluster: | - | | | - | | - ECS (VM): Containers run on ECSs. Only Trunkport ECSs (models that can be bound with multiple elastic network interfaces) are supported. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Container Engine | CCE clusters support Docker and containerd in some scenarios. | - | | | - | | - VPC network clusters of v1.23 and later versions support containerd. Container tunnel network clusters of v1.23.2-r0 and later versions support containerd. | - | | - For a CCE Turbo cluster, both **Docker** and **containerd** are supported. For details, see :ref:`Mapping between Node OSs and Container Engines `. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Specifications | Select a node specification based on service requirements. The available node specifications vary depending on regions or AZs. For details, see the CCE console. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | OS | Select an OS type. Different types of nodes support different OSs. For details, see :ref:`Supported Node Specifications `. | - | | | - | | **Public image**: Select an OS for the node. | - | | | - | | **Private image**: You can use private images. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Login Mode | - **Key Pair** | - | | | - | | Select the key pair used to log in to the node. You can select a shared key. | - | | | - | | A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click **Create Key Pair**.. 
| - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+==========================================================================================================================================================================================+ + | Node Type | CCE cluster: | + | | | + | | - ECS (VM): Containers run on ECSs. | + | | | + | | CCE Turbo cluster: | + | | | + | | - ECS (VM): Containers run on ECSs. Only the ECSs that can be bound with multiple NICs are supported. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Container engine | CCE clusters support Docker and containerd in some scenarios. | + | | | + | | - VPC network clusters of v1.23 and later versions support containerd. Tunnel network clusters of v1.23.2-r0 and later versions support containerd. | + | | - For a CCE Turbo cluster, both **Docker** and **containerd** are supported. For details, see :ref:`Mapping between Node OSs and Container Engines `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Specifications | Select a node specification based on service requirements. The available node specifications vary depending on regions or AZs. For details, see the CCE console. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | OS | Select an OS type. Different types of nodes support different OSs. For details, see :ref:`Supported Node Specifications `. | + | | | + | | **Public image**: Select an OS for the node. | + | | | + | | **Private image**: You can use private images. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Login mode | - **Key Pair** | + | | | + | | Select the key pair used to log in to the node. You can select a shared key. | + | | | + | | A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click **Create Key Pair**. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ **Storage Settings** @@ -124,44 +118,56 @@ Procedure .. 
table:: **Table 3** Configuration parameters - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+===============================================================================================================================================================================================================================================================================================+ - | System Disk | System disk used by the node OS. The value ranges from 40 GiB to 1,024 GiB. The default value is 50 GiB. | - | | | - | | **Encryption**: Data disk encryption safeguards your data. Snapshots generated from encrypted disks and disks created using these snapshots automatically inherit the encryption function. **This function is available only in certain regions.** | - | | | - | | - **Encryption** is not selected by default. | - | | - After you select **Encryption**, you can select an existing key. If no key is available, click the link next to the drop-down box to create a key. After the key is created, click the refresh icon. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Data Disk | **At least one data disk is required** for the container runtime and kubelet. **The data disk cannot be deleted or uninstalled. Otherwise, the node will be unavailable.** | - | | | - | | - First data disk: used for container runtime and kubelet components. The value ranges from 20 GiB to 32,768 GiB. The default value is 100 GiB. | - | | - Other data disks: You can set the data disk size to a value ranging from 10 GiB to 32,768 GiB. The default value is 100 GiB. | - | | | - | | **Advanced Settings** | - | | | - | | Click **Expand** to set the following parameters: | - | | | - | | - **Allocate Disk Space**: Select this option to define the disk space occupied by the container runtime to store the working directories, container image data, and image metadata. For details about how to allocate data disk space, see :ref:`Data Disk Space Allocation `. | - | | - **Encryption**: Data disk encryption safeguards your data. Snapshots generated from encrypted disks and disks created using these snapshots automatically inherit the encryption function. **This function is available only in certain regions.** | - | | | - | | - **Encryption** is not selected by default. | - | | - After you select **Encryption**, you can select an existing key. If no key is available, click the link next to the drop-down box to create a key. After the key is created, click the refresh icon. | - | | | - | | **Adding Multiple Data Disks** | - | | | - | | A maximum of four data disks can be added. By default, raw disks are created without any processing. You can also click **Expand** and select any of the following options: | - | | | - | | - **Default**: By default, a raw disk is created without any processing. | - | | - **Mount Disk**: The data disk is attached to a specified directory. 
| - | | | - | | **Local Disk Description** | - | | | - | | If the node flavor is disk-intensive or ultra-high I/O, one data disk can be a local disk. | - | | | - | | Local disks may break down and do not ensure data reliability. It is recommended that you store service data in EVS disks, which are more reliable than local disks. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | System Disk | System disk used by the node OS. The value ranges from 40 GiB to 1,024 GiB. The default value is 50 GiB. | + | | | + | | **Encryption**: System disk encryption safeguards your data. Snapshots generated from encrypted disks and disks created using these snapshots automatically inherit the encryption setting. **This function is available only in certain regions.** | + | | | + | | - **Encryption** is not selected by default. | + | | - After selecting **Encryption**, you can select an existing key in the displayed dialog box. If no key is available, click **View Key List** and create a key. After the key is created, click the refresh icon next to the **Encryption** text box. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Data Disk | **At least one data disk is required** for the container runtime and kubelet. **The data disk cannot be deleted or uninstalled. Otherwise, the node will be unavailable.** | + | | | + | | - First data disk: used for container runtime and kubelet components. The value ranges from 20 GiB to 32,768 GiB. The default value is 100 GiB. 
| + | | - Other data disks: You can set the data disk size to a value ranging from 10 GB to 32,768 GiB. The default value is 100 GiB. | + | | | + | | .. note:: | + | | | + | | If the node flavor is disk-intensive or ultra-high I/O, one data disk can be a local disk. | + | | | + | | Local disks may break down and do not ensure data reliability. Store your service data in EVS disks, which are more reliable than local disks. | + | | | + | | **Advanced Settings** | + | | | + | | Click **Expand** to configure the following parameters: | + | | | + | | - **Data Disk Space Allocation**: After selecting **Set Container Engine Space**, you can specify the proportion of the space for the container engine, image, and temporary storage on the data disk. The container engine space is used to store the working directory, container image data, and image metadata for the container runtime. The remaining space of the data disk is used for pod configuration files, keys, and EmptyDir. For details about how to allocate data disk space, see :ref:`Data Disk Space Allocation `. | + | | - **Encryption**: Data disk encryption safeguards your data. Snapshots generated from encrypted disks and disks created using these snapshots automatically inherit the encryption setting. **This function is available only in certain regions.** | + | | | + | | - **Encryption** is not selected by default. | + | | - After selecting **Encryption**, you can select an existing key in the displayed dialog box. If no key is available, click **View Key List** and create a key. After the key is created, click the refresh icon next to the **Encryption** text box. | + | | | + | | **Adding Multiple Data Disks** | + | | | + | | A maximum of four data disks can be added. By default, raw disks are created without any processing. You can also click **Expand** and select any of the following options: | + | | | + | | - **Default**: By default, a raw disk is created without any processing. | + | | - **Mount Disk**: The data disk is attached to a specified directory. | + | | - **Use as PV**: applicable to scenarios in which there is a high performance requirement on PVs. The **node.kubernetes.io/local-storage-persistent** label is added to the node with PV configured. The value is **linear** or **striped**. | + | | - **Use as ephemeral volume**: applicable to scenarios in which there is a high performance requirement on EmptyDir. | + | | | + | | .. note:: | + | | | + | | - Local PVs are supported only when the cluster version is v1.21.2-r0 or later and the everest add-on version is 2.1.23 or later. Version 2.1.23 or later is recommended. | + | | - Local EVs are supported only when the cluster version is v1.21.2-r0 or later and the everest add-on version is 1.2.29 or later. | + | | | + | | :ref:`Local Persistent Volumes (Local PVs) ` and :ref:`Local EVs ` support the following write modes: | + | | | + | | - **Linear**: A linear logical volume integrates one or more physical volumes. Data is written to the next physical volume when the previous one is used up. | + | | - **Striped**: A striped logical volume stripes data into blocks of the same size and stores them in multiple physical volumes in sequence, allowing data to be concurrently read and written. A storage pool consisting of striped volumes cannot be scaled-out. This option can be selected only when multiple volumes exist. 
| + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ **Network Settings** @@ -174,7 +180,7 @@ Procedure +===================================+======================================================================================================================================================================================+ | Node Subnet | The node subnet selected during cluster creation is used by default. You can choose another subnet instead. | +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Node IP Address | Random allocation is supported. | + | Node IP | Random allocation is supported. | +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Associate Security Group | Security group used by the nodes created in the node pool. A maximum of 5 security groups can be selected. | | | | @@ -192,19 +198,19 @@ Procedure +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Parameter | Description | +===================================+================================================================================================================================================================================================================================================================+ - | Kubernetes Label | Click **Add** to set the key-value pair attached to the Kubernetes objects (such as pods). A maximum of 20 labels can be added. | + | Kubernetes Label | A key-value pair added to a Kubernetes object (such as a pod). A maximum of 20 labels can be added. | | | | | | Labels can be used to distinguish nodes. With workload affinity settings, container pods can be scheduled to a specified node. For more information, see `Labels and Selectors `__. | +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Resource Tag | You can add resource tags to classify resources. | | | | - | | You can create **predefined tags** in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use these tags to improve tagging and resource migration efficiency. | + | | You can create **predefined tags** in Tag Management Service (TMS). Predefined tags are available to all service resources that support tags. 
You can use these tags to improve tagging and resource migration efficiency. | | | | | | CCE will automatically create the "CCE-Dynamic-Provisioning-Node=\ *node id*" tag. | +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Taint | This parameter is left blank by default. You can add taints to set anti-affinity for the node. A maximum of 10 taints are allowed for each node. Each taint contains the following parameters: | + | Taint | This parameter is left blank by default. You can add taints to configure anti-affinity for the node. A maximum of 20 taints are allowed for each node. Each taint contains the following parameters: | | | | - | | - **Key**: A key must contain 1 to 63 characters starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | + | | - **Key**: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | | | - **Value**: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.). | | | - **Effect**: Available options are **NoSchedule**, **PreferNoSchedule**, and **NoExecute**. | | | | @@ -233,6 +239,10 @@ Procedure | Post-installation Command | Enter commands. A maximum of 1,000 characters are allowed. | | | | | | The script will be executed after Kubernetes software is installed and will not affect the installation. | + | | | + | | .. note:: | + | | | + | | Do not run the **reboot** command in the post-installation script to restart the system immediately. To restart the system, run the **shutdown -r 1** command to delay the restart for one minute. | +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Agency | An agency is created by the account administrator on the IAM console. By creating an agency, you can share your cloud server resources with another account, or entrust a more professional person or team to manage your resources. | | | | @@ -243,4 +253,4 @@ Procedure #. Click **Submit**. -.. |image1| image:: /_static/images/en-us_image_0000001518222604.png +.. |image1| image:: /_static/images/en-us_image_0000001647576848.png diff --git a/umn/source/node_pools/managing_a_node_pool/configuring_a_node_pool.rst b/umn/source/node_pools/managing_a_node_pool/configuring_a_node_pool.rst index 1b93b49..357fec5 100644 --- a/umn/source/node_pools/managing_a_node_pool/configuring_a_node_pool.rst +++ b/umn/source/node_pools/managing_a_node_pool/configuring_a_node_pool.rst @@ -10,108 +10,126 @@ Constraints The default node pool DefaultPool does not support the following management operations. -Configuring Kubernetes Parameters ---------------------------------- +Configuration Management +------------------------ -CCE allows you to highly customize Kubernetes parameter settings on core components in a cluster. 
For more information, see `kubelet `__. +CCE allows you to highly customize Kubernetes parameter settings on core components in a cluster. For more information, see `kubelet `__. -This function is supported only for clusters of **v1.15 and later**. It is not displayed for clusters earlier than v1.15. +This function is supported only in clusters of **v1.15 and later**. It is not displayed for clusters earlier than v1.15. #. Log in to the CCE console. -#. Click the cluster name and access the cluster console. Choose **Nodes** in the navigation pane and click the **Node Pools** tab on the right. -#. Choose **More** > **Manage** next to the node pool name. -#. On the **Manage Component** page on the right, change the values of the following Kubernetes parameters: +#. Click the cluster name to access the cluster console. Choose **Nodes** in the navigation pane and click the **Node Pools** tab on the right. +#. Choose **More** > **Manage** in the **Operation** column of the target node pool +#. On the **Manage Components** page on the right, change the values of the following Kubernetes parameters: .. table:: **Table 1** kubelet - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | Default Value | Remarks | - +=========================+====================================================================================================================================================================================================================================================================================================================================================================================================================+===========================================================================================================================================================================+=======================================================================================================================================================================================================================================================================+ - | cpu-manager-policy | CPU management policy configuration. For details, see :ref:`CPU Core Binding `. | none | None | - | | | | | - | | - **none**: disables pods from exclusively occupying CPUs. Select this value if you want a large pool of shareable CPU cores. | | | - | | - **static**: enables pods to exclusively occupy CPUs. Select this value if your workload is sensitive to latency in CPU cache and scheduling. 
| | | - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kube-api-qps | Query per second (QPS) to use while talking with kube-apiserver. | 100 | None | - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kube-api-burst | Burst to use while talking with kube-apiserver. | 100 | None | - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | max-pods | Maximum number of pods managed by kubelet. | - For a CCE cluster, the maximum number of pods is determined based on :ref:`the maximum number of pods on a node `. | None | - | | | - For a CCE Turbo cluster, the maximum number of pods is determined based on :ref:`the number of NICs on a CCE Turbo cluster node `. 
| | - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | pod-pids-limit | PID limit in Kubernetes | -1 | None | - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | with-local-dns | Whether to use the local IP address as the ClusterDNS of the node. 
| false | None | - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | event-qps | QPS limit for event creation | 5 | None | - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | allowed-unsafe-sysctls | Insecure system configuration allowed. | [] | None | - | | | | | - | | Starting from **v1.17.17**, CCE enables pod security policies for kube-apiserver. You need to add corresponding configurations to **allowedUnsafeSysctls** of a pod security policy to make the policy take effect. (This configuration is not required for clusters earlier than v1.17.17.) For details, see :ref:`Example of Enabling Unsafe Sysctls in Pod Security Policy `. | | | - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | kube-reserved-mem | Reserved node memory. | Depends on node specifications. For details, see :ref:`Formula for Calculating the Reserved Resources of a Node `. | The sum of kube-reserved-mem and system-reserved-mem is less than half of the memory. 
| - | | | | | - | system-reserved-mem | | | | - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | topology-manager-policy | Set the topology management policy. | none | The values can be modified during the node pool lifecycle. | - | | | | | - | | Valid values are as follows: | | .. important:: | - | | | | | - | | - **restricted**: kubelet accepts only pods that achieve optimal NUMA alignment on the requested resources. | | NOTICE: | - | | - **best-effort**: kubelet preferentially selects pods that implement NUMA alignment on CPU and device resources. | | Exercise caution when modifying topology-manager-policy and topology-manager-scope will restart kubelet and recalculate the resource allocation of pods based on the modified policy. As a result, running pods may restart or even fail to receive any resources. | - | | - **none** (default): The topology management policy is disabled. | | | - | | - **single-numa-node**: kubelet allows only pods that are aligned to the same NUMA node in terms of CPU and device resources. | | | - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | topology-manager-scope | Set the resource alignment granularity of the topology management policy. 
Valid values are as follows: | Container | | - | | | | | - | | - **container** (default) | | | - | | - **pod** | | | - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | resolv-conf | DNS resolution configuration file specified by a container | The default value is null. | None | - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | runtime-request-timeout | Timeout interval of all runtime requests except long-running requests (pull, logs, exec, and attach). | The default value is **2m0s**. | None | - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | registry-pull-qps | Maximum number of image pulls per second. | The default value is **5**. | The value ranges from 1 to 50. 
| - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | registry-burst | Maximum number of burst image pulls. | The default value is **10**. | The value ranges from 1 to 100 and must be greater than or equal to the value of **registry-pull-qps**. | - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | serialize-image-pulls | When this function is enabled, kubelet is notified to pull only one image at a time. | The default value is **true**. 
| None | - +-------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | Default Value | Modification | Remarks | + +============================+========================================================================================================================================================================================================================================================================================================================================================================================================+===========================================================================================================================================================================+=========================================================================================================+====================================================================================================================================================================================================================================================================+ + | cpu-manager-policy | CPU management policy configuration. For details, see :ref:`CPU Scheduling `. | none | None | None | + | | | | | | + | | - **none**: disables pods from exclusively occupying CPUs. Select this value if you want a large pool of shareable CPU cores. | | | | + | | - **static**: enables pods to exclusively occupy CPUs. Select this value if your workload is sensitive to latency in CPU cache and scheduling. 
| | | | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kube-api-qps | Query per second (QPS) for communicating with kube-apiserver. | 100 | None | None | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kube-api-burst | Burst to use while talking with kube-apiserver. | 100 | None | None | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | max-pods | Maximum number of pods managed by kubelet. | - For a CCE cluster, the maximum number of pods is determined based on :ref:`the maximum number of pods on a node `. | None | None | + | | | - For a CCE Turbo cluster, the maximum number of pods is determined based on :ref:`the number of NICs on a CCE Turbo cluster node `. 
| | | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | pod-pids-limit | Limited number of PIDs in Kubernetes | -1 | None | None | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | with-local-dns | Whether to use the local IP address as the ClusterDNS of the node. 
| false | None | None | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | event-qps | QPS limit for event creation | 5 | None | None | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | allowed-unsafe-sysctls | Insecure system configuration allowed. | [] | None | None | + | | | | | | + | | Starting from **v1.17.17**, CCE enables pod security policies for kube-apiserver. Add corresponding configurations to **allowedUnsafeSysctls** of a pod security policy to make the policy take effect. (This configuration is not required for clusters earlier than v1.17.17.) For details, see :ref:`Example of Enabling Unsafe Sysctls in Pod Security Policy `. | | | | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | over-subscription-resource | Whether to enable node oversubscription. 
| true | None | None | + | | | | | | + | | If this parameter is set to **true**, node oversubscription is enabled. For details, see :ref:`Dynamic Resource Oversubscription `. | | | | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | colocation | Whether to enable hybrid deployment on nodes. | true | None | None | + | | | | | | + | | If this parameter is set to **true**, hybrid deployment is enabled on nodes. For details, see :ref:`Dynamic Resource Oversubscription `. | | | | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | kube-reserved-mem | Reserved node memory. | Depends on node specifications. For details, see :ref:`Node Resource Reservation Policy `. | None | The sum of **kube-reserved-mem** and **system-reserved-mem** is less than half of the memory. 
| + | | | | | | + | system-reserved-mem | | | | | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | topology-manager-policy | Set the topology management policy. | none | None | .. important:: | + | | | | | | + | | Valid values are as follows: | | | NOTICE: | + | | | | | Modifying **topology-manager-policy** and **topology-manager-scope** will restart kubelet, and the resource allocation of pods will be recalculated based on the modified policy. In this case, running pods may restart or even fail to receive any resources. | + | | - **restricted**: kubelet accepts only pods that achieve optimal NUMA alignment on the requested resources. | | | | + | | - **best-effort**: kubelet preferentially selects pods that implement NUMA alignment on CPU and device resources. | | | | + | | - **none** (default): The topology management policy is disabled. | | | | + | | - **single-numa-node**: kubelet allows only pods that are aligned to the same NUMA node in terms of CPU and device resources. | | | | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | topology-manager-scope | Set the resource alignment granularity of the topology management policy. 
Valid values are as follows: | container | | | + | | | | | | + | | - **container** (default) | | | | + | | - **pod** | | | | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | resolv-conf | DNS resolution configuration file specified by the container | The default value is null. | None | None | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | runtime-request-timeout | Timeout interval of all runtime requests except long-running requests (pull, logs, exec, and attach). | The default value is **2m0s**. | None | None | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | registry-pull-qps | Maximum number of image pulls per second. | The default value is **5**. | The value ranges from 1 to 50. 
| None | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | registry-burst | Maximum number of burst image pulls. | The default value is **10**. | The value ranges from 1 to 100 and must be greater than or equal to the value of **registry-pull-qps**. | None | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | serialize-image-pulls | When this function is enabled, kubelet is notified to pull only one image at a time. | The default value is **true**. | None | None | + +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ .. 
table:: **Table 2** kube-proxy - +----------------------------------+-------------------------------------------------------------+---------------+---------+ - | Parameter | Description | Default Value | Remarks | - +==================================+=============================================================+===============+=========+ - | conntrack-min | sysctl -w net.nf_conntrack_max | 131072 | None | - +----------------------------------+-------------------------------------------------------------+---------------+---------+ - | conntrack-tcp-timeout-close-wait | sysctl -w net.netfilter.nf_conntrack_tcp_timeout_close_wait | 1h0m0s | None | - +----------------------------------+-------------------------------------------------------------+---------------+---------+ + +----------------------------------+----------------------------------------------------------------+-----------------+-----------------+ + | Parameter | Description | Default Value | Modification | + +==================================+================================================================+=================+=================+ + | conntrack-min | Maximum number of connection tracking entries | 131072 | None | + | | | | | + | | To obtain the value, run the following command: | | | + | | | | | + | | .. code-block:: | | | + | | | | | + | | sysctl -w net.nf_conntrack_max | | | + +----------------------------------+----------------------------------------------------------------+-----------------+-----------------+ + | conntrack-tcp-timeout-close-wait | Wait time of a closed TCP connection | 1h0m0s | None | + | | | | | + | | To obtain the value, run the following command: | | | + | | | | | + | | .. code-block:: | | | + | | | | | + | | sysctl -w net.netfilter.nf_conntrack_tcp_timeout_close_wait | | | + +----------------------------------+----------------------------------------------------------------+-----------------+-----------------+ .. table:: **Table 3** Network components (available only for CCE Turbo clusters) - +---------------------------+------------------------------------------------------------------------------------------------------+------------------+-----------------+ - | Parameter | Description | Default Value | Remarks | - +===========================+======================================================================================================+==================+=================+ - | nic-threshold | Low threshold of the number of bound ENIs:High threshold of the number of bound ENIs | Default: **0:0** | None | - | | | | | - | | .. note:: | | | - | | | | | - | | This parameter is being discarded. Use the dynamic pre-binding parameters of the other four ENIs. 
| | | - +---------------------------+------------------------------------------------------------------------------------------------------+------------------+-----------------+ - | nic-minimum-target | Minimum number of ENIs bound to a node at the node pool level | Default: **10** | None | - +---------------------------+------------------------------------------------------------------------------------------------------+------------------+-----------------+ - | nic-maximum-target | Maximum number of ENIs pre-bound to a node at the node pool level | Default: **0** | None | - +---------------------------+------------------------------------------------------------------------------------------------------+------------------+-----------------+ - | nic-warm-target | Number of ENIs pre-bound to a node at the node pool level | Default: **2** | None | - +---------------------------+------------------------------------------------------------------------------------------------------+------------------+-----------------+ - | nic-max-above-warm-target | Reclaim number of ENIs pre-bound to a node at the node pool level | Default: **2** | None | - +---------------------------+------------------------------------------------------------------------------------------------------+------------------+-----------------+ + +---------------------------+---------------------------------------------------------------------------------------+-----------------+------------------------------------------------------------------------------------------------------+ + | Parameter | Description | Default Value | Modification | + +===========================+=======================================================================================+=================+======================================================================================================+ + | nic-threshold | Low threshold of the number of bound ENIs: High threshold of the number of bound ENIs | Default: 0:0 | .. note:: | + | | | | | + | | | | This parameter is being discarded. Use the dynamic pre-binding parameters of the other four ENIs. 
| + +---------------------------+---------------------------------------------------------------------------------------+-----------------+------------------------------------------------------------------------------------------------------+ + | nic-minimum-target | Minimum number of ENIs bound to the nodes in the node pool | Default: 10 | None | + +---------------------------+---------------------------------------------------------------------------------------+-----------------+------------------------------------------------------------------------------------------------------+ + | nic-maximum-target | Maximum number of ENIs pre-bound to a node at the node pool level | Default: 0 | None | + +---------------------------+---------------------------------------------------------------------------------------+-----------------+------------------------------------------------------------------------------------------------------+ + | nic-warm-target | Number of ENIs pre-bound to a node at the node pool level | Default: 2 | None | + +---------------------------+---------------------------------------------------------------------------------------+-----------------+------------------------------------------------------------------------------------------------------+ + | nic-max-above-warm-target | Reclaim number of ENIs pre-bound to a node at the node pool level | Default: 2 | None | + +---------------------------+---------------------------------------------------------------------------------------+-----------------+------------------------------------------------------------------------------------------------------+ .. table:: **Table 4** Pod security group in a node pool (available only for CCE Turbo clusters) +------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-----------------+ - | Parameter | Description | Default Value | Remarks | + | Parameter | Description | Default Value | Modification | +==============================+=====================================================================================================================================================================================================================================================================================================+=================+=================+ | security_groups_for_nodepool | - Default security group used by pods in a node pool. You can enter the security group ID. If this parameter is not set, the default security group of the cluster container network is used. A maximum of five security group IDs can be specified at the same time, separated by semicolons (;). | None | None | | | - The priority of the security group is lower than that of the security group configured for :ref:`Security Groups `. | | | @@ -120,7 +138,7 @@ This function is supported only for clusters of **v1.15 and later**. It is not d .. 
table:: **Table 5** Docker (available only for node pools that use Docker) +-----------------------+---------------------------------------------------------------+-----------------+--------------------------------------------------------------------------------------------------------+ - | Parameter | Description | Default Value | Remarks | + | Parameter | Description | Default Value | Modification | +=======================+===============================================================+=================+========================================================================================================+ | native-umask | \`--exec-opt native.umask | normal | Cannot be changed. | +-----------------------+---------------------------------------------------------------+-----------------+--------------------------------------------------------------------------------------------------------+ @@ -141,4 +159,24 @@ This function is supported only for clusters of **v1.15 and later**. It is not d | | | | sysctl -a | grep nr_open | +-----------------------+---------------------------------------------------------------+-----------------+--------------------------------------------------------------------------------------------------------+ + .. table:: **Table 6** containerd (available only for node pools that use containerd) + + +-----------------------+---------------------------------------------------------------+-----------------+--------------------------------------------------------------------------------------------------------+ + | Parameter | Description | Default Value | Modification | + +=======================+===============================================================+=================+========================================================================================================+ + | devmapper-base-size | Available data space of a single container | None | Cannot be changed. | + +-----------------------+---------------------------------------------------------------+-----------------+--------------------------------------------------------------------------------------------------------+ + | limitcore | Maximum size of a core file in a container. The unit is byte. | 5368709120 | None | + | | | | | + | | If not specified, the value is **infinity**. | | | + +-----------------------+---------------------------------------------------------------+-----------------+--------------------------------------------------------------------------------------------------------+ + | default-ulimit-nofile | Limit on the number of handles in a container | 1048576 | The value cannot exceed the value of the kernel parameter **nr_open** and cannot be a negative number. | + | | | | | + | | | | You can run the following command to obtain the kernel parameter **nr_open**: | + | | | | | + | | | | .. code-block:: | + | | | | | + | | | | sysctl -a | grep nr_open | + +-----------------------+---------------------------------------------------------------+-----------------+--------------------------------------------------------------------------------------------------------+ + #. Click **OK**. 
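For a quick sanity check before tuning the kube-proxy and container engine parameters above, you can read the related kernel values directly on a node. The following is a minimal sketch, assuming shell access to the node and that the nf_conntrack module is loaded; the sysctl keys are the standard Linux ones referenced in the tables.

.. code-block:: bash

   # Connection-tracking ceiling on the node (compare with conntrack-min)
   sysctl -n net.nf_conntrack_max

   # CLOSE_WAIT tracking timeout in seconds (compare with conntrack-tcp-timeout-close-wait)
   sysctl -n net.netfilter.nf_conntrack_tcp_timeout_close_wait

   # Kernel upper bound on per-process open files;
   # default-ulimit-nofile must not exceed this value
   sysctl -n fs.nr_open

If the conntrack keys are not present, the connection-tracking module has probably not been loaded on that node yet.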
diff --git a/umn/source/node_pools/managing_a_node_pool/copying_a_node_pool.rst b/umn/source/node_pools/managing_a_node_pool/copying_a_node_pool.rst index 3bba7bd..b04eb6a 100644 --- a/umn/source/node_pools/managing_a_node_pool/copying_a_node_pool.rst +++ b/umn/source/node_pools/managing_a_node_pool/copying_a_node_pool.rst @@ -8,7 +8,7 @@ Copying a Node Pool You can copy the configuration of an existing node pool to create a new node pool on the CCE console. #. Log in to the CCE console. -#. Click the cluster name and access the cluster console. Choose **Nodes** in the navigation pane and click the **Node Pools** tab on the right. -#. Choose **More > Copy** next to a node pool name to copy the node pool. +#. Click the cluster name to access the cluster console. Choose **Nodes** in the navigation pane and click the **Node Pools** tab on the right. +#. Choose **More > Copy** in the **Operation** column of the target node pool. #. The configurations of the selected node pool are replicated to the **Clone Node Pool** page. You can edit the configurations as required. For details about configuration items, see :ref:`Creating a Node Pool `. After confirming the configuration, click **Next: Confirm**. #. On the **Confirm** page, confirm the node pool configuration and click **Submit**. Then, a new node pool is created based on the edited configuration. diff --git a/umn/source/node_pools/managing_a_node_pool/index.rst b/umn/source/node_pools/managing_a_node_pool/index.rst index ea0fca1..2384c69 100644 --- a/umn/source/node_pools/managing_a_node_pool/index.rst +++ b/umn/source/node_pools/managing_a_node_pool/index.rst @@ -5,11 +5,11 @@ Managing a Node Pool ==================== -- :ref:`Configuring a Node Pool ` - :ref:`Updating a Node Pool ` +- :ref:`Configuring a Node Pool ` +- :ref:`Copying a Node Pool ` - :ref:`Synchronizing Node Pools ` - :ref:`Upgrading an OS ` -- :ref:`Copying a Node Pool ` - :ref:`Migrating a Node ` - :ref:`Deleting a Node Pool ` @@ -17,10 +17,10 @@ Managing a Node Pool :maxdepth: 1 :hidden: - configuring_a_node_pool updating_a_node_pool + configuring_a_node_pool + copying_a_node_pool synchronizing_node_pools upgrading_an_os - copying_a_node_pool migrating_a_node deleting_a_node_pool diff --git a/umn/source/node_pools/managing_a_node_pool/updating_a_node_pool.rst b/umn/source/node_pools/managing_a_node_pool/updating_a_node_pool.rst index 63ee135..b5ff5b6 100644 --- a/umn/source/node_pools/managing_a_node_pool/updating_a_node_pool.rst +++ b/umn/source/node_pools/managing_a_node_pool/updating_a_node_pool.rst @@ -8,7 +8,7 @@ Updating a Node Pool Constraints ----------- -- When editing the resource tags of the node pool. The modified configuration takes effect only for new nodes. To synchronize the configuration to the existing nodes, you need to manually reset the existing nodes. +- When editing the resource tags of the node pool. The modified configuration takes effect only for new nodes. To synchronize the configuration to the existing nodes, manually reset the existing nodes. - Updates of kubernetes labels and taints are automatically synchronized to existing nodes. You do not need to reset nodes. @@ -17,7 +17,7 @@ Updating a Node Pool #. Log in to the CCE console. -#. Click the cluster name and access the cluster console. Choose **Nodes** in the navigation pane and click the **Node Pools** tab on the right. +#. Click the cluster name to access the cluster console. Choose **Nodes** in the navigation pane and click the **Node Pools** tab on the right. #. 
Click **Update** next to the name of the node pool you will edit. Configure the parameters in the displayed **Update Node Pool** page. @@ -25,27 +25,27 @@ Updating a Node Pool .. table:: **Table 1** Basic settings - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+=================================================================================================================================================================================================================================================================================================================================================================================================================================================+ - | Node Pool Name | Name of the node pool. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Nodes | Modify the number of nodes based on service requirements. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Auto Scaling | By default, this parameter is disabled. | - | | | - | | After you enable autoscaler by clicking |image1|, nodes in the node pool are automatically created or deleted based on service requirements. | - | | | - | | - **Maximum Nodes** and **Minimum Nodes**: You can set the maximum and minimum number of nodes to ensure that the number of nodes to be scaled is within a proper range. | - | | | - | | - **Priority**: A larger value indicates a higher priority. For example, if this parameter is set to **1** and **4** respectively for node pools A and B, B has a higher priority than A, and auto scaling is first triggered for B. If the priorities of multiple node pools are set to the same value, for example, **2**, the node pools are not prioritized and the system performs scaling based on the minimum resource waste principle. | - | | | - | | After the priority is updated, the configuration takes effect within 1 minute. | - | | | - | | - **Cooldown Period**: Enter a period, in minutes. This field indicates the period during which the nodes added in the current node pool cannot be scaled in. | - | | | - | | If the **Autoscaler** field is set to on, install the :ref:`autoscaler add-on ` to use the autoscaler feature. 
| - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | Node Pool Name | Name of the node pool. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Nodes | Modify the number of nodes based on service requirements. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Auto Scaling | This function is disabled by default. | + | | | + | | After you enable autoscaler by clicking |image1|, nodes in the node pool are automatically created or deleted based on service requirements. | + | | | + | | - **Maximum Nodes** and **Minimum Nodes**: You can set the maximum and minimum number of nodes to ensure that the number of nodes to be scaled is within a proper range. | + | | | + | | - **Node Pool Priority**: indicates the priority of a node pool for a scale-out. A larger value indicates a higher priority. For example, the node pool with priority **4** is scaled out prior to the one with priority **1**. If the priorities of multiple node pools are set to the same value, these node pools are not prioritized and they will be scaled out by following the rule of maximizing resource utilization. | + | | | + | | After the priority is changed, the modification takes effect within 1 minute. | + | | | + | | - **Cooldown Period**: Enter a period, in minutes. It specifies a period during which the nodes added in the current node pool cannot be scaled in. 
| + | | | + | | To ensure the proper running of AS, install the :ref:`autoscaler `. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ **Advanced Settings** @@ -54,7 +54,7 @@ Updating a Node Pool +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Parameter | Description | +===================================+================================================================================================================================================================================================================================================================+ - | Kubernetes Label | Click **Add** to set the key-value pair attached to the Kubernetes objects (such as pods). A maximum of 20 labels can be added. | + | Kubernetes Label | A Kubernetes label is a key-value pair added to a Kubernetes object (such as a pod). After specifying a label, click **Add**. A maximum of 20 labels can be added. | | | | | | Labels can be used to distinguish nodes. With workload affinity settings, container pods can be scheduled to a specified node. For more information, see `Labels and Selectors `__. | | | | @@ -64,17 +64,17 @@ Updating a Node Pool +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Resource Tag | You can add resource tags to classify resources. | | | | - | | You can create **predefined tags** in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use these tags to improve tagging and resource migration efficiency. | + | | You can create **predefined tags** in Tag Management Service (TMS). Predefined tags are available to all service resources that support tags. You can use these tags to improve tagging and resource migration efficiency. | | | | | | CCE will automatically create the "CCE-Dynamic-Provisioning-Node=\ *node id*" tag. | | | | | | .. note:: | | | | - | | After a **resource tag** is modified, the modification automatically takes effect when a node is added. For existing nodes, you need to manually reset the nodes for the modification to take effect. | + | | After a resource tag is modified, the modification automatically takes effect on newly added nodes. For existing nodes, manually reset the nodes for the modification to take effect. | +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Taint | This field is left blank by default. 
You can add taints to set anti-affinity for the node. A maximum of 10 taints are allowed for each node. Each taint contains the following parameters: | + | Taint | This field is left blank by default. You can add taints to configure node anti-affinity. A maximum of 20 taints are allowed for each node. Each taint contains the following parameters: | | | | - | | - **Key**: A key must contain 1 to 63 characters starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | + | | - **Key**: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | | | - **Value**: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.). | | | - **Effect**: Available options are **NoSchedule**, **PreferNoSchedule**, and **NoExecute**. | | | | @@ -82,17 +82,17 @@ Updating a Node Pool | | | | | .. note:: | | | | - | | After a **taint** is modified, the inventory nodes in the node pool are updated synchronously. | + | | After a **taint** is modified, the existing nodes in the node pool are updated synchronously. | +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Edit Key pair | Only node pools that use key pairs for login support key pair editing. You can select another key pair. | + | Edit key pair | Only node pools that use key pairs for login support key pair editing. You can select another key pair. | | | | | | .. note:: | | | | - | | The edited key pair automatically takes effect when a node is added. For existing nodes, you need to manually reset the nodes for the key pair to take effect. | + | | The edited key pair automatically takes effect on newly added nodes. For existing nodes, manually reset the nodes for the modification to take effect. | +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -#. After the configuration is complete, click **OK**. +#. After the configuration, click **OK**. After the node pool parameters are updated, go to the **Nodes** page to check whether the node to which the node pool belongs is updated. You can reset the node to synchronize the configuration updates for the node pool. -.. |image1| image:: /_static/images/en-us_image_0000001629926113.png +.. 
|image1| image:: /_static/images/en-us_image_0000001654936892.png diff --git a/umn/source/node_pools/node_pool_overview.rst b/umn/source/node_pools/node_pool_overview.rst index 0984306..437ff6f 100644 --- a/umn/source/node_pools/node_pool_overview.rst +++ b/umn/source/node_pools/node_pool_overview.rst @@ -27,10 +27,10 @@ Generally, all nodes in a node pool have the following same attributes: - Node OS - Node specifications - Node login mode -- Node runtime +- Node container runtime - Startup parameters of Kubernetes components on a node - User-defined startup script of a node -- **K8s Labels** and **Taints** +- **Kubernetes Labels** and **Taints** CCE provides the following extended attributes for node pools: @@ -44,7 +44,7 @@ Description of DefaultPool DefaultPool is not a real node pool. It only **classifies** nodes that are not in the user-created node pools. These nodes are directly created on the console or by calling APIs. DefaultPool does not support any user-created node pool functions, including scaling and parameter configuration. DefaultPool cannot be edited, deleted, expanded, or auto scaled, and nodes in it cannot be migrated. -Application scenario +Applicable Scenarios -------------------- When a large-scale cluster is required, you are advised to use node pools to manage nodes. @@ -66,35 +66,35 @@ The following table describes multiple scenarios of large-scale cluster manageme Functions and Precautions ------------------------- -+---------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Function | Description | Notes | -+=======================================+========================================================================================================================================================+========================================================================================================================================================================================================================+ -| Creating a node pool | Add a node pool. | It is recommended that a cluster contains no more than 100 node pools. | -+---------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Deleting a node pool | Deleting a node pool will delete nodes in the pool. Pods on these nodes will be automatically migrated to available nodes in other node pools. | If pods in the node pool have a specific node selector and none of the other nodes in the cluster satisfies the node selector, the pods will become unschedulable. 
| -+---------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Enabling auto scaling for a node pool | After auto scaling is enabled, nodes will be automatically created or deleted in the node pool based on the cluster loads. | You are advised not to store important data on nodes in a node pool because after auto scaling, data cannot be restored as nodes may be deleted. | -+---------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Enabling auto scaling for a node pool | After auto scaling is disabled, the number of nodes in a node pool will not automatically change with the cluster loads. | None | -+---------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Adjusting the size of a node pool | The number of nodes in a node pool can be directly adjusted. If the number of nodes is reduced, nodes are randomly removed from the current node pool. | After auto scaling is enabled, you are not advised to manually adjust the node pool size. | -+---------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Changing node pool configurations | You can modify the node pool name, node quantity, Kubernetes labels (and their quantity), and taints. | The deleted or added Kubernetes labels and taints (as well as their quantity) will apply to all nodes in the node pool, which may cause pod re-scheduling. Therefore, exercise caution when performing this operation. | -+---------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Removing a node from a node pool | Nodes in a node pool can be migrated to the default node pool of the same cluster. | Nodes in the default node pool cannot be migrated to other node pools, and nodes in a user-created node pool cannot be migrated to other user-created node pools. 
| -+---------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Copying a Node Pool | You can copy the configuration of an existing node pool to create a new node pool. | None | -+---------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Setting Kubernetes parameters | You can configure core components with fine granularity. | - This function is supported only in clusters of v1.15 and later. It is not displayed for versions earlier than v1.15. | -| | | - The default node pool DefaultPool does not support this type of configuration. | -+---------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ ++---------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Function | Description | Precaution | ++=======================================+=====================================================================================================================================================================================+========================================================================================================================================================================================================================+ +| Creating a node pool | Add a node pool. | It is recommended that a cluster contains no more than 100 node pools. | ++---------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Deleting a node pool | When a node pool is deleted, the nodes in the node pool are deleted first. Workloads on the original nodes are automatically migrated to available nodes in other node pools. | If pods in the node pool have a specific node selector and none of the other nodes in the cluster satisfies the node selector, the pods will become unschedulable. 
| ++---------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Enabling auto scaling for a node pool | After auto scaling is enabled, nodes will be automatically created or deleted in the node pool based on the cluster loads. | You are advised not to store important data on nodes in a node pool because after auto scaling, data cannot be restored as nodes may be deleted. | ++---------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Enabling auto scaling for a node pool | After auto scaling is disabled, the number of nodes in a node pool will not automatically change with the cluster loads. | None | ++---------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Adjusting the size of a node pool | The number of nodes in a node pool can be directly adjusted. If the number of nodes is reduced, nodes are randomly removed from the current node pool. | After auto scaling is enabled, you are not advised to manually adjust the node pool size. | ++---------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Changing node pool configurations | You can modify the node pool name, node quantity, Kubernetes labels (and their quantity), and taints and adjust the disk, OS, and container engine configurations of the node pool. | The deleted or added Kubernetes labels and taints (as well as their quantity) will apply to all nodes in the node pool, which may cause pod re-scheduling. Therefore, exercise caution when performing this operation. | ++---------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Removing a node from a node pool | Nodes in a node pool can be migrated to the default node pool of the same cluster. 
| Nodes in the default node pool cannot be migrated to other node pools, and nodes in a user-created node pool cannot be migrated to other user-created node pools. | ++---------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Cloning a node pool | You can copy the configuration of an existing node pool to create a new node pool. | None | ++---------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Setting Kubernetes parameters | You can configure core components with fine granularity. | - This function is supported only in clusters of v1.15 and later. It is not displayed for versions earlier than v1.15. | +| | | - The default node pool DefaultPool does not support this type of configuration. | ++---------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ Deploying a Workload in a Specified Node Pool --------------------------------------------- When creating a workload, you can constrain pods to run in a specified node pool. -For example, on the CCE console, you can set the affinity between the workload and the node on the **Scheduling Policies** tab page on the workload details page to forcibly deploy the workload to a specific node pool. In this way, the workload runs only on nodes in the node pool. If you need to better control where the workload is to be scheduled, you can use affinity or anti-affinity policies between workloads and nodes described in :ref:`Scheduling Policy (Affinity/Anti-affinity) `. +For example, on the CCE console, you can set the affinity between the workload and the node on the **Scheduling Policies** tab page on the workload details page to forcibly deploy the workload to a specific node pool. In this way, the workload runs only on nodes in the node pool. To better control where the workload is to be scheduled, you can use affinity or anti-affinity policies between workloads and nodes described in :ref:`Scheduling Policy (Affinity/Anti-affinity) `. For example, you can use container's resource request as a nodeSelector so that workloads will run only on the nodes that meet the resource request. 
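To make the nodeSelector approach described above concrete, the following sketch assumes a node pool whose nodes carry a user-defined Kubernetes label such as pool=example-pool; the label key and the nginx Deployment name are placeholders for illustration, not CCE defaults.

.. code-block:: bash

   # Show the nodes together with the user-defined label to confirm it is set
   kubectl get nodes -L pool

   # Constrain an existing Deployment to nodes that carry the label
   kubectl patch deployment nginx \
     -p '{"spec":{"template":{"spec":{"nodeSelector":{"pool":"example-pool"}}}}}'

Pods of the patched Deployment are then scheduled only onto nodes in the labeled node pool; combine this with taints and tolerations if the pool should also repel other workloads.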
diff --git a/umn/source/nodes/adding_nodes_for_management.rst b/umn/source/nodes/adding_nodes_for_management.rst index 5b1afb7..9bfc42d 100644 --- a/umn/source/nodes/adding_nodes_for_management.rst +++ b/umn/source/nodes/adding_nodes_for_management.rst @@ -8,20 +8,21 @@ Adding Nodes for Management Scenario -------- -In CCE, you can create a node (:ref:`Creating a Node `) or add existing nodes (ECSs) into your cluster. +In CCE, you can create a node (:ref:`Creating a Node `) or add existing nodes (ECSs) to your cluster. .. important:: - While an ECS is being accepted into a cluster, the operating system of the ECS will be reset to the standard OS image provided by CCE to ensure node stability. The CCE console prompts you to select the operating system and the login mode during the reset. - - The system disk and data disk of an ECS will be formatted while the ECS is being accepted into a cluster. Ensure that information in the disks has been backed up. + - LVM information, including volume groups (VGs), logical volumes (LVs), and physical volumes (PVs), will be deleted from the system disks and data disks attached to the selected ECSs during management. Ensure that the information has been backed up. - While an ECS is being accepted into a cluster, do not perform any operation on the ECS through the ECS console. -Notes and Constraints ---------------------- +Constraints +----------- - The cluster version must be 1.15 or later. -- If the password or key has been set when a VM node is created, the VM node can be accepted into a cluster 10 minutes after it is available. During the management, the original password or key will become invalid. You need to reset the password or key. +- If a password or key has been set when the original VM node was created, reset the password or key during management. The original password or key will become invalid. - Nodes in a CCE Turbo cluster must support sub-ENIs or be bound to at least 16 ENIs. For details about the node specifications, see the nodes that can be selected on the console when you create a node. +- Data disks that have been partitioned will be ignored during node management. Ensure that at least one unpartitioned data disk that meets the specifications is attached to the node. Prerequisites ------------- A cloud server that meets the following conditions can be accepted: - The node to be accepted must be in the **Running** state and not used by other clusters. In addition, the node to be accepted does not carry the CCE-Dynamic-Provisioning-Node tag. - The node to be accepted and the cluster must be in the same VPC. (If the cluster version is earlier than v1.13.10, the node to be accepted and the CCE cluster must be in the same subnet.) -- At least one data disk is attached to the node to be accepted. The data disk capacity is greater than or equal to 100 GB. -- The node to be accepted has 2-core or higher CPU, 4 GB or larger memory, and only one NIC. +- Data disks must be attached to the nodes to be managed. A local disk (disk-intensive disk) or a data disk of at least 20 GiB can be attached to the node, and any data disks already attached cannot be smaller than 10 GiB. +- The node to be accepted has 2-core or higher CPU, 4 GiB or larger memory, and only one NIC. - Only cloud servers with the same specifications, AZ, and data disk configuration can be added in batches. Procedure --------- -#. Log in to the CCE console and go to the cluster where the node to be managed resides. +#.
Log in to the CCE console and go to the cluster where the node to be accepted resides. #. In the navigation pane, choose **Nodes**. On the displayed page, click **Accept Node** in the upper right corner. @@ -58,7 +59,7 @@ Procedure +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Container Engine | CCE clusters support Docker and containerd in some scenarios. | | | | - | | - VPC network clusters of v1.23 and later versions support containerd. Container tunnel network clusters of v1.23.2-r0 and later versions support containerd. | + | | - VPC network clusters of v1.23 and later versions support containerd. Tunnel network clusters of v1.23.2-r0 and later versions support containerd. | | | - For a CCE Turbo cluster, both **Docker** and **containerd** are supported. For details, see :ref:`Mapping between Node OSs and Container Engines `. | +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | OS | **Public image**: Select an OS for the node. | @@ -78,17 +79,17 @@ Procedure .. table:: **Table 2** Configuration parameters - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+=================================================================================================================================================================================================================================================================+ - | System Disk | Directly use the system disk of the cloud server. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Data Disk | **At least one data disk is required** for the container runtime and kubelet. **The data disk cannot be deleted or uninstalled. Otherwise, the node will be unavailable.** | - | | | - | | Click **Expand** to define the disk space occupied by the container runtime to store the working directories, container image data, and image metadata. For details about how to allocate data disk space, see :ref:`Data Disk Space Allocation `. | - | | | - | | For other data disks, a raw disk is created without any processing by default. You can also click **Expand** to mount the data disk to a specified directory. 
| - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+====================================================================================================================================================================================================================================================================================================+ + | System Disk | Directly use the system disk of the cloud server. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Data Disk | **At least one data disk is required** for the container runtime and kubelet. **The data disk cannot be deleted or uninstalled. Otherwise, the node will be unavailable.** | + | | | + | | Click **Expand** and select **Allocate Disk Space** to define the disk space occupied by the container runtime to store the working directories, container image data, and image metadata. For details about how to allocate data disk space, see :ref:`Data Disk Space Allocation `. | + | | | + | | For other data disks, a raw disk is created without any processing by default. You can also click **Expand** and select **Mount Disk** to mount the data disk to a specified directory. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ **Advanced Settings** @@ -103,13 +104,13 @@ Procedure +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Resource Tag | You can add resource tags to classify resources. | | | | - | | You can create **predefined tags** in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use these tags to improve tagging and resource migration efficiency. | + | | You can create **predefined tags** in Tag Management Service (TMS). Predefined tags are available to all service resources that support tags. You can use these tags to improve tagging and resource migration efficiency. | | | | | | CCE will automatically create the "CCE-Dynamic-Provisioning-Node=\ *node id*" tag. 
| +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Taint | This field is left blank by default. You can add taints to set anti-affinity for the node. A maximum of 10 taints are allowed for each node. Each taint contains the following parameters: | + | Taint | This field is left blank by default. You can add taints to configure anti-affinity for the node. A maximum of 20 taints are allowed for each node. Each taint contains the following parameters: | | | | - | | - **Key**: A key must contain 1 to 63 characters starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | + | | - **Key**: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | | | - **Value**: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.). | | | - **Effect**: Available options are **NoSchedule**, **PreferNoSchedule**, and **NoExecute**. | | | | diff --git a/umn/source/nodes/container_engine.rst b/umn/source/nodes/container_engine.rst new file mode 100644 index 0000000..2191564 --- /dev/null +++ b/umn/source/nodes/container_engine.rst @@ -0,0 +1,165 @@ +:original_name: cce_10_0462.html + +.. _cce_10_0462: + +Container Engine +================ + +Introduction to Container Engines +--------------------------------- + +Container engines, one of the most important components of Kubernetes, manage the lifecycle of images and containers. The kubelet interacts with a container runtime through the Container Runtime Interface (CRI). + +CCE supports containerd and Docker. **containerd is recommended for its shorter traces, fewer components, higher stability, and less consumption of node resources**. + +.. 
table:: **Table 1** Comparison between container engines + + +----------------------------+-------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+ + | Item | containerd | Docker | + +============================+===================================================================+==================================================================================================+ + | Tracing | kubelet --> CRI plugin (in the containerd process) --> containerd | - Docker (Kubernetes 1.23 and earlier versions): | + | | | | + | | | kubelet --> dockershim (in the kubelet process) --> docker --> containerd | + | | | | + | | | - Docker (community solution for Kubernetes 1.24 and later versions): | + | | | | + | | | kubelet --> cri-dockerd (kubelet uses CRI to connect to cri-dockerd) --> docker--> containerd | + +----------------------------+-------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+ + | Command | crictl | docker | + +----------------------------+-------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+ + | Kubernetes CRI | Native support | Support through dockershim or cri-dockerd | + +----------------------------+-------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+ + | Pod startup delay | Low | High | + +----------------------------+-------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+ + | kubelet CPU/memory usage | Low | High | + +----------------------------+-------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+ + | Runtime's CPU/memory usage | Low | High | + +----------------------------+-------------------------------------------------------------------+--------------------------------------------------------------------------------------------------+ + +.. _cce_10_0462__section159298451879: + +Mapping between Node OSs and Container Engines +---------------------------------------------- + +.. table:: **Table 2** Node OSs and container engines in CCE clusters + + +--------------+----------------+-------------------------------------------------+--------------------------+-------------------+ + | OS | Kernel Version | Container Engine | Container Storage Rootfs | Container Runtime | + +==============+================+=================================================+==========================+===================+ + | EulerOS 2.5 | 3.x | Docker | Device Mapper | runC | + +--------------+----------------+-------------------------------------------------+--------------------------+-------------------+ + | EulerOS 2.9 | 4.x | Docker | OverlayFS | runC | + | | | | | | + | | | Clusters of v1.23 and later support containerd. | | | + +--------------+----------------+-------------------------------------------------+--------------------------+-------------------+ + | Ubuntu 22.04 | 4.x | Docker | OverlayFS | runC | + | | | | | | + | | | Clusters of v1.23 and later support containerd. 
| | | + +--------------+----------------+-------------------------------------------------+--------------------------+-------------------+ + +.. table:: **Table 3** Node OSs and container engines in CCE Turbo clusters + + +-----------------------------------------+-------------+----------------+------------------+--------------------------+-------------------+ + | Node Type | OS | Kernel Version | Container Engine | Container Storage Rootfs | Container Runtime | + +=========================================+=============+================+==================+==========================+===================+ + | Elastic Cloud Server (VM) | EulerOS 2.9 | 3.x | Docker | OverlayFS | runC | + +-----------------------------------------+-------------+----------------+------------------+--------------------------+-------------------+ + | Elastic Cloud Server (physical machine) | EulerOS 2.9 | 4.x | containerd | Device Mapper | Kata | + +-----------------------------------------+-------------+----------------+------------------+--------------------------+-------------------+ + +Common Commands of containerd and Docker +---------------------------------------- + +containerd does not support Docker APIs and Docker CLI, but you can run crictl commands to implement similar functions. + +.. table:: **Table 4** Image-related commands + + +-----+---------------------------------------------------+---------------------------------------------------+-----------------------+ + | No. | Docker Command | containerd Command | Remarks | + +=====+===================================================+===================================================+=======================+ + | 1 | docker images [Option] [Image name[:Tag]] | crictl images [Option] [Image name[:Tag]] | List local images. | + +-----+---------------------------------------------------+---------------------------------------------------+-----------------------+ + | 2 | docker pull [Option] *Image name*\ [:Tag|@DIGEST] | crictl pull [Option] *Image name*\ [:Tag|@DIGEST] | Pull images. | + +-----+---------------------------------------------------+---------------------------------------------------+-----------------------+ + | 3 | docker push | None | Pushing images. | + +-----+---------------------------------------------------+---------------------------------------------------+-----------------------+ + | 4 | docker rmi [Option] *Image*... | crictl rmi [Option] *Image ID*... | Delete a local image. | + +-----+---------------------------------------------------+---------------------------------------------------+-----------------------+ + | 5 | docker inspect *Image ID* | crictl inspecti *Image ID* | Check images. | + +-----+---------------------------------------------------+---------------------------------------------------+-----------------------+ + +.. table:: **Table 5** Container-related commands + + +-----+------------------------------------------------------------------------+------------------------------------------------------------------------+--------------------------------------------+ + | No. | Docker Command | containerd Command | Remarks | + +=====+========================================================================+========================================================================+============================================+ + | 1 | docker ps [Option] | crictl ps [Option] | List containers. 
| + +-----+------------------------------------------------------------------------+------------------------------------------------------------------------+--------------------------------------------+ + | 2 | docker create [Option] | crictl create [Option] | Create a container. | + +-----+------------------------------------------------------------------------+------------------------------------------------------------------------+--------------------------------------------+ + | 3 | docker start [Option] *Container ID*... | crictl start [Option] *Container ID*... | Start a container. | + +-----+------------------------------------------------------------------------+------------------------------------------------------------------------+--------------------------------------------+ + | 4 | docker stop [Option] *Container ID*... | crictl stop [Option] *Container ID*... | Stop a container. | + +-----+------------------------------------------------------------------------+------------------------------------------------------------------------+--------------------------------------------+ + | 5 | docker rm [Option] *Container ID*... | crictl rm [Option] *Container ID*... | Delete a container. | + +-----+------------------------------------------------------------------------+------------------------------------------------------------------------+--------------------------------------------+ + | 6 | docker attach [Option] *Container ID* | crictl attach [Option] *Container ID* | Connect to a container. | + +-----+------------------------------------------------------------------------+------------------------------------------------------------------------+--------------------------------------------+ + | 7 | docker exec [Option] *Container ID* *Startup command* [*Parameter*...] | crictl exec [Option] *Container ID* *Startup command* [*Parameter*...] | Access the container. | + +-----+------------------------------------------------------------------------+------------------------------------------------------------------------+--------------------------------------------+ + | 8 | docker inspect [Option] *Container name*\ \|\ *ID*... | crictl inspect [Option] *Container ID*... | Query container details. | + +-----+------------------------------------------------------------------------+------------------------------------------------------------------------+--------------------------------------------+ + | 9 | docker logs [Option] *Container ID* | crictl logs [Option] *Container ID* | View container logs. | + +-----+------------------------------------------------------------------------+------------------------------------------------------------------------+--------------------------------------------+ + | 10 | docker stats [Option] *Container ID*... | crictl stats [Option] *Container ID* | Check the resource usage of the container. | + +-----+------------------------------------------------------------------------+------------------------------------------------------------------------+--------------------------------------------+ + | 11 | docker update [Option] *Container ID*... | crictl update [Option] *Container ID*... | Update container resource limits. | + +-----+------------------------------------------------------------------------+------------------------------------------------------------------------+--------------------------------------------+ + +.. 
table:: **Table 6** Pod-related commands + + +-----+----------------+--------------------------------------+-------------------+ + | No. | Docker Command | containerd Command | Remarks | + +=====+================+======================================+===================+ + | 1 | None | crictl pods [Option] | List pods. | + +-----+----------------+--------------------------------------+-------------------+ + | 2 | None | crictl inspectp [Option] *Pod ID*... | View pod details. | + +-----+----------------+--------------------------------------+-------------------+ + | 3 | None | crictl start [Option] *Pod ID*... | Start a pod. | + +-----+----------------+--------------------------------------+-------------------+ + | 4 | None | crictl runp [Option] *Pod ID*... | Run a pod. | + +-----+----------------+--------------------------------------+-------------------+ + | 5 | None | crictl stopp [Option] *Pod ID*... | Stop a pod. | + +-----+----------------+--------------------------------------+-------------------+ + | 6 | None | crictl rmp [Option] *Pod ID*... | Delete a pod. | + +-----+----------------+--------------------------------------+-------------------+ + +.. note:: + + Containers created and started by containerd are immediately deleted by kubelet. containerd does not support suspending, resuming, restarting, renaming, and waiting for containers, nor Docker image build, import, export, comparison, push, search, and labeling. containerd does not support file copy. You can log in to the image repository by modifying the configuration file of containerd. + +Differences in Tracing +---------------------- + +- Docker (Kubernetes 1.23 and earlier versions): + + kubelet --> docker shim (in the kubelet process) --> docker --> containerd + +- Docker (community solution for Kubernetes v1.24 or later): + + kubelet --> cri-dockerd (kubelet uses CRI to connect to cri-dockerd) --> docker--> containerd + +- containerd: + + kubelet --> cri plugin (in the containerd process) --> containerd + +Although Docker has added functions such as swarm cluster, docker build, and Docker APIs, it also introduces bugs. Compared with containerd, Docker has one more layer of calling. **Therefore, containerd is more resource-saving and secure.** + +Container Engine Version Description +------------------------------------ + +- Docker + + - EulerOS/CentOS: docker-engine 18.9.0, a Docker version customized for CCE. Security vulnerabilities will be fixed in a timely manner. + +- containerd: 1.6.14 diff --git a/umn/source/nodes/creating_a_node.rst b/umn/source/nodes/creating_a_node.rst index dddf1d2..5c1db44 100644 --- a/umn/source/nodes/creating_a_node.rst +++ b/umn/source/nodes/creating_a_node.rst @@ -14,10 +14,10 @@ Prerequisites Constraints ----------- -- The node has 2-core or higher CPU, 4 GiB or larger memory. -- To ensure node stability, a certain amount of CCE node resources will be reserved for Kubernetes components (such as kubelet, kube-proxy, and docker) based on the node specifications. Therefore, the total number of node resources and assignable node resources in Kubernetes are different. The larger the node specifications, the more the containers deployed on the node. Therefore, more node resources need to be reserved to run Kubernetes components. For details, see :ref:`Formula for Calculating the Reserved Resources of a Node `. -- The node networking (such as the VM networking and container networking) is taken over by CCE. You are not allowed to add and delete NICs or change routes. 
If you modify the networking configuration, the availability of CCE may be affected. For example, the NIC named **gw_11cbf51a@eth0** on the node is the container network gateway and cannot be modified. -- During the node creation, software packages are downloaded from OBS using the domain name. You need to use a private DNS server to resolve the OBS domain name, and configure the DNS server address of the subnet where the node resides with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS, ensure that the DNS server in use can resolve the OBS domain name. +- The node has at least 2 vCPUs and 4 GiB of memory. +- To ensure node stability, a certain number of CCE node resources will be reserved for Kubernetes components (such as kubelet, kube-proxy, and docker) based on the node specifications. Therefore, the total number of node resources and the number of allocatable node resources for your cluster are different. The larger the node specifications, the more the containers deployed on the node. Therefore, more node resources need to be reserved to run Kubernetes components. For details, see :ref:`Node Resource Reservation Policy `. +- Networks including VM networks and container networks of nodes are all managed by CCE. Do not add or delete ENIs or change routes. Otherwise, services may be unavailable. For example, the NIC named **gw_11cbf51a@eth0** on the node is the container network gateway and cannot be modified. +- During the node creation, software packages are downloaded from OBS using the domain name. Use a private DNS server to resolve the OBS domain name, and configure the DNS server address of the subnet where the node resides with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS, ensure that the DNS server in use can resolve the OBS domain name. - Once a node is created, its AZ cannot be changed. Procedure @@ -25,9 +25,11 @@ Procedure After a cluster is created, you can create nodes for the cluster. -#. Log in to the CCE console. In the navigation pane, choose **Clusters**. Click the target cluster name to access its details page. +#. Log in to the CCE console. -#. In the navigation pane on the left, choose **Nodes**. On the page displayed, click **Create Node**. Set node parameters by referring to the following table. +#. In the navigation pane of the CCE console, choose **Clusters**. Click the target cluster name to access its details page. + +#. In the navigation pane on the left, choose **Nodes**. On the page displayed, click **Create Node**. In the **Node Settings** step, set node parameters by referring to the following table. **Compute Settings** @@ -35,48 +37,53 @@ After a cluster is created, you can create nodes for the cluster. .. table:: **Table 1** Configuration parameters - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+====================================================================================================================================================================================================================================+ - | AZ | AZ where the node is located. 
Nodes in a cluster can be created in different AZs for higher reliability. The value cannot be changed after the node is created. | - | | | - | | You are advised to select **Random** to deploy your node in a random AZ based on the selected node flavor. | - | | | - | | An AZ is a physical region where resources use independent power supply and networks. AZs are physically isolated but interconnected through an internal network. To enhance workload availability, create nodes in different AZs. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Node Type | CCE cluster: | - | | | - | | - ECS (VM): Containers run on ECSs. | - | | | - | | CCE Turbo Cluster: | - | | | - | | - ECS (VM): Containers run on ECSs. Only Trunkport ECSs (models that can be bound with multiple elastic network interfaces (ENIs)) are supported. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Container Engine | CCE clusters support Docker and containerd in some scenarios. | - | | | - | | - VPC network clusters of v1.23 and later versions support containerd. Container tunnel network clusters of v1.23.2-r0 and later versions support containerd. | - | | - For a CCE Turbo cluster, both **Docker** and **containerd** are supported. For details, see :ref:`Mapping between Node OSs and Container Engines `. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Specifications | Select the node specifications based on service requirements. The available node specifications vary depending on AZs. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | OS | Select an OS type. Different types of nodes support different OSs. | - | | | - | | **Public image**: Select an OS for the node. | - | | | - | | **Private image**: You can use private images. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Node Name | Name of the node. When nodes (ECSs) are created in batches, the value of this parameter is used as the name prefix for each ECS. | - | | | - | | The system generates a default name for you, which can be modified. | - | | | - | | A node name must start with a lowercase letter and cannot end with a hyphen (-). Only digits, lowercase letters, and hyphens (-) are allowed. 
| - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Login Mode | - **Key Pair** | - | | | - | | Select the key pair used to log in to the node. You can select a shared key. | - | | | - | | A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click **Create Key Pair**. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+==========================================================================================================================================================================================================================================+ + | AZ | AZ where the node is located. Nodes in a cluster can be created in different AZs for higher reliability. The value cannot be changed after the node is created. | + | | | + | | Select **Random** to deploy your node in a random AZ based on the selected node flavor. | + | | | + | | An AZ is a physical region where resources use independent power supply and networks. AZs are physically isolated but interconnected through an internal network. To enhance workload availability, create nodes in different AZs. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Node Type | CCE cluster: | + | | | + | | - ECS (VM): Containers run on ECSs. | + | | | + | | CCE Turbo cluster: | + | | | + | | - ECS (VM): Containers run on ECSs. Only the ECSs that can be bound with multiple NICs are supported. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Container Engine | CCE clusters support Docker and containerd in some scenarios. | + | | | + | | - VPC network clusters of v1.23 and later versions support containerd. Tunnel network clusters of v1.23.2-r0 and later versions support containerd. | + | | - For a CCE Turbo cluster, both **Docker** and **containerd** are supported. For details, see :ref:`Mapping between Node OSs and Container Engines `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Specifications | Select node specifications that best fit your service needs. | + | | | + | | The available node flavors vary depending on AZs. 
Obtain the flavors displayed on the console. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | OS | Select an OS type. Different types of nodes support different OSs. | + | | | + | | - **Public image**: Select a public image for the node. | + | | - **Private image**: Select a private image for the node. | + | | | + | | .. note:: | + | | | + | | - Service container runtimes share the kernel and underlying calls of nodes. To ensure compatibility, select a Linux distribution version that is the same as or close to that of the final service container image for the node OS. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Node Name | Name of the node. When nodes (ECSs) are created in batches, the value of this parameter is used as the name prefix for each ECS. | + | | | + | | The system generates a default name for you, which can be modified. | + | | | + | | A node name must start with a lowercase letter and cannot end with a hyphen (-). Only digits, lowercase letters, and hyphens (-) are allowed. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Login Mode | - **Key Pair** | + | | | + | | Select the key pair used to log in to the node. You can select a shared key. | + | | | + | | A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click **Create Key Pair**. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ **Storage Settings** @@ -84,44 +91,56 @@ After a cluster is created, you can create nodes for the cluster. .. table:: **Table 2** Configuration parameters - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+===============================================================================================================================================================================================================================================================================================+ - | System Disk | System disk used by the node OS. The value ranges from 40 GiB to 1,024 GiB. The default value is 50 GiB. | - | | | - | | **Encryption**: System disk encryption safeguards your data. Snapshots generated from encrypted disks and disks created using these snapshots automatically inherit the encryption function. 
**This function is available only in certain regions.** | - | | | - | | - **Encryption** is not selected by default. | - | | - After you select **Encryption**, you can select an existing key in the displayed dialog box. If no key is available, click **View Key List** to create a key. After the key is created, click the refresh icon. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Data Disk | **At least one data disk is required** for the container runtime and kubelet. **The data disk cannot be deleted or uninstalled. Otherwise, the node will be unavailable.** | - | | | - | | - First data disk: used for container runtime and kubelet components. The value ranges from 20 GiB to 32,768 GiB. The default value is 100 GiB. | - | | - Other data disks: You can set the data disk size to a value ranging from 10 GiB to 32,768 GiB. The default value is 100 GiB. | - | | | - | | **Advanced Settings** | - | | | - | | Click **Expand** to set the following parameters: | - | | | - | | - **Allocate Disk Space**: Select this option to define the disk space occupied by the container runtime to store the working directories, container image data, and image metadata. For details about how to allocate data disk space, see :ref:`Data Disk Space Allocation `. | - | | - **Encryption**: Data disk encryption safeguards your data. Snapshots generated from encrypted disks and disks created using these snapshots automatically inherit the encryption function. **This function is available only in certain regions.** | - | | | - | | - **Encryption** is not selected by default. | - | | - After you select **Encryption**, you can select an existing key. If no key is available, click the link next to the drop-down box to create a key. After the key is created, click the refresh icon. | - | | | - | | **Adding Multiple Data Disks** | - | | | - | | A maximum of four data disks can be added. By default, raw disks are created without any processing. You can also click **Expand** and select any of the following options: | - | | | - | | - **Default**: By default, a raw disk is created without any processing. | - | | - **Mount Disk**: The data disk is attached to a specified directory. | - | | | - | | **Local Disk Description** | - | | | - | | If the node flavor is disk-intensive or ultra-high I/O, one data disk can be a local disk. | - | | | - | | Local disks may break down and do not ensure data reliability. It is recommended that you store service data in EVS disks, which are more reliable than local disks. 
| - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | System Disk | System disk used by the node OS. The value ranges from 40 GiB to 1,024 GiB. The default value is 50 GiB. | + | | | + | | **Encryption**: System disk encryption safeguards your data. Snapshots generated from encrypted disks and disks created using these snapshots automatically inherit the encryption setting. **This function is available only in certain regions.** | + | | | + | | - **Encryption** is not selected by default. | + | | - After selecting **Encryption**, you can select an existing key in the displayed dialog box. If no key is available, click **View Key List** and create a key. After the key is created, click the refresh icon next to the **Encryption** text box. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Data Disk | **At least one data disk is required** for the container runtime and kubelet. **The data disk cannot be deleted or uninstalled. Otherwise, the node will be unavailable.** | + | | | + | | - First data disk: used for container runtime and kubelet components. The value ranges from 20 GiB to 32,768 GiB. The default value is 100 GiB. | + | | - Other data disks: You can set the data disk size to a value ranging from 10 GB to 32,768 GiB. The default value is 100 GiB. | + | | | + | | .. note:: | + | | | + | | If the node flavor is disk-intensive or ultra-high I/O, one data disk can be a local disk. | + | | | + | | Local disks may break down and do not ensure data reliability. 
Store your service data in EVS disks, which are more reliable than local disks. | + | | | + | | **Advanced Settings** | + | | | + | | Click **Expand** to configure the following parameters: | + | | | + | | - **Data Disk Space Allocation**: After selecting **Set Container Engine Space**, you can specify the proportion of the space for the container engine, image, and temporary storage on the data disk. The container engine space is used to store the working directory, container image data, and image metadata for the container runtime. The remaining space of the data disk is used for pod configuration files, keys, and EmptyDir. For details about how to allocate data disk space, see :ref:`Data Disk Space Allocation `. | + | | - **Encryption**: Data disk encryption safeguards your data. Snapshots generated from encrypted disks and disks created using these snapshots automatically inherit the encryption setting. **This function is available only in certain regions.** | + | | | + | | - **Encryption** is not selected by default. | + | | - After selecting **Encryption**, you can select an existing key in the displayed dialog box. If no key is available, click **View Key List** and create a key. After the key is created, click the refresh icon next to the **Encryption** text box. | + | | | + | | **Adding Multiple Data Disks** | + | | | + | | A maximum of four data disks can be added. By default, raw disks are created without any processing. You can also click **Expand** and select any of the following options: | + | | | + | | - **Default**: By default, a raw disk is created without any processing. | + | | - **Mount Disk**: The data disk is attached to a specified directory. | + | | - **Use as PV**: applicable to scenarios in which there is a high performance requirement on PVs. The **node.kubernetes.io/local-storage-persistent** label is added to the node with PV configured. The value is **linear** or **striped**. | + | | - **Use as ephemeral volume**: applicable to scenarios in which there is a high performance requirement on EmptyDir. | + | | | + | | .. note:: | + | | | + | | - Local PVs are supported only when the cluster version is v1.21.2-r0 or later and the everest add-on version is 2.1.23 or later. Version 2.1.23 or later is recommended. | + | | - Local EVs are supported only when the cluster version is v1.21.2-r0 or later and the everest add-on version is 1.2.29 or later. | + | | | + | | :ref:`Local Persistent Volumes (Local PVs) ` and :ref:`Local EVs ` support the following write modes: | + | | | + | | - **Linear**: A linear logical volume integrates one or more physical volumes. Data is written to the next physical volume when the previous one is used up. | + | | - **Striped**: A striped logical volume stripes data into blocks of the same size and stores them in multiple physical volumes in sequence, allowing data to be concurrently read and written. A storage pool consisting of striped volumes cannot be scaled-out. This option can be selected only when multiple volumes exist. 
| + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ **Network Settings** @@ -136,9 +155,9 @@ After a cluster is created, you can create nodes for the cluster. +-----------------------------------+-------------------------------------------------------------------------------------------------------------+ | Node IP Address | IP address of the specified node. By default, the value is randomly allocated. | +-----------------------------------+-------------------------------------------------------------------------------------------------------------+ - | EIP | A cloud server without an EIP cannot access public networks or be accessed by public networks. | + | EIP | An ECS without a bound EIP cannot access the Internet or be accessed by public networks. | | | | - | | The default value is **Do not use**. You can select **Use existing** and **Auto create**. | + | | The default value is **Do not use**. **Use existing** and **Auto create** are supported. | +-----------------------------------+-------------------------------------------------------------------------------------------------------------+ **Advanced Settings** @@ -150,19 +169,19 @@ After a cluster is created, you can create nodes for the cluster. +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Parameter | Description | +===================================+================================================================================================================================================================================================================================================================+ - | Kubernetes Label | Click **Add Label** to set the key-value pair attached to the Kubernetes objects (such as pods). A maximum of 20 labels can be added. | + | Kubernetes Label | A key-value pair added to a Kubernetes object (such as a pod). A maximum of 20 labels can be added. | | | | | | Labels can be used to distinguish nodes. With workload affinity settings, container pods can be scheduled to a specified node. For more information, see `Labels and Selectors `__. | +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Resource Tag | You can add resource tags to classify resources. | | | | - | | You can create **predefined tags** in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use these tags to improve tagging and resource migration efficiency. | + | | You can create **predefined tags** in Tag Management Service (TMS). 
Predefined tags are available to all service resources that support tags. You can use these tags to improve tagging and resource migration efficiency. | | | | | | CCE will automatically create the "CCE-Dynamic-Provisioning-Node=\ *node id*" tag. | +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Taint | This parameter is left blank by default. You can add taints to set anti-affinity for the node. A maximum of 10 taints are allowed for each node. Each taint contains the following parameters: | + | Taint | This parameter is left blank by default. You can add taints to configure anti-affinity for the node. A maximum of 20 taints are allowed for each node. Each taint contains the following parameters: | | | | - | | - **Key**: A key must contain 1 to 63 characters starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | + | | - **Key**: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | | | - **Value**: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.). | | | - **Effect**: Available options are **NoSchedule**, **PreferNoSchedule**, and **NoExecute**. | | | | @@ -191,13 +210,17 @@ After a cluster is created, you can create nodes for the cluster. | Post-installation Command | Enter commands. A maximum of 1,000 characters are allowed. | | | | | | The script will be executed after Kubernetes software is installed and will not affect the installation. | + | | | + | | .. note:: | + | | | + | | Do not run the **reboot** command in the post-installation script to restart the system immediately. To restart the system, run the **shutdown -r 1** command to delay the restart for one minute. | +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Agency | An agency is created by the account administrator on the IAM console. By creating an agency, you can share your cloud server resources with another account, or entrust a more professional person or team to manage your resources. | | | | | | If no agency is available, click **Create Agency** on the right to create one. | +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -#. Click **Next: Confirm**. Confirm the configured parameters, specifications. +#. Configure the number of nodes to be purchased. Then, click **Next: Confirm**. Confirm the configured parameters and specifications. #. Click **Submit**. 
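After the node is created and becomes available, you can check that the Kubernetes labels and taints configured above were applied as expected. The following is a minimal sketch run from any host where kubectl is connected to the cluster; **my-node-01** is a placeholder node name used for illustration only and does not come from this document.

.. code-block::

   # List the node together with its labels to confirm the Kubernetes labels added during creation.
   kubectl get node my-node-01 --show-labels

   # Check the taints attached to the node. The effect should match what was configured, for example, NoSchedule.
   kubectl describe node my-node-01 | grep -A 3 Taints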
diff --git a/umn/source/nodes/index.rst b/umn/source/nodes/index.rst index 28f26de..94b4d2d 100644 --- a/umn/source/nodes/index.rst +++ b/umn/source/nodes/index.rst @@ -6,31 +6,21 @@ Nodes ===== - :ref:`Node Overview ` +- :ref:`Container Engine ` - :ref:`Creating a Node ` - :ref:`Adding Nodes for Management ` -- :ref:`Removing a Node ` -- :ref:`Resetting a Node ` - :ref:`Logging In to a Node ` -- :ref:`Managing Node Labels ` -- :ref:`Managing Node Taints ` -- :ref:`Synchronizing Data with Cloud Servers ` -- :ref:`Deleting a Node ` -- :ref:`Stopping a Node ` -- :ref:`Performing Rolling Upgrade for Nodes ` +- :ref:`Management Nodes ` +- :ref:`Node O&M ` .. toctree:: :maxdepth: 1 :hidden: - node_overview/index + node_overview + container_engine creating_a_node adding_nodes_for_management - removing_a_node - resetting_a_node logging_in_to_a_node - managing_node_labels - managing_node_taints - synchronizing_data_with_cloud_servers - deleting_a_node - stopping_a_node - performing_rolling_upgrade_for_nodes + management_nodes/index + node_o_and_m/index diff --git a/umn/source/nodes/logging_in_to_a_node.rst b/umn/source/nodes/logging_in_to_a_node.rst index 54d5dbe..fa180e5 100644 --- a/umn/source/nodes/logging_in_to_a_node.rst +++ b/umn/source/nodes/logging_in_to_a_node.rst @@ -9,7 +9,7 @@ Constraints ----------- - If you use SSH to log in to a node (an ECS), ensure that the ECS already has an EIP (a public IP address). -- Only log in to a running ECS is allowed. +- Only login to a running ECS is allowed. - Only the user linux can log in to a Linux server. Login Modes @@ -34,16 +34,16 @@ You can log in to an ECS in either of the following modes: .. table:: **Table 1** Linux ECS login modes - +-----------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ - | EIP Binding | On-Premises OS | Connection Method | - +=======================+=======================+==================================================================================================================================================+ - | Yes | Windows | Use a remote login tool, such as PuTTY or XShell. | - | | | | - | | | - SSH key authentication: `Login Using an SSH Key `__ | - +-----------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ - | Yes | Linux | Run commands. 
| - | | | | - | | | - SSH key authentication: `Login Using an SSH Key `__ | - +-----------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ - | Yes/No | Windows/Linux | Remote login using the management console\ `Login Using VNC `__ | - +-----------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | EIP Binding | On-Premises OS | Connection Method | + +=======================+=======================+============================================================================================================================================+ + | Yes | Windows | Use a remote login tool, such as PuTTY or Xshell. | + | | | | + | | | - SSH key authentication: `Login Using an SSH Key `__ | + +-----------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | Yes | Linux | Run commands. | + | | | | + | | | - SSH key authentication: `Login Using an SSH Key `__ | + +-----------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------+ + | Yes/No | Windows/Linux | Remote login using the management console: `Login Using VNC `__ | + +-----------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/nodes/deleting_a_node.rst b/umn/source/nodes/management_nodes/deleting_a_node.rst similarity index 96% rename from umn/source/nodes/deleting_a_node.rst rename to umn/source/nodes/management_nodes/deleting_a_node.rst index f234fcc..f3b8510 100644 --- a/umn/source/nodes/deleting_a_node.rst +++ b/umn/source/nodes/management_nodes/deleting_a_node.rst @@ -10,8 +10,8 @@ Scenario When a node in a CCE cluster is deleted, services running on the node will also be deleted. Exercise caution when performing this operation. -Notes and Constraints ---------------------- +Constraints +----------- - VM nodes that are being used by CCE do not support deletion on the ECS page. diff --git a/umn/source/nodes/management_nodes/index.rst b/umn/source/nodes/management_nodes/index.rst new file mode 100644 index 0000000..38bcaa4 --- /dev/null +++ b/umn/source/nodes/management_nodes/index.rst @@ -0,0 +1,28 @@ +:original_name: cce_10_0672.html + +.. _cce_10_0672: + +Management Nodes +================ + +- :ref:`Managing Node Labels ` +- :ref:`Managing Node Taints ` +- :ref:`Resetting a Node ` +- :ref:`Removing a Node ` +- :ref:`Synchronizing Data with Cloud Servers ` +- :ref:`Deleting a Node ` +- :ref:`Stopping a Node ` +- :ref:`Performing Rolling Upgrade for Nodes ` + +.. 
toctree:: + :maxdepth: 1 + :hidden: + + managing_node_labels + managing_node_taints + resetting_a_node + removing_a_node + synchronizing_data_with_cloud_servers + deleting_a_node + stopping_a_node + performing_rolling_upgrade_for_nodes diff --git a/umn/source/nodes/managing_node_labels.rst b/umn/source/nodes/management_nodes/managing_node_labels.rst similarity index 96% rename from umn/source/nodes/managing_node_labels.rst rename to umn/source/nodes/management_nodes/managing_node_labels.rst index 22fbd55..94e9f48 100644 --- a/umn/source/nodes/managing_node_labels.rst +++ b/umn/source/nodes/management_nodes/managing_node_labels.rst @@ -21,9 +21,13 @@ Inherent Label of a Node After a node is created, some fixed labels exist and cannot be deleted. For details about these labels, see :ref:`Table 1 `. +.. note:: + + Do not manually change the inherent labels that are automatically added to a node. If the manually changed value conflicts with the system value, the system value prevails. + .. _cce_10_0004__table83962234533: -.. table:: **Table 1** Inherent label of a node +.. table:: **Table 1** Inherent labels of a node +-----------------------------------------------------+-------------------------------------------------------------+ | Key | Description | diff --git a/umn/source/nodes/managing_node_taints.rst b/umn/source/nodes/management_nodes/managing_node_taints.rst similarity index 72% rename from umn/source/nodes/managing_node_taints.rst rename to umn/source/nodes/management_nodes/managing_node_taints.rst index 4d41705..3c00acb 100644 --- a/umn/source/nodes/managing_node_taints.rst +++ b/umn/source/nodes/management_nodes/managing_node_taints.rst @@ -12,7 +12,7 @@ Taints A taint is a key-value pair associated with an effect. The following effects are available: -- NoSchedule: No pod will be able to schedule onto the node unless it has a matching toleration. Existing pods will not be evicted from the node. +- NoSchedule: No pod will be scheduled onto the node unless it has a matching toleration. Existing pods will not be evicted from the node. - PreferNoSchedule: Kubernetes prevents pods that cannot tolerate this taint from being scheduled onto the node. - NoExecute: If the pod has been running on a node, the pod will be evicted from the node. If the pod has not been running on a node, the pod will not be scheduled onto the node. @@ -71,10 +71,24 @@ On the CCE console, you can also manage taints of a node in batches. #. After the taint is added, check the added taint in node data. +System Taints +------------- + +When some issues occurred on a node, Kubernetes automatically adds a taint to the node. The built-in taints are as follows: + +- node.kubernetes.io/not-ready: The node is not ready. The node **Ready** value is **False**. +- node.kubernetes.io/unreachable: The node controller cannot access the node. The node **Ready** value is **Unknown**. +- node.kubernetes.io/memory-pressure: The node memory is approaching the upper limit. +- node.kubernetes.io/disk-pressure: The node disk space is approaching the upper limit. +- node.kubernetes.io/pid-pressure: The node PIDs are approaching the upper limit. +- node.kubernetes.io/network-unavailable: The node network is unavailable. +- node.kubernetes.io/unschedulable: The node cannot be scheduled. +- node.cloudprovider.kubernetes.io/uninitialized: If an external cloud platform driver is specified when kubelet is started, kubelet adds a taint to the current node and marks it as unavailable. 
After a controller of **cloud-controller-manager** initializes the node, kubelet will delete the taint. + Node Scheduling Settings ------------------------ -To configure scheduling settings, log in to the CCE console, click the cluster, choose **Nodes** in the navigation pane, and click **More** > **Disable Scheduling** in the **Operation** column of a node in the node list. +To configure scheduling, log in to the CCE console, click the cluster, choose **Nodes** in the navigation pane, and click **More** > **Disable Scheduling** in the **Operation** column of a node in the node list. In the dialog box that is displayed, click **OK** to set the node to be unschedulable. @@ -87,9 +101,7 @@ This operation will add a taint to the node. You can use kubectl to view the con Taints: node.kubernetes.io/unschedulable:NoSchedule ... -On the CCE console, perform the same operations again to remove the taint and set the node to be schedulable. - -.. _cce_10_0352__section2047442210417: +On the CCE console, remove the taint and set the node to be schedulable. Tolerations ----------- @@ -98,7 +110,7 @@ Tolerations are applied to pods, and allow (but do not require) the pods to sche Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node. This marks that the node should not accept any pods that do not tolerate the taints. -Here's an example of a pod that uses tolerations: +Example: .. code-block:: diff --git a/umn/source/nodes/performing_rolling_upgrade_for_nodes.rst b/umn/source/nodes/management_nodes/performing_rolling_upgrade_for_nodes.rst similarity index 85% rename from umn/source/nodes/performing_rolling_upgrade_for_nodes.rst rename to umn/source/nodes/management_nodes/performing_rolling_upgrade_for_nodes.rst index e766cdc..2442207 100644 --- a/umn/source/nodes/performing_rolling_upgrade_for_nodes.rst +++ b/umn/source/nodes/management_nodes/performing_rolling_upgrade_for_nodes.rst @@ -12,13 +12,13 @@ In a rolling upgrade, a new node is created, existing workloads are migrated to .. _cce_10_0276__fig1689610598118: -.. figure:: /_static/images/en-us_image_0000001568822733.png +.. figure:: /_static/images/en-us_image_0000001695737085.png :alt: **Figure 1** Workload migration **Figure 1** Workload migration -Notes and Constraints ---------------------- +Constraints +----------- - The original node and the target node to which the workload is to be migrated must be in the same cluster. - The cluster must be of v1.13.10 or later. @@ -31,7 +31,7 @@ Scenario 1: The Original Node Is in DefaultPool Create a node pool. For details, see :ref:`Creating a Node Pool `. -#. Click the name of the node pool. The IP address of the new node is displayed in the node list. +#. On the node pool list page, click **View Node** in the **Operation** column of the target node pool. The IP address of the new node is displayed in the node list. 3. Install and configure kubectl. For details, see :ref:`Connecting to a Cluster Using kubectl `. @@ -74,14 +74,14 @@ Scenario 2: The Original Node Is Not in DefaultPool #. .. _cce_10_0276__li1992616214312: - Copy the node pool and add nodes to it. + Copy the node pool and add nodes to it. For details, see :ref:`Copying a Node Pool `. #. Click **View Node** in the **Operation** column of the node pool. The IP address of the new node is displayed in the node list. 3. Migrate the workload. - a. Click **Edit** on the right of original node pool and set **Taints**. - b. Enter the key and value of the taint. 
The options of **Effect** are **NoSchedule**, **PreferNoSchedule**, and **NoExecute**. Select **NoExecute** and click **confirm to add**. + a. Click **Edit** on the right of original node pool and configure **Taints**. + b. Enter the key and value of a taint. The options of **Effect** are **NoSchedule**, **PreferNoSchedule**, and **NoExecute**. Select **NoExecute** and click **Add**. - **NoSchedule**: Pods that do not tolerate this taint are not scheduled on the node; existing pods are not evicted from the node. - **PreferNoSchedule**: Kubernetes tries to avoid scheduling pods that do not tolerate this taint onto the node. @@ -89,7 +89,7 @@ Scenario 2: The Original Node Is Not in DefaultPool .. note:: - If you need to reset the taint, delete the configured taint. + To reset the taint, delete the configured one. c. Click **OK**. d. In the navigation pane of the CCE console, choose **Workloads** > **Deployments**. In the workload list, the status of the workload to be migrated changes from **Running** to **Unready**. If the workload status changes to **Running** again, the migration is successful. @@ -98,7 +98,7 @@ Scenario 2: The Original Node Is Not in DefaultPool During workload migration, if node affinity is configured for the workload, the workload keeps displaying a message indicating that the workload is not ready. In this case, click the workload name to go to the workload details page. On the **Scheduling Policies** tab page, delete the affinity configuration of the original node and configure the affinity and anti-affinity policies of the new node. For details, see :ref:`Scheduling Policy (Affinity/Anti-affinity) `. - After the workload is successfully migrated, you can view that the workload is migrated to the node created in :ref:`1 ` on the **Pods** tab page of the workload details page. + After the workload is migrated, you can view that the workload is migrated to the node created in :ref:`1 ` on the **Pods** tab page of the workload details page. 4. Delete the original node. diff --git a/umn/source/nodes/removing_a_node.rst b/umn/source/nodes/management_nodes/removing_a_node.rst similarity index 92% rename from umn/source/nodes/removing_a_node.rst rename to umn/source/nodes/management_nodes/removing_a_node.rst index 045fdb0..d79d290 100644 --- a/umn/source/nodes/removing_a_node.rst +++ b/umn/source/nodes/management_nodes/removing_a_node.rst @@ -14,12 +14,12 @@ Removing a node will not delete the server corresponding to the node. You are ad After a node is removed from the cluster, the node is still running. -Notes and Constraints ---------------------- +Constraints +----------- -- Nodes can be removed only when the cluster is in the **Available** or **Unavailable** state. -- A CCE node can be removed only when it is in the **Active**, **Abnormal**, or **Error** state. -- A CCE node in the Active state can have its OS re-installed and CCE components cleared after it is removed. +- Nodes can be removed only when the cluster is in the **Available** or **Unavailable** status. +- A CCE node can be removed only when it is in the **Active**, **Abnormal**, or **Error** status. +- A CCE node in the **Active** status can have its OS re-installed and CCE components cleared after it is removed. - If the OS fails to be re-installed after the node is removed, manually re-install the OS. After the re-installation, log in to the node and run the clearance script to clear CCE components. For details, see :ref:`Handling Failed OS Reinstallation `. 
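Before removing a node, migrate its workloads so that pods are rescheduled onto other nodes in the cluster. The commands below are a minimal sketch and are not part of the console procedure described in this section; they assume kubectl is connected to the cluster, **my-node-01** is a placeholder node name, and the exact flag names may vary with the kubectl version in use.

.. code-block::

   # Mark the node unschedulable so that no new pods are placed on it.
   kubectl cordon my-node-01

   # Evict the pods running on the node. DaemonSet pods are skipped, and emptyDir data on the node is discarded.
   kubectl drain my-node-01 --ignore-daemonsets --delete-emptydir-data

   # If you decide to keep the node in the cluster, make it schedulable again.
   kubectl uncordon my-node-01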
Precautions @@ -33,7 +33,7 @@ Precautions Procedure --------- -#. Log in to the CCE console and click the cluster name to access the cluster. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. Choose **Nodes** from the navigation pane and choose **More** > **Remove** in the **Operation** column of the target node. diff --git a/umn/source/nodes/resetting_a_node.rst b/umn/source/nodes/management_nodes/resetting_a_node.rst similarity index 87% rename from umn/source/nodes/resetting_a_node.rst rename to umn/source/nodes/management_nodes/resetting_a_node.rst index 34b1af0..834d8b9 100644 --- a/umn/source/nodes/resetting_a_node.rst +++ b/umn/source/nodes/management_nodes/resetting_a_node.rst @@ -17,8 +17,8 @@ Constraints - For CCE clusters and CCE Turbo clusters, the version must be v1.13 or later to support node resetting. -Notes ------ +Precautions +----------- - Only worker nodes can be reset. If the node is still unavailable after the resetting, delete the node and create a new one. - **Resetting a node will reinstall the node OS and interrupt workload services running on the node. Therefore, perform this operation during off-peak hours.** @@ -31,15 +31,15 @@ Notes Procedure --------- -The new console allows you to reset nodes in batches. You can also use private images to reset nodes in batches. +The new console allows you to reset nodes in batches. You can also use a private image to reset nodes in batches. -#. Log in to the CCE console. +#. Log in to the CCE console and click the cluster name to access the cluster console. -#. Click the cluster name and access the cluster details page, choose **Nodes** in the navigation pane, and select one or multiple nodes to be reset in the list on the right. Choose **More** > **Reset Node**. +#. Click the cluster name to access the cluster console. Choose **Nodes** in the navigation pane, and select one or multiple nodes to be reset in the list. Choose **More** > **Reset Node**. #. In the displayed dialog box, click **Next**. - - For nodes in the DefaultPool, the parameter setting page is displayed. Set the parameters by referring to :ref:`4 `. + - For nodes in the DefaultPool node pool, the parameter setting page is displayed. Set the parameters by referring to :ref:`4 `. - For a node you create in a node pool, resetting the node does not support parameter configuration. You can directly use the configuration image of the node pool to reset the node. #. .. _cce_10_0003__li1646785611239: @@ -53,11 +53,11 @@ The new console allows you to reset nodes in batches. You can also use private i +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Parameter | Description | +===================================+==========================================================================================================================================================================================+ - | Specifications | Node specifications cannot be modified when you reset a node. | + | Specifications | Specifications cannot be modified when you reset a node. 
| +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Container Engine | CCE clusters support Docker and containerd in some scenarios. | | | | - | | - VPC network clusters of v1.23 and later versions support containerd. Container tunnel network clusters of v1.23.2-r0 and later versions support containerd. | + | | - VPC network clusters of v1.23 and later versions support containerd. Tunnel network clusters of v1.23.2-r0 and later versions support containerd. | | | - For a CCE Turbo cluster, both **Docker** and **containerd** are supported. For details, see :ref:`Mapping between Node OSs and Container Engines `. | +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | OS | **Public image**: Select an OS for the node. | @@ -77,17 +77,17 @@ The new console allows you to reset nodes in batches. You can also use private i .. table:: **Table 2** Configuration parameters - +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+===========================================================================================================================================================================================================================+ - | System Disk | Directly use the system disk of the cloud server. | - +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Data Disk | **At least one data disk is required** for the container runtime and kubelet. **The data disk cannot be deleted or uninstalled. Otherwise, the node will be unavailable.** | - | | | - | | Click **Expand** to define the disk space occupied by the container runtime to store the working directories, container image data, and image metadata. For details, see :ref:`Data Disk Space Allocation `. | - | | | - | | For other data disks, a raw disk is created without any processing by default. You can also click **Expand** to mount the data disk to a specified directory. 
| - +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+====================================================================================================================================================================================================================================================================================================+ + | System Disk | Directly use the system disk of the cloud server. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Data Disk | **At least one data disk is required** for the container runtime and kubelet. **The data disk cannot be deleted or uninstalled. Otherwise, the node will be unavailable.** | + | | | + | | Click **Expand** and select **Allocate Disk Space** to define the disk space occupied by the container runtime to store the working directories, container image data, and image metadata. For details about how to allocate data disk space, see :ref:`Data Disk Space Allocation `. | + | | | + | | For other data disks, a raw disk is created without any processing by default. You can also click **Expand** and select **Mount Disk** to mount the data disk to a specified directory. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ **Advanced Settings** @@ -102,13 +102,13 @@ The new console allows you to reset nodes in batches. You can also use private i +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Resource Tag | You can add resource tags to classify resources. | | | | - | | You can create **predefined tags** in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use these tags to improve tagging and resource migration efficiency. | + | | You can create **predefined tags** in Tag Management Service (TMS). Predefined tags are available to all service resources that support tags. You can use these tags to improve tagging and resource migration efficiency. | | | | | | CCE will automatically create the "CCE-Dynamic-Provisioning-Node=\ *node id*" tag. 
| +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Taint | This parameter is left blank by default. You can add taints to set anti-affinity for the node. A maximum of 10 taints are allowed for each node. Each taint contains the following parameters: | + | Taint | This field is left blank by default. You can add taints to configure anti-affinity for the node. A maximum of 20 taints are allowed for each node. Each taint contains the following parameters: | | | | - | | - **Key**: A key must contain 1 to 63 characters starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | + | | - **Key**: A key must contain 1 to 63 characters, starting with a letter or digit. Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed. A DNS subdomain name can be used as the prefix of a key. | | | - **Value**: A value must start with a letter or digit and can contain a maximum of 63 characters, including letters, digits, hyphens (-), underscores (_), and periods (.). | | | - **Effect**: Available options are **NoSchedule**, **PreferNoSchedule**, and **NoExecute**. | | | | diff --git a/umn/source/nodes/stopping_a_node.rst b/umn/source/nodes/management_nodes/stopping_a_node.rst similarity index 82% rename from umn/source/nodes/stopping_a_node.rst rename to umn/source/nodes/management_nodes/stopping_a_node.rst index b1c95c4..2263dde 100644 --- a/umn/source/nodes/stopping_a_node.rst +++ b/umn/source/nodes/management_nodes/stopping_a_node.rst @@ -10,8 +10,8 @@ Scenario After a node in the cluster is stopped, services on the node are also stopped. Before stopping a node, ensure that discontinuity of the services on the node will not result in adverse impacts. -Notes and Constraints ---------------------- +Constraints +----------- - Deleting a node will lead to pod migration, which may affect services. Therefore, delete nodes during off-peak hours. - Unexpected risks may occur during node deletion. Back up related data in advance. @@ -21,14 +21,14 @@ Notes and Constraints Procedure --------- -#. Log in to the CCE console and click the cluster name to access the cluster. +#. Log in to the CCE console and click the cluster name to access the cluster console. #. In the navigation pane, choose **Nodes**. In the right pane, click the name of the node to be stopped. -#. In the upper right corner of the ECS details page, click **Stop** in the instance status area. In the displayed dialog box, click **Yes**. +#. In the upper right corner of the ECS details page, click **Stop**. In the displayed dialog box, click **Yes**. - .. figure:: /_static/images/en-us_image_0000001518062704.png + .. 
figure:: /_static/images/en-us_image_0000001647417648.png :alt: **Figure 1** ECS details page **Figure 1** ECS details page diff --git a/umn/source/nodes/synchronizing_data_with_cloud_servers.rst b/umn/source/nodes/management_nodes/synchronizing_data_with_cloud_servers.rst similarity index 55% rename from umn/source/nodes/synchronizing_data_with_cloud_servers.rst rename to umn/source/nodes/management_nodes/synchronizing_data_with_cloud_servers.rst index 8e9c031..4660cc8 100644 --- a/umn/source/nodes/synchronizing_data_with_cloud_servers.rst +++ b/umn/source/nodes/management_nodes/synchronizing_data_with_cloud_servers.rst @@ -8,17 +8,14 @@ Synchronizing Data with Cloud Servers Scenario -------- -Each node in a cluster is a cloud server or physical machine. After a cluster node is created, you can change the cloud server name or specifications as required. +Each node in a cluster is a cloud server or physical machine. After a cluster node is created, you can change the cloud server name or specifications as required. Modifying node specifications will affect services. Perform the operation on nodes one by one. -Some information about CCE nodes is maintained independently from the ECS console. After you change the name, EIP, or specifications of an ECS on the ECS console, you need to **synchronize the ECS information** to the corresponding node on the CCE console. After the synchronization, information on both consoles is consistent. +Some information of CCE nodes is maintained independently from the ECS console. After you change the name, EIP, or specifications of an ECS on the ECS console, synchronize the ECS with the target node on the CCE console. After the synchronization, information on both consoles is consistent. -Notes and Constraints ---------------------- +Constraints +----------- - Data, including the VM status, ECS names, number of CPUs, size of memory, ECS specifications, and public IP addresses, can be synchronized. - - If an ECS name is specified as the Kubernetes node name, the change of the ECS name cannot be synchronized to the CCE console. - - Data, such as the OS and image ID, cannot be synchronized. (Such parameters cannot be modified on the ECS console.) Procedure @@ -26,12 +23,12 @@ Procedure #. Log in to the CCE console. -#. Click the cluster name and access the cluster console. Choose **Nodes** in the navigation pane. +#. Click the cluster name to access the cluster console. Choose **Nodes** in the navigation pane. #. Choose **More** > **Sync Server Data** next to the node. - .. figure:: /_static/images/en-us_image_0000001517743520.png + .. figure:: /_static/images/en-us_image_0000001695737349.png :alt: **Figure 1** Synchronizing server data **Figure 1** Synchronizing server data diff --git a/umn/source/nodes/node_o_and_m/data_disk_space_allocation.rst b/umn/source/nodes/node_o_and_m/data_disk_space_allocation.rst new file mode 100644 index 0000000..6fa4f6c --- /dev/null +++ b/umn/source/nodes/node_o_and_m/data_disk_space_allocation.rst @@ -0,0 +1,120 @@ +:original_name: cce_10_0341.html + +.. _cce_10_0341: + +Data Disk Space Allocation +========================== + +This section describes how to allocate data disk space to nodes so that you can configure the data disk space accordingly. + +Allocating Data Disk Space +-------------------------- + +When creating a node, configure data disks for the node. You can also click **Expand** and customize the data disk space allocation for the node. 
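If you are unsure how the data disk of an existing node is currently divided, you can log in to the node and check it before changing these settings. The following is only a minimal sketch; the exact device names and output depend on the node flavor, OS, and container engine:

.. code-block::

   # List the block devices on the node. On nodes using Device Mapper, the thin pool
   # appears as a dm device under the data disk and is not visible in df -h.
   lsblk

   # On Docker nodes, check whether the storage driver is Device Mapper or OverlayFS.
   docker info | grep -i "storage driver"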
+ +- :ref:`Allocate Disk Space `: + + CCE divides the data disk space into two parts by default. One part is used to store the Docker/containerd working directories, container images, and image metadata. The other is reserved for kubelet and emptyDir volumes. The available container engine space affects image pulls and container startup and running. + + - Container engine and container image space (90% by default): stores the container runtime working directories, container image data, and image metadata. + - kubelet and emptyDir space (10% by default): stores pod configuration files, secrets, and mounted storage such as emptyDir volumes. + +- :ref:`Allocate Pod Basesize `: indicates the basesize of a pod. You can set an upper limit for the disk space occupied by each workload pod (including the space occupied by container images). This setting prevents the pods from taking all the disk space available, which may cause service exceptions. It is recommended that the value be smaller than or equal to 80% of the container engine space. This parameter is related to the node OS and container storage rootfs and is not supported in some scenarios. + +.. _cce_10_0341__section10653143445411: + +Allocating Disk Space +--------------------- + +For a node using a non-shared data disk (100 GB for example), the division of the disk space varies depending on the container storage Rootfs type **Device Mapper** or **OverlayFS**. For details about the container storage Rootfs corresponding to different OSs, see :ref:`Mapping Between OS and Container Storage Rootfs `. + +- **Rootfs (Device Mapper)** + + By default, the container engine and image space, occupying 90% of the data disk, can be divided into the following two parts: + + - The **/var/lib/docker** directory is used as the Docker working directory and occupies 20% of the container engine and container image space by default. (Space size of the **/var/lib/docker** directory = **Data disk space x 90% x 20%**) + + - The thin pool is used to store container image data, image metadata, and container data, and occupies 80% of the container engine and container image space by default. (Thin pool space = **Data disk space x 90% x 80%**) + + The thin pool is dynamically mounted. You can view it by running the **lsblk** command on a node, but not the **df -h** command. + + + .. figure:: /_static/images/en-us_image_0000001647576500.png + :alt: **Figure 1** Space allocation for container engines of Device Mapper + + **Figure 1** Space allocation for container engines of Device Mapper + +- **Rootfs (OverlayFS)** + + There is no separate thin pool. The entire container engine and container image space (90% of the data disk by default) is in the **/var/lib/docker** directory. + + + .. figure:: /_static/images/en-us_image_0000001647417268.png + :alt: **Figure 2** Space allocation for container engines of OverlayFS + + **Figure 2** Space allocation for container engines of OverlayFS + +.. _cce_10_0341__section12119191161518: + +Allocating Basesize for Pods +---------------------------- + +The customized pod container space (basesize) is related to the node OS and container storage Rootfs. For details about the container storage Rootfs, see :ref:`Mapping Between OS and Container Storage Rootfs `. + +- Device Mapper supports custom pod basesize. The default value is 10 GB. +- In OverlayFS mode, the pod container space is not limited by default. + + ..
note:: + + When Docker is used on EulerOS 2.9 nodes, **basesize** will not take effect if **CAP_SYS_RESOURCE** or **privileged** is configured for a container. + +When configuring **basesize**, consider the maximum number of pods on a node. The container engine space should be greater than the total disk space used by containers. Formula: **the container engine space and container image space (90% by default)** > **Number of containers** x **basesize**. Otherwise, the container engine space allocated to the node may be insufficient and the container cannot be started. + +For nodes that support **basesize**, when Device Mapper is used, although you can limit the size of the **/home** directory of a single container (to 10 GB by default), all containers on the node still share the thin pool of the node for storage. They are not completely isolated. When the sum of the thin pool space used by certain containers reaches the upper limit, other containers cannot run properly. + +In addition, after a file is deleted in the **/home** directory of the container, the thin pool space occupied by the file is not released immediately. Therefore, even if **basesize** is set to 10 GB, the thin pool space occupied by files keeps increasing as files are created in the container, until it reaches 10 GB. The space released after file deletion can be reused, but only after a delay. If **the number of containers on the node multiplied by basesize** is greater than the thin pool space size of the node, the thin pool space may be exhausted. + +.. _cce_10_0341__section1473612279214: + +Mapping Between OS and Container Storage Rootfs ----------------------------------------------- + +.. table:: **Table 1** Node OSs and container engines in CCE clusters + + +-----------------------+--------------------------+------------------------------------------------------------------------------------------------------------------------+ + | OS | Container Storage Rootfs | Customized Basesize | + +=======================+==========================+========================================================================================================================+ + | EulerOS 2.5 | Device Mapper | Supported only when the container engine is Docker. The default value is 10 GB. | + +-----------------------+--------------------------+------------------------------------------------------------------------------------------------------------------------+ + | EulerOS 2.9 | OverlayFS | Supported only by clusters of v1.19.16, v1.21.3, v1.23.3, and later. The container basesize is not limited by default. | + | | | | + | | | Not supported when the cluster version is earlier than v1.19.16, v1.21.3, or v1.23.3. | + +-----------------------+--------------------------+------------------------------------------------------------------------------------------------------------------------+ + | Ubuntu 22.04 | OverlayFS | Not supported. | + +-----------------------+--------------------------+------------------------------------------------------------------------------------------------------------------------+ + +..
table:: **Table 2** Node OSs and container engines in CCE Turbo clusters + + +-----------------------+----------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ + | OS | Container Storage Rootfs | Customized Basesize | + +=======================+============================+======================================================================================================================================+ + | Ubuntu 22.04 | OverlayFS | Not supported. | + +-----------------------+----------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ + | EulerOS 2.9 | ECS VMs use OverlayFS. | Supported only when Rootfs is set to OverlayFS and the container engine is Docker. The container basesize is not limited by default. | + | | | | + | | ECS PMs use Device Mapper. | Supported when Rootfs is set to Device Mapper and the container engine is Docker. The default value is 10 GB. | + +-----------------------+----------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ + +Garbage Collection Policies for Container Images +------------------------------------------------ + +When the container engine space is insufficient, image garbage collection is triggered. + +The policy for garbage collecting images takes two factors into consideration: **HighThresholdPercent** and **LowThresholdPercent**. Disk usage above the high threshold (default: 85%) will trigger garbage collection. The garbage collection will delete least recently used images until the low threshold (default: 80%) has been met. + +Recommended Configuration for the Container Engine Space +-------------------------------------------------------- + +- The container engine space should be greater than the total disk space used by containers. Formula: **Container engine space** > **Number of containers** x **basesize** +- You are advised to create and delete files of containerized services in local storage volumes (such as emptyDir and hostPath volumes) or cloud storage directories mounted to the containers. In this way, the thin pool space is not occupied. emptyDir volumes occupy the kubelet space. Therefore, properly plan the size of the kubelet space. +- You can deploy services on nodes that use the OverlayFS (for details, see :ref:`Mapping Between OS and Container Storage Rootfs `) so that the disk space occupied by files created or deleted in containers can be released immediately. diff --git a/umn/source/nodes/node_o_and_m/index.rst b/umn/source/nodes/node_o_and_m/index.rst new file mode 100644 index 0000000..f85ce86 --- /dev/null +++ b/umn/source/nodes/node_o_and_m/index.rst @@ -0,0 +1,20 @@ +:original_name: cce_10_0704.html + +.. _cce_10_0704: + +Node O&M +======== + +- :ref:`Node Resource Reservation Policy ` +- :ref:`Data Disk Space Allocation ` +- :ref:`Maximum Number of Pods That Can Be Created on a Node ` +- :ref:`Migrating Nodes from Docker to containerd ` + +.. 
toctree:: + :maxdepth: 1 + :hidden: + + node_resource_reservation_policy + data_disk_space_allocation + maximum_number_of_pods_that_can_be_created_on_a_node + migrating_nodes_from_docker_to_containerd diff --git a/umn/source/nodes/node_overview/maximum_number_of_pods_that_can_be_created_on_a_node.rst b/umn/source/nodes/node_o_and_m/maximum_number_of_pods_that_can_be_created_on_a_node.rst similarity index 55% rename from umn/source/nodes/node_overview/maximum_number_of_pods_that_can_be_created_on_a_node.rst rename to umn/source/nodes/node_o_and_m/maximum_number_of_pods_that_can_be_created_on_a_node.rst index e69fd99..b700424 100644 --- a/umn/source/nodes/node_overview/maximum_number_of_pods_that_can_be_created_on_a_node.rst +++ b/umn/source/nodes/node_o_and_m/maximum_number_of_pods_that_can_be_created_on_a_node.rst @@ -5,50 +5,32 @@ Maximum Number of Pods That Can Be Created on a Node ==================================================== -The maximum number of pods that can be created on a node is determined by the following parameters: +Calculation of the Maximum Number of Pods on a Node +--------------------------------------------------- -- Number of container IP addresses that can be allocated on a node (alpha.cce/fixPoolMask): Set this parameter when creating a CCE cluster. This parameter is available only when **Network Model** is **VPC network**. - -- Maximum number of pods of a node (maxPods): Set this parameter when creating a node. It is a configuration item of kubelet. - -- .. _cce_10_0348__li5286959123611: - - Number of ENIs of a CCE Turbo cluster node: In a CCE Turbo cluster, ECS nodes use sub-ENIs and BMS nodes use ENIs. The maximum number of pods that can be created on a node depends on the number of ENIs that can be used by the node. - -The maximum number of pods that can be created on a node depends on the minimum value of these parameters. +The maximum number of pods that can be created on a node is calculated based on the cluster type: - For a cluster using the container tunnel network model, the value depends only on :ref:`the maximum number of pods on a node `. -- For clusters using the VPC network model, the value depends on :ref:`the maximum number of pods on a node ` and :ref:`the number of container IP addresses that can be allocated to the node `. It is recommended that the maximum number of pods on a node be less than or equal to the number of container IP addresses that can be allocated to the node. Otherwise, pods may fail to be scheduled. -- For a cluster (CCE Turbo cluster) using the Cloud Native Network 2.0 model, the value depends on :ref:`the maximum number of pods on a node ` and :ref:`the number of NICs on a CCE Turbo cluster node `. - -Container Network vs. Host Network ----------------------------------- - -When creating a pod, you can select the container network or host network for the pod. - -- .. _cce_10_0348__li13739132619599: - - Container network (default): **Each pod is assigned an IP address by the cluster networking add-ons, which occupies the IP addresses of the container network**. - -- Host network: The pod uses the host network (**hostNetwork: true** needs to be configured for the pod) and occupies the host port. The pod IP address is the host IP address. The pod does not occupy the IP addresses of the container network. To use the host network, you must confirm whether the container ports conflict with the host ports. Do not use the host network unless you know exactly which host port is used by which container. 
+- For clusters using the VPC network model, the value depends on :ref:`the maximum number of pods on a node ` and :ref:`the minimum number of container IP addresses that can be allocated to a node `. It is recommended that the maximum number of pods on a node be less than or equal to the number of container IP addresses that can be allocated to the node. Otherwise, pods may fail to be scheduled. +- For CCE Turbo clusters using the Cloud Native Network 2.0 model, the value depends on :ref:`the maximum number of pods on a node ` and :ref:`the minimum number of ENIs on a CCE Turbo cluster node `. It is recommended that the maximum number of pods on a node be less than or equal to the number of ENIs on the node. Otherwise, pods may fail to be scheduled. .. _cce_10_0348__section10770192193714: Number of Container IP Addresses That Can Be Allocated on a Node ---------------------------------------------------------------- -If you select **VPC network** for **Network Model** when creating a CCE cluster, you also need to set the number of container IP addresses that can be allocated to each node. +If you select **VPC network** for **Network Model** when creating a CCE cluster, you also need to set the number of container IP addresses that can be allocated to each node (alpha.cce/fixPoolMask). If the pod uses the host network (**hostNetwork: true**), the pod does not occupy the IP address of the allocatable container network. For details, see :ref:`Container Network vs. Host Network `. -This parameter affects the maximum number of pods that can be created on a node. Each pod occupies an IP address (when the :ref:`container network ` is used). If the number of available IP addresses is insufficient, pods cannot be created. +This parameter affects the maximum number of pods that can be created on a node. Each pod occupies an IP address (when the :ref:`container network ` is used). If the number of available IP addresses is insufficient, pods cannot be created. If the pod uses the host network (**hostNetwork: true**), the pod does not occupy the IP address of the allocatable container network. -By default, a node occupies three container IP addresses (network address, gateway address, and broadcast address). Therefore, the number of container IP addresses that can be allocated to a node equals the number of selected container IP addresses minus 3. For example, in the preceding figure, **the number of container IP addresses that can be allocated to a node is 125 (128 - 3)**. +By default, a node occupies three container IP addresses (network address, gateway address, and broadcast address). Therefore, the number of container IP addresses that can be allocated to a node equals the number of selected container IP addresses minus 3. .. _cce_10_0348__section16296174054019: Maximum Number of Pods on a Node -------------------------------- -When creating a node, you can configure the maximum number of pods that can be created on the node. This parameter is a configuration item of kubelet and determines the maximum number of pods that can be created by kubelet. +When creating a node, you can configure the maximum number of pods (maxPods) that can be created on the node. This parameter is a configuration item of kubelet and determines the maximum number of pods that can be created by kubelet. .. important:: @@ -62,19 +44,32 @@ When creating a node, you can configure the maximum number of pods that can be c .. 
table:: **Table 1** Default maximum number of pods on a node - ============== ======================================== - Memory Default Maximum Number of Pods on a Node - ============== ======================================== - 4G 20 - 8G 40 - 16G 60 - 32G 80 - 64 GB or above 110 - ============== ======================================== + =============== ========= + Memory Max. Pods + =============== ========= + 4 GiB 20 + 8 GiB 40 + 16 GiB 60 + 32 GiB 80 + 64 GiB or above 110 + =============== ========= .. _cce_10_0348__section15702175115573: Number of Node ENIs (CCE Turbo Clusters) ---------------------------------------- -In a CCE Turbo cluster, ECS nodes use sub-ENIs and BMS nodes use ENIs. The maximum number of pods that can be created on a node depends on the number of ENIs that can be used by the node. +In a CCE Turbo cluster, ECSs use sub-ENIs. The maximum number of pods that can be created on a node depends on the number of ENIs that can be used by the node. + +.. _cce_10_0348__section12428143711548: + +Container Network vs. Host Network +---------------------------------- + +When creating a pod, you can select the container network or host network for the pod. + +- .. _cce_10_0348__li13739132619599: + + Container network (default): **Each pod is assigned an IP address by the cluster networking add-ons, which occupies the IP addresses of the container network**. + +- Host network: The pod uses the host network (**hostNetwork: true** needs to be configured for the pod) and occupies the host port. The pod IP address is the host IP address. The pod does not occupy the IP addresses of the container network. To use the host network, you must confirm whether the container ports conflict with the host ports. Do not use the host network unless you know exactly which host port is used by which container. diff --git a/umn/source/nodes/node_o_and_m/migrating_nodes_from_docker_to_containerd.rst b/umn/source/nodes/node_o_and_m/migrating_nodes_from_docker_to_containerd.rst new file mode 100644 index 0000000..94ff67b --- /dev/null +++ b/umn/source/nodes/node_o_and_m/migrating_nodes_from_docker_to_containerd.rst @@ -0,0 +1,57 @@ +:original_name: cce_10_0601.html + +.. _cce_10_0601: + +Migrating Nodes from Docker to containerd +========================================= + +Context +------- + +Kubernetes has removed dockershim from v1.24 and does not support Docker by default. CCE will continue to support Docker in v1.25 but just till v1.27. The following steps show you how to migrate nodes from Docker to containerd. + +Prerequisites +------------- + +- At least one cluster that supports containerd nodes has been created. For details, see :ref:`Mapping between Node OSs and Container Engines `. +- There is a Docker node or Docker node pool in your cluster. + +Precautions +----------- + +- Theoretically, migration during container running will interrupt services for a short period of time. Therefore, it is strongly recommended that the services to be migrated have been deployed as multi-instance. In addition, you are advised to test the migration impact in the test environment to minimize potential risks. +- containerd cannot build images. Do not use the **docker build** command to build images on containerd nodes. For other differences between Docker and containerd, see :ref:`Container Engine `. + +Migrating a Node +---------------- + +#. Log in to the CCE console and click the cluster name to access the cluster console. + +#. In the navigation pane, choose **Nodes**. 
In the node list, select one or more nodes to be reset and choose **More** > **Reset Node**. + +#. Set **Container Engine** to **containerd**. You can adjust other parameters as required or retain them as set during creation. + +#. If the node status is **Installing**, the node is being reset. + + When the node status is **Running**, you can see that the node version is switched to containerd. You can log in to the node and run containerd commands such as **crictl** to view information about the containers running on the node. + +Migrating a Node Pool +--------------------- + +You can :ref:`copy a node pool `, set the container engine of the new node pool to containerd, and keep other configurations the same as those of the original Docker node pool. + +#. Log in to the CCE console and click the cluster name to access the cluster console. + +#. In the navigation pane, choose **Nodes**. On the **Node Pools** tab page, locate the Docker node pool to be copied and choose **More** > **Copy** in the **Operation** column. + +#. On the **Compute Settings** area, set **Container Engine** to **containerd** and modify other parameters as required. + +#. Scale the number of created containerd node pools to the number of original Docker node pools and delete nodes from the Docker node pools one by one. + + Rolling migration is preferred. That is, add some containerd nodes and then delete some Docker nodes until the number of nodes in the new containerd node pool is the same as that in the original Docker node pool. + + .. note:: + + If you have set node affinity for the workloads deployed on the original Docker nodes or node pool, set affinity policies for the workloads to run on the new containerd nodes or node pool. + +#. After the migration, delete the original Docker node pool. diff --git a/umn/source/nodes/node_o_and_m/node_resource_reservation_policy.rst b/umn/source/nodes/node_o_and_m/node_resource_reservation_policy.rst new file mode 100644 index 0000000..899b44b --- /dev/null +++ b/umn/source/nodes/node_o_and_m/node_resource_reservation_policy.rst @@ -0,0 +1,127 @@ +:original_name: cce_10_0178.html + +.. _cce_10_0178: + +Node Resource Reservation Policy +================================ + +Some node resources are used to run mandatory Kubernetes system components and resources to make the node as part of your cluster. Therefore, the total number of node resources and the amount of allocatable node resources for your cluster are different. The larger the node specifications, the more the containers deployed on the node. Therefore, more node resources need to be reserved to run Kubernetes components. + +To ensure node stability, a certain number of CCE node resources will be reserved for Kubernetes components (such as kubelet, kube-proxy, and docker) based on the node specifications. + +CCE calculates the resources that can be allocated to user nodes as follows: + +**Allocatable resources = Total amount - Reserved amount - Eviction threshold** + +The memory eviction threshold is fixed at 100 MiB. + +.. note:: + + **Total amount** indicates the available memory of the ECS, excluding the memory used by system components. Therefore, the total amount is slightly less than the memory of the node flavor. + +When the memory consumed by all pods on a node increases, the following behaviors may occur: + +#. When the available memory of the node is lower than the eviction threshold, kubelet is triggered to evict the pod. 
For details about the eviction threshold in Kubernetes, see `Node-pressure Eviction `__. +#. If a node triggers an OS memory insufficiency event (OOM) before kubelet reclaims memory, the system terminates the container. However, different from pod eviction, kubelet restarts the container based on the RestartPolicy of the pod. + +Rules v1 for Reserving Node Memory +---------------------------------- + +.. note:: + + For clusters of versions earlier than **v1.21.4-r0** and **v1.23.3-r0**, the v1 model is used for node memory reservation. For clusters of **v1.21.4-r0**, **v1.23.3-r0**, or later, the node memory reservation model is optimized to v2. For details, see :ref:`Rules v2 for Reserving Node Memory `. + +You can use the following formula calculate how much memory you should reserve for running containers on a node: + +Total reserved amount = :ref:`Reserved memory for system components ` + :ref:`Reserved memory for kubelet to manage pods ` + +.. _cce_10_0178__table19962121035915: + +.. table:: **Table 1** Reservation rules for system components + + +----------------------+-------------------------------------------------------------------------+ + | Total Memory (TM) | Reserved Memory for System Components | + +======================+=========================================================================+ + | TM <= 8 GB | 0 MB | + +----------------------+-------------------------------------------------------------------------+ + | 8 GB < TM <= 16 GB | [(TM - 8 GB) x 1024 x 10%] MB | + +----------------------+-------------------------------------------------------------------------+ + | 16 GB < TM <= 128 GB | [8 GB x 1024 x 10% + (TM - 16 GB) x 1024 x 6%] MB | + +----------------------+-------------------------------------------------------------------------+ + | TM > 128 GB | (8 GB x 1024 x 10% + 112 GB x 1024 x 6% + (TM - 128 GB) x 1024 x 2%) MB | + +----------------------+-------------------------------------------------------------------------+ + +.. _cce_10_0178__table124614211528: + +.. table:: **Table 2** Reservation rules for kubelet + + +-------------------+---------------------------------+-------------------------------------------------+ + | Total Memory (TM) | Number of Pods | Reserved Memory for kubelet | + +===================+=================================+=================================================+ + | TM <= 2 GB | None | TM x 25% | + +-------------------+---------------------------------+-------------------------------------------------+ + | TM > 2 GB | 0 < Max. pods on a node <= 16 | 700 MB | + +-------------------+---------------------------------+-------------------------------------------------+ + | | 16 < Max. pods on a node <= 32 | [700 + (Max. pods on a node - 16) x 18.75] MB | + +-------------------+---------------------------------+-------------------------------------------------+ + | | 32 < Max. pods on a node <= 64 | [1024 + (Max. pods on a node - 32) x 6.25] MB | + +-------------------+---------------------------------+-------------------------------------------------+ + | | 64 < Max. pods on a node <= 128 | [1230 + (Max. pods on a node - 64) x 7.80] MB | + +-------------------+---------------------------------+-------------------------------------------------+ + | | Max. pods on a node > 128 | [1740 + (Max. pods on a node - 128) x 11.20] MB | + +-------------------+---------------------------------+-------------------------------------------------+ + +.. 
important:: + + For a small-capacity node, adjust the maximum number of instances based on the site requirements. Alternatively, when creating a node on the CCE console, you can adjust the maximum number of instances for the node based on the node specifications. + +.. _cce_10_0178__section156741258145010: + +Rules v2 for Reserving Node Memory +---------------------------------- + +For clusters of **v1.21.4-r0**, **v1.23.3-r0**, or later, the node memory reservation model is optimized to v2 and can be dynamically adjusted using the node pool parameters **kube-reserved-mem** and **system-reserved-mem**. For details, see :ref:`Managing a Node Pool `. + +The total reserved node memory of the v2 model is equal to the sum of that reserved for the OS and that reserved for CCE to manage pods. + +Reserved memory includes basic and floating parts. For the OS, the floating memory depends on the node specifications. For CCE, the floating memory depends on the number of pods on a node. + +.. table:: **Table 3** Rules for reserving node memory v2 + + +-----------------+--------------------------------------------------------+----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Reserved for | Basic/Floating | Reservation | Used by | + +=================+========================================================+======================+=====================================================================================================================================================================================================+ + | OS | Basic | 400 MB (fixed) | OS service components such as sshd and systemd-journald. | + +-----------------+--------------------------------------------------------+----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | | Floating (depending on the node memory) | 25 MB/GB | Kernel | + +-----------------+--------------------------------------------------------+----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | CCE | Basic | 500 MB (fixed) | Container engine components, such as kubelet and kube-proxy, when the node is unloaded | + +-----------------+--------------------------------------------------------+----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | | Floating (depending on the number of pods on the node) | Docker: 20 MB/pod | Container engine components when the number of pods increases | + | | | | | + | | | containerd: 5 MB/pod | .. note:: | + | | | | | + | | | | When the v2 model reserves memory for a node by default, the default maximum number of pods is estimated based on the memory. For details, see :ref:`Table 1 `. 
| + +-----------------+--------------------------------------------------------+----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +Rules for Reserving Node CPU +---------------------------- + +.. table:: **Table 4** Node CPU reservation rules + + +----------------------------+------------------------------------------------------------------------+ + | Total CPU Cores (Total) | Reserved CPU Cores | + +============================+========================================================================+ + | Total <= 1 core | Total x 6% | + +----------------------------+------------------------------------------------------------------------+ + | 1 core < Total <= 2 cores | 1 core x 6% + (Total - 1 core) x 1% | + +----------------------------+------------------------------------------------------------------------+ + | 2 cores < Total <= 4 cores | 1 core x 6% + 1 core x 1% + (Total - 2 cores) x 0.5% | + +----------------------------+------------------------------------------------------------------------+ + | Total > 4 cores | 1 core x 6% + 1 core x 1% + 2 cores x 0.5% + (Total - 4 cores) x 0.25% | + +----------------------------+------------------------------------------------------------------------+ + +Rules for CCE to Reserve Data Disks on Nodes +-------------------------------------------- + +CCE uses Logical Volume Manager (LVM) to manage disks. LVM creates a metadata area on a disk to store logical and physical volumes, occupying 4 MiB space. Therefore, the actual available disk space of a node is equal to the disk size minus 4 MiB. diff --git a/umn/source/nodes/node_overview/precautions_for_using_a_node.rst b/umn/source/nodes/node_overview.rst similarity index 95% rename from umn/source/nodes/node_overview/precautions_for_using_a_node.rst rename to umn/source/nodes/node_overview.rst index 852cbb3..88fdd31 100644 --- a/umn/source/nodes/node_overview/precautions_for_using_a_node.rst +++ b/umn/source/nodes/node_overview.rst @@ -1,9 +1,9 @@ -:original_name: cce_10_0461.html +:original_name: cce_10_0180.html -.. _cce_10_0461: +.. _cce_10_0180: -Precautions for Using a Node -============================ +Node Overview +============= Introduction ------------ @@ -16,12 +16,12 @@ A container cluster consists of a set of worker machines, called nodes, that run CCE uses high-performance Elastic Cloud Servers (ECSs) as nodes to build highly available Kubernetes clusters. -.. _cce_10_0461__section1667513391595: +.. _cce_10_0180__section1667513391595: Supported Node Specifications ----------------------------- -Different regions support different node flavors, and node flavors may be changed. You are advised to log in to the CCE console and check whether the required node flavors are supported on the page for creating nodes. +Different regions support different node flavors, and node flavors may be changed. Log in to the CCE console and check whether the required node flavors are supported on the page for creating nodes. 
Underlying File Storage System of Docker ---------------------------------------- @@ -70,7 +70,7 @@ A lifecycle indicates the node statuses recorded from the time when the node is +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------+ | Deleting | Intermediate state | The node is being deleted. | | | | | - | | | If this state stays for a long time, an exception occurs. | + | | | If this state stays for a long time, an exception occurred. | +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------+ | Stopped | Stable state | The node is stopped properly. | | | | | diff --git a/umn/source/nodes/node_overview/container_engine.rst b/umn/source/nodes/node_overview/container_engine.rst deleted file mode 100644 index 41bd442..0000000 --- a/umn/source/nodes/node_overview/container_engine.rst +++ /dev/null @@ -1,86 +0,0 @@ -:original_name: cce_10_0462.html - -.. _cce_10_0462: - -Container Engine -================ - -Introduction to Container Engines ---------------------------------- - -Container engines, one of the most important components of Kubernetes, manage the lifecycle of images and containers. The kubelet interacts with a container runtime through the Container Runtime Interface (CRI). - -.. _cce_10_0462__section159298451879: - -Mapping between Node OSs and Container Engines ----------------------------------------------- - -.. table:: **Table 1** Node OSs and container engines in CCE clusters - - +--------------+----------------+-------------------------------------------------+-----------------------------------------------------+-------------------+ - | OS | Kernel Version | Container Engine | Container Storage Rootfs | Container Runtime | - +==============+================+=================================================+=====================================================+===================+ - | CentOS 7.x | 3.x | Docker | Clusters of v1.19.16 and earlier use Device Mapper. | runC | - | | | | | | - | | | Clusters of v1.23 and later support containerd. | Clusters of v1.19.16 and later use OverlayFS. | | - +--------------+----------------+-------------------------------------------------+-----------------------------------------------------+-------------------+ - | EulerOS 2.5 | 3.x | Docker | Device Mapper | runC | - +--------------+----------------+-------------------------------------------------+-----------------------------------------------------+-------------------+ - | EulerOS 2.9 | 4.x | Docker | OverlayFS | runC | - | | | | | | - | | | Clusters of v1.23 and later support containerd. | | | - +--------------+----------------+-------------------------------------------------+-----------------------------------------------------+-------------------+ - | Ubuntu 22.04 | 4.x | Docker | OverlayFS | runC | - | | | | | | - | | | containerd | | | - +--------------+----------------+-------------------------------------------------+-----------------------------------------------------+-------------------+ - -.. 
table:: **Table 2** Node OSs and container engines in CCE Turbo clusters - - +---------------------------+--------------+----------------+-------------------------------------------------+--------------------------+-------------------+ - | Node Type | OS | Kernel Version | Container Engine | Container Storage Rootfs | Container Runtime | - +===========================+==============+================+=================================================+==========================+===================+ - | Elastic Cloud Server (VM) | CentOS 7.x | 3.x | Docker | OverlayFS | runC | - +---------------------------+--------------+----------------+-------------------------------------------------+--------------------------+-------------------+ - | | EulerOS 2.5 | 3.x | Docker | OverlayFS | runC | - +---------------------------+--------------+----------------+-------------------------------------------------+--------------------------+-------------------+ - | | EulerOS 2.9 | 4.x | Docker | OverlayFS | runC | - | | | | | | | - | | | | Clusters of v1.23 and later support containerd. | | | - +---------------------------+--------------+----------------+-------------------------------------------------+--------------------------+-------------------+ - | | Ubuntu 22.04 | 4.x | Docker | OverlayFS | runC | - | | | | | | | - | | | | containerd | | | - +---------------------------+--------------+----------------+-------------------------------------------------+--------------------------+-------------------+ - -Differences in Tracing ----------------------- - -- Docker (Kubernetes 1.23 and earlier versions): - - kubelet --> docker shim (in the kubelet process) --> docker --> containerd - -- Docker (community solution for Kubernetes v1.24 or later): - - kubelet --> cri-dockerd (kubelet uses CRI to connect to cri-dockerd) --> docker--> containerd - -- containerd: - - kubelet --> cri plugin (in the containerd process) --> containerd - -Although Docker has added functions such as swarm cluster, docker build, and Docker APIs, it also introduces bugs. Compared with containerd, Docker has one more layer of calling. **Therefore, containerd is more resource-saving and secure.** - -Container Engine Version Description ------------------------------------- - -- Docker - - - EulerOS/CentOS: docker 18.9.0, a Docker version customized for CCE. Security vulnerabilities will be fixed in a timely manner. - - Ubuntu 22.04: docker-ce 20.10.21 (community version). - - .. note:: - - - You are advised to use the containerd engine for Ubuntu nodes. - - The open source docker-ce of the Ubuntu 18.04 node may trigger bugs when concurrent exec operations are performed (for example, multiple exec probes are configured). You are advised to use HTTP/TCP probes. - -- containerd: 1.6.14 diff --git a/umn/source/nodes/node_overview/data_disk_space_allocation.rst b/umn/source/nodes/node_overview/data_disk_space_allocation.rst deleted file mode 100644 index 030b4a2..0000000 --- a/umn/source/nodes/node_overview/data_disk_space_allocation.rst +++ /dev/null @@ -1,97 +0,0 @@ -:original_name: cce_10_0341.html - -.. _cce_10_0341: - -Data Disk Space Allocation -========================== - -This section describes how to allocate data disk space. - -When creating a node, you need to configure a data disk whose capacity is greater than or equal to 100GB for the node. You can click **Expand** to customize the data disk space allocation. - -- :ref:`Allocate Disk Space `: CCE divides the data disk space for container engines and pods. 
The container engine space stores the Docker/containerd working directories, container images, and image metadata. The pod space stores kubelet components and emptyDir volumes. The available container engine space affects image download and container startup and running. - - - Container engine and container image space (90% by default): functions as the container runtime working directory and stores container image data and image metadata. - - kubelet component and emptyDir volume space (10% by default): stores pod configuration files, secrets, and mounted storage such as emptyDir volumes. - -- :ref:`Allocate Pod Basesize `: indicates the base size of a container, that is, the upper limit of the disk space occupied by each workload pod (including the space occupied by container images). This setting prevents the pods from taking all the disk space available, which may cause service exceptions. It is recommended that the value be smaller than or equal to 80% of the container engine space. This parameter is related to the node OS and container storage rootfs and is not supported in some scenarios. - -.. _cce_10_0341__section10653143445411: - -Setting Container Engine Space ------------------------------- - -A data disk, 100 GB for example, is divided as follows (depending on the container storage rootfs): - -You can log in to the node and run the **docker info** command to view the storage engine type. - -.. code-block:: - - # docker info - Containers: 20 - Running: 17 - Paused: 0 - Stopped: 3 - Images: 16 - Server Version: 18.09.0 - Storage Driver: devicemapper - -- **Rootfs (Device Mapper)** - - By default, 90% of the data disk is the container engine and container image space, which can be divided into the following two parts: - - - The **/var/lib/docker** directory is the Docker working directory and occupies 20% of the container runtime space by default. (Space size of the **/var/lib/docker** directory = **Data disk space x 90% x 20%**) - - - The thin pool stores container image data, image metadata, and container data, and occupies 80% of the container runtime space by default. (Thin pool space = **Data disk space x 90% x 80%**) - - The thin pool is dynamically mounted. You can view it by running the **lsblk** command on a node, but not the **df -h** command. - - |image1| - -- **Rootfs (OverlayFS)** - - No separate thin pool. The entire container engine and container image space (90% of the data disk by default) are in the **/var/lib/docker** directory. - - |image2| - -Using rootfs for container storage in CCE - -- CCE cluster: EulerOS 2.5 nodes use Device Mapper and EulerOS 2.9 nodes use OverlayFS. CentOS 7.x nodes in clusters earlier than v1.19.16 use Device Mapper, and use OverlayFS in clusters of v1.19.16 and later. -- CCE Turbo cluster: BMSs use Device Mapper. ECSs use OverlayFS. - -.. _cce_10_0341__section12119191161518: - -Allocating Basesize for Pods ----------------------------- - -The capability of customizing pod basesize is related to the node OS and container storage rootfs. You can log in to the node and run the **docker info** command to view the container storage rootfs. - -- Device Mapper supports custom pod basesize. The default value is 10 GB. -- When OverlayFS is used, **basesize** is not limited by default. In clusters of latest versions (1.19.16, 1.21.3, 1.23.3, and later), EulerOS 2.9 supports **basesize** if the Docker engine is used. Other OSs do not support **basesize**. - - .. 
note:: - - In the case of using Docker on EulerOS 2.9 nodes, **basesize** will not take effect if **CAP_SYS_RESOURCE** or **privileged** is configured for a container. - -When configuring **basesize**, consider the maximum number of pods on a node. The container engine space should be greater than the total disk space used by containers. Formula: **the container engine space and container image space (90% by default)** > **Number of containers** x **basesize**. Otherwise, the container engine space allocated to the node may be insufficient and the container cannot be started. - -For nodes that support **basesize**, when Device Mapper is used, although you can limit the size of the **/home** directory of a single container (to 10 GB by default), all containers on the node still share the thin pool of the node for storage. They are not completely isolated. When the sum of the thin pool space used by certain containers reaches the upper limit, other containers cannot run properly. - -In addition, after a file is deleted in the **/home** directory of the container, the thin pool space occupied by the file is not released immediately. Therefore, even if **basesize** is set to 10 GB, the thin pool space occupied by files keeps increasing until 10 GB when files are created in the container. The space released after file deletion will be reused but after a while. If **the number of containers on the node multiplied by basesize** is greater than the thin pool space size of the node, there is a possibility that the thin pool space has been used up. - -Garbage Collection Policies for Container Images ------------------------------------------------- - -When the container engine space is insufficient, image garbage collection is triggered. - -The policy for garbage collecting images takes two factors into consideration: **HighThresholdPercent** and **LowThresholdPercent**. Disk usage above the high threshold (default: 85%) will trigger garbage collection. The garbage collection will delete least recently used images until the low threshold (default: 80%) has been met. - -Recommended Configuration for the Container Engine Space --------------------------------------------------------- - -- The container engine space should be greater than the total disk space used by containers. Formula: **Container engine space** > **Number of containers** x **basesize** -- You are advised to create and delete files of containerized services in local storage volumes (such as emptyDir and hostPath volumes) or cloud storage directories mounted to the containers. In this way, the thin pool space is not occupied. emptyDir volumes occupy the kubelet space. Therefore, properly plan the size of the kubelet space. -- If OverlayFS is used by in CCE clusters, you can deploy services on these nodes so that the disk space occupied by files created or deleted in containers can be released immediately. - -.. |image1| image:: /_static/images/en-us_image_0000001517902940.png -.. |image2| image:: /_static/images/en-us_image_0000001517743364.png diff --git a/umn/source/nodes/node_overview/formula_for_calculating_the_reserved_resources_of_a_node.rst b/umn/source/nodes/node_overview/formula_for_calculating_the_reserved_resources_of_a_node.rst deleted file mode 100644 index b85546a..0000000 --- a/umn/source/nodes/node_overview/formula_for_calculating_the_reserved_resources_of_a_node.rst +++ /dev/null @@ -1,129 +0,0 @@ -:original_name: cce_10_0178.html - -.. 
_cce_10_0178: - -Formula for Calculating the Reserved Resources of a Node -======================================================== - -Some of the resources on the node need to run some necessary Kubernetes system components and resources to make the node as part of your cluster. Therefore, the total number of node resources and the number of assignable node resources in Kubernetes are different. The larger the node specifications, the more the containers deployed on the node. Therefore, more node resources need to be reserved to run Kubernetes components. - -To ensure node stability, a certain amount of CCE node resources will be reserved for Kubernetes components (such as kubelet, kube-proxy, and docker) based on the node specifications. - -CCE calculates the resources that can be allocated to user nodes as follows: - -**Allocatable resources = Total amount - Reserved amount - Eviction threshold** - -The memory eviction threshold is fixed at 100 MiB. - -When the memory consumed by all pods on a node increases, the following behaviors may occur: - -#. When the available memory on a node is lower than the eviction threshold, kubelet is triggered to evict pods. For details about Kubernetes eviction threshold, see `Node-pressure Eviction `__. -#. If a node triggers an OS Out-Of-Memory (OOM) event before kubelet reclaims memory, the system terminates the container. However, kubelet does not evict the pod, but restarts the container based on the RestartPolicy of the pod. - -Rules for Reserving Node Memory (v1) ------------------------------------- - -For clusters of **v1.21.4-r0**, **v1.23.3-r0**, or later, the node memory reservation model is optimized to V2. For details, see :ref:`Rules for Reserving Node Memory (v2) `. - -You can use the following formula calculate how much memory you should reserve for running containers on a node: - -Total reserved amount = Reserved memory for system components + Reserved memory for kubelet to manage pods - -.. table:: **Table 1** Reservation rules for system components - - +------------------------+-----------------------------------------------------------------------------+ - | Total Memory (TM) | Reserved Memory for System Components | - +========================+=============================================================================+ - | TM <= 8 GiB | 0 MiB | - +------------------------+-----------------------------------------------------------------------------+ - | 8 GiB < TM <= 16 GiB | [(TM - 8 GiB) x 1024 x 10%] MiB | - +------------------------+-----------------------------------------------------------------------------+ - | 16 GiB < TM <= 128 GiB | [8 GiB x 1024 x 10% + (TM - 16 GiB) x 1024 x 6%] MiB | - +------------------------+-----------------------------------------------------------------------------+ - | TM > 128 GiB | (8 GiB x 1024 x 10% + 112 GiB x 1024 x 6% + (TM - 128 GiB) x 1024 x 2%) MiB | - +------------------------+-----------------------------------------------------------------------------+ - -.. table:: **Table 2** Reservation rules for kubelet - - +-------------------+---------------------------------+--------------------------------------------------+ - | Total Memory (TM) | Number of Pods | Reserved Memory for kubelet | - +===================+=================================+==================================================+ - | TM <= 2 GiB | ``-`` | TM x 25% | - +-------------------+---------------------------------+--------------------------------------------------+ - | TM > 2 GiB | 0 < Max. 
pods on a node <= 16 | 700 MB | - +-------------------+---------------------------------+--------------------------------------------------+ - | | 16 < Max. pods on a node <= 32 | [700 + (Max. pods on a node - 16) x 18.75] MiB | - +-------------------+---------------------------------+--------------------------------------------------+ - | | 32 < Max. pods on a node <= 64 | [1024 + (Max. pods on a node - 32) x 6.25] MiB | - +-------------------+---------------------------------+--------------------------------------------------+ - | | 64 < Max. pods on a node <= 128 | [1230 + (Max. pods on a node - 64) x 7.80] MiB | - +-------------------+---------------------------------+--------------------------------------------------+ - | | Max. pods on a node > 128 | [1740 + (Max. pods on a node - 128) x 11.20] MiB | - +-------------------+---------------------------------+--------------------------------------------------+ - -.. important:: - - For a small-capacity node, adjust the maximum number of instances based on the site requirements. Alternatively, when creating a node on the CCE console, you can adjust the maximum number of instances for the node based on the node specifications. - -.. _cce_10_0178__section156741258145010: - -Rules for Reserving Node Memory (v2) ------------------------------------- - -For clusters of **v1.21.4-r0**, **v1.23.3-r0**, or later, the node memory reservation model is optimized to V2 and can be dynamically adjusted using the node pool parameters **kube-reserved-mem** and **system-reserved-mem**. For details, see :ref:`Managing a Node Pool `. - -The total reserved node memory of the V2 model is equal to the sum of that reserved for the OS and that reserved for CCE to manage pods. - -Reserved memory includes basic and floating parts. For the OS, the floating memory depends on the node specifications. For CCE, the floating memory depends on the number of pods on a node. - -.. table:: **Table 3** Rules for reserving node memory (v2) - - +-----------------+--------------------------------------------------------+-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Reserved for | Basic/Floating | Reservation | Used by | - +=================+========================================================+=======================+========================================================================================================================================================================================================================================+ - | OS | Basic | 400 MiB (fixed) | OS service components such as sshd and systemd-journald. 
| - +-----------------+--------------------------------------------------------+-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Floating (depending on the node memory) | 25 MiB/GiB | Kernel | - +-----------------+--------------------------------------------------------+-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | CCE | Basic | 500 MiB (fixed) | Container engine components, such as kubelet and kube-proxy, when the node is unloaded | - +-----------------+--------------------------------------------------------+-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | | Floating (depending on the number of pods on the node) | Docker: 20 MiB/pod | Container engine components when the number of pods increases | - | | | | | - | | | containerd: 5 MiB/pod | .. note:: | - | | | | | - | | | | When the v2 model reserves memory for a node by default, the default maximum number of pods is estimated based on the memory. For details, see :ref:`Default Maximum Number of Pods on a Node `. | - +-----------------+--------------------------------------------------------+-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -Rules for Reserving Node CPU ----------------------------- - -.. table:: **Table 4** Node CPU reservation rules - - +----------------------------+------------------------------------------------------------------------+ - | Total CPU Cores (Total) | Reserved CPU Cores | - +============================+========================================================================+ - | Total <= 1 core | Total x 6% | - +----------------------------+------------------------------------------------------------------------+ - | 1 core < Total <= 2 cores | 1 core x 6% + (Total - 1 core) x 1% | - +----------------------------+------------------------------------------------------------------------+ - | 2 cores < Total <= 4 cores | 1 core x 6% + 1 core x 1% + (Total - 2 cores) x 0.5% | - +----------------------------+------------------------------------------------------------------------+ - | Total > 4 cores | 1 core x 6% + 1 core x 1% + 2 cores x 0.5% + (Total - 4 cores) x 0.25% | - +----------------------------+------------------------------------------------------------------------+ - -.. _cce_10_0178__section1057416013173: - -Default Maximum Number of Pods on a Node ----------------------------------------- - -.. 
table:: **Table 5** Default maximum number of pods on a node - - =============== ============================== - Memory Default Maximum Number of Pods - =============== ============================== - 4 GiB 20 - 8 GiB 40 - 16 GiB 60 - 32 GiB 80 - 64 GiB or above 110 - =============== ============================== diff --git a/umn/source/nodes/node_overview/index.rst b/umn/source/nodes/node_overview/index.rst deleted file mode 100644 index 506163b..0000000 --- a/umn/source/nodes/node_overview/index.rst +++ /dev/null @@ -1,24 +0,0 @@ -:original_name: cce_10_0180.html - -.. _cce_10_0180: - -Node Overview -============= - -- :ref:`Precautions for Using a Node ` -- :ref:`Container Engine ` -- :ref:`Kata Containers and Common Containers ` -- :ref:`Maximum Number of Pods That Can Be Created on a Node ` -- :ref:`Formula for Calculating the Reserved Resources of a Node ` -- :ref:`Data Disk Space Allocation ` - -.. toctree:: - :maxdepth: 1 - :hidden: - - precautions_for_using_a_node - container_engine - kata_containers_and_common_containers - maximum_number_of_pods_that_can_be_created_on_a_node - formula_for_calculating_the_reserved_resources_of_a_node - data_disk_space_allocation diff --git a/umn/source/cloud_trace_service_cts/cce_operations_supported_by_cts.rst b/umn/source/observability/cts_logs/cce_operations_supported_by_cts.rst similarity index 100% rename from umn/source/cloud_trace_service_cts/cce_operations_supported_by_cts.rst rename to umn/source/observability/cts_logs/cce_operations_supported_by_cts.rst diff --git a/umn/source/cloud_trace_service_cts/index.rst b/umn/source/observability/cts_logs/index.rst similarity index 82% rename from umn/source/cloud_trace_service_cts/index.rst rename to umn/source/observability/cts_logs/index.rst index 7149ada..03d970a 100644 --- a/umn/source/cloud_trace_service_cts/index.rst +++ b/umn/source/observability/cts_logs/index.rst @@ -2,8 +2,8 @@ .. _cce_10_0024: -Cloud Trace Service (CTS) -========================= +CTS Logs +======== - :ref:`CCE Operations Supported by CTS ` - :ref:`Querying CTS Logs ` diff --git a/umn/source/cloud_trace_service_cts/querying_cts_logs.rst b/umn/source/observability/cts_logs/querying_cts_logs.rst similarity index 88% rename from umn/source/cloud_trace_service_cts/querying_cts_logs.rst rename to umn/source/observability/cts_logs/querying_cts_logs.rst index 19ea48d..58ee43e 100644 --- a/umn/source/cloud_trace_service_cts/querying_cts_logs.rst +++ b/umn/source/observability/cts_logs/querying_cts_logs.rst @@ -42,7 +42,7 @@ Procedure #. Click |image2| on the left of a trace to expand its details, as shown below. - .. figure:: /_static/images/en-us_image_0000001569022781.png + .. figure:: /_static/images/en-us_image_0000001695896201.png :alt: **Figure 1** Expanding trace details **Figure 1** Expanding trace details @@ -50,10 +50,10 @@ Procedure #. Click **View Trace** in the **Operation** column. The trace details are displayed. - .. figure:: /_static/images/en-us_image_0000001517743372.png + .. figure:: /_static/images/en-us_image_0000001695736933.png :alt: **Figure 2** Viewing event details **Figure 2** Viewing event details -.. |image1| image:: /_static/images/en-us_image_0000001569182497.gif -.. |image2| image:: /_static/images/en-us_image_0000001569182505.png +.. |image1| image:: /_static/images/en-us_image_0000001647417272.gif +.. 
|image2| image:: /_static/images/en-us_image_0000001695896213.png diff --git a/umn/source/observability/index.rst b/umn/source/observability/index.rst new file mode 100644 index 0000000..765588f --- /dev/null +++ b/umn/source/observability/index.rst @@ -0,0 +1,18 @@ +:original_name: cce_10_0705.html + +.. _cce_10_0705: + +Observability +============= + +- :ref:`Logging ` +- :ref:`Monitoring ` +- :ref:`CTS Logs ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + logging/index + monitoring/index + cts_logs/index diff --git a/umn/source/logging/index.rst b/umn/source/observability/logging/index.rst similarity index 74% rename from umn/source/logging/index.rst rename to umn/source/observability/logging/index.rst index 7c22d1c..1e6f4a4 100644 --- a/umn/source/logging/index.rst +++ b/umn/source/observability/logging/index.rst @@ -5,12 +5,12 @@ Logging ======= -- :ref:`Log Management Overview ` +- :ref:`Overview ` - :ref:`Using ICAgent to Collect Container Logs ` .. toctree:: :maxdepth: 1 :hidden: - log_management_overview + overview using_icagent_to_collect_container_logs diff --git a/umn/source/logging/log_management_overview.rst b/umn/source/observability/logging/overview.rst similarity index 95% rename from umn/source/logging/log_management_overview.rst rename to umn/source/observability/logging/overview.rst index 1ba037d..e964849 100644 --- a/umn/source/logging/log_management_overview.rst +++ b/umn/source/observability/logging/overview.rst @@ -2,8 +2,8 @@ .. _cce_10_0557: -Log Management Overview -======================= +Overview +======== CCE allows you to configure policies for collecting, managing, and analyzing workload logs periodically to prevent logs from being over-sized. diff --git a/umn/source/logging/using_icagent_to_collect_container_logs.rst b/umn/source/observability/logging/using_icagent_to_collect_container_logs.rst similarity index 88% rename from umn/source/logging/using_icagent_to_collect_container_logs.rst rename to umn/source/observability/logging/using_icagent_to_collect_container_logs.rst index 8f9814d..a46ef31 100644 --- a/umn/source/logging/using_icagent_to_collect_container_logs.rst +++ b/umn/source/observability/logging/using_icagent_to_collect_container_logs.rst @@ -7,8 +7,8 @@ Using ICAgent to Collect Container Logs CCE works with AOM to collect workload logs. When creating a node, CCE installs the ICAgent for you (the DaemonSet named **icagent** in the kube-system namespace of the cluster). After the ICAgent collects workload logs and reports them to AOM, you can view workload logs on the CCE or AOM console. -Notes and Constraints ---------------------- +Constraints +----------- The ICAgent only collects **\*.log**, **\*.trace**, and **\*.out** text log files. @@ -22,24 +22,24 @@ Using ICAgent to Collect Logs The following uses Nginx as an example. Log policies vary depending on workloads. - .. figure:: /_static/images/en-us_image_0000001569022957.png + .. figure:: /_static/images/en-us_image_0000001691644354.png :alt: **Figure 1** Adding a log policy **Figure 1** Adding a log policy -#. Set **Storage Type** to **Host Path** or **Container Path**. +#. Set **Volume Type** to **Host Path** or **Container Path**. .. 
table:: **Table 1** Configuring log policies +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Parameter | Description | +===================================+===========================================================================================================================================================================================================================================================================================================================================================================================================================+ - | Storage Type | - **Host Path** (hostPath): A host path is mounted to the specified container path (mount path). In the node host path, you can view the container logs output into the mount path. | + | Volume Type | - **Host Path** (hostPath): A host path is mounted to the specified container path (mount path). In the node host path, you can view the container logs output into the mount path. | | | - **Container Path** (emptyDir): A temporary path of the node is mounted to the specified path (mount path). Log data that exists in the temporary path but is not reported by the collector to AOM will disappear after the pod is deleted. | +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Host Path | Enter a host path, for example, **/var/paas/sys/log/nginx**. | +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Container Path | Container path (for example, **/tmp**) to which the storage resources will be mounted. | + | Mount Path | Container path (for example, **/tmp**) to which the storage resources will be mounted. | | | | | | .. important:: | | | | @@ -63,9 +63,22 @@ Using ICAgent to Collect Logs | | - **PodUID/ContainerName**: ID of a pod or name of a container. | | | - **PodName/ContainerName**: name of a pod or container. 
| +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Collection Path | A collection path narrows down the scope of collection to specified logs. | + | | | + | | - If no collection path is specified, log files in **.log**, **.trace**, and **.out** formats will be collected from the specified path. | + | | - **/Path/**/** indicates that all log files in **.log**, **.trace**, and **.out** formats will be recursively collected from the specified path and all subdirectories at 5 levels deep. | + | | - \* in log file names indicates a fuzzy match. | + | | | + | | Example: The collection path **/tmp/**/test*.log** indicates that all **.log** files prefixed with **test** will be collected from **/tmp** and subdirectories at 5 levels deep. | + | | | + | | .. caution:: | + | | | + | | CAUTION: | + | | Ensure that the ICAgent version is 5.12.22 or later. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Log Dump | Log dump refers to rotating log files on a local host. | | | | - | | - **Enabled**: AOM scans log files every minute. When a log file exceeds 50 MB, it is dumped immediately. A new **.zip** file is generated in the directory where the log file locates. For a log file, AOM stores only the latest 20 **.zip** files. When the number of **.zip** files exceeds 20, earlier **.zip** files will be deleted. After the dump is complete, the log file in AOM will be cleared. | + | | - **Enabled**: AOM scans log files every minute. When a log file exceeds 50 MB, it is dumped. A new **.zip** file is generated in the directory where the log file locates. For a log file, AOM stores only the latest 20 **.zip** files. When the number of **.zip** files exceeds 20, earlier **.zip** files will be deleted. | | | - **Disabled**: AOM does not dump log files. | | | | | | .. note:: | @@ -238,4 +251,4 @@ You can also run the **kubectl logs** command to view the standard output of a c kubectl logs pod_name -c container_name -n namespace (one-off query) kubectl logs -f -n namespace (real-time query in tail -f mode) -.. |image1| image:: /_static/images/en-us_image_0000001569182673.png +.. |image1| image:: /_static/images/en-us_image_0000001695737369.png diff --git a/umn/source/monitoring_and_alarm/index.rst b/umn/source/observability/monitoring/index.rst similarity index 58% rename from umn/source/monitoring_and_alarm/index.rst rename to umn/source/observability/monitoring/index.rst index 8632ca8..b3cf617 100644 --- a/umn/source/monitoring_and_alarm/index.rst +++ b/umn/source/observability/monitoring/index.rst @@ -2,15 +2,15 @@ .. _cce_10_0110: -Monitoring and Alarm -==================== +Monitoring +========== - :ref:`Monitoring Overview ` -- :ref:`Custom Monitoring ` +- :ref:`Monitoring Custom Metrics on AOM ` .. 
toctree:: :maxdepth: 1 :hidden: monitoring_overview - custom_monitoring + monitoring_custom_metrics_on_aom diff --git a/umn/source/observability/monitoring/monitoring_custom_metrics_on_aom.rst b/umn/source/observability/monitoring/monitoring_custom_metrics_on_aom.rst new file mode 100644 index 0000000..34d1cd3 --- /dev/null +++ b/umn/source/observability/monitoring/monitoring_custom_metrics_on_aom.rst @@ -0,0 +1,261 @@ +:original_name: cce_10_0201.html + +.. _cce_10_0201: + +Monitoring Custom Metrics on AOM +================================ + +CCE allows you to upload custom metrics to AOM. The ICAgent on a node periodically calls the metric monitoring API configured on a workload to read monitoring data and then uploads the data to AOM. + + +.. figure:: /_static/images/en-us_image_0000001695736981.png + :alt: **Figure 1** Using ICAgent to collect monitoring metrics + + **Figure 1** Using ICAgent to collect monitoring metrics + +The custom metric API of a workload can be configured when the workload is created. The following procedure uses an Nginx application as an example to describe how to report custom metrics to AOM. + +#. :ref:`Preparing an Application ` + + Prepare an application image. The application must provide a metric monitoring API for ICAgent to collect data, and the monitoring data must :ref:`comply with the prometheus specifications `. + +#. :ref:`Deploying Applications and Converting Nginx Metrics ` + + Use the application image to deploy a workload in a cluster. Custom monitoring metrics are automatically reported. + +#. :ref:`Verification ` + + Go to AOM to check whether the custom metrics are successfully collected. + +Constraints +----------- + +- The ICAgent is compatible with the monitoring data specifications of `Prometheus `__. The custom metrics provided by pods can be collected by the ICAgent only when they meet the monitoring data specifications of Prometheus. For details, see :ref:`Prometheus Monitoring Data Collection `. +- The ICAgent supports only `Gauge `__ metrics. +- The interval for the ICAgent to call the custom metric API is 1 minute, which cannot be changed. + +.. _cce_10_0201__section173671127160: + +Prometheus Monitoring Data Collection +------------------------------------- + +Prometheus periodically calls the metric monitoring API (**/metrics** by default) of an application to obtain monitoring data. The application needs to provide the metric monitoring API for Prometheus to call, and the monitoring data must meet the following specifications of Prometheus: + +.. code-block:: + + # TYPE nginx_connections_active gauge + nginx_connections_active 2 + # TYPE nginx_connections_reading gauge + nginx_connections_reading 0 + +Prometheus provides clients in various languages. For details about the clients, see `Prometheus CLIENT LIBRARIES `__. For details about how to develop an exporter, see `WRITING EXPORTERS `__. The Prometheus community provides various third-party exporters that can be directly used. For details, see `EXPORTERS AND INTEGRATIONS `__. + +.. _cce_10_0201__section14984815298: + +Preparing an Application +------------------------ + +User-developed applications must provide a metric monitoring API for ICAgent to collect data, and the monitoring data must comply with the Prometheus specifications. For details, see :ref:`Prometheus Monitoring Data Collection `. + +This document uses Nginx as an example to describe how to collect monitoring data. 
There is a module named **ngx_http_stub_status_module** in Nginx, which provides basic monitoring functions. You can configure the **nginx.conf** file to provide an interface for external systems to access Nginx monitoring data. + +#. Log in to a Linux VM that can access to the Internet and run Docker commands. + +#. Create an **nginx.conf** file. Add the server configuration under **http** to enable Nginx to provide an interface for the external systems to access the monitoring data. + + .. code-block:: + + user nginx; + worker_processes auto; + + error_log /var/log/nginx/error.log warn; + pid /var/run/nginx.pid; + + events { + worker_connections 1024; + } + + http { + include /etc/nginx/mime.types; + default_type application/octet-stream; + log_format main '$remote_addr - $remote_user [$time_local] "$request" ' + '$status $body_bytes_sent "$http_referer" ' + '"$http_user_agent" "$http_x_forwarded_for"'; + + access_log /var/log/nginx/access.log main; + sendfile on; + #tcp_nopush on; + keepalive_timeout 65; + #gzip on; + include /etc/nginx/conf.d/*.conf; + + server { + listen 8080; + server_name localhost; + location /stub_status { + stub_status on; + access_log off; + } + } + } + +#. Use this configuration to create an image and a Dockerfile file. + + .. code-block:: + + vi Dockerfile + + The content of Dockerfile is as follows: + + .. code-block:: + + FROM nginx:1.21.5-alpine + ADD nginx.conf /etc/nginx/nginx.conf + EXPOSE 80 + CMD ["nginx", "-g", "daemon off;"] + +#. Use this Dockerfile to build an image and upload it to SWR. The image name is **nginx:exporter**. + + a. In the navigation pane, choose **My Images** and then click **Upload Through Client**\ in the upper right corner. On the page displayed, click **Generate a temporary login command** and click |image1| to copy the command. + + b. Run the login command copied in the previous step on the node. If the login is successful, the message "Login Succeeded" is displayed. + + c. Run the following command to build an image named nginx. The image version is exporter. + + .. code-block:: + + docker build -t nginx:exporter . + + d. Tag the image and upload it to the image repository. Change the image repository address and organization name based on your requirements. + + .. code-block:: + + docker tag nginx:exporter {swr-address}/{group}/nginx:exporter + docker push {swr-address}/{group}/nginx:exporter + +#. View application metrics. + + a. Use **nginx:exporter** to create a workload. + + b. :ref:`Access the container ` and use http://:8080/stub_status to obtain nginx monitoring data. **** indicates the IP address of the container. Information similar to the following is displayed. + + .. code-block:: + + # curl http://127.0.0.1:8080/stub_status + Active connections: 3 + server accepts handled requests + 146269 146269 212 + Reading: 0 Writing: 1 Waiting: 2 + +.. _cce_10_0201__section1539954011362: + +Deploying Applications and Converting Nginx Metrics +--------------------------------------------------- + +The data format of the monitoring data provided by **nginx:exporter** does not meet the requirements of Prometheus. Convert the data format to the format required by Prometheus. To convert the format of Nginx metrics, use `nginx-prometheus-exporter `__, as shown in the following figure. + + +.. 
figure:: /_static/images/en-us_image_0000001695896253.png + :alt: **Figure 2** Using exporter to convert the data format + + **Figure 2** Using exporter to convert the data format + +Deploy **nginx:exporter** and **nginx-prometheus-exporter** in the same pod. + +.. code-block:: + + kind: Deployment + apiVersion: apps/v1 + metadata: + name: nginx-exporter + namespace: default + spec: + replicas: 1 + selector: + matchLabels: + app: nginx-exporter + template: + metadata: + labels: + app: nginx-exporter + annotations: + metrics.alpha.kubernetes.io/custom-endpoints: '[{"api":"prometheus","path":"/metrics","port":"9113","names":""}]' + spec: + containers: + - name: container-0 + image: 'nginx:exporter' # Replace it with the address of the image you uploaded to SWR. + resources: + limits: + cpu: 250m + memory: 512Mi + requests: + cpu: 250m + memory: 512Mi + - name: container-1 + image: 'nginx/nginx-prometheus-exporter:0.9.0' + command: + - nginx-prometheus-exporter + args: + - '-nginx.scrape-uri=http://127.0.0.1:8080/stub_status' + imagePullSecrets: + - name: default-secret + +.. note:: + + The **nginx/nginx-prometheus-exporter:0.9.0** image needs to be pulled from the public network. Therefore, a public IP address needs to be bound to each node in the cluster. + +nginx-prometheus-exporter requires a startup command. **nginx-prometheus-exporter -nginx.scrape-uri=http://127.0.0.1:8080/stub_status** is used to obtain Nginx monitoring data. + +In addition, add an annotation **metrics.alpha.kubernetes.io/custom-endpoints: '[{"api":"prometheus","path":"/metrics","port":"9113","names":""}]'** to the pod. + +.. _cce_10_0201__section42551081185: + +Verification +------------ + +After an application is deployed, you can access Nginx to construct some access data and check whether the corresponding monitoring data can be obtained in AOM. + +#. Obtain the pod name of Nginx. + + .. code-block:: + + $ kubectl get pod + NAME READY STATUS RESTARTS AGE + nginx-exporter-78859765db-6j8sw 2/2 Running 0 4m + +#. Log in to the container and run commands to access Nginx. + + .. code-block:: + + $ kubectl exec -it nginx-exporter-78859765db-6j8sw -- /bin/sh + Defaulting container name to container-0. + Use 'kubectl describe pod/nginx-exporter-78859765db-6j8sw -n default' to see all of the containers in this pod. + / # curl http://localhost + + + + Welcome to nginx! + + + +

+        <h1>Welcome to nginx!</h1>
+        <p>If you see this page, the nginx web server is successfully installed and
+        working. Further configuration is required.</p>
+
+        <p>For online documentation and support please refer to
+        <a href="http://nginx.org/">nginx.org</a>.<br/>
+        Commercial support is available at
+        <a href="http://nginx.com/">nginx.com</a>.</p>
+
+        <p><em>Thank you for using nginx.</em></p>
+ + + / # + +#. Log in to AOM. In the navigation pane, choose **Monitoring** > **Metric Monitoring** to view Nginx-related metrics, for example, **nginx_connections_active**. + +.. |image1| image:: /_static/images/en-us_image_0000001695896249.png diff --git a/umn/source/observability/monitoring/monitoring_overview.rst b/umn/source/observability/monitoring/monitoring_overview.rst new file mode 100644 index 0000000..155efe5 --- /dev/null +++ b/umn/source/observability/monitoring/monitoring_overview.rst @@ -0,0 +1,210 @@ +:original_name: cce_10_0182.html + +.. _cce_10_0182: + +Monitoring Overview +=================== + +CCE works with AOM to comprehensively monitor clusters. When a node is created, the ICAgent (the DaemonSet named **icagent** in the kube-system namespace of the cluster) of AOM is installed by default. The ICAgent collects monitoring data of underlying resources and workloads running on the cluster. It also collects monitoring data of custom metrics of the workload. + +- Resource metrics + + Basic resource monitoring includes CPU, memory, and disk monitoring. For details, see :ref:`Resource Metrics `. You can view these metrics of clusters, nodes, and workloads on the CCE or AOM console. + +- Custom metrics + + The ICAgent collects custom metrics of applications and uploads them to AOM. For details, see :ref:`Monitoring Custom Metrics on AOM `. + +.. _cce_10_0182__section205486212251: + +Resource Metrics +---------------- + +On the CCE console, you can view the following metrics. + +- :ref:`Viewing Cluster Monitoring Data ` +- :ref:`Viewing Monitoring Data of Worker Nodes ` +- :ref:`Viewing Workload Monitoring Data ` +- :ref:`Viewing Pod Monitoring Data ` + +On the AOM console, you can view host metrics and container metrics. + +.. _cce_10_0182__section1932383618498: + +Viewing Cluster Monitoring Data +------------------------------- + +#. Log in to the CCE console and click the cluster name to access the cluster console. +#. CCE allows you to view the monitoring data of all nodes. Choose **Clusters** from the navigation pane. Click the cluster name, and information like **CPU Metrics** and **Memory** of all nodes (excluding master nodes) in the last hour, the **Status**, **AZ** are displayed. + + .. table:: **Table 1** Cluster monitoring metrics + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Metric | Description | + +===================================+===========================================================================================================================================================================================+ + | CPU Allocation (%) | A metric indicates the percentage of CPUs allocated to workloads. | + | | | + | | **CPU Allocation (%)** = Sum of CPU quotas requested by running pods in the cluster/Sum of CPU quotas that can be allocated from all nodes (excluding master nodes) to workloads | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Memory Allocation (%) | A metric indicates the percentage of memory allocated to workloads. 
| + | | | + | | **Memory Allocation (%)** = Sum of memory quotas requested by running pods in the cluster/Sum of memory quotas that can be allocated from all nodes (excluding master nodes) to workloads | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | CPU Usage (%) | A metric indicates the CPU usage of the cluster. | + | | | + | | This metric is the average CPU usage of all nodes (excluding master nodes) in a cluster. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Memory Usage (%) | A metric indicates the memory usage of your cluster. | + | | | + | | This metric is the average memory usage of all nodes (excluding master nodes) in a cluster. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + .. note:: + + Allocatable node resources (CPU or memory) = Total amount - Reserved amount - Eviction thresholds. For details, see :ref:`Node Resource Reservation Policy `. + +.. _cce_10_0182__section965517431154: + +Viewing Monitoring Data of Worker Nodes +--------------------------------------- + +CCE also allows you to view monitoring data of a single node. + +#. Log in to the CCE console and click the cluster name to access the cluster console. +#. Choose **Nodes** in the navigation pane. On the right of the page, click **Monitor** of the target node to view the monitoring data. +#. You can select statistical **Dimension** and choose time range to view the monitoring data. The data is provided by AOM. You can view the monitoring data of a node, including the CPU, memory, disk, networking, and GPU. + + .. table:: **Table 2** Node monitoring metrics + + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Metric | Description | + +===================================+====================================================================================================================================================================================================+ + | CPU Usage (%) | A metric indicates the CPU usage of the node. | + | | | + | | **CPU Usage (%)** = Used CPU cores/Total number of CPU cores | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Used CPU Cores (cores) | A metric indicates the number of used CPU cores. 
| + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Physical Memory Usage (%) | A metric indicates the physical memory usage of the node | + | | | + | | **Physical Memory Usage (%)** = (Physical memory capacity - Available physical memory)/Physical memory capacity | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Available Physical Memory (GiB) | A metric indicates the unused physical memory of the node. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Disk Usage (%) | A metric indicates the disk usage of the file system on the data disk of the node. It is calculated based on the file partition. For details, see :ref:`Data Disk Space Allocation `. | + | | | + | | **Disk Usage (%)** = (Disk capacity - Available disk space)/Disk capacity | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Available Disk Space (GiB) | A metric indicates the unused disk space. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Downlink Rate (BPS) (KB/s) | A metric indicates the speed at which data is downloaded from the Internet to the node. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Uplink Rate (BPS) (KB/s) | A metric indicates the speed at which data is uploaded from the node to the Internet. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | GPU Usage (%) | A metric indicates the GPU usage of the node. | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | GPU Memory Usage (%) | A metric indicates the percentage of the used GPU memory to the GPU memory capacity. | + | | | + | | **GPU Memory Usage (%)** = Used GPU memory/GPU memory capacity | + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Used GPU Memory (GiB) | A metric indicates the used GPU memory. 
| + +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +.. _cce_10_0182__section2221948202013: + +Viewing Workload Monitoring Data +-------------------------------- + +CCE allows you to view monitoring data of a single workload. + +#. Log in to the CCE console and click the cluster name to access the cluster console. +#. Choose **Workloads** in the navigation pane. On the right of the page, click **Monitor** of the target workload. In the window that slides out from the right, the workload monitoring data is displayed. +#. You can select statistical **Dimension** and choose time range to view the monitoring data. The data is provided by AOM. You can view the monitoring data of a workload, including the CPU, memory, networking, and GPU. + + .. note:: + + If there are multiple pods exist in the workload, the monitoring data may vary according to the statistical **Dimension**. For example, if you select **Maximum** or **Minimum** for **Dimension**, the value of each monitoring data is the maximum or minimum value of all pods under the workload. If **Average** is selected, the value of each monitoring data is the average value of all pods under the workload. + + .. table:: **Table 3** Workload monitoring metrics + + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Metric | Description | + +===================================+===================================================================================================================================================================================+ + | CPU Usage (%) | A metric indicates the CPU usage of the workload. | + | | | + | | **CPU Usage (%)** = Used CPU cores/Total number of CPU cores of all running pods (If no limit is configured, the total number of the node's CPU cores is used.) | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Used CPU Cores (cores) | A metric indicates the number of used CPU cores. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Physical Memory Usage (%) | A metric indicates the physical memory usage of the workload. | + | | | + | | **Physical Memory Usage (%)** = Used physical memory/Total number of CPU cores of all running pods (If no limit is configured, the total number of the node's CPU cores is used.) | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Used Physical Memory (GiB) | A metric indicates the amount of the used physical memory. 
| + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Disk Read Rate | A metric indicates the data volume read from a disk per second. The unit is KB/s. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Disk Write Rate | A metric indicates the data volume written to a disk per second. The unit is KB/s. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Downlink Rate (BPS) (KB/s) | A metric indicates the speed at which data is downloaded from the Internet. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Uplink Rate (BPS) (KB/s) | A metric indicates the speed at which data is uploaded from the node to the Internet | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | GPU Usage (%) | A metric indicates the GPU usage of the workload. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | GPU Memory Usage (%) | A metric indicates the percentage of the used GPU memory to the GPU memory capacity. | + | | | + | | **GPU Memory Usage (%)** = Used GPU memory/GPU memory capacity | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Used GPU Memory (GiB) | A metric indicates the used GPU memory. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +.. _cce_10_0182__section1799803015267: + +Viewing Pod Monitoring Data +--------------------------- + +CCE allows you to view the monitoring date of your pods. + +#. Log in to the CCE console and click the cluster name to access the cluster console. +#. Choose **Workloads** from the navigation pane. Then click the workload name of the target workload to list the pods. +#. Click **Monitor** of the target pod to view the monitoring data. +#. You can select statistical **Dimension** and choose time range to view the monitoring data. The data is provided by AOM. You can view the monitoring data of a pod, including the CPU, memory, disk, networking, and GPU. + + .. note:: + + If multiple containers exist in a single pod, the monitoring data may vary according to the statistical **Dimension**. For example, if you select **Maximum** or **Minimum** for **Dimension**, the value of each monitoring data is the maximum or minimum value of all containers under the pod. 
If **Average** is selected, the value of each monitoring data is the average value of all containers in the pod. + + .. table:: **Table 4** Pod monitoring metrics + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Metric | Description | + +===================================+===============================================================================================================================================================================================================================+ + | CPU Usage (%) | A metric indicates the CPU usage of the pod. | + | | | + | | **CPU Usage (%)** = Used CPU cores/Total number of limited CPU cores of all running containers in the pod (If the limited CPU cores of all running containers are not specified, the number of the node's CPU cores is used.) | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Used CPU Cores (cores) | A metric indicates the number of used CPU cores. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Physical Memory Usage (%) | A metric indicates the physical memory usage of the pod. | + | | | + | | **Physical Memory Usage (%)** = Used physical memory/Sum of physical memory limits of all running containers in the pod (If not specified, the value of the node's physical memory is used.) | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Used Physical Memory (GiB) | A metric indicates the amount of the used physical memory. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Disk Read Rate | A metric indicates the data volume read from a disk per second. The unit is KB/s. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Disk Write Rate | A metric indicates the data volume written to a disk per second. The unit is KB/s. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Downlink Rate (BPS) (KB/s) | A metric indicates the speed at which data is downloaded from the Internet. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Uplink Rate (BPS) (KB/s) | A metric indicates the speed at which data is uploaded from the node to the Internet. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | GPU Usage (%) | A metric indicates the GPU usage of the pod. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | GPU Memory Usage (%) | A metric indicates the percentage of the used GPU memory to the GPU memory capacity. | + | | | + | | **GPU Memory Usage (%)** = Used GPU memory/GPU memory capacity | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Used GPU Memory (GiB) | A metric indicates the used GPU memory of the pod. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/permissions_management/cluster_permissions_iam-based.rst b/umn/source/permissions/cluster_permissions_iam-based.rst similarity index 90% rename from umn/source/permissions_management/cluster_permissions_iam-based.rst rename to umn/source/permissions/cluster_permissions_iam-based.rst index 75e0914..10dbaae 100644 --- a/umn/source/permissions_management/cluster_permissions_iam-based.rst +++ b/umn/source/permissions/cluster_permissions_iam-based.rst @@ -9,7 +9,8 @@ CCE cluster-level permissions are assigned based on **IAM system policies** and .. caution:: - **Cluster permissions** are configured only for cluster-related resources (such as clusters and nodes). You must also configure :ref:`namespace permissions ` to operate Kubernetes resources (such as workloads and Services). + - Cluster permissions are granted for users to operate cluster-related resources only (such as clusters and nodes). To operate Kubernetes resources like workloads and Services, you must be granted the :ref:`namespace permissions ` at the same time. + - When viewing a cluster on the CCE console, the information displayed depends on the namespace permissions. If you have no namespace permissions, you cannot view the resources in the cluster. For details, see :ref:`Permission Dependency of the CCE Console `. Prerequisites ------------- @@ -25,7 +26,7 @@ Process Flow ------------ -.. figure:: /_static/images/en-us_image_0000001517743544.png +.. figure:: /_static/images/en-us_image_0000001647417636.png :alt: **Figure 1** Process of assigning CCE permissions **Figure 1** Process of assigning CCE permissions @@ -48,8 +49,8 @@ Process Flow Log in to the management console as the user you created, and verify that the user has the assigned permissions. 
- - Log in to the management console, switch to the CCE console, and buy a cluster. If you fail to do so (assuming that only the CCEReadOnlyAccess permission is assigned), the permission control policy takes effect. - - Switch to the console of any other service. If a message appears indicating that you do not have the required permissions to access the service, the **CCEReadOnlyAccess** policy takes effect. + - Log in to the management console, switch to the CCE console, and buy a cluster. If you fail to do so (assuming that only the **CCEReadOnlyAccess** permission is assigned), the **CCEReadOnlyAccess** policy takes effect. + - Switch to the console of any other service. If a message appears indicating that you do not have the required permissions for accessing the service, the **CCEReadOnlyAccess** policy takes effect. System-defined Roles -------------------- @@ -177,4 +178,4 @@ When RBAC and IAM policies co-exist, the backend authentication logic for open A Using clusterCert to obtain the cluster kubeconfig: cceadm/teadmin -.. |image1| image:: /_static/images/en-us_image_0000001518062684.png +.. |image1| image:: /_static/images/en-us_image_0000001695896569.png diff --git a/umn/source/permissions_management/example_designing_and_configuring_permissions_for_users_in_a_department.rst b/umn/source/permissions/example_designing_and_configuring_permissions_for_users_in_a_department.rst similarity index 85% rename from umn/source/permissions_management/example_designing_and_configuring_permissions_for_users_in_a_department.rst rename to umn/source/permissions/example_designing_and_configuring_permissions_for_users_in_a_department.rst index 87bca61..888d218 100644 --- a/umn/source/permissions_management/example_designing_and_configuring_permissions_for_users_in_a_department.rst +++ b/umn/source/permissions/example_designing_and_configuring_permissions_for_users_in_a_department.rst @@ -22,7 +22,7 @@ Permission Design The following uses company X as an example. -Generally, a company has multiple departments or projects, and each department has multiple members. Therefore, you need to design how permissions are to be assigned to different groups and projects, and set a user name for each member to facilitate subsequent user group and permissions configuration. +Generally, a company has multiple departments or projects, and each department has multiple members. Design how permissions are to be assigned to different groups and projects, and set a user name for each member to facilitate subsequent user group and permissions configuration. The following figure shows the organizational structure of a department in a company and the permissions to be assigned to each member: @@ -31,7 +31,7 @@ The following figure shows the organizational structure of a department in a com Director: David --------------- -David is a department director of company X. To assign him all CCE permissions (both cluster and namespace permissions), you need to create the **cce-admin** user group for David on the IAM console and assign the CCE Administrator role. +David is a department director of company X. To assign him all CCE permissions (both cluster and namespace permissions), create the **cce-admin** user group for David on the IAM console and assign the CCE Administrator role. .. note:: @@ -67,14 +67,14 @@ Development Team Leader: Robert In the previous steps, Robert has been assigned the read-only permission on all clusters and namespaces. Now, assign the administrator permissions on all namespaces to Robert. 
-Therefore, you need to assign the administrator permissions on all namespaces in all clusters to Robert. +Therefore, assign the administrator permissions on all namespaces in all clusters to Robert. O&M Engineer: William --------------------- -In the previous steps, William has been assigned the read-only permission on all clusters and namespaces. He also requires the cluster management permissions. Therefore, you can log in to the IAM console, create a user group named **cce-sre-b4** and assign CCE FullAccess to William. +In the previous steps, William has been assigned the read-only permission on all clusters and namespaces. He also requires the cluster management permissions in his region. Therefore, you can log in to the IAM console, create a user group named **cce-sre-b4** and assign CCE FullAccess to William for his region. -Now, William has the cluster management permissions and the read-only permission on all namespaces. +Now, William has the cluster management permissions for his region and the read-only permission on all namespaces. Development Engineers: Linda and Peter -------------------------------------- @@ -83,4 +83,4 @@ In the previous steps, Linda and Peter have been assigned the read-only permissi By now, all the required permissions are assigned to the department members. -.. |image1| image:: /_static/images/en-us_image_0000001569182569.jpg +.. |image1| image:: /_static/images/en-us_image_0000001695737145.jpg diff --git a/umn/source/permissions_management/index.rst b/umn/source/permissions/index.rst similarity index 94% rename from umn/source/permissions_management/index.rst rename to umn/source/permissions/index.rst index a6d7451..efd15e6 100644 --- a/umn/source/permissions_management/index.rst +++ b/umn/source/permissions/index.rst @@ -2,8 +2,8 @@ .. _cce_10_0164: -Permissions Management -====================== +Permissions +=========== - :ref:`Permissions Overview ` - :ref:`Cluster Permissions (IAM-based) ` diff --git a/umn/source/permissions_management/namespace_permissions_kubernetes_rbac-based.rst b/umn/source/permissions/namespace_permissions_kubernetes_rbac-based.rst similarity index 75% rename from umn/source/permissions_management/namespace_permissions_kubernetes_rbac-based.rst rename to umn/source/permissions/namespace_permissions_kubernetes_rbac-based.rst index 6de8c98..fca183b 100644 --- a/umn/source/permissions_management/namespace_permissions_kubernetes_rbac-based.rst +++ b/umn/source/permissions/namespace_permissions_kubernetes_rbac-based.rst @@ -19,7 +19,7 @@ You can regulate users' or user groups' access to Kubernetes resources in a sing Role and ClusterRole specify actions that can be performed on specific resources. RoleBinding and ClusterRoleBinding bind roles to specific users, user groups, or ServiceAccounts. Illustration: -.. figure:: /_static/images/en-us_image_0000001517743636.png +.. figure:: /_static/images/en-us_image_0000001647577104.png :alt: **Figure 1** Role binding **Figure 1** Role binding @@ -38,9 +38,9 @@ On the CCE console, you can assign permissions to a user or user group to access Cluster Permissions (IAM-based) and Namespace Permissions (Kubernetes RBAC-based) --------------------------------------------------------------------------------- -Users with different cluster permissions (assigned using IAM) have different namespace permissions (assigned using Kubernetes RBAC). :ref:`Table 1 ` lists the namespace permissions of different users. 
+Users with different cluster permissions (assigned using IAM) have different namespace permissions (assigned using Kubernetes RBAC). :ref:`Table 1 ` lists the namespace permissions of different users. -.. _cce_10_0189__en-us_topic_0000001199181174_table886210176509: +.. _cce_10_0189__cce_10_0187_table886210176509: .. table:: **Table 1** Differences in namespace permissions @@ -56,11 +56,10 @@ Users with different cluster permissions (assigned using IAM) have different nam | IAM user with the Tenant Guest role | Requires Kubernetes RBAC authorization. | +-------------------------------------------------------------+-----------------------------------------+ -Notes ------ +Precautions +----------- -- Kubernetes RBAC authorization can be used for clusters of v1.11.7-r2 and later. Ensure that you have deployed a supported cluster version. For details about upgrading a cluster, see :ref:`Performing Replace or Rolling Upgrade `. -- After you create a cluster of v1.11.7-r2 or later, CCE automatically assigns the cluster-admin permission to you, which means you have full control on all resources in all namespaces in the cluster. The ID of a federated user changes upon each login and logout. Therefore, the user with the permissions is displayed as deleted. In this case, do not delete the permissions. Otherwise, the authentication fails. You are advised to grant the cluster-admin permission to a user group on CCE and add federated users to the user group. +- After you create a cluster, CCE automatically assigns the cluster-admin permission to you, which means you have full control on all resources in all namespaces in the cluster. The ID of a federated user changes upon each login and logout. Therefore, the user with the permissions is displayed as deleted. In this case, do not delete the permissions. Otherwise, the authentication fails. You are advised to grant the cluster-admin permission to a user group on CCE and add federated users to the user group. - A user with the Security Administrator role has all IAM permissions except role switching. For example, an account in the admin user group has this role by default. Only these users can assign permissions on the **Permissions** page on the CCE console. Configuring Namespace Permissions (on the Console) @@ -91,11 +90,11 @@ Using kubectl to Configure Namespace Permissions .. note:: - When you access a cluster using kubectl, CCE uses the kubeconfig.json file generated on the cluster for authentication. This file contains user information, based on which CCE determines which Kubernetes resources can be accessed by kubectl. The permissions recorded in a kubeconfig.json file vary from user to user. The permissions that a user has are listed in :ref:`Cluster Permissions (IAM-based) and Namespace Permissions (Kubernetes RBAC-based) `. + When you access a cluster using kubectl, CCE uses **kubeconfig.json** generated on the cluster for authentication. This file contains user information, based on which CCE determines which Kubernetes resources can be accessed by kubectl. The permissions recorded in a kubeconfig.json file vary from user to user. The permissions that a user has are listed in :ref:`Cluster Permissions (IAM-based) and Namespace Permissions (Kubernetes RBAC-based) `. -In addition to cluster-admin, admin, edit, and view, you can define Roles and RoleBindings to configure the permissions to add, delete, modify, and query resources, such as pods, Deployments, and Services, in the namespace. 
+In addition to cluster-admin, admin, edit, and view, you can define Roles and RoleBindings to configure the permissions to add, delete, modify, and obtain resources, such as pods, Deployments, and Services, in the namespace. -The procedure for creating a Role is very simple. To be specific, specify a namespace and then define rules. The rules in the following example are to allow GET and LIST operations on pods in the default namespace. +Creating a Role is simple: specify a namespace and then define the rules. The rules in the following example allow GET and LIST operations on pods in the **default** namespace. .. code-block:: @@ -115,7 +114,7 @@ The procedure for creating a Role is very simple. To be specific, specify a name For details, see `Using RBAC Authorization `__. -After creating a Role, you can bind the Role to a specific user, which is called RoleBinding. The following is an example. +After creating a Role, you can bind it to a specific user by creating a RoleBinding. The following shows an example: .. code-block:: @@ -138,10 +137,10 @@ After creating a Role, you can bind the Role to a specific user, which is called The **subjects** section binds a Role with an IAM user so that the IAM user can obtain the permissions defined in the Role, as shown in the following figure. -.. figure:: /_static/images/en-us_image_0000001518222732.png - :alt: **Figure 2** A RoleBinding binds the Role to the user. +.. figure:: /_static/images/en-us_image_0000001647577100.png + :alt: **Figure 2** Binding a Role to a user - **Figure 2** A RoleBinding binds the Role to the user. + **Figure 2** Binding a Role to a user You can also specify a user group in the **subjects** section. In this case, all users in the user group obtain the permissions defined in the Role. @@ -168,7 +167,7 @@ Use the IAM user user-example to connect to the cluster and obtain the pod infor NAME READY STATUS RESTARTS AGE nginx-658dff48ff-7rkph 1/1 Running 0 4d9h -Try querying Deployments and Services in the namespace. The output shows **user-example** does not have the required permissions. Try querying the pods in namespace kube-system. The output shows **user-example** does not have the required permissions, either. This indicates that the IAM user **user-example** has only the GET and LIST Pod permissions in the default namespace, which is the same as expected. +Try querying Deployments and Services in the namespace. The output shows that **user-example** does not have the required permissions. Try querying the pods in namespace kube-system. The output shows that **user-example** does not have the required permissions, either. This indicates that the IAM user **user-example** has only the GET and LIST Pod permissions in the default namespace, as expected. .. code-block:: @@ -184,7 +183,7 @@ Try querying Deployments and Services in the namespace. The output shows **user- Example: Assigning Cluster Administrator Permissions (cluster-admin) -------------------------------------------------------------------- -You can use the cluster-admin role to assign all permissions on a cluster. This role contains the permissions for cluster resources (such as PVs and StorageClasses). +You can use the cluster-admin role to assign all permissions on a cluster. This role contains the permissions for all cluster resources. In the following example kubectl output, a ClusterRoleBinding has been created and binds the cluster-admin role to the user group **cce-role-group**.
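The kubectl output that the preceding sentence refers to is unchanged context and is therefore not visible in this diff. As a rough sketch only, mirroring the structure of the admin RoleBinding shown later in this file, such a binding generally looks like the following (the binding name and the group ID are placeholders, not values taken from this document):

.. code-block::

   # Illustrative sketch only: binds the built-in cluster-admin ClusterRole to an IAM user group.
   # The metadata name and the group ID below are placeholders, not values from this document.
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     annotations:
       CCE.com/IAM: "true"
     name: clusterrole_cluster-admin_group<IAM user group ID>
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: cluster-admin
   subjects:
   - apiGroup: rbac.authorization.k8s.io
     kind: Group
     name: <IAM user group ID>

Because this is a ClusterRoleBinding rather than a namespaced RoleBinding, the permissions it grants apply to all namespaces and to cluster-scoped resources.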
@@ -230,9 +229,56 @@ Connect to the cluster as an authorized user. If the PVs and StorageClasses can Example: Assigning Namespace O&M Permissions (admin) ---------------------------------------------------- -**admin** has all permissions on namespaces. You can grant this role to a user or user group to manage one or all namespaces. +The admin role has the read and write permissions on most namespace resources. You can grant the admin permission on all namespaces to a user or user group. -In the following example kubectl output, a RoleBinding has been created, the admin role is bound to the user group **cce-role-group**, and the target namespace is the default namespace. +In the following example kubectl output, a RoleBinding has been created and binds the admin role to the user group **cce-role-group**. + +.. code-block:: + + # kubectl get rolebinding + NAME ROLE AGE + clusterrole_admin_group0c96fad22880f32a3f84c009862af6f7 ClusterRole/admin 18s + # kubectl get rolebinding clusterrole_admin_group0c96fad22880f32a3f84c009862af6f7 -oyaml + apiVersion: rbac.authorization.k8s.io/v1 + kind: RoleBinding + metadata: + annotations: + CCE.com/IAM: "true" + creationTimestamp: "2021-06-24T01:30:08Z" + name: clusterrole_admin_group0c96fad22880f32a3f84c009862af6f7 + resourceVersion: "36963685" + selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/default/rolebindings/clusterrole_admin_group0c96fad22880f32a3f84c009862af6f7 + uid: 6c6f46a6-8584-47da-83f5-9eef1f7b75d6 + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: admin + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: 0c96fad22880f32a3f84c009862af6f7 + +Connect to the cluster as an authorized user. If the PVs and StorageClasses can be queried but a namespace cannot be created, the permission configuration takes effect. + +.. code-block:: + + # kubectl get pv + No resources found + # kubectl get sc + NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE + csi-disk everest-csi-provisioner Delete Immediate true 75d + csi-disk-topology everest-csi-provisioner Delete WaitForFirstConsumer true 75d + csi-nas everest-csi-provisioner Delete Immediate true 75d + csi-obs everest-csi-provisioner Delete Immediate false 75d + # kubectl apply -f namespaces.yaml + Error from server (Forbidden): namespaces is forbidden: User "0c97ac3cb280f4d91fa7c0096739e1f8" cannot create resource "namespaces" in API group "" at the cluster scope + +Example: Assigning Namespace Developer Permissions (edit) +--------------------------------------------------------- + +The edit role has the read and write permissions on most namespace resources. You can grant the edit permission on all namespaces to a user or user group. + +In the following example kubectl output, a RoleBinding has been created, the edit role is bound to the user group **cce-role-group**, and the target namespace is the default namespace. .. code-block:: @@ -254,13 +300,13 @@ In the following example kubectl output, a RoleBinding has been created, the adm roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole - name: admin + name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: 0c96fad22880f32a3f84c009862af6f7 -Connect to a cluster as an authorized user. In this example, you can create and query resources in the default namespace, but cannot query resources in the kube-system namespace or cluster resources. +Connect to the cluster as an authorized user. 
In this example, you can create and obtain resources in the default namespace, but cannot query resources in the kube-system namespace or cluster resources. .. code-block:: diff --git a/umn/source/permissions_management/permission_dependency_of_the_cce_console.rst b/umn/source/permissions/permission_dependency_of_the_cce_console.rst similarity index 93% rename from umn/source/permissions_management/permission_dependency_of_the_cce_console.rst rename to umn/source/permissions/permission_dependency_of_the_cce_console.rst index d8e50c0..4701109 100644 --- a/umn/source/permissions_management/permission_dependency_of_the_cce_console.rst +++ b/umn/source/permissions/permission_dependency_of_the_cce_console.rst @@ -5,10 +5,10 @@ Permission Dependency of the CCE Console ======================================== -Some CCE permissions policies depend on the policies of other cloud services. To view or use other cloud resources on the CCE console, you need to enable the system policy access control feature of IAM and assign dependency policies for the other cloud services. +Some CCE permissions policies depend on the policies of other cloud services. To view or use other cloud resources on the CCE console, enable the system policy access control feature of IAM and assign dependency policies for the other cloud services. - Dependency policies are assigned based on the CCE FullAccess or CCE ReadOnlyAccess policy you configure. -- Only users and user groups with namespace permissions can gain the view access to resources in clusters of v1.11.7-r2 and later. +- Only users and user groups with namespace permissions can gain the view access to resources in clusters. - If a user is granted the view access to all namespaces of a cluster, the user can view all namespace resources (except secrets) in the cluster. To view secrets in the cluster, the user must gain the **admin** or **edit** role in all namespaces of the cluster. - HPA policies take effect only after the cluster-admin permissions are configured for the namespace. @@ -32,7 +32,7 @@ To grant an IAM user the permissions to view or use resources of other cloud ser +-------------------------------------+------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Console Function | Dependent Services | Roles or Policies Required | +=====================================+==========================================+=====================================================================================================================================================================================================================================================================+ - | Dashboard | Application Operations Management (AOM) | - An IAM user with CCE Administrator assigned can use this function only after AOM FullAccess policy is assigned. | + | Cluster overview | Application Operations Management (AOM) | - An IAM user with CCE Administrator assigned can use this function only after AOM FullAccess policy is assigned. | | | | - IAM users with IAM ReadOnlyAccess, CCE FullAccess, or CCE ReadOnlyAccess assigned can directly use this function. 
| +-------------------------------------+------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Workload management | Elastic Load Balance (ELB) | Except in the following cases, the user does not require any additional role to create workloads. | @@ -43,27 +43,28 @@ To grant an IAM user the permissions to view or use resources of other cloud ser | | | - To use OBS, you must have OBS Administrator globally assigned. | | | NAT Gateway | | | | | .. note:: | - | | OBS | | + | | Object Storage Service (OBS) | | | | | Because of the cache, it takes about 13 minutes for the RBAC policy to take effect after being granted to users, enterprise projects, and user groups. After an OBS-related system policy is granted, it takes about 5 minutes for the policy to take effect. | - | | SFS | | + | | Scalable File Service (SFS) | | | | | - To use SFS, you must have SFS FullAccess assigned. | +-------------------------------------+------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Cluster management | Application Operations Management (AOM) | - Auto scale-out or scale-up requires the AOM FullAccess policy. | +-------------------------------------+------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Node management | Elastic Cloud Server (ECS) | If the permission assigned to an IAM user is CCE Administrator, creating or deleting a node requires the ECS FullAccess or ECS Administrator policy and the VPC Administrator policy. | +-------------------------------------+------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Network management | Elastic Load Balance (ELB) | Except in the following cases, the user does not require any additional role to create a Service. | + | Networking | Elastic Load Balance (ELB) | Except in the following cases, the user does not require any additional role to create a Service. | | | | | | | NAT Gateway | - To create a Service using ELB, you must have ELB FullAccess or ELB Administrator plus VPC Administrator assigned. | | | | - To create a Service using NAT Gateway, you must have NAT Administrator assigned. | +-------------------------------------+------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Storage management | OBS | - To use OBS, you must have OBS Administrator globally assigned. 
| + | Container storage | Object Storage Service (OBS) | - To use OBS, you must have OBS Administrator globally assigned. | | | | | - | | SFS | .. note:: | + | | Scalable File Service (SFS) | .. note:: | | | | | - | | | Because of the cache, it takes about 13 minutes for the RBAC policy to take effect after being granted to users, enterprise projects, and user groups. After an OBS-related system policy is granted, it takes about 5 minutes for the policy to take effect. | + | | SFS Turbo | Because of the cache, it takes about 13 minutes for the RBAC policy to take effect after being granted to users, enterprise projects, and user groups. After an OBS-related system policy is granted, it takes about 5 minutes for the policy to take effect. | | | | | | | | - To use SFS, you must have SFS FullAccess assigned. | + | | | - Using SFS Turbo requires the SFS Turbo Admin role. | | | | | | | | The CCE Administrator role is required for importing storage devices. | +-------------------------------------+------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -75,9 +76,9 @@ To grant an IAM user the permissions to view or use resources of other cloud ser +-------------------------------------+------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Permissions management | None | - For cloud accounts, no additional policy/role is required. | | | | - IAM users with CCE Administrator or global Security Administrator assigned can use this function. | - | | | - IAM users with the CCE FullAccess or CCE ReadOnlyAccess permission can access the namespace. In addition, the IAM users must have the :ref:`administrator permission (cluster-admin) ` on the namespace. | + | | | - IAM users with the CCE FullAccess or CCE ReadOnlyAccess permission can access the namespace. In addition, the IAM users must have the :ref:`administrator permissions (cluster-admin) ` on the namespace. | +-------------------------------------+------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Configuration center | None | - Creating ConfigMaps does not require any additional policy. | + | ConfigMaps and Secrets | None | - Creating ConfigMaps does not require any additional policy. | | | | - Viewing secrets requires that the cluster-admin, admin, or edit permission be configured for the namespace. The DEW KeypairFullAccess or DEW KeypairReadOnlyAccess policy must be assigned for dependent services. 
| +-------------------------------------+------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Help center | None | None | diff --git a/umn/source/permissions_management/permissions_overview.rst b/umn/source/permissions/permissions_overview.rst similarity index 93% rename from umn/source/permissions_management/permissions_overview.rst rename to umn/source/permissions/permissions_overview.rst index f7f618a..7ea7649 100644 --- a/umn/source/permissions_management/permissions_overview.rst +++ b/umn/source/permissions/permissions_overview.rst @@ -18,18 +18,16 @@ CCE permissions are described as follows: - **Cluster-level permissions**: Cluster-level permissions management evolves out of the system policy authorization feature of IAM. IAM users in the same user group have the same permissions. On IAM, you can configure system policies to describe which IAM user groups can perform which operations on cluster resources. For example, you can grant user group A to create and delete cluster X, add a node, or install an add-on, while granting user group B to view information about cluster X. - Cluster-level permissions involve CCE non-Kubernetes APIs and support fine-grained IAM policies. + Cluster-level permissions involve non-Kubernetes APIs in CCE clusters and support fine-grained IAM policies. - **Namespace-level permissions**: You can regulate users' or user groups' access to Kubernetes resources in a single namespace based on their Kubernetes RBAC roles. CCE has also been enhanced based on open-source capabilities. It supports RBAC authorization based on IAM user or user group, and RBAC authentication on access to APIs using IAM tokens. Namespace-level permissions involve CCE Kubernetes APIs and are enhanced based on the Kubernetes RBAC capabilities. Namespace-level permissions can be granted to IAM users or user groups for authentication and authorization, but are independent of fine-grained IAM policies. - Starting from version 1.11.7-r2, CCE clusters allow you to configure namespace permissions. Clusters earlier than v1.11.7-r2 have all namespace permissions by default. - In general, you configure CCE permissions in two scenarios. The first is creating and managing clusters and related resources, such as nodes. The second is creating and using Kubernetes resources in the cluster, such as workloads and Services. -.. figure:: /_static/images/en-us_image_0000001569182621.png +.. figure:: /_static/images/en-us_image_0000001647576892.png :alt: **Figure 1** Illustration on CCE permissions **Figure 1** Illustration on CCE permissions diff --git a/umn/source/permissions/pod_security/configuring_a_pod_security_policy.rst b/umn/source/permissions/pod_security/configuring_a_pod_security_policy.rst new file mode 100644 index 0000000..a8c76d7 --- /dev/null +++ b/umn/source/permissions/pod_security/configuring_a_pod_security_policy.rst @@ -0,0 +1,225 @@ +:original_name: cce_10_0275.html + +.. _cce_10_0275: + +Configuring a Pod Security Policy +================================= + +A pod security policy (PSP) is a cluster-level resource that controls sensitive security aspects of the pod specification. 
The `PodSecurityPolicy `__ object in Kubernetes defines a group of conditions that a pod must comply with to be accepted by the system, as well as the default values of related fields. + +By default, the PSP access control component is enabled for clusters of v1.17.17 and a global default PSP named **psp-global** is created. You can modify the default policy (but not delete it). You can also create a PSP and bind it to the RBAC configuration. + +.. note:: + + - In addition to the global default PSP, the system configures independent PSPs for system components in namespace kube-system. Modifying the psp-global configuration does not affect pod creation in namespace kube-system. + - PodSecurityPolicy was deprecated in Kubernetes v1.21, and removed from Kubernetes in v1.25. You can use pod security admission as a substitute for PodSecurityPolicy. For details, see :ref:`Configuring Pod Security Admission `. + +Modifying the Global Default PSP +-------------------------------- + +Before modifying the global default PSP, ensure that a CCE cluster has been created and connected by using kubectl. + +#. Run the following command: + + **kubectl edit psp psp-global** + +#. Modify the required parameters, as shown in :ref:`Table 1 `. + + .. _cce_10_0275__table1928122594918: + + .. table:: **Table 1** PSP configuration + + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Item | Description | + +===================================+================================================================================================================================================================================================+ + | privileged | Starts the privileged container. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | hostPID | Uses the host namespace. | + | | | + | hostIPC | | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | hostNetwork | Uses the host network and port. | + | | | + | hostPorts | | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | volumes | Specifies the type of the mounted volume that can be used. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | allowedHostPaths | Specifies the host path to which a hostPath volume can be mounted. The **pathPrefix** field specifies the host path prefix group to which a hostPath volume can be mounted. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | allowedFlexVolumes | Specifies the FlexVolume driver that can be used. 
| + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | fsGroup | Configures the supplemental group ID used by the mounted volume in the pod. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | readOnlyRootFilesystem | Pods can only be started using a read-only root file system. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | runAsUser | Specifies the user ID, primary group ID, and supplemental group ID for starting containers in a pod. | + | | | + | runAsGroup | | + | | | + | supplementalGroups | | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | allowPrivilegeEscalation | Specifies whether **allowPrivilegeEscalation** can be set to **true** in a pod. This configuration controls the use of Setuid and whether programs can use additional privileged system calls. | + | | | + | defaultAllowPrivilegeEscalation | | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | defaultAddCapabilities | Controls the Linux capabilities used in pods. | + | | | + | requiredDropCapabilities | | + | | | + | allowedCapabilities | | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | seLinux | Controls the configuration of seLinux used in pods. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | allowedProcMountTypes | Controls the ProcMountTypes that can be used by pods. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | annotations | Configures AppArmor or Seccomp used by containers in a pod. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | forbiddenSysctls | Controls the configuration of Sysctl used by containers in a pod. | + | | | + | allowedUnsafeSysctls | | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +.. 
_cce_10_0275__section155111941177: + +Example of Enabling Unsafe Sysctls in Pod Security Policy +--------------------------------------------------------- + +You can configure allowed-unsafe-sysctls for a node pool. For CCE clusters of **v1.17.17** and later versions, add configurations in **allowedUnsafeSysctls** of the pod security policy to make the configuration take effect. For details, see :ref:`Table 1 `. + +In addition to modifying the global pod security policy, you can add new pod security policies. For example, enable the **net.core.somaxconn** unsafe sysctls. The following is an example of adding a pod security policy: + +.. code-block:: + + apiVersion: policy/v1beta1 + kind: PodSecurityPolicy + metadata: + annotations: + seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*' + name: sysctl-psp + spec: + allowedUnsafeSysctls: + - net.core.somaxconn + allowPrivilegeEscalation: true + allowedCapabilities: + - '*' + fsGroup: + rule: RunAsAny + hostIPC: true + hostNetwork: true + hostPID: true + hostPorts: + - max: 65535 + min: 0 + privileged: true + runAsGroup: + rule: RunAsAny + runAsUser: + rule: RunAsAny + seLinux: + rule: RunAsAny + supplementalGroups: + rule: RunAsAny + volumes: + - '*' + --- + kind: ClusterRole + apiVersion: rbac.authorization.k8s.io/v1 + metadata: + name: sysctl-psp + rules: + - apiGroups: + - "*" + resources: + - podsecuritypolicies + resourceNames: + - sysctl-psp + verbs: + - use + + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: sysctl-psp + roleRef: + kind: ClusterRole + name: sysctl-psp + apiGroup: rbac.authorization.k8s.io + subjects: + - kind: Group + name: system:authenticated + apiGroup: rbac.authorization.k8s.io + +Restoring the Original PSP +-------------------------- + +If you have modified the default pod security policy and want to restore the original pod security policy, perform the following operations. + +#. Create a policy description file named **policy.yaml**. **policy.yaml** is an example file name. You can rename it as required. + + **vi policy.yaml** + + The content of the description file is as follows: + + .. code-block:: + + apiVersion: policy/v1beta1 + kind: PodSecurityPolicy + metadata: + name: psp-global + annotations: + seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*' + spec: + privileged: true + allowPrivilegeEscalation: true + allowedCapabilities: + - '*' + volumes: + - '*' + hostNetwork: true + hostPorts: + - min: 0 + max: 65535 + hostIPC: true + hostPID: true + runAsUser: + rule: 'RunAsAny' + seLinux: + rule: 'RunAsAny' + supplementalGroups: + rule: 'RunAsAny' + fsGroup: + rule: 'RunAsAny' + + --- + kind: ClusterRole + apiVersion: rbac.authorization.k8s.io/v1 + metadata: + name: psp-global + rules: + - apiGroups: + - "*" + resources: + - podsecuritypolicies + resourceNames: + - psp-global + verbs: + - use + + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: psp-global + roleRef: + kind: ClusterRole + name: psp-global + apiGroup: rbac.authorization.k8s.io + subjects: + - kind: Group + name: system:authenticated + apiGroup: rbac.authorization.k8s.io + +#. 
Run the following command: + + **kubectl apply -f policy.yaml** diff --git a/umn/source/permissions_management/pod_security/configuring_pod_security_admission.rst b/umn/source/permissions/pod_security/configuring_pod_security_admission.rst similarity index 91% rename from umn/source/permissions_management/pod_security/configuring_pod_security_admission.rst rename to umn/source/permissions/pod_security/configuring_pod_security_admission.rst index c775122..ebfab26 100644 --- a/umn/source/permissions_management/pod_security/configuring_pod_security_admission.rst +++ b/umn/source/permissions/pod_security/configuring_pod_security_admission.rst @@ -5,7 +5,7 @@ Configuring Pod Security Admission ================================== -Before using `Pod Security Admission `__, you need to understand Kubernetes `Pod Security Standards `__. These standards define different isolation levels for pods. They let you define how you want to restrict the behavior of pods in a clear, consistent fashion. Kubernetes offers a built-in pod security admission controller to enforce the pod security standards. Pod security restrictions are applied at the namespace level when pods are created. +Before using `pod security admission `__, understand Kubernetes `Pod Security Standards `__. These standards define different isolation levels for pods. They let you define how you want to restrict the behavior of pods in a clear, consistent fashion. Kubernetes offers a built-in pod security admission controller to enforce the pod security standards. Pod security restrictions are applied at the namespace level when pods are created. The pod security standard defines three security policy levels: @@ -18,7 +18,7 @@ The pod security standard defines three security policy levels: +============+================================================================================================================================================================================================================+ | privileged | Unrestricted policy, providing the widest possible level of permissions, typically aimed at system- and infrastructure-level workloads managed by privileged, trusted users, such as CNIs and storage drivers. | +------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | baseline | Minimally restrictive policy which prevents known privilege escalations, typically targeted at non-critical workloads. This policy disables capabilities such as hostNetwork and hostPID. | + | baseline | Minimally restrictive policy that prevents known privilege escalations, typically targeted at non-critical workloads. This policy disables capabilities such as hostNetwork and hostPID. | +------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | restricted | Heavily restricted policy, following current Pod hardening best practices. 
| +------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -30,7 +30,7 @@ Setting security context: `Configure a Security Context for a Pod or Container < Pod Security Admission Labels ----------------------------- -Kubernetes defines three types of labels for Pod Security Admission (see :ref:`Table 2 `). You can set these labels in a namespace to define the pod security standard level to be used. However, do not change the pod security standard level in system namespaces such as kube-system. Otherwise, pods in the system namespace may be faulty. +Kubernetes defines three types of labels for pod security admission (see :ref:`Table 2 `). You can set these labels in a namespace to define the pod security standard level to be used. However, do not change the pod security standard level in system namespaces such as kube-system. Otherwise, pods in the system namespace may be faulty. .. _cce_10_0466__table198561415448: @@ -96,6 +96,8 @@ If pods are deployed in the preceding namespace, the following security restrict #. Restrictions related to the baseline policy are verified (audit mode + baseline level). That is, if the pod or container violates the policy, the corresponding event is recorded into the audit log. #. Restrictions related to the restricted policy are verified (warn mode + restricted level). That is, if the pod or container violates the policy, the user will receive an alarm when creating the pod. +.. _cce_10_0466__section7164192319226: + Migrating from Pod Security Policy to Pod Security Admission ------------------------------------------------------------ @@ -108,8 +110,8 @@ If you use pod security policies in a cluster earlier than v1.25 and need to rep #. PSP lets you bind different policies to different service accounts. This approach has many pitfalls and is not recommended, but if you require this feature anyway you will need to use a third-party webhook instead. #. Do not apply pod security admission to namespaces where CCE components, such as kube-system, kube-public, and kube-node-lease, are deployed. Otherwise, CCE components and add-on functions will be abnormal. -Reference ---------- +Documentation +------------- - `Pod Security Admission `__ - `Mapping PodSecurityPolicies to Pod Security Standards `__ diff --git a/umn/source/permissions_management/pod_security/index.rst b/umn/source/permissions/pod_security/index.rst similarity index 100% rename from umn/source/permissions_management/pod_security/index.rst rename to umn/source/permissions/pod_security/index.rst diff --git a/umn/source/permissions_management/service_account_token_security_improvement.rst b/umn/source/permissions/service_account_token_security_improvement.rst similarity index 89% rename from umn/source/permissions_management/service_account_token_security_improvement.rst rename to umn/source/permissions/service_account_token_security_improvement.rst index 13b846e..7997de9 100644 --- a/umn/source/permissions_management/service_account_token_security_improvement.rst +++ b/umn/source/permissions/service_account_token_security_improvement.rst @@ -7,7 +7,7 @@ Service Account Token Security Improvement In clusters earlier than v1.21, a token is obtained by mounting the secret of the service account to a pod. Tokens obtained this way are permanent. This approach is no longer recommended starting from version 1.21. 
Service accounts will stop auto creating secrets in clusters from version 1.25. -In clusters of version 1.21 or later, you can use the `TokenRequest `__ API to obtain the token and use the projected volume to mount the token to the pod. Such tokens are valid for a fixed period (one hour by default). Before expiration, Kubelet refreshes the token to ensure that the pod always uses a valid token. When the mounting pod is deleted, the token automatically becomes invalid. This approach is implemented by the `BoundServiceAccountTokenVolume `__ feature to improve the token security of the service account. Kubernetes clusters of v1.21 and later enables this approach by default. +In clusters of version 1.21 or later, you can use the `TokenRequest `__ API to obtain the token and use the projected volume to mount the token to the pod. Such tokens are valid for a fixed period (one hour by default). Before expiration, Kubelet refreshes the token to ensure that the pod always uses a valid token. When the mounting pod is deleted, the token automatically becomes invalid. This approach is implemented by the `BoundServiceAccountTokenVolume `__ feature to improve the token security of the service account. Clusters of v1.21 or later enable this approach by default. For smooth transition, the community extends the token validity period to one year by default. After one year, the token becomes invalid, and clients that do not support certificate reloading cannot access the API server. It is recommended that clients of earlier versions be upgraded as soon as possible. Otherwise, service faults may occur. @@ -30,12 +30,12 @@ For details, visit https://github.com/kubernetes/enhancements/tree/master/keps/s Diagnosis --------- -Run the following steps to check your CCE clusters of v1.21 and later: +Perform the following steps to check your CCE clusters of v1.21 or later: -#. Use kubectl to connect to the cluster and run the **kubectl get --raw "/metrics" \| grep stale** command to query the metrics. Check the metric named **serviceaccount_stale_tokens_total**. +#. Use kubectl to connect to the cluster and run the **kubectl get --raw "/metrics" \| grep stale** command to obtain the metrics. Check the metric named **serviceaccount_stale_tokens_total**. If the value is greater than 0, some workloads in the cluster may be using an earlier client-go version. In this case, check whether this problem occurs in your deployed applications. If yes, upgrade client-go to the version specified by the community as soon as possible. The version must be at least two major versions of the CCE cluster. For example, if your cluster version is 1.23, the Kubernetes dependency library version must be at least 1.19. |image1| -.. |image1| image:: /_static/images/en-us_image_0000001518062816.png +.. |image1| image:: /_static/images/en-us_image_0000001647577164.png diff --git a/umn/source/permissions_management/pod_security/configuring_a_pod_security_policy.rst b/umn/source/permissions_management/pod_security/configuring_a_pod_security_policy.rst deleted file mode 100644 index 6d618e1..0000000 --- a/umn/source/permissions_management/pod_security/configuring_a_pod_security_policy.rst +++ /dev/null @@ -1,171 +0,0 @@ -:original_name: cce_10_0275.html - -.. _cce_10_0275: - -Configuring a Pod Security Policy -================================= - -A pod security policy (PSP) is a cluster-level resource that controls sensitive security aspects of the pod specification. 
The PodSecurityPolicy object in Kubernetes defines a group of conditions that a pod must comply with to be accepted by the system, as well as the default values of related fields. - -By default, the PSP access control component is enabled for clusters of v1.17.17 and a global default PSP named **psp-global** is created. You can modify the default policy (but not delete it). You can also create a PSP and bind it to the RBAC configuration. - -.. note:: - - - In addition to the global default PSP, the system configures independent PSPs for system components in namespace kube-system. Modifying the psp-global configuration does not affect pod creation in namespace kube-system. - - In Kubernetes 1.25, PSP has been removed and replaced by Pod Security Admission. For details, see :ref:`Configuring Pod Security Admission `. - -Modifying the Global Default PSP --------------------------------- - -Before modifying the global default PSP, ensure that a CCE cluster has been created and connected by using kubectl. - -#. Run the following command: - - **kubectl edit psp psp-global** - -#. Modify the parameters as required. For details, see `PodSecurityPolicy `__. - -.. _cce_10_0275__section155111941177: - -Example of Enabling Unsafe Sysctls in Pod Security Policy ---------------------------------------------------------- - -You can configure allowed-unsafe-sysctls for a node pool. For CCE **v1.17.17** and later versions, add configurations in **allowedUnsafeSysctls** of the pod security policy to make the configuration take effect. For details, see `PodSecurityPolicy `__. - -In addition to modifying the global pod security policy, you can add new pod security policies. For example, enable the **net.core.somaxconn** unsafe sysctls. The following is an example of adding a pod security policy: - -.. code-block:: - - apiVersion: policy/v1beta1 - kind: PodSecurityPolicy - metadata: - annotations: - seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*' - name: sysctl-psp - spec: - allowedUnsafeSysctls: - - net.core.somaxconn - allowPrivilegeEscalation: true - allowedCapabilities: - - '*' - fsGroup: - rule: RunAsAny - hostIPC: true - hostNetwork: true - hostPID: true - hostPorts: - - max: 65535 - min: 0 - privileged: true - runAsGroup: - rule: RunAsAny - runAsUser: - rule: RunAsAny - seLinux: - rule: RunAsAny - supplementalGroups: - rule: RunAsAny - volumes: - - '*' - --- - kind: ClusterRole - apiVersion: rbac.authorization.k8s.io/v1 - metadata: - name: sysctl-psp - rules: - - apiGroups: - - "*" - resources: - - podsecuritypolicies - resourceNames: - - sysctl-psp - verbs: - - use - - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: sysctl-psp - roleRef: - kind: ClusterRole - name: sysctl-psp - apiGroup: rbac.authorization.k8s.io - subjects: - - kind: Group - name: system:authenticated - apiGroup: rbac.authorization.k8s.io - -Restoring the Original PSP --------------------------- - -If you have modified the default pod security policy and want to restore the original pod security policy, perform the following operations. - -#. Create a policy description file named **policy.yaml**. **policy.yaml** is an example file name. You can rename it as required. - - **vi policy.yaml** - - The content of the description file is as follows: - - .. 
code-block:: - - apiVersion: policy/v1beta1 - kind: PodSecurityPolicy - metadata: - name: psp-global - annotations: - seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*' - spec: - privileged: true - allowPrivilegeEscalation: true - allowedCapabilities: - - '*' - volumes: - - '*' - hostNetwork: true - hostPorts: - - min: 0 - max: 65535 - hostIPC: true - hostPID: true - runAsUser: - rule: 'RunAsAny' - seLinux: - rule: 'RunAsAny' - supplementalGroups: - rule: 'RunAsAny' - fsGroup: - rule: 'RunAsAny' - - --- - kind: ClusterRole - apiVersion: rbac.authorization.k8s.io/v1 - metadata: - name: psp-global - rules: - - apiGroups: - - "*" - resources: - - podsecuritypolicies - resourceNames: - - psp-global - verbs: - - use - - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: psp-global - roleRef: - kind: ClusterRole - name: psp-global - apiGroup: rbac.authorization.k8s.io - subjects: - - kind: Group - name: system:authenticated - apiGroup: rbac.authorization.k8s.io - -#. Run the following command: - - **kubectl apply -f policy.yaml** diff --git a/umn/source/product_bulletin/index.rst b/umn/source/product_bulletin/index.rst index b67e229..147518f 100644 --- a/umn/source/product_bulletin/index.rst +++ b/umn/source/product_bulletin/index.rst @@ -5,7 +5,7 @@ Product Bulletin ================ -- :ref:`Kubernetes Version Support Mechanism ` +- :ref:`Kubernetes Version Policy ` - :ref:`CCE Cluster Version Release Notes ` - :ref:`OS Patch Notes for Cluster Nodes ` - :ref:`Security Vulnerability Responses ` @@ -14,7 +14,7 @@ Product Bulletin :maxdepth: 1 :hidden: - kubernetes_version_support_mechanism + kubernetes_version_policy cce_cluster_version_release_notes os_patch_notes_for_cluster_nodes security_vulnerability_responses/index diff --git a/umn/source/product_bulletin/kubernetes_version_policy.rst b/umn/source/product_bulletin/kubernetes_version_policy.rst new file mode 100644 index 0000000..be5b357 --- /dev/null +++ b/umn/source/product_bulletin/kubernetes_version_policy.rst @@ -0,0 +1,66 @@ +:original_name: cce_bulletin_0033.html + +.. _cce_bulletin_0033: + +Kubernetes Version Policy +========================= + +CCE provides highly scalable, high-performance, enterprise-class Kubernetes clusters. As the Kubernetes community periodically releases Kubernetes versions, CCE will release cluster Open Beta Test (OBT) and commercially used versions accordingly. This section describes the Kubernetes version policy of CCE clusters. 
+ +Lifecycle of CCE Cluster Versions +--------------------------------- + ++--------------------+-------------------------------------------------------------------------+----------------------+--------------------------------+---------------------+ +| Kubernetes Version | Status | Community Release In | Commercial Use of CCE Clusters | EOS of CCE Clusters | ++====================+=========================================================================+======================+================================+=====================+ +| v1.25 | In commercial use\ :sup:`:ref:`a `` | August 2022 | March 2023 | March 2025 | ++--------------------+-------------------------------------------------------------------------+----------------------+--------------------------------+---------------------+ +| v1.23 | In commercial use\ :sup:`:ref:`a `` | December 2021 | September 2022 | September 2024 | ++--------------------+-------------------------------------------------------------------------+----------------------+--------------------------------+---------------------+ +| v1.21 | In commercial use\ :sup:`:ref:`b `` | April 2021 | April 2022 | April 2024 | ++--------------------+-------------------------------------------------------------------------+----------------------+--------------------------------+---------------------+ +| v1.19 | In commercial use\ :sup:`:ref:`b `` | August 2020 | March 2021 | September 2023 | ++--------------------+-------------------------------------------------------------------------+----------------------+--------------------------------+---------------------+ +| v1.17 | End of service (EOS) | December 2019 | July 2020 | January 2023 | ++--------------------+-------------------------------------------------------------------------+----------------------+--------------------------------+---------------------+ +| v1.15 | EOS | June 2019 | December 2019 | September 2022 | ++--------------------+-------------------------------------------------------------------------+----------------------+--------------------------------+---------------------+ +| v1.13 | EOS | December 2018 | June 2019 | March 2022 | ++--------------------+-------------------------------------------------------------------------+----------------------+--------------------------------+---------------------+ +| v1.11 | EOS | August 2018 | October 2018 | March 2021 | ++--------------------+-------------------------------------------------------------------------+----------------------+--------------------------------+---------------------+ +| v1.9 | EOS | December 2017 | March 2018 | December 2020 | ++--------------------+-------------------------------------------------------------------------+----------------------+--------------------------------+---------------------+ + +.. note:: + + The CCE console supports clusters of the latest two commercially used versions: + + - .. _cce_bulletin_0033__li1996032514227: + + a: Clusters created using the console or APIs + + - .. _cce_bulletin_0033__li1896032515222: + + b: Clusters created only using APIs + +Phases of CCE Cluster Versions +------------------------------ + +- In commercial use: The cluster version has been fully verified and is stable and reliable. You can use clusters of this version in the production environment, and the CCE SLA is valid for such clusters. 
+- EOS: After the cluster version EOS, CCE does not support the creation of new clusters or provide technical support including new feature updates, vulnerability or issue fixes, new patches, work order guidance, and online checks for the EOS cluster version. The CCE SLA is not valid for such clusters. + +CCE Cluster Versions +-------------------- + +- .. _cce_bulletin_0033__en-us_topic_0261805755_li19299112592014: + + Cluster version: The format is *x.y*, where *x* indicates the major Kubernetes version and *y* indicates the minor Kubernetes version. For details, see the `Kubernetes community documentation `__. + +- Patch version: The format is *x.y.z-r(n)*, where *x.y* indicates the :ref:`CCE cluster version `, *z* indicates the minor CCE version, and -r(*n*) indicates the patch version. + + +.. figure:: /_static/images/en-us_image_0000001690672798.png + :alt: **Figure 1** Cluster version + + **Figure 1** Cluster version diff --git a/umn/source/product_bulletin/kubernetes_version_support_mechanism.rst b/umn/source/product_bulletin/kubernetes_version_support_mechanism.rst deleted file mode 100644 index e2d1bfc..0000000 --- a/umn/source/product_bulletin/kubernetes_version_support_mechanism.rst +++ /dev/null @@ -1,56 +0,0 @@ -:original_name: cce_bulletin_0003.html - -.. _cce_bulletin_0003: - -Kubernetes Version Support Mechanism -==================================== - -This section explains versioning in CCE, and the policies for Kubernetes version support. - -Version Description -------------------- - -**Version number**: The format is **x.y.z**, where **x.y** is the major version and **z** is the minor version. If the version number is followed by **-r**, the version is a patch version, for example, v1.15.6-r1. - -|image1| - -Version Requirements --------------------- - -.. important:: - - **Offline**: After a version is brought offline, a cluster of this version cannot be created on the CCE console and no new features will be released for the clusters of this version. - - **Obsolete**: CCE will no longer provide support for this version, including release of new functions, community bug fixes, vulnerability management, and upgrade. - -CCE releases only odd major Kubernetes versions, such as v1.25, v1.23, and v1.21. The specific version support policies in different scenarios are as follows: - -- Cluster creation - - CCE allows you to create clusters of two latest major Kubernetes versions, for example, v1.25 and v1.23. When v1.25 is commercially available, support for earlier versions (such as v1.21) will be removed. In this case, you will not be able to create clusters of v1.21 on the CCE console. - -- Cluster maintenance - - CCE maintains clusters of four major Kubernetes versions at most, such as v1.25, v1.23, v1.21, and v1.19. For example, after v1.25 is commercially available, support for v1.17 will be removed. - - |image2| - -- Cluster upgrade - - CCE allows you to upgrade clusters of **three major versions** at the same time. Clusters of 1.19 and later versions can be upgraded skipping one major version at most (for example, from 1.19 directly to 1.23). Each version is maintained for one year. For example, after v1.25 is available, support for earlier versions (such as v1.17) will be removed. You are advised to upgrade your Kubernetes cluster before the maintenance period ends. 
- - - Cluster version upgrade: After the latest major version (for example, v1.25) is available, CCE allows you to upgrade clusters to the last stable version of the second-latest major version, for example, v1.23. For details, see :ref:`Upgrade Overview `. - - Cluster patch upgrade: For existing clusters running on the live network, if there are major Kubernetes issues or vulnerabilities, CCE will perform the patch upgrade on these clusters in the background. Users are unaware of the patch upgrade. If the patch upgrade has adverse impact on user services, CCE will release a notice one week in advance. - -Version Release Cycle ---------------------- - -Kubernetes releases a major version in about four months. CCE will provide support to mirror the new Kubernetes version in about seven months after the version release. - -Version Constraints -------------------- - -After a cluster is upgraded, it cannot be rolled back to the source version. - -.. |image1| image:: /_static/images/en-us_image_0000001460905374.png -.. |image2| image:: /_static/images/en-us_image_0000001461224886.png diff --git a/umn/source/product_bulletin/os_patch_notes_for_cluster_nodes.rst b/umn/source/product_bulletin/os_patch_notes_for_cluster_nodes.rst index aa6534c..750fb1d 100644 --- a/umn/source/product_bulletin/os_patch_notes_for_cluster_nodes.rst +++ b/umn/source/product_bulletin/os_patch_notes_for_cluster_nodes.rst @@ -8,39 +8,33 @@ OS Patch Notes for Cluster Nodes Nodes in Hybrid Clusters ------------------------ -CCE nodes in Hybrid clusters can run on EulerOS 2.5, EulerOS 2.9, CentOS 7.7 and Ubuntu 22.04. The following table lists the supported patches for these OSs. +CCE nodes in Hybrid clusters can run on EulerOS 2.5, EulerOS 2.9and Ubuntu 22.04. You are not advised to use the CentOS 7.7 image to create nodes because the OS maintenance has stopped. + +The following table lists the supported patches for these OSs. .. 
table:: **Table 1** Node OS patches - +--------------------------+-----------------+-------------------------------------------+ - | OS | Cluster Version | Latest Kernel | - +==========================+=================+===========================================+ - | EulerOS release 2.5 | v1.25 | 3.10.0-862.14.1.5.h687.eulerosv2r7.x86_64 | - +--------------------------+-----------------+-------------------------------------------+ - | | v1.23 | 3.10.0-862.14.1.5.h687.eulerosv2r7.x86_64 | - +--------------------------+-----------------+-------------------------------------------+ - | | v1.21 | 3.10.0-862.14.1.5.h687.eulerosv2r7.x86_64 | - +--------------------------+-----------------+-------------------------------------------+ - | | v1.19 | 3.10.0-862.14.1.5.h687.eulerosv2r7.x86_64 | - +--------------------------+-----------------+-------------------------------------------+ - | EulerOS release 2.9 | v1.25 | 4.18.0-147.5.1.6.h766.eulerosv2r9.x86_64 | - +--------------------------+-----------------+-------------------------------------------+ - | | v1.23 | 4.18.0-147.5.1.6.h766.eulerosv2r9.x86_64 | - +--------------------------+-----------------+-------------------------------------------+ - | | v1.21 | 4.18.0-147.5.1.6.h766.eulerosv2r9.x86_64 | - +--------------------------+-----------------+-------------------------------------------+ - | | v1.19 | 4.18.0-147.5.1.6.h766.eulerosv2r9.x86_64 | - +--------------------------+-----------------+-------------------------------------------+ - | CentOS Linux release 7.7 | v1.25 | 3.10.0-1160.76.1.el7.x86_64 | - +--------------------------+-----------------+-------------------------------------------+ - | | v1.23 | 3.10.0-1160.76.1.el7.x86_64 | - +--------------------------+-----------------+-------------------------------------------+ - | | v1.21 | 3.10.0-1160.76.1.el7.x86_64 | - +--------------------------+-----------------+-------------------------------------------+ - | | v1.19 | 3.10.0-1160.76.1.el7.x86_64 | - +--------------------------+-----------------+-------------------------------------------+ - | Ubuntu 22.04 | v1.25 | 5.15.0-53-generic | - +--------------------------+-----------------+-------------------------------------------+ + +---------------------+-----------------+-------------------------------------------+ + | OS | Cluster Version | Latest Kernel | + +=====================+=================+===========================================+ + | EulerOS release 2.5 | v1.25 | 3.10.0-862.14.1.5.h687.eulerosv2r7.x86_64 | + +---------------------+-----------------+-------------------------------------------+ + | | v1.23 | 3.10.0-862.14.1.5.h687.eulerosv2r7.x86_64 | + +---------------------+-----------------+-------------------------------------------+ + | | v1.21 | 3.10.0-862.14.1.5.h687.eulerosv2r7.x86_64 | + +---------------------+-----------------+-------------------------------------------+ + | | v1.19 | 3.10.0-862.14.1.5.h687.eulerosv2r7.x86_64 | + +---------------------+-----------------+-------------------------------------------+ + | EulerOS release 2.9 | v1.25 | 4.18.0-147.5.1.6.h766.eulerosv2r9.x86_64 | + +---------------------+-----------------+-------------------------------------------+ + | | v1.23 | 4.18.0-147.5.1.6.h766.eulerosv2r9.x86_64 | + +---------------------+-----------------+-------------------------------------------+ + | | v1.21 | 4.18.0-147.5.1.6.h766.eulerosv2r9.x86_64 | + +---------------------+-----------------+-------------------------------------------+ + | | v1.19 | 4.18.0-147.5.1.6.h766.eulerosv2r9.x86_64 
| + +---------------------+-----------------+-------------------------------------------+ + | Ubuntu 22.04 | v1.25 | 5.15.0-53-generic | + +---------------------+-----------------+-------------------------------------------+ .. table:: **Table 2** Mappings between BMS node OS versions and cluster versions @@ -54,34 +48,26 @@ CCE nodes in Hybrid clusters can run on EulerOS 2.5, EulerOS 2.9, CentOS 7.7 and .. table:: **Table 3** Mappings between OS versions and network model - +--------------------------+-----------------+-------------+----------------+--------------------------+ - | OS Version | Cluster Version | VPC Network | Tunnel Network | Cloud Native Network 2.0 | - +==========================+=================+=============+================+==========================+ - | Ubuntu 22.04 | v1.25 | Y | x | Y | - +--------------------------+-----------------+-------------+----------------+--------------------------+ - | CentOS Linux release 7.7 | v1.25 | Y | Y | Y | - +--------------------------+-----------------+-------------+----------------+--------------------------+ - | | v1.23 | Y | Y | Y | - +--------------------------+-----------------+-------------+----------------+--------------------------+ - | | v1.21 | Y | Y | Y | - +--------------------------+-----------------+-------------+----------------+--------------------------+ - | | v1.19 | Y | Y | Y | - +--------------------------+-----------------+-------------+----------------+--------------------------+ - | EulerOS release 2.9 | v1.25 | Y | Y | Y | - +--------------------------+-----------------+-------------+----------------+--------------------------+ - | | v1.23 | Y | Y | Y | - +--------------------------+-----------------+-------------+----------------+--------------------------+ - | | v1.21 | Y | Y | Y | - +--------------------------+-----------------+-------------+----------------+--------------------------+ - | | v1.19 | Y | Y | Y | - +--------------------------+-----------------+-------------+----------------+--------------------------+ - | EulerOS release 2.5 | v1.25 | Y | Y | Y | - +--------------------------+-----------------+-------------+----------------+--------------------------+ - | | v1.23 | Y | Y | Y | - +--------------------------+-----------------+-------------+----------------+--------------------------+ - | | v1.21 | Y | Y | Y | - +--------------------------+-----------------+-------------+----------------+--------------------------+ - | | v1.19 | Y | Y | Y | - +--------------------------+-----------------+-------------+----------------+--------------------------+ + +---------------------+-----------------+-------------+----------------+--------------------------+ + | OS Version | Cluster Version | VPC Network | Tunnel Network | Cloud Native Network 2.0 | + +=====================+=================+=============+================+==========================+ + | Ubuntu 22.04 | v1.25 | Y | x | Y | + +---------------------+-----------------+-------------+----------------+--------------------------+ + | EulerOS release 2.9 | v1.25 | Y | Y | Y | + +---------------------+-----------------+-------------+----------------+--------------------------+ + | | v1.23 | Y | Y | Y | + +---------------------+-----------------+-------------+----------------+--------------------------+ + | | v1.21 | Y | Y | Y | + +---------------------+-----------------+-------------+----------------+--------------------------+ + | | v1.19 | Y | Y | Y | + 
+---------------------+-----------------+-------------+----------------+--------------------------+ + | EulerOS release 2.5 | v1.25 | Y | Y | Y | + +---------------------+-----------------+-------------+----------------+--------------------------+ + | | v1.23 | Y | Y | Y | + +---------------------+-----------------+-------------+----------------+--------------------------+ + | | v1.21 | Y | Y | Y | + +---------------------+-----------------+-------------+----------------+--------------------------+ + | | v1.19 | Y | Y | Y | + +---------------------+-----------------+-------------+----------------+--------------------------+ The OS patches and verification results will be updated from time to time. You can update the operating system based on your needs. diff --git a/umn/source/workloads/volcano_scheduling/hybrid_deployment_of_online_and_offline_jobs.rst b/umn/source/scheduling/cloud_native_hybrid_deployment/dynamic_resource_oversubscription.rst similarity index 61% rename from umn/source/workloads/volcano_scheduling/hybrid_deployment_of_online_and_offline_jobs.rst rename to umn/source/scheduling/cloud_native_hybrid_deployment/dynamic_resource_oversubscription.rst index 3f1ff75..55570c7 100644 --- a/umn/source/workloads/volcano_scheduling/hybrid_deployment_of_online_and_offline_jobs.rst +++ b/umn/source/scheduling/cloud_native_hybrid_deployment/dynamic_resource_oversubscription.rst @@ -2,19 +2,8 @@ .. _cce_10_0384: -Hybrid Deployment of Online and Offline Jobs -============================================ - -Online and Offline Jobs ------------------------ - -Jobs can be classified into online jobs and offline jobs based on whether services are always online. - -- **Online job**: Such jobs run for a long time, with regular traffic surges, tidal resource requests, and high requirements on SLA, such as advertising and e-commerce services. -- **Offline jobs**: Such jobs run for a short time, have high computing requirements, and can tolerate high latency, such as AI and big data services. - -Resource Oversubscription and Hybrid Deployment ------------------------------------------------ +Dynamic Resource Oversubscription +================================= Many services see surges in traffic. To ensure performance and stability, resources are often requested at the maximum needed. However, the surges may ebb very shortly and resources, if not released, are wasted in non-peak hours. Especially for online jobs that request a large quantity of resources to ensure SLA, resource utilization can be as low as it gets. @@ -23,13 +12,17 @@ Resource oversubscription is the process of making use of idle requested resourc Hybrid deployment of online and offline jobs in a cluster can better utilize cluster resources. -.. figure:: /_static/images/en-us_image_0000001568902489.png +.. figure:: /_static/images/en-us_image_0000001647576720.png :alt: **Figure 1** Resource oversubscription **Figure 1** Resource oversubscription -Oversubscription for Hybrid Deployment --------------------------------------- +Features +-------- + +.. note:: + + After dynamic resource oversubscription and elastic scaling are enabled in a node pool, oversubscribed resources change rapidly because the resource usage of high-priority applications changes in real time. To prevent frequent node scale-ins and scale-outs, do not consider oversubscribed resources when evaluating node scale-ins. Hybrid deployment is supported, and CPU and memory resources can be oversubscribed. 
The key features are as follows: @@ -60,60 +53,57 @@ Hybrid deployment is supported, and CPU and memory resources can be oversubscrib - Resource oversubscription and hybrid deployment: - If only hybrid deployment is used, you need to configure the label **volcano.sh/colocation=true** for the node and delete the node label **volcano.sh/oversubscription** or set its value to **false**. + If only hybrid deployment is used, configure the label **volcano.sh/colocation=true** for the node and delete the node label **volcano.sh/oversubscription** or set its value to **false**. If the label **volcano.sh/colocation=true** is configured for a node, hybrid deployment is enabled. If the label **volcano.sh/oversubscription=true** is configured, resource oversubscription is enabled. The following table lists the available feature combinations after hybrid deployment or resource oversubscription is enabled. - +--------------------------------------------------------+----------------------------------------------------------------------+-------------------------------+----------------------------------------------------------------------------------------+ - | Hybrid Deployment Enabled (volcano.sh/colocation=true) | Resource oversubscription Enabled (volcano.sh/oversubscription=true) | Use Oversubscribed Resources? | Conditions for Evicting Offline Pods | - +========================================================+======================================================================+===============================+========================================================================================+ - | No | No | No | None | - +--------------------------------------------------------+----------------------------------------------------------------------+-------------------------------+----------------------------------------------------------------------------------------+ - | Yes | No | No | The node resource usage exceeds the high threshold. | - +--------------------------------------------------------+----------------------------------------------------------------------+-------------------------------+----------------------------------------------------------------------------------------+ - | No | Yes | Yes | The node resource usage exceeds the high threshold, and the node request exceeds 100%. | - +--------------------------------------------------------+----------------------------------------------------------------------+-------------------------------+----------------------------------------------------------------------------------------+ - | Yes | Yes | Yes | The node resource usage exceeds the high threshold. 
| - +--------------------------------------------------------+----------------------------------------------------------------------+-------------------------------+----------------------------------------------------------------------------------------+ + +--------------------------------------------------------+----------------------------------------------------------------------+------------------------------+----------------------------------------------------------------------------------------+ + | Hybrid Deployment Enabled (volcano.sh/colocation=true) | Resource oversubscription Enabled (volcano.sh/oversubscription=true) | Use Oversubscribed Resources | Conditions for Evicting Offline Pods | + +========================================================+======================================================================+==============================+========================================================================================+ + | No | No | No | None | + +--------------------------------------------------------+----------------------------------------------------------------------+------------------------------+----------------------------------------------------------------------------------------+ + | Yes | No | No | The node resource usage exceeds the high threshold. | + +--------------------------------------------------------+----------------------------------------------------------------------+------------------------------+----------------------------------------------------------------------------------------+ + | No | Yes | Yes | The node resource usage exceeds the high threshold, and the node request exceeds 100%. | + +--------------------------------------------------------+----------------------------------------------------------------------+------------------------------+----------------------------------------------------------------------------------------+ + | Yes | Yes | Yes | The node resource usage exceeds the high threshold. | + +--------------------------------------------------------+----------------------------------------------------------------------+------------------------------+----------------------------------------------------------------------------------------+ -Constraints ------------ +kubelet Oversubscription +------------------------ -**Specifications** +.. important:: -- Kubernetes version: + **Specifications** - - v1.19: v1.19.16-r4 or later - - v1.21: v1.21.7-r0 or later - - v1.23: v1.23.5-r0 or later - - v1.25 or later + - Cluster Version -- Cluster type: CCE or CCE Turbo -- Node OS: EulerOS 2.9 (kernel-4.18.0-147.5.1.6.h729.6.eulerosv2r9.x86_64) -- Node type: ECS -- volcano add-on version: 1.7.0 or later + - v1.19: v1.19.16-r4 or later + - v1.21: v1.21.7-r0 or later + - v1.23: v1.23.5-r0 or later + - v1.25 or later -**Constraints** + - Cluster Type: CCE or CCE Turbo + - Node OS: EulerOS 2.9 (kernel-4.18.0-147.5.1.6.h729.6.eulerosv2r9.x86_64) + - Node Type: ECS + - The volcano add-on version: 1.7.0 or later -- Before enabling the volcano oversubscription plug-in, ensure that the overcommit plug-in is not enabled. -- Modifying the label of an oversubscribed node does not affect the running pods. -- Running pods cannot be converted between online and offline services. To convert services, you need to rebuild pods. -- If the label **volcano.sh/oversubscription=true** is configured for a node in the cluster, the **oversubscription** configuration must be added to the volcano add-on. 
Otherwise, the scheduling of oversubscribed nodes will be abnormal. Ensure that you have correctly configure labels because the scheduler does not check the add-on and node configurations. For details about the labels, see :ref:`Configuring Oversubscription Labels for Scheduling `. -- To disable oversubscription, perform the following operations: + **Constraints** - - Remove the **volcano.sh/oversubscription** label from the oversubscribed node. - - Set **over-subscription-resource** to **false**. - - Modify the configmap of the volcano scheduler named **volcano-scheduler-configmap** and remove the oversubscription add-on. + - Before enabling oversubscription, ensure that the overcommit add-on is not enabled on volcano. + - Modifying the label of an oversubscribed node does not affect the running pods. + - Running pods cannot be converted between online and offline services. To convert services, you need to rebuild pods. + - If the label **volcano.sh/oversubscription=true** is configured for a node in the cluster, the **oversubscription** configuration must be added to the volcano add-on. Otherwise, the scheduling of oversold nodes will be abnormal. Ensure that you have correctly configure labels because the scheduler does not check the add-on and node configurations. For details about the labels, see :ref:`Table 1 `. + - To disable oversubscription, perform the following operations: -- If **cpu-manager-policy** is set to static core binding on a node, do not assign the QoS class of Guaranteed to offline pods. If core binding is required, change the pods to online pods. Otherwise, offline pods may occupy the CPUs of online pods, causing online pod startup failures, and offline pods fail to be started although they are successfully scheduled. -- If **cpu-manager-policy** is set to static core binding on a node, do not bind cores to all online pods. Otherwise, online pods occupy all CPU or memory resources, leaving a small number of oversubscribed resources. + - Remove the **volcano.sh/oversubscription** label from the oversubscribed node. + - Set **over-subscription-resource** to **false**. + - Modify the configmap of the volcano scheduler named **volcano-scheduler-configmap** and remove the oversubscription add-on. -.. _cce_10_0384__section1940910414220: + - If **cpu-manager-policy** is set to static core binding on a node, do not assign the QoS class of Guaranteed to offline pods. If core binding is required, change the pods to online pods. Otherwise, offline pods may occupy the CPUs of online pods, causing online pod startup failures, and offline pods fail to be started although they are successfully scheduled. + - If **cpu-manager-policy** is set to static core binding on a node, do not bind cores to all online pods. Otherwise, online pods occupy all CPU or memory resources, leaving a small number of oversubscribed resources. -Configuring Oversubscription Labels for Scheduling --------------------------------------------------- - -If the label **volcano.sh/oversubscription=true** is configured for a node in the cluster, the **oversubscription** configuration must be added to the volcano add-on. Otherwise, the scheduling of oversubscribed nodes will be abnormal. For details about the related configuration, see :ref:`Table 1 `. +If the label **volcano.sh/oversubscription=true** is configured for a node in the cluster, the **oversubscription** configuration must be added to the volcano add-on. Otherwise, the scheduling of oversold nodes will be abnormal. 
For details about the related configuration, see :ref:`Table 1 `. Ensure that you have correctly configure labels because the scheduler does not check the add-on and node configurations. @@ -133,14 +123,11 @@ Ensure that you have correctly configure labels because the scheduler does not c | No | Yes | Not triggered or failed. Avoid this configuration. | +----------------------------+--------------------------------+----------------------------------------------------+ -Using Hybrid Deployment ------------------------ - #. Configure the volcano add-on. a. Use kubectl to connect to the cluster. - b. Install the volcano plug-in and add the **oversubscription** plug-in to **volcano-scheduler-configmap**. Ensure that the plug-in configuration does not contain the **overcommit** plug-in. If **- name: overcommit** exists, delete this configuration. + b. Install the volcano add-on and add the oversubscription add-on to **volcano-scheduler-configmap**. Ensure that the add-on configuration does not contain the overcommit add-on. If **- name: overcommit** exists, delete this configuration. In addition, set **enablePreemptable** and **enableJobStarving** of the gang add-on to **false** and configure a preemption action. .. code-block:: @@ -148,10 +135,12 @@ Using Hybrid Deployment apiVersion: v1 data: volcano-scheduler.conf: | - actions: "enqueue, allocate, backfill" + actions: "enqueue, allocate, preempt" # Configure a preemption action. tiers: - plugins: - name: gang + enablePreemptable: false + enableJobStarving: false - name: priority - name: conformance - name: oversubscription @@ -171,7 +160,9 @@ Using Hybrid Deployment a. Create a node pool. b. Choose **More** > **Manage** in the **Operation** column of the created node pool. - c. In the **Manage Component** window that is displayed, set **over-subscription-resource** under **kubelet** to **true** and click **OK**. + c. In the **Manage Components** window that is displayed, set **over-subscription-resource** under **kubelet** to **true** and click **OK**. + + |image1| #. Set the node oversubscription label. @@ -231,9 +222,33 @@ Using Hybrid Deployment | | The default value is **cpu,memory**. | +-------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------+ -#. Deploy online and offline jobs. +#. Create resources at a high- and low-priorityClass, respectively. - The **volcano.sh/qos-level** label needs to be added to annotation to distinguish offline jobs. The value is an integer ranging from -7 to 7. If the value is less than 0, the job is an offline job. If the value is greater than or equal to 0, the job is a high-priority job, that is, online job. You do not need to set this label for online jobs. For both online and offline jobs, set **schedulerName** to **volcano** to enable the Volcano scheduler. + .. code-block:: + + cat < 4h58m v1.19.16-r2-CCE22.5.1 192.168.0.3 Ready 148m v1.19.16-r2-CCE22.5.1 - - 192.168.0.173 is an oversubscribed node (with the **volcano.sh/oversubscirption=true** label). - - 192.168.0.3 is a non-oversubscribed node (without the **volcano.sh/oversubscirption=true** label). + - 192.168.0.173 is an oversubscribed node (with the **volcano.sh/oversubscription=true** label). + - 192.168.0.3 is a non-oversubscribed node (without the **volcano.sh/oversubscription=true** label). .. 
code-block:: @@ -343,9 +360,10 @@ The following uses an example to describe how to deploy online and offline jobs labels: app: offline annotations: - volcano.sh/qos-level: "-1" #Offline job label + volcano.sh/qos-level: "-1" # Offline job label spec: - schedulerName: volcano # The Volcano scheduler is used. + schedulerName: volcano # The volcano scheduler is used. + priorityClassName: testing # Configure the testing priorityClass. containers: - name: container-1 image: nginx:latest @@ -390,7 +408,8 @@ The following uses an example to describe how to deploy online and offline jobs labels: app: online spec: - schedulerName: volcano # The Volcano scheduler is used. + schedulerName: volcano # The volcano scheduler is used. + priorityClassName: production # Configure the production priorityClass. containers: - name: container-1 image: resource_consumer:latest @@ -435,7 +454,7 @@ The following uses an example to describe how to deploy online and offline jobs labels: app: online spec: - affinity: # Submit an online job to an oversubscribed node. + affinity: # Submit an online job to an oversubscribed node. nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: @@ -444,7 +463,8 @@ The following uses an example to describe how to deploy online and offline jobs operator: In values: - 192.168.0.173 - schedulerName: volcano # The Volcano scheduler is used. + schedulerName: volcano # The volcano scheduler is used. + priorityClassName: production # Configure the production priorityClass. containers: - name: container-1 image: resource_consumer:latest @@ -504,132 +524,6 @@ The following uses an example to describe how to deploy online and offline jobs online-6f44bb68bd-b8z9p 1/1 Running 0 24m 192.168.10.18 192.168.0.173 online-6f44bb68bd-g6xk8 1/1 Running 0 24m 192.168.10.69 192.168.0.173 -#. Log in to the CCE console and click the cluster name to access the cluster console. - -#. In the navigation pane on the left, choose **Nodes**. Click the **Node Pools** tab. When creating or updating a node pool, enable hybrid deployment of online and offline services in **Advanced Settings**. - -#. In the navigation pane on the left, choose **Add-ons**. Click **Install** under volcano. In the **Advanced Settings** area, set **colocation_enable** to **true** to enable hybrid deployment of online and offline services. For details about the installation, see :ref:`volcano `. - - If the volcano add-on has been installed, click **Edit** to view or modify the parameter **colocation_enable**. - -#. Enable CPU Burst. - - After confirming that the volcano add-on is working, run the following command to edit the parameter **configmap** of **volcano-agent-configuration** in the namespace **kube-system**. If **enable** is set to **true**, CPU Burst is enabled. If **enable** is set to **false**, CPU Burst is disabled. - - .. code-block:: - - kubectl edit configmap -nkube-system volcano-agent-configuration - - For example: - - .. code-block:: - - cpuBurstConfig: - enable: true - -#. Deploy a workload in a node pool where hybrid deployment has been enabled. Take Nginx as an example. Set **cpu** under **requests** to **2** and **cpu** under **limits** to **4**, and create a Service that can be accessed in the cluster for the workload. - - .. 
code-block:: - - apiVersion: apps/v1 - kind: Deployment - metadata: - name: nginx - namespace: default - spec: - replicas: 2 - selector: - matchLabels: - app: nginx - template: - metadata: - labels: - app: nginx - annotations: - volcano.sh/enable-quota-burst=true - volcano.sh/quota-burst-time=200000 - spec: - containers: - - name: container-1 - image: nginx:latest - resources: - limits: - cpu: "4" - requests: - cpu: "2" - imagePullSecrets: - - name: default-secret - --- - apiVersion: v1 - kind: Service - metadata: - name: nginx - namespace: default - labels: - app: nginx - spec: - selector: - app: nginx - ports: - - name: cce-service-0 - targetPort: 80 - nodePort: 0 - port: 80 - protocol: TCP - type: ClusterIP - - +------------------------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Annotation | Mandatory | Description | - +====================================+=======================+=================================================================================================================================================================================================================================================================================================================================================+ - | volcano.sh/enable-quota-burst=true | Yes | CPU Burst is enabled for the workload. | - +------------------------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volcano.sh/quota-burst-time=200000 | No | To ensure CPU scheduling stability and reduce contention when multiple containers encounter CPU bursts at the same time, the default **CPU Burst** value is the same as the **CPU Quota** value. That is, a container can use a maximum of twice the **CPU Limit** value. By default, **CPU Burst** is set for all service containers in a pod. | - | | | | - | | | In this example, the **CPU Limit** of the container is **4**, that is, the default value is **400,000** (1 core = 100,000), indicating that a maximum of four additional cores can be used after the **CPU Limit** value is reached. | - +------------------------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -#. Verify CPU Burst. - - You can use the wrk tool to increase load of the workload and observe the service latency, traffic limiting, and CPU limit exceeding when CPU Burst is enabled and disabled, respectively. - - a. Run the following command to increase load of the pod. *$service_ip* indicates the service IP address associated with the pod. - - .. code-block:: - - # You need to download and install the wrk tool on the node. 
- # The Gzip compression module is enabled in the Apache configuration to simulate the computing logic for the server to process requests. - # Run the following command to increase the load. Note that you need to change the IP address of the target application. - wrk -H "Accept-Encoding: deflate, gzip" -t 4 -c 28 -d 120 --latency --timeout 2s http://$service_ip - - b. Obtain the pod ID. - - .. code-block:: - - kubectl get pods -n -o jsonpath='{.metadata.uid}' - - c. You can run the following command on the node to view the traffic limiting status and CPU limit exceeding status. In the command, *$PodID* indicates the pod ID. - - .. code-block:: - - $cat /sys/fs/cgroup/cpuacct/kubepods/$PodID/cpu.stat - nr_periods 0 # Number of scheduling periods - nr_throttled 0 # Traffic limiting times - throttled_time 0 # Traffic limiting duration (ns) - nr_bursts 0 # CPU Limit exceeding times - burst_time 0 # Total Limit exceeding duration - - .. table:: **Table 3** Result summary in this example - - +-----------------------+-------------+------------------------+---------------------------+-----------------------+--------------------------------+ - | CPU Burst | P99 Latency | nr_throttled | throttled_time | nr_bursts | bursts_time | - | | | | | | | - | | | Traffic Limiting Times | Traffic Limiting Duration | Limit Exceeding Times | Total Limit Exceeding Duration | - +=======================+=============+========================+===========================+=======================+================================+ - | CPU Burst not enabled | 2.96 ms | 986 | 14.3s | 0 | 0 | - +-----------------------+-------------+------------------------+---------------------------+-----------------------+--------------------------------+ - | CPU Burst enabled | 456 µs | 0 | 0 | 469 | 3.7s | - +-----------------------+-------------+------------------------+---------------------------+-----------------------+--------------------------------+ - Handling Suggestions -------------------- @@ -642,3 +536,9 @@ Handling Suggestions - You can add oversubscribed resources (such as CPU and memory) at any time. You can reduce the oversubscribed resource types only when the resource allocation rate does not exceed 100%. + +- If an offline job is deployed on a node ahead of an online job and the online job cannot be scheduled due to insufficient resources, configure a higher priorityClass for the online job than that for the offline job. + +- If there are only online jobs on a node and the eviction threshold is reached, the offline jobs that are scheduled to the current node will be evicted soon. This is normal. + +.. |image1| image:: /_static/images/en-us_image_0000001647576724.png diff --git a/umn/source/scheduling/cloud_native_hybrid_deployment/index.rst b/umn/source/scheduling/cloud_native_hybrid_deployment/index.rst new file mode 100644 index 0000000..6e3a365 --- /dev/null +++ b/umn/source/scheduling/cloud_native_hybrid_deployment/index.rst @@ -0,0 +1,14 @@ +:original_name: cce_10_0709.html + +.. _cce_10_0709: + +Cloud Native Hybrid Deployment +============================== + +- :ref:`Dynamic Resource Oversubscription ` + +.. 
toctree:: + :maxdepth: 1 + :hidden: + + dynamic_resource_oversubscription diff --git a/umn/source/workloads/cpu_core_binding/binding_cpu_cores.rst b/umn/source/scheduling/cpu_scheduling/cpu_policy.rst similarity index 63% rename from umn/source/workloads/cpu_core_binding/binding_cpu_cores.rst rename to umn/source/scheduling/cpu_scheduling/cpu_policy.rst index d489906..ef55a32 100644 --- a/umn/source/workloads/cpu_core_binding/binding_cpu_cores.rst +++ b/umn/source/scheduling/cpu_scheduling/cpu_policy.rst @@ -2,8 +2,11 @@ .. _cce_10_0351: -Binding CPU Cores -================= +CPU Policy +========== + +Scenarios +--------- By default, kubelet uses `CFS quotas `__ to enforce pod CPU limits. When the node runs many CPU-bound pods, the workload can move to different CPU cores depending on whether the pod is throttled and which CPU cores are available at scheduling time. Many workloads are not sensitive to this migration and thus work fine without any intervention. Some applications are CPU-sensitive. They are sensitive to: @@ -13,40 +16,34 @@ By default, kubelet uses `CFS quotas `. -- Both **requests** and **limits** must be set in the pod definition and their values must be the same. -- The value of **requests** must be an integer for the container. -- For an init container, it is recommended that you set its **requests** to the same as that of the service container. Otherwise, the service container does not inherit the CPU allocation result of the init container, and the CPU manager reserves more CPU resources than supposed. For more information, see `App Containers can't inherit Init Containers CPUs - CPU Manager Static Policy `__. - -You can use :ref:`Scheduling Policy (Affinity/Anti-affinity) ` to schedule the configured pods to the nodes where the static CPU policy is enabled. In this way, cores can be bound. +If your workloads are sensitive to any of these items and CPU cache affinity and scheduling latency significantly affect workload performance, kubelet allows alternative CPU management policies (CPU binding) to determine some placement preferences on the node. The CPU manager preferentially allocates resources on a socket and full physical cores to avoid interference. .. _cce_10_0351__section173918176434: Enabling the CPU Management Policy ---------------------------------- -A `CPU management policy `__ is specified by the kubelet flag **--cpu-manager-policy**. The following policies are supported: +A `CPU management policy `__ is specified by the kubelet flag **--cpu-manager-policy**. By default, Kubernetes supports the following policies: - Disabled (**none**): the default policy. The **none** policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the OS scheduler does automatically. -- Enabled (**static**): The **static** policy allows containers in **Guaranteed** pods with integer CPU requests to be granted increased CPU affinity and exclusivity on the node. +- Enabled (**static**): The **static** policy allows containers in `guaranteed `__ pods with integer GPU requests to be granted increased CPU affinity and exclusivity on the node. -When creating a cluster, you can configure the CPU management policy in **Advanced Settings**, as shown in the following figure. - -|image1| +When creating a cluster, you can configure the CPU management policy in **Advanced Settings**. You can also configure the policy in a node pool. The configuration will change the kubelet flag **--cpu-manager-policy** on the node. 
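After the setting is delivered to the node (see the console steps that follow), you can check whether the static policy has actually taken effect. The following is a minimal verification sketch, not part of the official procedure; it assumes the kubelet uses its default root directory (**/var/lib/kubelet**) and that you can log in to the node. The output shown is only an illustrative example.

.. code-block::

   # Check the CPU manager state file maintained by kubelet on the node.
   cat /var/lib/kubelet/cpu_manager_state
   # Example output when the static policy is enabled (values are illustrative):
   # {"policyName":"static","defaultCpuSet":"0,2-15","checksum":1337762326}

If **policyName** is still **none**, the node has not yet picked up the new **cpu-manager-policy** setting.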
Log in to the CCE console, click the cluster name, access the cluster details page, and choose **Nodes** in the navigation pane. On the page displayed, click the **Node Pools** tab. Choose **More** > **Manage** in the **Operation** column of the target node pool, and change the value of **cpu-manager-policy** to **static**. -Pod Configuration ------------------ +Allowing Pods to Exclusively Use the CPU Resources +-------------------------------------------------- -For CPU, both **requests** and **limits** must be set to the same, and **requests** must be an integer. +Prerequisites: + +- Enable the **static** policy on the node. For details, see :ref:`Enabling the CPU Management Policy `. +- Both requests and limits must be configured in pods and their values must be the same integer. +- If an init container needs to exclusively use CPUs, set its requests to the same as that of the service container. Otherwise, the service container does not inherit the CPU allocation result of the init container, and the CPU manager reserves more CPU resources than supposed. For more information, see `App Containers can't inherit Init Containers CPUs - CPU Manager Static Policy `__. + +You can use :ref:`Scheduling Policy (Affinity/Anti-affinity) ` to schedule the configured pods to the nodes where the **static** policy is enabled. In this way, the pods can exclusively use the CPU resources. + +Example YAML: .. code-block:: @@ -76,5 +73,3 @@ For CPU, both **requests** and **limits** must be set to the same, and **request memory: 2048Mi imagePullSecrets: - name: default-secret - -.. |image1| image:: /_static/images/en-us_image_0000001569022837.png diff --git a/umn/source/workloads/cpu_core_binding/index.rst b/umn/source/scheduling/cpu_scheduling/index.rst similarity index 50% rename from umn/source/workloads/cpu_core_binding/index.rst rename to umn/source/scheduling/cpu_scheduling/index.rst index 05441bd..a0f4354 100644 --- a/umn/source/workloads/cpu_core_binding/index.rst +++ b/umn/source/scheduling/cpu_scheduling/index.rst @@ -2,13 +2,13 @@ .. _cce_10_0551: -CPU Core Binding -================ +CPU Scheduling +============== -- :ref:`Binding CPU Cores ` +- :ref:`CPU Policy ` .. toctree:: :maxdepth: 1 :hidden: - binding_cpu_cores + cpu_policy diff --git a/umn/source/workloads/gpu_scheduling.rst b/umn/source/scheduling/gpu_scheduling/default_gpu_scheduling_in_kubernetes.rst similarity index 87% rename from umn/source/workloads/gpu_scheduling.rst rename to umn/source/scheduling/gpu_scheduling/default_gpu_scheduling_in_kubernetes.rst index 21bac57..85c738f 100644 --- a/umn/source/workloads/gpu_scheduling.rst +++ b/umn/source/scheduling/gpu_scheduling/default_gpu_scheduling_in_kubernetes.rst @@ -2,8 +2,8 @@ .. _cce_10_0345: -GPU Scheduling -============== +Default GPU Scheduling in Kubernetes +==================================== You can use GPUs in CCE containers. @@ -12,9 +12,9 @@ Prerequisites - A GPU node has been created. For details, see :ref:`Creating a Node `. -- The gpu-beta add-on has been installed. During the installation, select the GPU driver on the node. For details, see :ref:`gpu-beta `. +- The gpu-device-plugin (previously gpu-beta add-on) has been installed. During the installation, select the GPU driver on the node. For details, see :ref:`gpu-beta `. -- gpu-beta mounts the driver directory to **/usr/local/nvidia/lib64**. To use GPU resources in a container, you need to add **/usr/local/nvidia/lib64** to the **LD_LIBRARY_PATH** environment variable. 
+- gpu-device-plugin mounts the driver directory to **/usr/local/nvidia/lib64**. To use GPU resources in a container, add **/usr/local/nvidia/lib64** to the **LD_LIBRARY_PATH** environment variable. Generally, you can use any of the following methods to add a file: @@ -77,6 +77,10 @@ Create a workload and request GPUs. You can specify the number of GPUs as follow **nvidia.com/gpu** specifies the number of GPUs to be requested. The value can be smaller than **1**. For example, **nvidia.com/gpu: 0.5** indicates that multiple pods share a GPU. In this case, all the requested GPU resources come from the same GPU card. +.. note:: + + When you use **nvidia.com/gpu** to specify the number of GPUs, the values of requests and limits must be the same. + After **nvidia.com/gpu** is specified, workloads will not be scheduled to nodes without GPUs. If the node is GPU-starved, Kubernetes events similar to the following are reported: - 0/2 nodes are available: 2 Insufficient nvidia.com/gpu. @@ -84,12 +88,6 @@ After **nvidia.com/gpu** is specified, workloads will not be scheduled to nodes To use GPUs on the CCE console, select the GPU quota and specify the percentage of GPUs reserved for the container when creating a workload. - -.. figure:: /_static/images/en-us_image_0000001569022929.png - :alt: **Figure 1** Using GPUs - - **Figure 1** Using GPUs - GPU Node Labels --------------- diff --git a/umn/source/scheduling/gpu_scheduling/index.rst b/umn/source/scheduling/gpu_scheduling/index.rst new file mode 100644 index 0000000..a6fad57 --- /dev/null +++ b/umn/source/scheduling/gpu_scheduling/index.rst @@ -0,0 +1,14 @@ +:original_name: cce_10_0720.html + +.. _cce_10_0720: + +GPU Scheduling +============== + +- :ref:`Default GPU Scheduling in Kubernetes ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + default_gpu_scheduling_in_kubernetes diff --git a/umn/source/scheduling/index.rst b/umn/source/scheduling/index.rst new file mode 100644 index 0000000..78b5e25 --- /dev/null +++ b/umn/source/scheduling/index.rst @@ -0,0 +1,22 @@ +:original_name: cce_10_0674.html + +.. _cce_10_0674: + +Scheduling +========== + +- :ref:`Overview ` +- :ref:`CPU Scheduling ` +- :ref:`GPU Scheduling ` +- :ref:`Volcano Scheduling ` +- :ref:`Cloud Native Hybrid Deployment ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + overview + cpu_scheduling/index + gpu_scheduling/index + volcano_scheduling/index + cloud_native_hybrid_deployment/index diff --git a/umn/source/scheduling/overview.rst b/umn/source/scheduling/overview.rst new file mode 100644 index 0000000..78030d1 --- /dev/null +++ b/umn/source/scheduling/overview.rst @@ -0,0 +1,55 @@ +:original_name: cce_10_0702.html + +.. _cce_10_0702: + +Overview +======== + +CCE supports different types of resource scheduling and task scheduling, improving application performance and overall cluster resource utilization. This section describes the main functions of CPU resource scheduling, GPU/NPU heterogeneous resource scheduling, and Volcano scheduling. + +CPU Scheduling +-------------- + +CCE provides CPU policies to allocate complete physical CPU cores to applications, improving application performance and reducing application scheduling latency. 
+ ++------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------+ +| Function | Description | Reference | ++============+=====================================================================================================================================================================================================================================================================================================================================================================================================================+=================================+ +| CPU policy | When many CPU-intensive pods are running on a node, workloads may be migrated to different CPU cores. Many workloads are not sensitive to this migration and thus work fine without any intervention. For CPU-sensitive applications, you can use the CPU policy provided by Kubernetes to allocate dedicated cores to applications, improving application performance and reducing application scheduling latency. | :ref:`CPU Policy ` | ++------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------+ + +GPU Scheduling +-------------- + +CCE schedules heterogeneous GPU resources in clusters and allows GPUs to be used in containers. + ++--------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------+ +| Function | Description | Reference | ++======================================+=================================================================================================================================================+===========================================================+ +| Default GPU scheduling in Kubernetes | This function allows you to specify the number of GPUs that a pod requests. The value can be less than 1 so that multiple pods can share a GPU. | :ref:`Default GPU Scheduling in Kubernetes ` | ++--------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------+ + +Volcano Scheduling +------------------ + +Volcano is a Kubernetes-based batch processing platform that supports machine learning, deep learning, bioinformatics, genomics, and other big data applications. It provides general-purpose, high-performance computing capabilities, such as job scheduling, heterogeneous chip management, and job running management. 
+ ++--------------------------+---------------------------------------------------------------------------------------+-----------------------------------------------+ +| Function | Description | Reference | ++==========================+=======================================================================================+===============================================+ +| NUMA affinity scheduling | Volcano targets to lift the limitation to make scheduler NUMA topology aware so that: | :ref:`NUMA Affinity Scheduling ` | +| | | | +| | - Pods are not scheduled to the nodes that NUMA topology does not match. | | +| | - Pods are scheduled to the best node for NUMA topology. | | ++--------------------------+---------------------------------------------------------------------------------------+-----------------------------------------------+ + +Cloud Native Hybrid Deployment +------------------------------ + +The cloud native hybrid deployment solution focuses on the Volcano and Kubernetes ecosystems to help users improve resource utilization and efficiency and reduce costs. + ++-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------+ +| Function | Description | Reference | ++===================================+==================================================================================================================================================================================================================================================================================================================================================================+========================================================+ +| Dynamic resource oversubscription | Based on the types of online and offline jobs, Volcano scheduling is used to utilize the resources that are requested but not used in the cluster (that is, the difference between the number of requested resources and the number of used resources), implementing resource oversubscription and hybrid deployment and improving cluster resource utilization. | :ref:`Dynamic Resource Oversubscription ` | ++-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------+ diff --git a/umn/source/workloads/volcano_scheduling/index.rst b/umn/source/scheduling/volcano_scheduling/index.rst similarity index 53% rename from umn/source/workloads/volcano_scheduling/index.rst rename to umn/source/scheduling/volcano_scheduling/index.rst index 71363a4..b8726dc 100644 --- a/umn/source/workloads/volcano_scheduling/index.rst +++ b/umn/source/scheduling/volcano_scheduling/index.rst @@ -5,10 +5,10 @@ Volcano Scheduling ================== -- :ref:`Hybrid Deployment of Online and Offline Jobs ` +- :ref:`NUMA Affinity Scheduling ` .. 
toctree:: :maxdepth: 1 :hidden: - hybrid_deployment_of_online_and_offline_jobs + numa_affinity_scheduling diff --git a/umn/source/scheduling/volcano_scheduling/numa_affinity_scheduling.rst b/umn/source/scheduling/volcano_scheduling/numa_affinity_scheduling.rst new file mode 100644 index 0000000..bc2fa9e --- /dev/null +++ b/umn/source/scheduling/volcano_scheduling/numa_affinity_scheduling.rst @@ -0,0 +1,327 @@ +:original_name: cce_10_0425.html + +.. _cce_10_0425: + +NUMA Affinity Scheduling +======================== + +Background +---------- + +When the node runs many CPU-bound pods, the workload can move to different CPU cores depending on whether the pod is throttled and which CPU cores are available at scheduling time. Many workloads are not sensitive to this migration and thus work fine without any intervention. However, in workloads where CPU cache affinity and scheduling latency significantly affect workload performance, the kubelet allows alternative CPU management policies to determine some placement preferences on the node. + +Both the CPU Manager and Topology Manager are kubelet components, but they have the following limitations: + +- The scheduler is not topology-aware. Therefore, the workload may be scheduled on a node and then fail on the node due to the Topology Manager. This is unacceptable for TensorFlow jobs. If any worker or ps failed on node, the job will fail. +- The managers are node-level that results in an inability to match the best node for NUMA topology in the whole cluster. + +For more information, see https://github.com/volcano-sh/volcano/blob/master/docs/design/numa-aware.md. + +Volcano targets to resolve the limitation to make scheduler NUMA topology aware so as to achieve the following: + +- Do not schedule pods to the nodes which NUMA topology does not match. +- Schedule pods to the best node for NUMA topology. + +Application Scope +----------------- + +- Support CPU resource topology scheduling +- Support pod-level topology policies + +.. _cce_10_0425__section2430103110429: + +Scheduling Prediction +--------------------- + +For pods with the topology policy, predicate the matched node list. + ++-----------------------------------+---------------------------------------------------------------------------------------------------+ +| policy | action | ++===================================+===================================================================================================+ +| none | 1. No filter action | ++-----------------------------------+---------------------------------------------------------------------------------------------------+ +| best-effort | 1. Filter out the node with the topology policy **best-effort**. | ++-----------------------------------+---------------------------------------------------------------------------------------------------+ +| restricted | 1. Filter out the node with the topology policy **restricted**. | +| | | +| | 2. Filter out the node that the CPU topology meets the CPU requirements for **restricted**. | ++-----------------------------------+---------------------------------------------------------------------------------------------------+ +| single-numa-node | 1. Filter out the node with the topology policy **single-numa-node**. | +| | | +| | 2. Filter out the node that the CPU topology meets the CPU requirements for **single-numa-node**. | ++-----------------------------------+---------------------------------------------------------------------------------------------------+ + + +.. 
figure:: /_static/images/en-us_image_0000001647417448.png + :alt: **Figure 1** Comparison of NUMA scheduling policies + + **Figure 1** Comparison of NUMA scheduling policies + +Scheduling Priority +------------------- + +Topology policy aims to schedule pods to the optimal node. In this example, each node is scored to sort out the optimal node. + +Principle: Schedule pods to the worker nodes that require the fewest NUMA nodes. + +The scoring formula is as follows: + +score = weight \* (100 - 100 \* numaNodeNum / maxNumaNodeNum) + +Parameter description: + +- **weight**: indicates the weight of NUMA Aware Plugin. +- **numaNodeNum**: indicates the number of NUMA nodes required for running the pod on the worker node. +- **maxNumaNodeNum**: indicates the maximum number of NUMA nodes in a pod of all worker nodes. + +Enabling Volcano to Support NUMA Affinity Scheduling +---------------------------------------------------- + +#. Enable the CPU management policy. For details, see :ref:`Enabling the CPU Management Policy `. + +#. Configure a CPU topology policy. + + a. Log in to the CCE console, click the cluster name, access the cluster details page, and choose **Nodes** in the navigation pane. On the page displayed, click the **Node Pools** tab. Choose **More** > **Manage** in the **Operation** column of the target node pool. + + b. Change the value of **topology-manager-policy** under **kubelet** to the required CPU topology policy. As shown in the following figure, the CPU topology policy is **best-effort**. + + The valid topology policies are **none**, **best-effort**, **restricted**, and **single-numa-node**. For details about these policies, see :ref:`Scheduling Prediction `. + + |image1| + +#. Enable the numa-aware add-on and the **resource_exporter** function. + + **volcano 1.7.1 or later** + + a. Log in to the CCE console and access the cluster console. In the navigation pane, choose **Add-ons**. On the right of the page, locate the **volcano** add-on and click **Edit**. In the **Parameters** area, configure Volcano scheduler parameters. + + .. code-block:: + + { + "ca_cert": "", + "default_scheduler_conf": { + "actions": "allocate, backfill", + "tiers": [ + { + "plugins": [ + { + "name": "priority" + }, + { + "name": "gang" + }, + { + "name": "conformance" + } + ] + }, + { + "plugins": [ + { + "name": "drf" + }, + { + "name": "predicates" + }, + { + "name": "nodeorder" + } + ] + }, + { + "plugins": [ + { + "name": "cce-gpu-topology-predicate" + }, + { + "name": "cce-gpu-topology-priority" + }, + { + "name": "cce-gpu" + }, + { + // add this also enable resource_exporter + "name": "numa-aware", + // the weight of the NUMA Aware Plugin + "arguments": { + "weight": "10" + } + } + ] + }, + { + "plugins": [ + { + "name": "nodelocalvolume" + }, + { + "name": "nodeemptydirvolume" + }, + { + "name": "nodeCSIscheduling" + }, + { + "name": "networkresource" + } + ] + } + ] + }, + "server_cert": "", + "server_key": "" + } + + **volcano earlier than 1.7.1** + + a. The **resource_exporter_enable** parameter is enabled for the volcano add-on to collect node NUMA information. + + .. 
code-block:: + + { + "plugins": { + "eas_service": { + "availability_zone_id": "", + "driver_id": "", + "enable": "false", + "endpoint": "", + "flavor_id": "", + "network_type": "", + "network_virtual_subnet_id": "", + "pool_id": "", + "project_id": "", + "secret_name": "eas-service-secret" + } + }, + "resource_exporter_enable": "true" + } + + After this function is enabled, you can view the NUMA topology information of the current node. + + .. code-block:: + + kubectl get numatopo + NAME AGE + node-1 4h8m + node-2 4h8m + node-3 4h8m + + b. Enable the volcano numa-aware algorithm add-on. + + **kubectl edit cm -n kube-system volcano-scheduler-configmap** + + .. code-block:: + + kind: ConfigMap + apiVersion: v1 + metadata: + name: volcano-scheduler-configmap + namespace: kube-system + data: + default-scheduler.conf: |- + actions: "allocate, backfill" + tiers: + - plugins: + - name: priority + - name: gang + - name: conformance + - plugins: + - name: overcommit + - name: drf + - name: predicates + - name: nodeorder + - plugins: + - name: cce-gpu-topology-predicate + - name: cce-gpu-topology-priority + - name: cce-gpu + - plugins: + - name: nodelocalvolume + - name: nodeemptydirvolume + - name: nodeCSIscheduling + - name: networkresource + arguments: + NetworkType: vpc-router + - name: numa-aware # add it to enable numa-aware plugin + arguments: + weight: 10 # the weight of the NUMA Aware Plugin + +Using Volcano to Support NUMA Affinity Scheduling +------------------------------------------------- + +#. Configure NUMA affinity for Deployments. The following is an example: + + .. code-block:: + + kind: Deployment + apiVersion: apps/v1 + metadata: + name: numa-tset + spec: + replicas: 1 + selector: + matchLabels: + app: numa-tset + template: + metadata: + labels: + app: numa-tset + annotations: + volcano.sh/numa-topology-policy: single-numa-node # set the topology policy + spec: + containers: + - name: container-1 + image: nginx:alpine + resources: + requests: + cpu: 2 # The value must be an integer and must be the same as that in limits. + memory: 2048Mi + limits: + cpu: 2 # The value must be an integer and must be the same as that in requests. + memory: 2048Mi + imagePullSecrets: + - name: default-secret + +#. Create a volcano job and use NUMA affinity. + + .. code-block:: + + apiVersion: batch.volcano.sh/v1alpha1 + kind: Job + metadata: + name: vj-test + spec: + schedulerName: volcano + minAvailable: 1 + tasks: + - replicas: 1 + name: "test" + topologyPolicy: best-effort # set the topology policy for task + template: + spec: + containers: + - image: alpine + command: ["/bin/sh", "-c", "sleep 1000"] + imagePullPolicy: IfNotPresent + name: running + resources: + limits: + cpu: 20 + memory: "100Mi" + restartPolicy: OnFailure + +#. Check the NUMA usage. + + .. code-block:: + + # Check the CPU usage of the current node. + lscpu + ... + CPU(s): 32 + NUMA node(s): 2 + NUMA node0 CPU(s): 0-15 + NUMA node1 CPU(s): 16-31 + + # Check the CPU allocation of the current node. + cat /var/lib/kubelet/cpu_manager_state + {"policyName":"static","defaultCpuSet":"0,10-15,25-31","entries":{"777870b5-c64f-42f5-9296-688b9dc212ba":{"container-1":"16-24"},"fb15e10a-b6a5-4aaa-8fcd-76c1aa64e6fd":{"container-1":"1-9"}},"checksum":318470969} + +.. 
|image1| image:: /_static/images/en-us_image_0000001695737101.png diff --git a/umn/source/storage/deployment_examples/creating_a_deployment_mounted_with_an_evs_volume.rst b/umn/source/storage/deployment_examples/creating_a_deployment_mounted_with_an_evs_volume.rst deleted file mode 100644 index 6f5232b..0000000 --- a/umn/source/storage/deployment_examples/creating_a_deployment_mounted_with_an_evs_volume.rst +++ /dev/null @@ -1,255 +0,0 @@ -:original_name: cce_10_0257.html - -.. _cce_10_0257: - -Creating a Deployment Mounted with an EVS Volume -================================================ - -Scenario --------- - -After an EVS volume is created or imported to CCE, you can mount it to a workload. - -.. important:: - - EVS disks cannot be attached across AZs. Before mounting a volume, you can run the **kubectl get pvc** command to query the available PVCs in the AZ where the current cluster is located. - -Prerequisites -------------- - -You have created a cluster and installed the CSI plug-in (:ref:`everest `) in the cluster. - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.15 or later. - -Using EVS Volumes for Deployments ---------------------------------- - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Run the following commands to configure the **evs-deployment-example.yaml** file, which is used to create a Deployment. - - **touch evs-deployment-example.yaml** - - **vi evs-deployment-example.yaml** - - Example of mounting an EVS volume to a Deployment (PVC-based, shared volume): - - .. code-block:: - - apiVersion: apps/v1 - kind: Deployment - metadata: - name: evs-deployment-example - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: evs-deployment-example - template: - metadata: - labels: - app: evs-deployment-example - spec: - containers: - - image: nginx - name: container-0 - volumeMounts: - - mountPath: /tmp - name: pvc-evs-example - imagePullSecrets: - - name: default-secret - restartPolicy: Always - volumes: - - name: pvc-evs-example - persistentVolumeClaim: - claimName: pvc-evs-auto-example - - .. table:: **Table 1** Key parameters - - +--------------------------------------------------+-----------+------------------------------------------------------------------------------------------------+ - | Parent Parameter | Parameter | Description | - +==================================================+===========+================================================================================================+ - | spec.template.spec.containers.volumeMounts | name | Name of the volume mounted to the container. | - +--------------------------------------------------+-----------+------------------------------------------------------------------------------------------------+ - | spec.template.spec.containers.volumeMounts | mountPath | Mount path of the container. In this example, the volume is mounted to the **/tmp** directory. | - +--------------------------------------------------+-----------+------------------------------------------------------------------------------------------------+ - | spec.template.spec.volumes | name | Name of the volume. | - +--------------------------------------------------+-----------+------------------------------------------------------------------------------------------------+ - | spec.template.spec.volumes.persistentVolumeClaim | claimName | Name of an existing PVC. 
| - +--------------------------------------------------+-----------+------------------------------------------------------------------------------------------------+ - - .. note:: - - **spec.template.spec.containers.volumeMounts.name** and **spec.template.spec.volumes.name** must be consistent because they have a mapping relationship. - -#. Run the following command to create the workload: - - **kubectl create -f evs-deployment-example.yaml** - -Using EVS Volumes for StatefulSets ----------------------------------- - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Run the following commands to configure the **evs-statefulset-example.yaml** file, which is used to create a Deployment. - - **touch** **evs-statefulset-example.yaml** - - **vi** **evs-statefulset-example.yaml** - - Mounting an EVS volume to a StatefulSet (PVC template-based, non-shared volume): - - **Example YAML:** - - .. code-block:: - - apiVersion: apps/v1 - kind: StatefulSet - metadata: - name: evs-statefulset-example - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: evs-statefulset-example - template: - metadata: - labels: - app: evs-statefulset-example - spec: - containers: - - name: container-0 - image: 'nginx:latest' - volumeMounts: - - name: pvc-evs-auto-example - mountPath: /tmp - restartPolicy: Always - imagePullSecrets: - - name: default-secret - volumeClaimTemplates: - - metadata: - name: pvc-evs-auto-example - namespace: default - labels: - failure-domain.beta.kubernetes.io/region: eu-de - failure-domain.beta.kubernetes.io/zone: - annotations: - everest.io/disk-volume-type: SAS - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi - storageClassName: csi-disk - serviceName: evs-statefulset-example-headless - updateStrategy: - type: RollingUpdate - - .. table:: **Table 2** Key parameters - - +-------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | Parent Parameter | Parameter | Description | - +===========================================+=============+====================================================================================================================================+ - | metadata | name | Name of the created workload. | - +-------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec.template.spec.containers | image | Image of the workload. | - +-------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec.template.spec.containers.volumeMount | mountPath | Mount path of the container. In this example, the volume is mounted to the **/tmp** directory. | - +-------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec | serviceName | Service corresponding to the workload. For details about how to create a Service, see :ref:`Creating a StatefulSet `. 
| - +-------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - - .. note:: - - **spec.template.spec.containers.volumeMounts.name** and **spec.volumeClaimTemplates.metadata.name** must be consistent because they have a mapping relationship. - -#. Run the following command to create the workload: - - **kubectl create -f evs-statefulset-example.yaml** - -Verifying Persistent Storage of an EVS Volume ---------------------------------------------- - -#. Query the pod and EVS files of the deployed workload (for example, **evs-statefulset-example**). - - a. Run the following command to query the pod name of the workload: - - .. code-block:: - - kubectl get po | grep evs-statefulset-example - - Expected outputs: - - .. code-block:: - - evs-statefulset-example-0 1/1 Running 0 22h - - b. Run the following command to check whether an EVS volume is mounted to the **/tmp** directory: - - .. code-block:: - - kubectl exec evs-statefulset-example-0 -- df tmp - - Expected outputs: - - .. code-block:: - - /dev/sda 10255636 36888 10202364 1% /tmp - -#. Run the following command to create a file named **test** in the **/tmp** directory: - - .. code-block:: - - kubectl exec evs-statefulset-example-0 -- touch /tmp/test - -#. Run the following command to view the file in the **/tmp** directory: - - .. code-block:: - - kubectl exec evs-statefulset-example-0 -- ls -l /tmp - - Expected outputs: - - .. code-block:: - - -rw-r--r-- 1 root root 0 Jun 1 02:50 test - -#. Run the following command to delete the pod named **evs-statefulset-example-0**: - - .. code-block:: - - kubectl delete po evs-statefulset-example-0 - -#. Check whether the file still exists after the pod is rebuilt. - - a. Run the following command to query the name of the rebuilt pod: - - .. code-block:: - - kubectl get po - - Expected outputs: - - .. code-block:: - - evs-statefulset-example-0 1/1 Running 0 2m - - b. Run the following command to view the file in the **/tmp** directory: - - .. code-block:: - - kubectl exec evs-statefulset-example-0 -- ls -l /tmp - - Expected outputs: - - .. code-block:: - - -rw-r--r-- 1 root root 0 Jun 1 02:50 test - - c. The **test** file still exists after the pod is rebuilt, indicating that the data in the EVS volume can be persistently stored. diff --git a/umn/source/storage/deployment_examples/creating_a_deployment_mounted_with_an_obs_volume.rst b/umn/source/storage/deployment_examples/creating_a_deployment_mounted_with_an_obs_volume.rst deleted file mode 100644 index 1f474db..0000000 --- a/umn/source/storage/deployment_examples/creating_a_deployment_mounted_with_an_obs_volume.rst +++ /dev/null @@ -1,73 +0,0 @@ -:original_name: cce_10_0269.html - -.. _cce_10_0269: - -Creating a Deployment Mounted with an OBS Volume -================================================ - -Scenario --------- - -After an OBS volume is created or imported to CCE, you can mount the volume to a workload. - -Prerequisites -------------- - -You have created a cluster and installed the CSI plug-in (:ref:`everest `) in the cluster. - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.15 or later. - -Procedure ---------- - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Run the following commands to configure the **obs-deployment-example.yaml** file, which is used to create a pod. 
- - **touch obs-deployment-example.yaml** - - **vi obs-deployment-example.yaml** - - Example of mounting an OBS volume to a Deployment (PVC-based, shared volume): - - .. code-block:: - - apiVersion: apps/v1 - kind: Deployment - metadata: - name: obs-deployment-example # Workload name - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: obs-deployment-example - template: - metadata: - labels: - app: obs-deployment-example - spec: - containers: - - image: nginx - name: container-0 - volumeMounts: - - mountPath: /tmp # Mount path - name: pvc-obs-example - restartPolicy: Always - imagePullSecrets: - - name: default-secret - volumes: - - name: pvc-obs-example - persistentVolumeClaim: - claimName: pvc-obs-auto-example # PVC name - - .. note:: - - **spec.template.spec.containers.volumeMounts.name** and **spec.template.spec.volumes.name** must be consistent because they have a mapping relationship. - -#. Run the following command to create the workload: - - **kubectl create -f obs-deployment-example.yaml** diff --git a/umn/source/storage/deployment_examples/creating_a_deployment_mounted_with_an_sfs_volume.rst b/umn/source/storage/deployment_examples/creating_a_deployment_mounted_with_an_sfs_volume.rst deleted file mode 100644 index cc680ed..0000000 --- a/umn/source/storage/deployment_examples/creating_a_deployment_mounted_with_an_sfs_volume.rst +++ /dev/null @@ -1,73 +0,0 @@ -:original_name: cce_10_0263.html - -.. _cce_10_0263: - -Creating a Deployment Mounted with an SFS Volume -================================================ - -Scenario --------- - -After an SFS volume is created or imported to CCE, you can mount the volume to a workload. - -Prerequisites -------------- - -You have created a cluster and installed the CSI plug-in (:ref:`everest `) in the cluster. - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.15 or later. - -Procedure ---------- - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Run the following commands to configure the **sfs-deployment-example.yaml** file, which is used to create a pod. - - **touch sfs-deployment-example.yaml** - - **vi sfs-deployment-example.yaml** - - Example of mounting an SFS volume to a Deployment (PVC-based, shared volume): - - .. code-block:: - - apiVersion: apps/v1 - kind: Deployment - metadata: - name: sfs-deployment-example # Workload name - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: sfs-deployment-example - template: - metadata: - labels: - app: sfs-deployment-example - spec: - containers: - - image: nginx - name: container-0 - volumeMounts: - - mountPath: /tmp # Mount path - name: pvc-sfs-example - imagePullSecrets: - - name: default-secret - restartPolicy: Always - volumes: - - name: pvc-sfs-example - persistentVolumeClaim: - claimName: pvc-sfs-auto-example # PVC name - - .. note:: - - **spec.template.spec.containers.volumeMounts.name** and **spec.template.spec.volumes.name** must be consistent because they have a mapping relationship. - -#. 
Run the following command to create the workload: - - **kubectl create -f sfs-deployment-example.yaml** diff --git a/umn/source/storage/deployment_examples/creating_a_statefulset_mounted_with_an_obs_volume.rst b/umn/source/storage/deployment_examples/creating_a_statefulset_mounted_with_an_obs_volume.rst deleted file mode 100644 index e9c3747..0000000 --- a/umn/source/storage/deployment_examples/creating_a_statefulset_mounted_with_an_obs_volume.rst +++ /dev/null @@ -1,216 +0,0 @@ -:original_name: cce_10_0268.html - -.. _cce_10_0268: - -Creating a StatefulSet Mounted with an OBS Volume -================================================= - -Scenario --------- - -CCE allows you to use an existing OBS volume to create a StatefulSet through a PVC. - -Prerequisites -------------- - -You have created a cluster and installed the CSI plug-in (:ref:`everest `) in the cluster. - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.15 or later. - -Procedure ---------- - -#. Create an OBS volume by referring to :ref:`PVCs ` and obtain the PVC name. - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Create a YAML file for creating the workload. Assume that the file name is **obs-statefulset-example.yaml**. - - **touch obs-statefulset-example.yaml** - - **vi obs-statefulset-example.yaml** - - Configuration example: - - .. code-block:: - - apiVersion: apps/v1 - kind: StatefulSet - metadata: - name: obs-statefulset-example - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: obs-statefulset-example - template: - metadata: - labels: - app: obs-statefulset-example - spec: - volumes: - - name: pvc-obs-example - persistentVolumeClaim: - claimName: pvc-obs-example - containers: - - name: container-0 - image: 'nginx:latest' - volumeMounts: - - name: pvc-obs-example - mountPath: /tmp - restartPolicy: Always - imagePullSecrets: - - name: default-secret - serviceName: obs-statefulset-example-headless # Name of the headless Service - - .. table:: **Table 1** Key parameters - - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +=============+====================================================================================================================================+ - | replicas | Number of pods. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | name | Name of the new workload. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | image | Image used by the workload. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | mountPath | Mount path of a container. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | serviceName | Service corresponding to the workload. For details about how to create a Service, see :ref:`Creating a StatefulSet `. 
| - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | claimName | Name of an existing PVC. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - - Example of mounting an OBS volume to a StatefulSet (PVC template-based, dedicated volume): - - **Example YAML:** - - .. code-block:: - - apiVersion: apps/v1 - kind: StatefulSet - metadata: - name: obs-statefulset-example - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: obs-statefulset-example - template: - metadata: - labels: - app: obs-statefulset-example - spec: - containers: - - name: container-0 - image: 'nginx:latest' - volumeMounts: - - name: pvc-obs-auto-example - mountPath: /tmp - restartPolicy: Always - imagePullSecrets: - - name: default-secret - volumeClaimTemplates: - - metadata: - name: pvc-obs-auto-example - namespace: default - annotations: - everest.io/obs-volume-type: STANDARD - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 1Gi - storageClassName: csi-obs - serviceName: obs-statefulset-example-headless - -#. Create a StatefulSet. - - **kubectl create -f obs-statefulset-example.yaml** - -Verifying Persistent Storage of an OBS Volume ---------------------------------------------- - -#. Query the pod and OBS volume of the deployed workload (for example, **obs-statefulset-example**). - - a. Run the following command to query the pod name of the workload: - - .. code-block:: - - kubectl get po | grep obs-statefulset-example - - Expected outputs: - - .. code-block:: - - obs-statefulset-example-0 1/1 Running 0 2m5s - - b. Run the following command to check whether an OBS volume is mounted to the **/tmp** directory: - - .. code-block:: - - kubectl exec obs-statefulset-example-0 -- mount|grep /tmp - - Expected outputs: - - .. code-block:: - - s3fs on /tmp type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other) - -#. Run the following command to create a file named **test** in the **/tmp** directory: - - .. code-block:: - - kubectl exec obs-statefulset-example-0 -- touch /tmp/test - -#. Run the following command to view the file in the **/tmp** directory: - - .. code-block:: - - kubectl exec obs-statefulset-example-0 -- ls -l /tmp - - Expected outputs: - - .. code-block:: - - -rw-r--r-- 1 root root 0 Jun 1 02:50 test - -#. Run the following command to delete the pod named **obs-statefulset-example-0**: - - .. code-block:: - - kubectl delete po obs-statefulset-example-0 - -#. Check whether the file still exists after the pod is rebuilt. - - a. Run the following command to query the name of the rebuilt pod: - - .. code-block:: - - kubectl get po - - Expected outputs: - - .. code-block:: - - obs-statefulset-example-0 1/1 Running 0 2m - - b. Run the following command to view the file in the **/tmp** directory: - - .. code-block:: - - kubectl exec obs-statefulset-example-0 -- ls -l /tmp - - Expected outputs: - - .. code-block:: - - -rw-r--r-- 1 root root 0 Jun 1 02:50 test - - c. The **test** file still exists after the pod is rebuilt, indicating that the data in the OBS volume can be persistently stored. 
diff --git a/umn/source/storage/deployment_examples/creating_a_statefulset_mounted_with_an_sfs_volume.rst b/umn/source/storage/deployment_examples/creating_a_statefulset_mounted_with_an_sfs_volume.rst deleted file mode 100644 index 0026419..0000000 --- a/umn/source/storage/deployment_examples/creating_a_statefulset_mounted_with_an_sfs_volume.rst +++ /dev/null @@ -1,141 +0,0 @@ -:original_name: cce_10_0262.html - -.. _cce_10_0262: - -Creating a StatefulSet Mounted with an SFS Volume -================================================= - -Scenario --------- - -CCE allows you to use an existing SGS volume to create a StatefulSet (by using a PVC). - -Prerequisites -------------- - -You have created a cluster and installed the CSI plug-in (:ref:`everest `) in the cluster. - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.15 or later. - -Procedure ---------- - -#. Create an SFS volume by referring to :ref:`PVCs ` and record the volume name. - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Create a YAML file for creating the workload. Assume that the file name is **sfs-statefulset-example**.\ **yaml**. - - **touch sfs-statefulset-example.yaml** - - **vi sfs-statefulset-example.yaml** - - Configuration example: - - .. code-block:: - - apiVersion: apps/v1 - kind: StatefulSet - metadata: - name: sfs-statefulset-example - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: sfs-statefulset-example - template: - metadata: - labels: - app: sfs-statefulset-example - spec: - volumes: - - name: pvc-sfs-example - persistentVolumeClaim: - claimName: pvc-sfs-example - containers: - - name: container-0 - image: 'nginx:latest' - volumeMounts: - - name: pvc-sfs-example - mountPath: /tmp - restartPolicy: Always - imagePullSecrets: - - name: default-secret - serviceName: sfs-statefulset-example-headless - updateStrategy: - type: RollingUpdate - - .. table:: **Table 1** Key parameters - - +--------------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | Parent Parameter | Parameter | Description | - +==================================================+=============+====================================================================================================================================+ - | spec | replicas | Number of pods. | - +--------------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | metadata | name | Name of the new workload. | - +--------------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec.template.spec.containers | image | Image used by the workload. | - +--------------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec.template.spec.containers.volumeMounts | mountPath | Mount path of a container. 
| - +--------------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec | serviceName | Service corresponding to the workload. For details about how to create a Service, see :ref:`Creating a StatefulSet `. | - +--------------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec.template.spec.volumes.persistentVolumeClaim | claimName | Name of an existing PVC. | - +--------------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - - Example of mounting an SFS volume to a StatefulSet (PVC template-based, dedicated volume): - - **Example YAML file:** - - .. code-block:: - - apiVersion: apps/v1 - kind: StatefulSet - metadata: - name: sfs-statefulset-example - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: sfs-statefulset-example - template: - metadata: - labels: - app: sfs-statefulset-example - spec: - containers: - - name: container-0 - image: 'nginx:latest' - volumeMounts: - - name: pvc-sfs-auto-example - mountPath: /tmp - restartPolicy: Always - imagePullSecrets: - - name: default-secret - volumeClaimTemplates: - - metadata: - name: pvc-sfs-auto-example - namespace: default - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 10Gi - storageClassName: csi-nas - serviceName: sfs-statefulset-example-headless - updateStrategy: - type: RollingUpdate - - .. note:: - - **spec.template.spec.containers.volumeMounts.name** and **spec.template.spec.volumes.name** must be consistent because they have a mapping relationship. - -#. Create a StatefulSet. - - **kubectl create -f sfs-statefulset-example.yaml** diff --git a/umn/source/storage/deployment_examples/index.rst b/umn/source/storage/deployment_examples/index.rst deleted file mode 100644 index d63fdf4..0000000 --- a/umn/source/storage/deployment_examples/index.rst +++ /dev/null @@ -1,22 +0,0 @@ -:original_name: cce_10_0393.html - -.. _cce_10_0393: - -Deployment Examples -=================== - -- :ref:`Creating a Deployment Mounted with an EVS Volume ` -- :ref:`Creating a Deployment Mounted with an OBS Volume ` -- :ref:`Creating a StatefulSet Mounted with an OBS Volume ` -- :ref:`Creating a Deployment Mounted with an SFS Volume ` -- :ref:`Creating a StatefulSet Mounted with an SFS Volume ` - -.. toctree:: - :maxdepth: 1 - :hidden: - - creating_a_deployment_mounted_with_an_evs_volume - creating_a_deployment_mounted_with_an_obs_volume - creating_a_statefulset_mounted_with_an_obs_volume - creating_a_deployment_mounted_with_an_sfs_volume - creating_a_statefulset_mounted_with_an_sfs_volume diff --git a/umn/source/storage/elastic_volume_service_evs/dynamically_mounting_an_evs_disk_to_a_statefulset.rst b/umn/source/storage/elastic_volume_service_evs/dynamically_mounting_an_evs_disk_to_a_statefulset.rst new file mode 100644 index 0000000..8c0fbc4 --- /dev/null +++ b/umn/source/storage/elastic_volume_service_evs/dynamically_mounting_an_evs_disk_to_a_statefulset.rst @@ -0,0 +1,306 @@ +:original_name: cce_10_0616.html + +.. 
_cce_10_0616: + +Dynamically Mounting an EVS Disk to a StatefulSet +================================================= + +Application Scenarios +--------------------- + +Dynamic mounting is available only for creating a :ref:`StatefulSet `. It is implemented through a volume claim template (`volumeClaimTemplates `__ field) and depends on the storage class to dynamically provision PVs. In this mode, each pod in a multi-pod StatefulSet is associated with a unique PVC and PV. After a pod is rescheduled, the original data can still be mounted to it based on the PVC name. In the common mounting mode for a Deployment, if ReadWriteMany is supported, multiple pods of the Deployment will be mounted to the same underlying storage. + +Prerequisites +------------- + +- You have created a cluster and installed the CSI add-on (:ref:`everest `) in the cluster. +- If you want to create a cluster using commands, use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. + +(Console) Dynamically Mounting an EVS Disk +------------------------------------------ + +#. Log in to the CCE console and click the cluster name to access the cluster console. + +#. In the navigation pane on the left, click **Workloads**. In the right pane, click the **StatefulSets** tab. + +#. Click **Create Workload** in the upper right corner. On the displayed page, click **Data Storage** in the **Container Settings** area and click **Add Volume** to select **VolumeClaimTemplate (VTC)**. + +#. Click **Create PVC**. In the dialog box displayed, configure the PVC parameters. + + Click **Create**. + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=============================================================================================================================================================================================================+ + | PVC Type | In this example, select **EVS**. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PVC Name | Enter the name of the PVC. After a PVC is created, a suffix is automatically added based on the number of pods. The format is <*Custom PVC name*>-<*Serial number*>, for example, example-0. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Creation Method | You can select **Dynamically provision** to create a PVC, PV, and underlying storage on the console in cascading mode. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Storage Classes | The storage class for EVS disks is **csi-disk**. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | AZ | Select the AZ of the EVS disk. The AZ must be the same as that of the cluster node. | + | | | + | | .. note:: | + | | | + | | An EVS disk can only be mounted to a node in the same AZ. After an EVS disk is created, its AZ cannot be changed. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Disk Type | Select an EVS disk type. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Access Mode | EVS disks support only **ReadWriteOnce**, indicating that a storage volume can be mounted to one node in read/write mode. For details, see :ref:`Volume Access Modes `. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Capacity (GiB) | Capacity of the requested storage volume. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Encryption | You can select **Encryption** and an encryption key to encrypt underlying storage. Only EVS disks and SFS file systems support encryption. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. Enter the path to which the volume is mounted. + + .. table:: **Table 1** Mounting a storage volume + + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+============================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | Mount Path | Enter a mount path, for example, **/tmp**. | + | | | + | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**. 
Otherwise, errors will occur in containers. Mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, causing container startup failures or workload creation failures. | + | | | + | | .. important:: | + | | | + | | NOTICE: | + | | If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Subpath | Enter a subpath, for example, **tmp**, indicating that data in the mount path of the container will be stored in the **tmp** folder of the volume. | + | | | + | | A subpath is used to mount a local volume so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Permission | - **Read-only**: You can only read the data in the mounted volumes. | + | | - **Read/Write**: You can modify the data volumes mounted to the path. Newly written data is not migrated if the container is migrated, which may cause data loss. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + In this example, the disk is mounted to the **/data** path of the container. The container data generated in this path is stored in the EVS disk. + +#. Dynamically mount and use storage volumes. For details about other parameters, see :ref:`Creating a StatefulSet `. After the configuration, click **Create Workload**. + + After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to :ref:`Verifying Data Persistence `. + +(kubectl) Using an Existing EVS Disk +------------------------------------ + +#. Use kubectl to connect to the cluster. + +#. Create a file named **statefulset-evs.yaml**. In this example, the EVS volume is mounted to the **/data** path. + + .. 
code-block:: + + apiVersion: apps/v1 + kind: StatefulSet + metadata: + name: statefulset-evs + namespace: default + spec: + selector: + matchLabels: + app: statefulset-evs + template: + metadata: + labels: + app: statefulset-evs + spec: + containers: + - name: container-1 + image: nginx:latest + volumeMounts: + - name: pvc-disk # The value must be the same as that in the volumeClaimTemplates field. + mountPath: /data # Location where the storage volume is mounted. + imagePullSecrets: + - name: default-secret + serviceName: statefulset-evs # Headless Service name. + replicas: 2 + volumeClaimTemplates: + - apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: pvc-disk + namespace: default + annotations: + everest.io/disk-volume-type: SAS # EVS disk type. + everest.io/crypt-key-id: # (Optional) Encryption key ID. Mandatory for an encrypted disk. + labels: + failure-domain.beta.kubernetes.io/region: # Region of the node where the application is to be deployed. + failure-domain.beta.kubernetes.io/zone: # AZ of the node where the application is to be deployed. + spec: + accessModes: + - ReadWriteOnce # The value must be ReadWriteOnce for EVS disks. + resources: + requests: + storage: 10Gi # EVS disk capacity, ranging from 1 to 32768. + storageClassName: csi-disk # Storage class type for EVS disks. + --- + apiVersion: v1 + kind: Service + metadata: + name: statefulset-evs # Headless Service name. + namespace: default + labels: + app: statefulset-evs + spec: + selector: + app: statefulset-evs + clusterIP: None + ports: + - name: statefulset-evs + targetPort: 80 + nodePort: 0 + port: 80 + protocol: TCP + type: ClusterIP + + .. table:: **Table 2** Key parameters + + +------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Description | + +==========================================+=======================+===========================================================================================================================================================================================================================================================================================================================+ + | failure-domain.beta.kubernetes.io/region | Yes | Region where the cluster is located. | + | | | | + | | | For details about the value of **region**, see `Regions and Endpoints `__. | + +------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | failure-domain.beta.kubernetes.io/zone | Yes | AZ where the EVS volume is created. It must be the same as the AZ planned for the workload. | + | | | | + | | | For details about the value of **zone**, see `Regions and Endpoints `__. 
| + +------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | everest.io/disk-volume-type | Yes | EVS disk type. All letters are in uppercase. | + | | | | + | | | - **SATA**: common I/O | + | | | - **SAS**: high I/O | + | | | - **SSD**: ultra-high I/O | + +------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | everest.io/crypt-key-id | No | Mandatory when the EVS disk is encrypted. Enter the encryption key ID selected during EVS disk creation. | + | | | | + | | | To obtain the encryption key ID, log in to the **Cloud Server Console**. In the navigation pane, choose **Elastic Volume Service** > **Disks**. Click the name of the target EVS disk to go to its details page. On the **Summary** tab page, copy the value of **KMS Key ID** in the **Configuration Information** area. | + +------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | storage | Yes | Requested PVC capacity, in Gi. The value ranges from **1** to **32768**. | + +------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | storageClassName | Yes | The storage class name for EVS disks is **csi-disk**. | + +------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. Run the following command to create a workload to which the EVS volume is mounted: + + .. code-block:: + + kubectl apply -f statefulset-evs.yaml + + After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to :ref:`Verifying Data Persistence `. + +.. _cce_10_0616__section11593165910013: + +Verifying Data Persistence +-------------------------- + +#. View the deployed application and EVS volume files. + + a. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep statefulset-evs + + Expected output: + + .. code-block:: + + statefulset-evs-0 1/1 Running 0 45s + statefulset-evs-1 1/1 Running 0 28s + + b. 
Run the following command to check whether the EVS volume has been mounted to the **/data** path: + + .. code-block:: + + kubectl exec statefulset-evs-0 -- df | grep data + + Expected output: + + .. code-block:: + + /dev/sdd 10255636 36888 10202364 0% /data + + c. Run the following command to view the files in the **/data** path: + + .. code-block:: + + kubectl exec statefulset-evs-0 -- ls /data + + Expected output: + + .. code-block:: + + lost+found + +#. Run the following command to create a file named **static** in the **/data** path: + + .. code-block:: + + kubectl exec statefulset-evs-0 -- touch /data/static + +#. Run the following command to view the files in the **/data** path: + + .. code-block:: + + kubectl exec statefulset-evs-0 -- ls /data + + Expected output: + + .. code-block:: + + lost+found + static + +#. Run the following command to delete the pod named **web-evs-auto-0**: + + .. code-block:: + + kubectl delete pod statefulset-evs-0 + + Expected output: + + .. code-block:: + + pod "statefulset-evs-0" deleted + +#. After the deletion, the StatefulSet controller automatically creates a replica with the same name. Run the following command to check whether the files in the **/data** path have been modified: + + .. code-block:: + + kubectl exec statefulset-evs-0 -- ls /data + + Expected output: + + .. code-block:: + + lost+found + static + + If the **static** file still exists, the data in the EVS volume can be stored persistently. + +Related Operations +------------------ + +You can also perform the operations listed in :ref:`Table 3 `. + +.. _cce_10_0616__cce_10_0615_table1619535674020: + +.. table:: **Table 3** Related operations + + +---------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Operation | Description | Procedure | + +=======================================+====================================================================================================================================================+=============================================================================================================================================================================================+ + | Expanding the capacity of an EVS disk | Quickly expand the capacity of a mounted EVS disk on the CCE console. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. Click **More** in the **Operation** column of the target PVC and select **Scale-out**. | + | | | #. Enter the capacity to be added and click **OK**. | + +---------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Viewing events | You can view event names, event types, number of occurrences, Kubernetes events, first occurrence time, and last occurrence time of the PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. 
Click **View Events** in the **Operation** column of the target PVC or PV to view events generated within one hour (event data is retained for one hour). | + +---------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Viewing a YAML file | You can view, copy, and download the YAML files of a PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View YAML** in the **Operation** column of the target PVC or PV to view or download the YAML. | + +---------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/storage/elastic_volume_service_evs/index.rst b/umn/source/storage/elastic_volume_service_evs/index.rst new file mode 100644 index 0000000..3e63476 --- /dev/null +++ b/umn/source/storage/elastic_volume_service_evs/index.rst @@ -0,0 +1,22 @@ +:original_name: cce_10_0044.html + +.. _cce_10_0044: + +Elastic Volume Service (EVS) +============================ + +- :ref:`Overview ` +- :ref:`Using an Existing EVS Disk Through a Static PV ` +- :ref:`Using an EVS Disk Through a Dynamic PV ` +- :ref:`Dynamically Mounting an EVS Disk to a StatefulSet ` +- :ref:`Snapshots and Backups ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + overview + using_an_existing_evs_disk_through_a_static_pv + using_an_evs_disk_through_a_dynamic_pv + dynamically_mounting_an_evs_disk_to_a_statefulset + snapshots_and_backups diff --git a/umn/source/storage/elastic_volume_service_evs/overview.rst b/umn/source/storage/elastic_volume_service_evs/overview.rst new file mode 100644 index 0000000..eb83c62 --- /dev/null +++ b/umn/source/storage/elastic_volume_service_evs/overview.rst @@ -0,0 +1,49 @@ +:original_name: cce_10_0613.html + +.. _cce_10_0613: + +Overview +======== + +To achieve persistent storage, CCE allows you to mount the storage volumes created from Elastic Volume Service (EVS) disks to a path of a container. When the container is migrated within an AZ, the mounted EVS volumes are also migrated. By using EVS volumes, you can mount the remote file directory of a storage system to a container so that data in the data volume is permanently preserved. Even if the container is deleted, the data in the data volume is still stored in the storage system. + +EVS Disk Performance Specifications +----------------------------------- + +EVS performance metrics include: + +- IOPS: number of read/write operations performed by an EVS disk per second +- Throughput: amount of data read from and written into an EVS disk per second +- Read/write I/O latency: minimum interval between two consecutive read/write operations on an EVS disk + +.. 
table:: **Table 1** EVS disk performance specifications + + +----------------------------------+--------------------------------------+------------------------------------+----------------------------------+ + | Parameter | Ultra-high I/O | High I/O | Common I/O | + +==================================+======================================+====================================+==================================+ + | Max. capacity (GiB) | - System disk: 1,024 | - System disk: 1,024 | - System disk: 1,024 | + | | - Data disk: 32,768 | - Data disk: 32,768 | - Data disk: 32,768 | + +----------------------------------+--------------------------------------+------------------------------------+----------------------------------+ + | Max. IOPS | 50,000 | 5,000 | 2,200 | + +----------------------------------+--------------------------------------+------------------------------------+----------------------------------+ + | Max. throughput (MiB/s) | 350 | 150 | 50 | + +----------------------------------+--------------------------------------+------------------------------------+----------------------------------+ + | Burst IOPS limit | 16,000 | 5,000 | 2,200 | + +----------------------------------+--------------------------------------+------------------------------------+----------------------------------+ + | Disk IOPS | Min. (50,000, 1,800 + 50 x Capacity) | Min. (5,000, 1,800 + 8 x Capacity) | Min. (2,200, 500 + 2 x Capacity) | + +----------------------------------+--------------------------------------+------------------------------------+----------------------------------+ + | Disk throughput (MiB/s) | Min. (350, 120 + 0.5 x Capacity) | Min. (150, 100 + 0.15 x Capacity) | 50 | + +----------------------------------+--------------------------------------+------------------------------------+----------------------------------+ + | Single-queue access latency (ms) | 1 | 1-3 | 5-10 | + +----------------------------------+--------------------------------------+------------------------------------+----------------------------------+ + | API name | SSD | SAS | SATA | + +----------------------------------+--------------------------------------+------------------------------------+----------------------------------+ + +Application Scenarios +--------------------- + +EVS disks can be mounted in the following modes based on application scenarios: + +- :ref:`Using an Existing EVS Disk Through a Static PV `: static creation mode, where you use an existing EVS disk to create a PV and then mount storage to the workload through a PVC. This mode applies to scenarios where the underlying storage is available. +- :ref:`Using an EVS Disk Through a Dynamic PV `: dynamic creation mode, where you do not need to create EVS volumes in advance. Instead, specify a StorageClass during PVC creation and an EVS disk and a PV will be automatically created. This mode applies to scenarios where no underlying storage is available. +- :ref:`Dynamically Mounting an EVS Disk to a StatefulSet `: Only StatefulSets support this mode. Each pod is associated with a unique PVC and PV. After a pod is rescheduled, the original data can still be mounted to it based on the PVC name. This mode applies to StatefulSets with multiple pods. 
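As a worked example of the performance formulas in Table 1 above, the following shows how the single-disk limits are derived for a hypothetical 100 GiB data disk of the SSD and SAS types. The figures are calculated purely from the formulas in the table and are not additional specifications.

.. code-block::

   # Ultra-high I/O (SSD), 100 GiB data disk
   Disk IOPS               = Min. (50,000, 1,800 + 50 x 100)  = 6,800
   Disk throughput (MiB/s) = Min. (350, 120 + 0.5 x 100)      = 170

   # High I/O (SAS), 100 GiB data disk
   Disk IOPS               = Min. (5,000, 1,800 + 8 x 100)    = 2,600
   Disk throughput (MiB/s) = Min. (150, 100 + 0.15 x 100)     = 115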
diff --git a/umn/source/storage/snapshots_and_backups.rst b/umn/source/storage/elastic_volume_service_evs/snapshots_and_backups.rst similarity index 74% rename from umn/source/storage/snapshots_and_backups.rst rename to umn/source/storage/elastic_volume_service_evs/snapshots_and_backups.rst index e6b3937..259fd69 100644 --- a/umn/source/storage/snapshots_and_backups.rst +++ b/umn/source/storage/elastic_volume_service_evs/snapshots_and_backups.rst @@ -7,18 +7,19 @@ Snapshots and Backups CCE works with EVS to support snapshots. A snapshot is a complete copy or image of EVS disk data at a certain point of time, which can be used for data DR. -You can create snapshots to rapidly save the disk data at specified time points. In addition, you can use snapshots to create new disks so that the created disks will contain the snapshot data in the beginning. +You can create snapshots to rapidly save the disk data at a certain point of time. In addition, you can use snapshots to create disks so that the created disks will contain the snapshot data in the beginning. Precautions ----------- - The snapshot function is available **only for clusters of v1.15 or later** and requires the CSI-based everest add-on. - The subtype (common I/O, high I/O, or ultra-high I/O), disk mode (SCSI or VBD), data encryption, sharing status, and capacity of an EVS disk created from a snapshot must be the same as those of the disk associated with the snapshot. These attributes cannot be modified after being queried or set. -- Snapshots can be created only for available or in-use CSI disks. During the free trial, you can create up to 7 snapshots per disk. +- The disk must be available or in use. During the free trial, you can create up to 7 snapshots per disk. +- Snapshots can be created only for PVCs created using the storage class (whose name starts with csi) provided by the everest add-on. Snapshots cannot be created for PVCs created using the Flexvolume storage class whose name is ssd, sas, or sata. - Snapshot data of encrypted disks is stored encrypted, and that of non-encrypted disks is stored non-encrypted. -Application Scenario --------------------- +Application Scenarios +--------------------- The snapshot feature helps address your following needs: @@ -43,18 +44,18 @@ The snapshot feature helps address your following needs: Creating a Snapshot ------------------- -**Using the CCE Console** +**Using the CCE console** #. Log in to the CCE console. #. Click the cluster name and go to the cluster console. Choose **Storage** from the navigation pane, and click the **Snapshots and Backups** tab. #. Click **Create Snapshot** in the upper right corner. In the dialog box displayed, set related parameters. - **Snapshot Name**: Enter a snapshot name. - - **Storage**: Select a PVC. Only EVS PVCs can create a snapshot. + - **Storage**: Select an EVS PVC. #. Click **Create**. -**Creating from YAML** +**Using YAML** .. code-block:: @@ -64,26 +65,29 @@ Creating a Snapshot finalizers: - snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection - snapshot.storage.kubernetes.io/volumesnapshot-bound-protection - name: cce-disksnap-test + name: cce-disksnap-test # Snapshot name namespace: default spec: source: - persistentVolumeClaimName: pvc-evs-test # PVC name. Only an EVS PVC can be created. + persistentVolumeClaimName: pvc-evs-test # PVC name. Only an EVS PVC can be selected. 
volumeSnapshotClassName: csi-disk-snapclass -Using a Snapshot to Creating a PVC ---------------------------------- +Using a Snapshot to Create a PVC -------------------------------- The disk type, encryption setting, and disk mode of the created EVS PVC are consistent with those of the snapshot's source EVS disk. -**Using the CCE Console** +**Using the CCE console** #. Log in to the CCE console. #. Click the cluster name and go to the cluster console. Choose **Storage** from the navigation pane, and click the **Snapshots and Backups** tab. -#. Locate the snapshot for which you want to create a PVC, click **Create PVC**, and specify the PVC name in the displayed dialog box. +#. Locate the snapshot that you want to use for creating a PVC, click **Create PVC**, and configure PVC parameters in the displayed dialog box. + + - **PVC Name**: Enter a PVC name. + #. Click **Create**. -**Creating from YAML** +**Using YAML** .. code-block:: @@ -93,16 +97,16 @@ The disk type, encryption setting, and disk mode of the created EVS PVC are cons name: pvc-test namespace: default annotations: - everest.io/disk-volume-type: SSD # EVS disk type, which must be the same as that of the source EVS disk of the snapshot. + everest.io/disk-volume-type: SSD # EVS disk type, which must be the same as that of the snapshot's source EVS disk. labels: - failure-domain.beta.kubernetes.io/region: eu-de - failure-domain.beta.kubernetes.io/zone: + failure-domain.beta.kubernetes.io/region: # Replace the region with the one where the EVS disk is located. + failure-domain.beta.kubernetes.io/zone: # Replace the AZ with the one where the EVS disk is located. spec: accessModes: - ReadWriteOnce resources: requests: - storage: '10' + storage: 10Gi storageClassName: csi-disk dataSource: name: cce-disksnap-test # Snapshot name diff --git a/umn/source/storage/elastic_volume_service_evs/using_an_evs_disk_through_a_dynamic_pv.rst b/umn/source/storage/elastic_volume_service_evs/using_an_evs_disk_through_a_dynamic_pv.rst new file mode 100644 index 0000000..85f22e1 --- /dev/null +++ b/umn/source/storage/elastic_volume_service_evs/using_an_evs_disk_through_a_dynamic_pv.rst @@ -0,0 +1,346 @@ +:original_name: cce_10_0615.html + +.. _cce_10_0615: + +Using an EVS Disk Through a Dynamic PV +====================================== + +CCE allows you to specify a StorageClass to automatically create an EVS disk and the corresponding PV. This function is applicable when no underlying storage volume is available. + +Prerequisites +------------- + +- You have created a cluster and installed the CSI add-on (:ref:`everest `) in the cluster. +- If you want to create the storage resources using commands, use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. + +Constraints +----------- + +- EVS disks cannot be attached across AZs and cannot be used by multiple workloads, multiple pods of the same workload, or multiple tasks. Data sharing of a shared disk is not supported between nodes in a CCE cluster. If an EVS disk is attached to multiple nodes, I/O conflicts and data cache conflicts may occur. Therefore, create only one pod when creating a Deployment that uses EVS disks. + +- For clusters earlier than v1.19.10, if an HPA policy is used to scale out a workload with EVS disks attached, the existing pods cannot be read or written when a new pod is scheduled to another node.
+ + For clusters of v1.19.10 and later, if an HPA policy is used to scale out a workload with EVS disks attached, a new pod cannot be started because EVS disks cannot be attached. + +(Console) Automatically Creating an EVS Disk +-------------------------------------------- + +#. Log in to the CCE console and click the cluster name to access the cluster console. +#. Dynamically create a PVC and PV. + + a. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. Click **Create PVC** in the upper right corner. In the dialog box displayed, configure the PVC parameters. + + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+===========================================================================================================================================================================================================================================================+ + | PVC Type | In this example, select **EVS**. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PVC Name | Enter the PVC name, which must be unique in the same namespace. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Creation Method | - If no underlying storage is available, select **Dynamically provision** to create a PVC, PV, and underlying storage on the console in cascading mode. | + | | - If underlying storage is available, create a storage volume or use an existing storage volume to statically create a PVC based on whether a PV has been created. For details, see :ref:`Using an Existing EVS Disk Through a Static PV `. | + | | | + | | In this example, select **Dynamically provision**. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Storage Classes | The storage class for EVS disks is **csi-disk**. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | AZ | Select the AZ of the EVS disk. The AZ must be the same as that of the cluster node. | + | | | + | | .. note:: | + | | | + | | An EVS disk can only be mounted to a node in the same AZ. After an EVS disk is created, its AZ cannot be changed. 
| + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Disk Type | Select an EVS disk type. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Access Mode | EVS disks support only **ReadWriteOnce**, indicating that a storage volume can be mounted to one node in read/write mode. For details, see :ref:`Volume Access Modes `. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Capacity (GiB) | Capacity of the requested storage volume. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Encryption | You can select **Encryption** and an encryption key to encrypt underlying storage. Before using the encryption function, check whether the region where the EVS disk is located supports disk encryption. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + b. Click **Create**. + + You can choose **Storage** in the navigation pane and view the created PVC and PV on the **PersistentVolumeClaims (PVCs)** and **PersistentVolumes (PVs)** tab pages. + +#. Create an application. + + a. In the navigation pane on the left, click **Workloads**. In the right pane, click the **StatefulSets** tab. + + b. Click **Create Workload** in the upper right corner. On the displayed page, click **Data Storage** in the **Container Settings** area and click **Add Volume** to select **PVC**. + + Mount and use storage volumes, as shown in :ref:`Table 1 `. For details about other parameters, see :ref:`Workloads `. + + .. _cce_10_0615__cce_10_0614_table2529244345: + + .. 
table:: **Table 1** Mounting a storage volume + + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+===========================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | PVC | Select an existing EVS volume. | + | | | + | | An EVS volume cannot be repeatedly mounted to multiple workloads. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Mount Path | Enter a mount path, for example, **/tmp**. | + | | | + | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**; this action may cause container errors. You are advised to mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, causing container startup failures or workload creation failures. | + | | | + | | .. important:: | + | | | + | | NOTICE: | + | | When the container is mounted to a high-risk directory, you are advised to use an account with minimum permissions to start the container; otherwise, high-risk files on the host machine may be damaged. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Subpath | Enter a subpath, for example, **tmp**, indicating that data in the mount path of the container will be stored in the **tmp** folder of the volume. | + | | | + | | A subpath is used to mount a local volume so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. 
| + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Permission | - **Read-only**: You can only read the data in the mounted volumes. | + | | - **Read/Write**: You can modify the data volumes mounted to the path. Newly written data is not migrated if the container is migrated, which may cause data loss. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + In this example, the disk is mounted to the **/data** path of the container. The container data generated in this path is stored in the EVS disk. + + .. note:: + + A non-shared EVS disk cannot be attached to multiple pods in a workload. Otherwise, the pods cannot start properly. Ensure that the number of workload pods is 1 when you attach an EVS disk. + + c. After the configuration, click **Create Workload**. + + After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to :ref:`Verifying Data Persistence `. + +(kubectl) Automatically Creating an EVS Disk +-------------------------------------------- + +#. Use kubectl to connect to the cluster. +#. Use **StorageClass** to dynamically create a PVC and PV. + + a. Create the **pvc-evs-auto.yaml** file. + + .. code-block:: + + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: pvc-evs-auto + namespace: default + annotations: + everest.io/disk-volume-type: SAS # EVS disk type. + everest.io/crypt-key-id: # (Optional) Encryption key ID. Mandatory for an encrypted disk. + labels: + failure-domain.beta.kubernetes.io/region: # Region of the node where the application is to be deployed. + failure-domain.beta.kubernetes.io/zone: # AZ of the node where the application is to be deployed. + spec: + accessModes: + - ReadWriteOnce # The value must be ReadWriteOnce for EVS disks. + resources: + requests: + storage: 10Gi # EVS disk capacity, ranging from 1 to 32768. + storageClassName: csi-disk # Storage class type for EVS disks. + + .. table:: **Table 2** Key parameters + + +------------------------------------------+-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Description | + +==========================================+=======================+==============================================================================================================================================================================================+ + | failure-domain.beta.kubernetes.io/region | Yes | Region where the cluster is located. 
| + | | | | + | | | For details about the value of **region**, see `Regions and Endpoints `__. | + +------------------------------------------+-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | failure-domain.beta.kubernetes.io/zone | Yes | AZ where the EVS volume is created. It must be the same as the AZ planned for the workload. | + | | | | + | | | For details about the value of **zone**, see `Regions and Endpoints `__. | + +------------------------------------------+-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | everest.io/disk-volume-type | Yes | EVS disk type. All letters are in uppercase. | + | | | | + | | | - **SATA**: common I/O | + | | | - **SAS**: high I/O | + | | | - **SSD**: ultra-high I/O | + +------------------------------------------+-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | everest.io/crypt-key-id | No | This parameter is mandatory when an EVS disk is encrypted. Enter the encryption key ID selected during EVS disk creation. You can use a custom key or the default key named **evs/default**. | + | | | | + | | | To obtain a key ID, log in to the DEW console, locate the key to be encrypted, and copy the key ID. | + +------------------------------------------+-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | storage | Yes | Requested PVC capacity, in Gi. The value ranges from **1** to **32768**. | + +------------------------------------------+-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | storageClassName | Yes | The storage class name of the EVS volumes is **csi-disk**. | + +------------------------------------------+-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + b. Run the following command to create a PVC: + + .. code-block:: + + kubectl apply -f pvc-evs-auto.yaml + +#. Create an application. + + a. Create a file named **web-evs-auto.yaml**. In this example, the EVS volume is mounted to the **/data** path. + + .. code-block:: + + apiVersion: apps/v1 + kind: StatefulSet + metadata: + name: web-evs-auto + namespace: default + spec: + replicas: 1 + selector: + matchLabels: + app: web-evs-auto + serviceName: web-evs-auto # Headless Service name. + template: + metadata: + labels: + app: web-evs-auto + spec: + containers: + - name: container-1 + image: nginx:latest + volumeMounts: + - name: pvc-disk # Volume name, which must be the same as the volume name in the volumes field. + mountPath: /data # Location where the storage volume is mounted. + imagePullSecrets: + - name: default-secret + volumes: + - name: pvc-disk # Volume name, which can be customized. 
+ persistentVolumeClaim: + claimName: pvc-evs-auto # Name of the created PVC. + --- + apiVersion: v1 + kind: Service + metadata: + name: web-evs-auto # Headless Service name. + namespace: default + labels: + app: web-evs-auto + spec: + selector: + app: web-evs-auto + clusterIP: None + ports: + - name: web-evs-auto + targetPort: 80 + nodePort: 0 + port: 80 + protocol: TCP + type: ClusterIP + + b. Run the following command to create a workload to which the EVS volume is mounted: + + .. code-block:: + + kubectl apply -f web-evs-auto.yaml + + After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to :ref:`Verifying Data Persistence `. + +.. _cce_10_0615__section11593165910013: + +Verifying Data Persistence +-------------------------- + +#. View the deployed application and EVS volume files. + + a. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-evs-auto + + Expected output: + + .. code-block:: + + web-evs-auto-0 1/1 Running 0 38s + + b. Run the following command to check whether the EVS volume has been mounted to the **/data** path: + + .. code-block:: + + kubectl exec web-evs-auto-0 -- df | grep data + + Expected output: + + .. code-block:: + + /dev/sdc 10255636 36888 10202364 0% /data + + c. Run the following command to view the files in the **/data** path: + + .. code-block:: + + kubectl exec web-evs-auto-0 -- ls /data + + Expected output: + + .. code-block:: + + lost+found + +#. Run the following command to create a file named **static** in the **/data** path: + + .. code-block:: + + kubectl exec web-evs-auto-0 -- touch /data/static + +#. Run the following command to view the files in the **/data** path: + + .. code-block:: + + kubectl exec web-evs-auto-0 -- ls /data + + Expected output: + + .. code-block:: + + lost+found + static + +#. Run the following command to delete the pod named **web-evs-auto-0**: + + .. code-block:: + + kubectl delete pod web-evs-auto-0 + + Expected output: + + .. code-block:: + + pod "web-evs-auto-0" deleted + +#. After the deletion, the StatefulSet controller automatically creates a replica with the same name. Run the following command to check whether the files in the **/data** path have been modified: + + .. code-block:: + + kubectl exec web-evs-auto-0 -- ls /data + + Expected output: + + .. code-block:: + + lost+found + static + + If the **static** file still exists, the data in the EVS volume can be stored persistently. + +Related Operations +------------------ + +You can also perform the operations listed in :ref:`Table 3 `. + +.. _cce_10_0615__table1619535674020: + +.. 
table:: **Table 3** Related operations + + +---------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Operation | Description | Procedure | + +=======================================+====================================================================================================================================================+=============================================================================================================================================================================================+ + | Expanding the capacity of an EVS disk | Quickly expand the capacity of a mounted EVS disk on the CCE console. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. Click **More** in the **Operation** column of the target PVC and select **Scale-out**. | + | | | #. Enter the capacity to be added and click **OK**. | + +---------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Viewing events | You can view event names, event types, number of occurrences, Kubernetes events, first occurrence time, and last occurrence time of the PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View Events** in the **Operation** column of the target PVC or PV to view events generated within one hour (event data is retained for one hour). | + +---------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Viewing a YAML file | You can view, copy, and download the YAML files of a PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View YAML** in the **Operation** column of the target PVC or PV to view or download the YAML. 
| + +---------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/storage/elastic_volume_service_evs/using_an_existing_evs_disk_through_a_static_pv.rst b/umn/source/storage/elastic_volume_service_evs/using_an_existing_evs_disk_through_a_static_pv.rst new file mode 100644 index 0000000..896450c --- /dev/null +++ b/umn/source/storage/elastic_volume_service_evs/using_an_existing_evs_disk_through_a_static_pv.rst @@ -0,0 +1,454 @@ +:original_name: cce_10_0614.html + +.. _cce_10_0614: + +Using an Existing EVS Disk Through a Static PV +============================================== + +CCE allows you to create a PV using an existing EVS disk. After the PV is created, you can create a PVC and bind it to the PV. This mode applies to scenarios where the underlying storage is available. + +Prerequisites +------------- + +- You have created a cluster and installed the CSI add-on (:ref:`everest `) in the cluster. +- You have created an EVS disk that meets the following requirements: + + - The existing EVS disk cannot be a system disk, DSS disk, or shared disk. + - The device type of the EVS disk must be **SCSI** (the default device type is **VBD** when you purchase an EVS disk). + - The EVS disk must be available and not used by other resources. + - The AZ of the EVS disk must be the same as that of the cluster node. Otherwise, the EVS disk cannot be mounted and the pod cannot start. + - If the EVS disk is encrypted, the key must be available. + - EVS disks that have partitions or use non-ext4 file systems are not supported. + +- If you want to create a cluster using commands, use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. + +Constraints +----------- + +- EVS disks cannot be attached across AZs and cannot be used by multiple workloads, multiple pods of the same workload, or multiple tasks. Data sharing of a shared disk is not supported between nodes in a CCE cluster. If an EVS disk is attacked to multiple nodes, I/O conflicts and data cache conflicts may occur. Therefore, create only one pod when creating a Deployment that uses EVS disks. + +- For clusters earlier than v1.19.10, if an HPA policy is used to scale out a workload with EVS disks attached, the existing pods cannot be read or written when a new pod is scheduled to another node. + + For clusters of v1.19.10 and later, if an HPA policy is used to scale out a workload with EVS disks attached, a new pod cannot be started because EVS disks cannot be attached. + +Using an Existing EVS Disk on the Console +----------------------------------------- + +#. Log in to the CCE console and click the cluster name to access the cluster console. +#. Statically create a PVC and PV. + + a. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. Click **Create PVC** in the upper right corner. In the dialog box displayed, configure the PVC parameters. 
+ + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=============================================================================================================================================================================================================+ + | PVC Type | In this example, select **EVS**. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PVC Name | Enter the PVC name, which must be unique in the same namespace. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Creation Method | - If underlying storage is available, create a storage volume or use an existing storage volume to statically create a PVC based on whether a PV has been created. | + | | - If no underlying storage is available, select **Dynamically provision**. For details, see :ref:`Using an EVS Disk Through a Dynamic PV `. | + | | | + | | In this example, select **Create new** to create a PV and PVC at the same time on the console. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PV\ :sup:`a` | Select an existing PV in the cluster. Create a PV in advance. For details, see "Creating a storage volume" in :ref:`Related Operations `. | + | | | + | | In this example, you do not need to set this parameter. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | EVS\ :sup:`b` | Click **Select EVS**. On the displayed page, select the EVS disk that meets your requirements and click **OK**. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PV Name\ :sup:`b` | Enter the PV name, which must be unique in the same cluster. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Access Mode\ :sup:`b` | EVS disks support only **ReadWriteOnce**, indicating that a storage volume can be mounted to one node in read/write mode. For details, see :ref:`Volume Access Modes `. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Reclaim Policy\ :sup:`b` | You can select **Delete** or **Retain** to specify the reclaim policy of the underlying storage when the PVC is deleted. For details, see :ref:`PV Reclaim Policy `. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + .. note:: + + a: The parameter is available when **Creation Method** is set to **Use existing**. + + b: The parameter is available when **Creation Method** is set to **Create new**. + + b. Click **Create** to create a PVC and a PV. + + You can choose **Storage** in the navigation pane and view the created PVC and PV on the **PersistentVolumeClaims (PVCs)** and **PersistentVolumes (PVs)** tab pages. + +#. Create an application. + + a. In the navigation pane on the left, click **Workloads**. In the right pane, click the **StatefulSets** tab. + + b. Click **Create Workload** in the upper right corner. On the displayed page, click **Data Storage** in the **Container Settings** area and click **Add Volume** to select **PVC**. + + Mount and use storage volumes, as shown in :ref:`Table 1 `. For details about other parameters, see :ref:`Workloads `. + + .. _cce_10_0614__table2529244345: + + .. table:: **Table 1** Mounting a storage volume + + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+===========================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | PVC | Select an existing EVS volume. | + | | | + | | An EVS volume cannot be repeatedly mounted to multiple workloads. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Mount Path | Enter a mount path, for example, **/tmp**. | + | | | + | | This parameter indicates the container path to which a data volume will be mounted. 
Do not mount the volume to a system directory such as **/** or **/var/run**; this action may cause container errors. You are advised to mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, causing container startup failures or workload creation failures. | + | | | + | | .. important:: | + | | | + | | NOTICE: | + | | When the container is mounted to a high-risk directory, you are advised to use an account with minimum permissions to start the container; otherwise, high-risk files on the host machine may be damaged. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Subpath | Enter a subpath, for example, **tmp**, indicating that data in the mount path of the container will be stored in the **tmp** folder of the volume. | + | | | + | | A subpath is used to mount a local volume so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Permission | - **Read-only**: You can only read the data in the mounted volumes. | + | | - **Read/Write**: You can modify the data volumes mounted to the path. Newly written data is not migrated if the container is migrated, which may cause data loss. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + In this example, the disk is mounted to the **/data** path of the container. The container data generated in this path is stored in the EVS disk. + + .. note:: + + A non-shared EVS disk cannot be attached to multiple pods in a workload. Otherwise, the pods cannot start properly. Ensure that the number of workload pods is 1 when you attach an EVS disk. + + c. After the configuration, click **Create Workload**. + + After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to :ref:`Verifying Data Persistence `. + +(kubectl) Using an Existing EVS Disk +------------------------------------ + +#. Use kubectl to connect to the cluster. +#. Create a PV. If a PV has been created in your cluster, skip this step. + + a. .. 
_cce_10_0614__li162841212145314: + + Create the **pv-evs.yaml** file. + + .. code-block:: + + apiVersion: v1 + kind: PersistentVolume + metadata: + annotations: + pv.kubernetes.io/provisioned-by: everest-csi-provisioner + everest.io/reclaim-policy: retain-volume-only # (Optional) The PV is deleted while the underlying volume is retained. + name: pv-evs # PV name. + labels: + failure-domain.beta.kubernetes.io/region: # Region of the node where the application is to be deployed. + failure-domain.beta.kubernetes.io/zone: # AZ of the node where the application is to be deployed. + spec: + accessModes: + - ReadWriteOnce # Access mode. The value is fixed to ReadWriteOnce for EVS disks. + capacity: + storage: 10Gi # EVS disk capacity, in the unit of Gi. The value ranges from 1 to 32768. + csi: + driver: disk.csi.everest.io # Dependent storage driver for the mounting. + fsType: ext4 + volumeHandle: # Volume ID of the EVS disk. + volumeAttributes: + everest.io/disk-mode: SCSI # Device type of the EVS disk. Only SCSI is supported. + everest.io/disk-volume-type: SAS # EVS disk type. + storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner + everest.io/crypt-key-id: # (Optional) Encryption key ID. Mandatory for an encrypted disk. + persistentVolumeReclaimPolicy: Delete # Reclaim policy. + storageClassName: csi-disk # Storage class name. The value must be csi-disk for EVS disks. + + .. table:: **Table 2** Key parameters + + +-----------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Description | + +===============================================+=======================+===========================================================================================================================================================================================================================================================================================================================+ + | everest.io/reclaim-policy: retain-volume-only | No | Optional. | + | | | | + | | | Currently, only **retain-volume-only** is supported. | + | | | | + | | | This field is valid only when the everest version is 1.2.9 or later and the reclaim policy is **Delete**. If the reclaim policy is **Delete** and the current value is **retain-volume-only**, the associated PV is deleted while the underlying storage volume is retained, when a PVC is deleted. | + +-----------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | failure-domain.beta.kubernetes.io/region | Yes | Region where the cluster is located. | + | | | | + | | | For details about the value of **region**, see `Regions and Endpoints `__. 
| + +-----------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | failure-domain.beta.kubernetes.io/zone | Yes | AZ where the EVS volume is created. It must be the same as the AZ planned for the workload. | + | | | | + | | | For details about the value of **zone**, see `Regions and Endpoints `__. | + +-----------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | volumeHandle | Yes | Volume ID of the EVS disk. | + | | | | + | | | To obtain the volume ID, log in to the **Cloud Server Console**. In the navigation pane, choose **Elastic Volume Service** > **Disks**. Click the name of the target EVS disk to go to its details page. On the **Summary** tab page, click the copy button after **ID**. | + +-----------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | everest.io/disk-volume-type | Yes | EVS disk type. All letters are in uppercase. | + | | | | + | | | - **SATA**: common I/O | + | | | - **SAS**: high I/O | + | | | - **SSD**: ultra-high I/O | + +-----------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | everest.io/crypt-key-id | No | Mandatory when the EVS disk is encrypted. Enter the encryption key ID selected during EVS disk creation. | + | | | | + | | | To obtain the encryption key ID, log in to the **Cloud Server Console**. In the navigation pane, choose **Elastic Volume Service** > **Disks**. Click the name of the target EVS disk to go to its details page. On the **Summary** tab page, copy the value of **KMS Key ID** in the **Configuration Information** area. | + +-----------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | persistentVolumeReclaimPolicy | Yes | A reclaim policy is supported when the cluster version is or later than 1.19.10 and the everest version is or later than 1.2.9. | + | | | | + | | | The **Delete** and **Retain** reclaim policies are supported. For details, see :ref:`PV Reclaim Policy `. 
If high data security is required, you are advised to select **Retain** to prevent data from being deleted by mistake. | + | | | | + | | | **Delete**: | + | | | | + | | | - If **everest.io/reclaim-policy** is not specified, both the PV and EVS volume are deleted when a PVC is deleted. | + | | | - If **everest.io/reclaim-policy** is set to **retain-volume-only**, when a PVC is deleted, the PV is deleted but the EVS resources are retained. | + | | | | + | | | **Retain**: When a PVC is deleted, the PV and underlying storage resources are not deleted. Instead, you must manually delete these resources. After that, the PV is in the **Released** status and cannot be bound to the PVC again. | + +-----------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | storageClassName | Yes | The storage class name for EVS disks is **csi-disk**. | + +-----------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + b. Run the following command to create a PV: + + .. code-block:: + + kubectl apply -f pv-evs.yaml + +#. Create a PVC. + + a. Create the **pvc-evs.yaml** file. + + .. code-block:: + + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: pvc-evs + namespace: default + annotations: + everest.io/disk-volume-type: SAS # EVS disk type. + everest.io/crypt-key-id: # (Optional) Encryption key ID. Mandatory for an encrypted disk. + labels: + failure-domain.beta.kubernetes.io/region: # Region of the node where the application is to be deployed. + failure-domain.beta.kubernetes.io/zone: # AZ of the node where the application is to be deployed. + spec: + accessModes: + - ReadWriteOnce # The value must be ReadWriteOnce for EVS disks. + resources: + requests: + storage: 10Gi # EVS disk capacity, ranging from 1 to 32768. The value must be the same as the storage size of the existing PV. + storageClassName: csi-disk # Storage class type for EVS disks. + volumeName: pv-evs # PV name. + + .. table:: **Table 3** Key parameters + + +------------------------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Description | + +==========================================+=======================+======================================================================================================================================+ + | failure-domain.beta.kubernetes.io/region | Yes | Region where the cluster is located. | + | | | | + | | | For details about the value of **region**, see `Regions and Endpoints `__. | + +------------------------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+ + | failure-domain.beta.kubernetes.io/zone | Yes | AZ where the EVS volume is created.
It must be the same as the AZ planned for the workload. | + | | | | + | | | For details about the value of **zone**, see `Regions and Endpoints `__. | + +------------------------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+ + | storage | Yes | Requested capacity in the PVC, in Gi. | + | | | | + | | | The value must be the same as the storage size of the existing PV. | + +------------------------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+ + | volumeName | Yes | PV name, which must be the same as the PV name in :ref:`1 `. | + +------------------------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+ + | storageClassName | Yes | Storage class name, which must be the same as the storage class of the PV in :ref:`1 `. | + | | | | + | | | The storage class name of the EVS volumes is **csi-disk**. | + +------------------------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------+ + + b. Run the following command to create a PVC: + + .. code-block:: + + kubectl apply -f pvc-evs.yaml + +#. Create an application. + + a. Create a file named **web-evs.yaml**. In this example, the EVS volume is mounted to the **/data** path. + + .. code-block:: + + apiVersion: apps/v1 + kind: StatefulSet + metadata: + name: web-evs + namespace: default + spec: + replicas: 1 # The number of workload replicas that use the EVS volume must be 1. + selector: + matchLabels: + app: web-evs + serviceName: web-evs # Headless Service name. + template: + metadata: + labels: + app: web-evs + spec: + containers: + - name: container-1 + image: nginx:latest + volumeMounts: + - name: pvc-disk # Volume name, which must be the same as the volume name in the volumes field. + mountPath: /data # Location where the storage volume is mounted. + imagePullSecrets: + - name: default-secret + volumes: + - name: pvc-disk # Volume name, which can be customized. + persistentVolumeClaim: + claimName: pvc-evs # Name of the created PVC. + --- + apiVersion: v1 + kind: Service + metadata: + name: web-evs # Headless Service name. + namespace: default + labels: + app: web-evs + spec: + selector: + app: web-evs + clusterIP: None + ports: + - name: web-evs + targetPort: 80 + nodePort: 0 + port: 80 + protocol: TCP + type: ClusterIP + + b. Run the following command to create a workload to which the EVS volume is mounted: + + .. code-block:: + + kubectl apply -f web-evs.yaml + + After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to :ref:`Verifying Data Persistence `. + +.. _cce_10_0614__section11593165910013: + +Verifying Data Persistence +-------------------------- + +#. View the deployed application and EVS volume files. + + a. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-evs + + Expected output: + + .. code-block:: + + web-evs-0 1/1 Running 0 38s + + b. Run the following command to check whether the EVS volume has been mounted to the **/data** path: + + .. 
code-block:: + + kubectl exec web-evs-0 -- df | grep data + + Expected output: + + .. code-block:: + + /dev/sdc 10255636 36888 10202364 0% /data + + c. Run the following command to view the files in the **/data** path: + + .. code-block:: + + kubectl exec web-evs-0 -- ls /data + + Expected output: + + .. code-block:: + + lost+found + +#. Run the following command to create a file named **static** in the **/data** path: + + .. code-block:: + + kubectl exec web-evs-0 -- touch /data/static + +#. Run the following command to view the files in the **/data** path: + + .. code-block:: + + kubectl exec web-evs-0 -- ls /data + + Expected output: + + .. code-block:: + + lost+found + static + +#. Run the following command to delete the pod named **web-evs-0**: + + .. code-block:: + + kubectl delete pod web-evs-0 + + Expected output: + + .. code-block:: + + pod "web-evs-0" deleted + +#. After the deletion, the StatefulSet controller automatically creates a replica with the same name. Run the following command to check whether the files in the **/data** path have been modified: + + .. code-block:: + + kubectl exec web-evs-0 -- ls /data + + Expected output: + + .. code-block:: + + lost+found + static + + If the **static** file still exists, the data in the EVS volume can be stored persistently. + +.. _cce_10_0614__section16505832153318: + +Related Operations +------------------ + +You can also perform the operations listed in :ref:`Table 4 `. + +.. _cce_10_0614__table1619535674020: + +.. table:: **Table 4** Related operations + + +---------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Operation | Description | Procedure | + +=======================================+====================================================================================================================================================+====================================================================================================================================================================================================================================+ + | Creating a storage volume (PV) | Create a PV on the CCE console. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumes (PVs)** tab. Click **Create Volume** in the upper right corner. In the dialog box displayed, configure the parameters. | + | | | | + | | | - **Volume Type**: Select **EVS**. | + | | | - **EVS**: Click **Select EVS**. On the displayed page, select the EVS disk that meets your requirements and click **OK**. | + | | | - **PV Name**: Enter the PV name, which must be unique in the same cluster. | + | | | - **Access Mode**: EVS disks support only **ReadWriteOnce**, indicating that a storage volume can be mounted to one node in read/write mode. For details, see :ref:`Volume Access Modes `. | + | | | - **Reclaim Policy**: **Delete** or **Retain**. For details, see :ref:`PV Reclaim Policy `. | + | | | | + | | | #. Click **Create**. 
| + +---------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Expanding the capacity of an EVS disk | Quickly expand the capacity of a mounted EVS disk on the CCE console. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. Click **More** in the **Operation** column of the target PVC and select **Scale-out**. | + | | | #. Enter the capacity to be added and click **OK**. | + +---------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Viewing events | You can view event names, event types, number of occurrences, Kubernetes events, first occurrence time, and last occurrence time of the PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View Events** in the **Operation** column of the target PVC or PV to view events generated within one hour (event data is retained for one hour). | + +---------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Viewing a YAML file | You can view, copy, and download the YAML files of a PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View YAML** in the **Operation** column of the target PVC or PV to view or download the YAML. | + +---------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/storage/ephemeral_volumes_emptydir/importing_an_ev_to_a_storage_pool.rst b/umn/source/storage/ephemeral_volumes_emptydir/importing_an_ev_to_a_storage_pool.rst new file mode 100644 index 0000000..56d5ca2 --- /dev/null +++ b/umn/source/storage/ephemeral_volumes_emptydir/importing_an_ev_to_a_storage_pool.rst @@ -0,0 +1,41 @@ +:original_name: cce_10_0725.html + +.. _cce_10_0725: + +Importing an EV to a Storage Pool +================================= + +CCE allows you to use LVM to combine data volumes on nodes into a storage pool (VolumeGroup) and create LVs for containers to mount. 
Before creating a local EV, import the data disk of the node to the storage pool. + +Constraints +----------- + +- Local EVs are supported only when the cluster version is v1.21.2-r0 or later and the everest add-on version is 1.2.29 or later. + +- The first data disk (used by container runtime and the kubelet component) on a node cannot be imported as a storage pool. +- Storage pools in striped mode do not support scale-out. After scale-out, fragmented space may be generated and the storage pool cannot be used. +- Storage pools cannot be scaled in or deleted. +- If disks in a storage pool on a node are deleted, the storage pool will malfunction. + +Importing a Storage Pool +------------------------ + +**Imported during node creation** + +When creating a node, you can add a data disk to the node in **Storage Settings** and import the data disk to the storage pool as an EV. For details, see :ref:`Creating a Node `. + +**Imported manually** + +If no EV is imported during node creation, or the capacity of the current storage volume is insufficient, you can manually import a storage pool. + +#. Go to the ECS console and add a SCSI disk to the node. +#. Log in to the CCE console and click the cluster name to access the cluster console. +#. In the navigation pane, choose **Storage** and switch to the **Storage Pool** tab. +#. View the node to which the disk has been added and select **Import as EV**. You can select a write mode during the import. + + .. note:: + + If the manually attached disk is not displayed in the storage pool, wait for 1 minute and refresh the list. + + - **Linear**: A linear logical volume integrates one or more physical volumes. Data is written to the next physical volume when the previous one is used up. + - **Striped**: A striped logical volume stripes data into blocks of the same size and stores them in multiple physical volumes in sequence, allowing data to be concurrently read and written. Select this option only when there are multiple volumes. diff --git a/umn/source/storage/ephemeral_volumes_emptydir/index.rst b/umn/source/storage/ephemeral_volumes_emptydir/index.rst new file mode 100644 index 0000000..4b9858d --- /dev/null +++ b/umn/source/storage/ephemeral_volumes_emptydir/index.rst @@ -0,0 +1,20 @@ +:original_name: cce_10_0636.html + +.. _cce_10_0636: + +Ephemeral Volumes (emptyDir) +============================ + +- :ref:`Overview ` +- :ref:`Importing an EV to a Storage Pool ` +- :ref:`Using a Local EV ` +- :ref:`Using a Temporary Path ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + overview + importing_an_ev_to_a_storage_pool + using_a_local_ev + using_a_temporary_path diff --git a/umn/source/storage/ephemeral_volumes_emptydir/overview.rst b/umn/source/storage/ephemeral_volumes_emptydir/overview.rst new file mode 100644 index 0000000..f96b681 --- /dev/null +++ b/umn/source/storage/ephemeral_volumes_emptydir/overview.rst @@ -0,0 +1,26 @@ +:original_name: cce_10_0637.html + +.. _cce_10_0637: + +Overview +======== + +Introduction +------------ + +Some applications require additional storage, but whether the data is still available after a restart is not important. For example, although cache services are limited by memory size, cache services can move infrequently used data to storage slower than memory. As a result, overall performance is not impacted significantly. Other applications require read-only data injected as files, such as configuration data or secrets. + +`Ephemeral volumes `__ (EVs) in Kubernetes are designed for the above scenario. 
EVs are created and deleted together with pods following the pod lifecycle. + +Common EVs in Kubernetes: + +- :ref:`emptyDir `: empty at pod startup, with storage coming locally from the kubelet base directory (usually the root disk) or memory. emptyDir is allocated from the `EV of the node `__. If data from other sources (such as log files or image tiering data) occupies the temporary storage, the storage capacity may be insufficient. +- :ref:`ConfigMap `: Kubernetes data of the ConfigMap type is mounted to pods as data volumes. +- :ref:`Secret `: Kubernetes data of the Secret type is mounted to pods as data volumes. + +Constraints +----------- + +- Local EVs are supported only when the cluster version is v1.21.2-r0 or later and the everest add-on version is 1.2.29 or later. +- Do not manually delete the corresponding storage pool or detach data disks from the node. Otherwise, exceptions such as data loss may occur. +- Ensure that the **/var/lib/kubelet/pods/** directory is not mounted to the pod on the node. Otherwise, the pod, mounted with such volumes, may fail to be deleted. diff --git a/umn/source/storage/ephemeral_volumes_emptydir/using_a_local_ev.rst b/umn/source/storage/ephemeral_volumes_emptydir/using_a_local_ev.rst new file mode 100644 index 0000000..18de523 --- /dev/null +++ b/umn/source/storage/ephemeral_volumes_emptydir/using_a_local_ev.rst @@ -0,0 +1,107 @@ +:original_name: cce_10_0726.html + +.. _cce_10_0726: + +Using a Local EV +================ + +Local Ephemeral Volumes (EVs) are stored in EV :ref:`storage pools `. Local EVs deliver better performance than the default storage medium of native emptyDir and support scale-out. + +Prerequisites +------------- + +- You have created a cluster and installed the CSI add-on (:ref:`everest `) in the cluster. +- If you want to create a cluster using commands, use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. +- To use a local EV, import a data disk of a node to the local EV storage pool. For details, see :ref:`Importing an EV to a Storage Pool `. + +Constraints +----------- + +- Local EVs are supported only when the cluster version is v1.21.2-r0 or later and the everest add-on version is 1.2.29 or later. +- Do not manually delete the corresponding storage pool or detach data disks from the node. Otherwise, exceptions such as data loss may occur. +- The **/var/lib/kubelet/pods/** directory cannot be mounted to pods running on the node. Otherwise, the pods mounted with such volumes may fail to be deleted. + +Using the Console to Mount a Local EV +------------------------------------- + +#. Log in to the CCE console and click the cluster name to access the cluster console. + +#. In the navigation pane on the left, click **Workloads**. In the right pane, click the **Deployments** tab. + +#. Click **Create Workload** in the upper right corner of the page. In the **Container Settings** area, click the **Data Storage** tab and click **Add Volume** > **Local Ephemeral Volume (emptyDir)**. + +#. Mount and use storage volumes, as shown in :ref:`Table 1 `. For details about other parameters, see :ref:`Workloads `. + + .. _cce_10_0726__table2529244345: + + .. 
table:: **Table 1** Mounting a local EV + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=============================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | Capacity | Capacity of the requested storage volume. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Mount Path | Enter a mount path, for example, **/tmp**. | + | | | + | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**. Otherwise, containers will be malfunctional. Mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, causing container startup failures or workload creation failures. | + | | | + | | .. important:: | + | | | + | | NOTICE: | + | | If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Subpath | Enter a subpath, for example, **tmp**, indicating that data in the mount path of the container will be stored in the **tmp** folder of the volume. | + | | | + | | A subpath is used to mount a local volume so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Permission | - **Read-only**: You can only read the data in the mounted volumes. | + | | - **Read/Write**: You can modify the data volumes mounted to the path. Newly written data is not migrated if the container is migrated, which may cause data loss. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. After the configuration, click **Create Workload**. + +Using kubectl to Mount a Local EV +--------------------------------- + +#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. + +#. Create a file named **nginx-emptydir.yaml** and edit it. + + **vi nginx-emptydir.yaml** + + Content of the YAML file: + + .. code-block:: + + apiVersion: apps/v1 + kind: Deployment + metadata: + name: nginx-emptydir + namespace: default + spec: + replicas: 2 + selector: + matchLabels: + app: nginx-emptydir + template: + metadata: + labels: + app: nginx-emptydir + spec: + containers: + - name: container-1 + image: nginx:latest + volumeMounts: + - name: vol-emptydir # Volume name, which must be the same as the volume name in the volumes field. + mountPath: /tmp # Path to which an EV is mounted. + imagePullSecrets: + - name: default-secret + volumes: + - name: vol-emptydir # Volume name, which can be customized. + emptyDir: + medium: LocalVolume # If the disk medium of emptyDir is set to LocalVolume, the local EV is used. + sizeLimit: 1Gi # Volume capacity. + +#. Create a workload. + + **kubectl apply -f nginx-emptydir.yaml** diff --git a/umn/source/storage/ephemeral_volumes_emptydir/using_a_temporary_path.rst b/umn/source/storage/ephemeral_volumes_emptydir/using_a_temporary_path.rst new file mode 100644 index 0000000..07dabf9 --- /dev/null +++ b/umn/source/storage/ephemeral_volumes_emptydir/using_a_temporary_path.rst @@ -0,0 +1,102 @@ +:original_name: cce_10_0638.html + +.. _cce_10_0638: + +Using a Temporary Path +====================== + +A temporary path is of the Kubernetes-native emptyDir type. Its lifecycle is the same as that of a pod. Memory can be specified as the storage medium. When the pod is deleted, the emptyDir volume is deleted and its data is lost. + +Using the Console to Use a Temporary Path +----------------------------------------- + +#. Log in to the CCE console and click the cluster name to access the cluster console. + +#. In the navigation pane on the left, click **Workloads**. In the right pane, click the **Deployments** tab. + +#. Click **Create Workload** in the upper right corner of the page. In the **Container Settings** area, click the **Data Storage** tab and click **Add Volume** > **emptyDir**. + +#. 
Mount and use storage volumes, as shown in :ref:`Table 1 `. For details about other parameters, see :ref:`Workloads `. + + .. _cce_10_0638__table1867417102475: + + .. table:: **Table 1** Mounting an EV + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=============================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | Storage Medium | **Memory**: | + | | | + | | - You can select this option to improve the running speed, but the storage capacity is subject to the memory size. This mode is applicable when data volume is small and efficient read and write is required. | + | | - If this function is disabled, data is stored in hard disks, which applies to a large amount of data with low requirements on reading and writing efficiency. | + | | | + | | .. note:: | + | | | + | | - If **Memory** is selected, pay attention to the memory size. If the storage capacity exceeds the memory size, an OOM event occurs. | + | | - If **Memory** is selected, the size of an EV is the same as pod specifications. | + | | - If **Memory** is not selected, EVs will not occupy the system memory. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Mount Path | Enter a mount path, for example, **/tmp**. | + | | | + | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**. Otherwise, containers will be malfunctional. Mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, causing container startup failures or workload creation failures. | + | | | + | | .. important:: | + | | | + | | NOTICE: | + | | If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Subpath | Enter a subpath, for example, **tmp**, indicating that data in the mount path of the container will be stored in the **tmp** folder of the volume. | + | | | + | | A subpath is used to mount a local volume so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Permission | - **Read-only**: You can only read the data in the mounted volumes. | + | | - **Read/Write**: You can modify the data volumes mounted to the path. Newly written data is not migrated if the container is migrated, which may cause data loss. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. After the configuration, click **Create Workload**. + +Using kubectl to Use a Temporary Path +------------------------------------- + +#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. + +#. Create a file named **nginx-emptydir.yaml** and edit it. + + **vi nginx-emptydir.yaml** + + Content of the YAML file: + + .. code-block:: + + apiVersion: apps/v1 + kind: Deployment + metadata: + name: nginx-emptydir + namespace: default + spec: + replicas: 2 + selector: + matchLabels: + app: nginx-emptydir + template: + metadata: + labels: + app: nginx-emptydir + spec: + containers: + - name: container-1 + image: nginx:latest + volumeMounts: + - name: vol-emptydir # Volume name, which must be the same as the volume name in the volumes field. + mountPath: /tmp # Path to which an EV is mounted. + imagePullSecrets: + - name: default-secret + volumes: + - name: vol-emptydir # Volume name, which can be customized. + emptyDir: + medium: Memory # EV disk medium: If this parameter is set to Memory, the memory is enabled. If this parameter is left blank, the native default storage medium is used. + sizeLimit: 1Gi # Volume capacity. + +#. Create a workload. + + **kubectl apply -f nginx-emptydir.yaml** diff --git a/umn/source/storage/hostpath.rst b/umn/source/storage/hostpath.rst new file mode 100644 index 0000000..95576e9 --- /dev/null +++ b/umn/source/storage/hostpath.rst @@ -0,0 +1,111 @@ +:original_name: cce_10_0377.html + +.. 
_cce_10_0377: + +hostPath +======== + +hostPath is used for mounting the file directory of the host where the container is located to the specified mount point of the container. If the container needs to access **/etc/hosts**, use hostPath to map **/etc/hosts**. + +.. important:: + + - Avoid using hostPath volumes as much as possible, as they are prone to security risks. If hostPath volumes must be used, they can only be applied to files or paths and mounted in read-only mode. + - After the pod to which a hostPath volume is mounted is deleted, the data in the hostPath volume is retained. + +Mounting a hostPath Volume on the Console +----------------------------------------- + +You can mount a path on the host to a specified container path. A hostPath volume is usually used to **store workload logs permanently** or used by workloads that need to **access internal data structure of the Docker engine on the host**. + +#. Log in to the CCE console. + +#. When creating a workload, click **Data Storage** in the **Container Settings** area. Click **Add Volume** and choose **hostPath** from the drop-down list. + +#. Set parameters for adding a local volume, as listed in :ref:`Table 1 `. + + .. _cce_10_0377__table14312815449: + + .. table:: **Table 1** Setting parameters for mounting a hostPath volume + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=============================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | Storage Type | Select **HostPath**. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Host Path | Path of the host to which the local volume is to be mounted, for example, **/etc/hosts**. | + | | | + | | .. note:: | + | | | + | | **Host Path** cannot be set to the root directory **/**. Otherwise, the mounting fails. 
Mount paths can be as follows: | + | | | + | | - **/opt/xxxx** (excluding **/opt/cloud**) | + | | - **/mnt/xxxx** (excluding **/mnt/paas**) | + | | - **/tmp/xxx** | + | | - **/var/xxx** (excluding key directories such as **/var/lib**, **/var/script**, and **/var/paas**) | + | | - **/xxxx** (It cannot conflict with the system directory, such as **bin**, **lib**, **home**, **root**, **boot**, **dev**, **etc**, **lost+found**, **mnt**, **proc**, **sbin**, **srv**, **tmp**, **var**, **media**, **opt**, **selinux**, **sys**, and **usr**.) | + | | | + | | Do not set this parameter to **/home/paas**, **/var/paas**, **/var/lib**, **/var/script**, **/mnt/paas**, or **/opt/cloud**. Otherwise, the system or node installation will fail. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Mount Path | Enter a mount path, for example, **/tmp**. | + | | | + | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**. Otherwise, containers will be malfunctional. Mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, causing container startup failures or workload creation failures. | + | | | + | | .. important:: | + | | | + | | NOTICE: | + | | If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Subpath | Enter a subpath, for example, **tmp**, indicating that data in the mount path of the container will be stored in the **tmp** folder of the volume. | + | | | + | | A subpath is used to mount a local volume so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Permission | - **Read-only**: You can only read the data in the mounted volumes. | + | | - **Read/Write**: You can modify the data volumes mounted to the path. Newly written data is not migrated if the container is migrated, which may cause data loss. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. After the configuration, click **Create Workload**. + +Mounting a hostPath Volume Using kubectl +---------------------------------------- + +#. Use kubectl to connect to the cluster. + +#. Create a file named **nginx-hostpath.yaml** and edit it. + + **vi nginx-hostpath.yaml** + + The content of the YAML file is as follows. Mount the **/data** directory on the node to the **/data** directory in the container. + + .. code-block:: + + apiVersion: apps/v1 + kind: Deployment + metadata: + name: nginx-hostpath + namespace: default + spec: + replicas: 2 + selector: + matchLabels: + app: nginx-hostpath + template: + metadata: + labels: + app: nginx-hostpath + spec: + containers: + - name: container-1 + image: nginx:latest + volumeMounts: + - name: vol-hostpath # Volume name, which must be the same as the volume name in the volumes field. + mountPath: /data # Mount path in the container. + imagePullSecrets: + - name: default-secret + volumes: + - name: vol-hostpath # Volume name, which can be customized. + hostPath: + path: /data # Directory location on the host node. + +#. Create a workload. + + **kubectl apply -f nginx-hostpath.yaml** diff --git a/umn/source/storage/index.rst b/umn/source/storage/index.rst index 68cc663..355da37 100644 --- a/umn/source/storage/index.rst +++ b/umn/source/storage/index.rst @@ -6,25 +6,27 @@ Storage ======= - :ref:`Overview ` -- :ref:`Using Local Disks as Storage Volumes ` -- :ref:`PVs ` -- :ref:`PVCs ` +- :ref:`Storage Basics ` +- :ref:`Elastic Volume Service (EVS) ` +- :ref:`Scalable File Service (SFS) ` +- :ref:`SFS Turbo File Systems ` +- :ref:`Object Storage Service (OBS) ` +- :ref:`Local Persistent Volumes (Local PVs) ` +- :ref:`Ephemeral Volumes (emptyDir) ` +- :ref:`hostPath ` - :ref:`StorageClass ` -- :ref:`Snapshots and Backups ` -- :ref:`Using a Custom AK/SK to Mount an OBS Volume ` -- :ref:`Setting Mount Options ` -- :ref:`Deployment Examples ` .. toctree:: :maxdepth: 1 :hidden: overview - using_local_disks_as_storage_volumes - pvs - pvcs + storage_basics + elastic_volume_service_evs/index + scalable_file_service_sfs/index + sfs_turbo_file_systems/index + object_storage_service_obs/index + local_persistent_volumes_local_pvs/index + ephemeral_volumes_emptydir/index + hostpath storageclass - snapshots_and_backups - using_a_custom_ak_sk_to_mount_an_obs_volume - setting_mount_options - deployment_examples/index diff --git a/umn/source/storage/local_persistent_volumes_local_pvs/dynamically_mounting_a_local_pv_to_a_statefulset.rst b/umn/source/storage/local_persistent_volumes_local_pvs/dynamically_mounting_a_local_pv_to_a_statefulset.rst new file mode 100644 index 0000000..eb029d0 --- /dev/null +++ b/umn/source/storage/local_persistent_volumes_local_pvs/dynamically_mounting_a_local_pv_to_a_statefulset.rst @@ -0,0 +1,272 @@ +:original_name: cce_10_0635.html + +.. 
_cce_10_0635: + +Dynamically Mounting a Local PV to a StatefulSet +================================================ + +Application Scenarios +--------------------- + +Dynamic mounting is available only for creating a :ref:`StatefulSet `. It is implemented through a volume claim template (`volumeClaimTemplates `__ field) and depends on the storage class to dynamically provision PVs. In this mode, each pod in a multi-pod StatefulSet is associated with a unique PVC and PV. After a pod is rescheduled, the original data can still be mounted to it based on the PVC name. In the common mounting mode for a Deployment, if ReadWriteMany is supported, multiple pods of the Deployment will be mounted to the same underlying storage. + +Prerequisites +------------- + +- You have created a cluster and installed the CSI add-on (:ref:`everest `) in the cluster. +- If you want to create a cluster using commands, use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. +- You have imported a data disk of a node to the local PV storage pool. + +Dynamically Mounting a Local PV on the Console +---------------------------------------------- + +#. Log in to the CCE console and click the cluster name to access the cluster console. + +#. In the navigation pane on the left, click **Workloads**. In the right pane, click the **StatefulSets** tab. + +#. Click **Create Workload** in the upper right corner. On the displayed page, click **Data Storage** in the **Container Settings** area and click **Add Volume** to select **VolumeClaimTemplate (VTC)**. + +#. Click **Create PVC**. In the dialog box displayed, configure the volume claim template parameters. + + Click **Create**. + + +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +=================+=============================================================================================================================================================================================================+ + | PVC Type | In this section, select **Local PV**. | + +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PVC Name | Enter the name of the PVC. After a PVC is created, a suffix is automatically added based on the number of pods. The format is <*Custom PVC name*>-<*Serial number*>, for example, example-0. | + +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Creation Method | You can only select **Dynamically provision** to create a PVC, PV, and underlying storage on the console in cascading mode. | + +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Storage Classes | The storage class of local PVs is **csi-local-topology**. 
| + +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Access Mode | Local PVs support only **ReadWriteOnce**, indicating that a storage volume can be mounted to one node in read/write mode. For details, see :ref:`Volume Access Modes `. | + +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Storage Pool | View the imported storage pool. | + +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Capacity (GiB) | Capacity of the requested storage volume. | + +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. Enter the path to which the volume is mounted. + + .. table:: **Table 1** Mounting a storage volume + + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+============================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | Mount Path | Enter a mount path, for example, **/tmp**. | + | | | + | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**. Otherwise, errors will occur in containers. Mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, causing container startup failures or workload creation failures. | + | | | + | | .. important:: | + | | | + | | NOTICE: | + | | If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged. 
| + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Subpath | Enter a subpath, for example, **tmp**, indicating that data in the mount path of the container will be stored in the **tmp** folder of the volume. | + | | | + | | A subpath is used to mount a local volume so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Permission | - **Read-only**: You can only read the data in the mounted volumes. | + | | - **Read/Write**: You can modify the data volumes mounted to the path. Newly written data is not migrated if the container is migrated, which may cause data loss. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + In this example, the disk is mounted to the **/data** path of the container. The container data generated in this path is stored in the local PV. + +#. Dynamically mount and use storage volumes. For details about other parameters, see :ref:`Creating a StatefulSet `. After the configuration, click **Create Workload**. + + After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to :ref:`Verifying Data Persistence `. + +(kubectl) Using an Existing Local PV +------------------------------------ + +#. Use kubectl to connect to the cluster. + +#. Create a file named **statefulset-local.yaml**. In this example, the local PV is mounted to the **/data** path. + + .. code-block:: + + apiVersion: apps/v1 + kind: StatefulSet + metadata: + name: statefulset-local + namespace: default + spec: + selector: + matchLabels: + app: statefulset-local + template: + metadata: + labels: + app: statefulset-local + spec: + containers: + - name: container-1 + image: nginx:latest + volumeMounts: + - name: pvc-local # The value must be the same as that in the volumeClaimTemplates field. + mountPath: /data # Location where the storage volume is mounted. + imagePullSecrets: + - name: default-secret + serviceName: statefulset-local # Headless Service name. 
+ replicas: 2 + volumeClaimTemplates: + - apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: pvc-local + namespace: default + spec: + accessModes: + - ReadWriteOnce # The local PV must adopt ReadWriteOnce. + resources: + requests: + storage: 10Gi # Storage volume capacity. + storageClassName: csi-local-topology # StorageClass is local PV. + --- + apiVersion: v1 + kind: Service + metadata: + name: statefulset-local # Headless Service name. + namespace: default + labels: + app: statefulset-local + spec: + selector: + app: statefulset-local + clusterIP: None + ports: + - name: statefulset-local + targetPort: 80 + nodePort: 0 + port: 80 + protocol: TCP + type: ClusterIP + + .. table:: **Table 2** Key parameters + + +------------------+-----------+-----------------------------------------------------------+ + | Parameter        | Mandatory | Description                                               | + +==================+===========+===========================================================+ + | storage          | Yes       | Requested capacity in the PVC, in Gi.                     | + +------------------+-----------+-----------------------------------------------------------+ + | storageClassName | Yes       | The storage class of local PVs is **csi-local-topology**. | + +------------------+-----------+-----------------------------------------------------------+ + +#. Run the following command to create an application to which the local PV is mounted: + + .. code-block:: + + kubectl apply -f statefulset-local.yaml + + After the workload is created, you can try :ref:`Verifying Data Persistence `. + +.. _cce_10_0635__section11593165910013: + +Verifying Data Persistence +-------------------------- + +#. View the deployed application and files. + + a. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep statefulset-local + + Expected output: + + .. code-block:: + + statefulset-local-0 1/1 Running 0 45s + statefulset-local-1 1/1 Running 0 28s + + b. Run the following command to check whether the local PV has been mounted to the **/data** path: + + .. code-block:: + + kubectl exec statefulset-local-0 -- df | grep data + + Expected output: + + .. code-block:: + + /dev/mapper/vg--everest--localvolume--persistent-pvc-local 10255636 36888 10202364 0% /data + + c. Run the following command to view the files in the **/data** path: + + .. code-block:: + + kubectl exec statefulset-local-0 -- ls /data + + Expected output: + + .. code-block:: + + lost+found + +#. Run the following command to create a file named **static** in the **/data** path: + + .. code-block:: + + kubectl exec statefulset-local-0 -- touch /data/static + +#. Run the following command to view the files in the **/data** path: + + .. code-block:: + + kubectl exec statefulset-local-0 -- ls /data + + Expected output: + + .. code-block:: + + lost+found + static + +#. Run the following command to delete the pod named **statefulset-local-0**: + + .. code-block:: + + kubectl delete pod statefulset-local-0 + + Expected output: + + .. code-block:: + + pod "statefulset-local-0" deleted + +#. After the deletion, the StatefulSet controller automatically creates a replica with the same name. Run the following command to check whether the files in the **/data** path have been modified: + + .. code-block:: + + kubectl exec statefulset-local-0 -- ls /data + + Expected output: + + .. code-block:: + + lost+found + static + + If the **static** file still exists, the data in the local PV can be stored persistently.
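+
+Each replica of the StatefulSet is bound to its own PVC created from the volume claim template. As an optional extra check, you can list these PVCs. In this example, the generated PVC names follow the <template name>-<StatefulSet name>-<ordinal> pattern, for example, **pvc-local-statefulset-local-0**; they will differ if you rename the objects.
+
+.. code-block::
+
+   kubectl get pvc | grep statefulset-local
+
+Each PVC displayed in the **Bound** state is backed by its own local PV provisioned from the storage pool.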
+ +Related Operations +------------------ + +You can also perform the operations listed in :ref:`Table 3 `. + +.. _cce_10_0635__cce_10_0634_table1619535674020: + +.. table:: **Table 3** Related operations + + +-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Operation | Description | Procedure | + +=======================+====================================================================================================================================================+==============================================================================================================================================================+ + | Viewing events | You can view event names, event types, number of occurrences, Kubernetes events, first occurrence time, and last occurrence time of the PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View Events** in the **Operation** column of the target PVC or PV to view events generated within one hour (event data is retained for one hour). | + +-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Viewing a YAML file | You can view, copy, and download the YAML files of a PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View YAML** in the **Operation** column of the target PVC or PV to view or download the YAML. | + +-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/storage/local_persistent_volumes_local_pvs/importing_a_pv_to_a_storage_pool.rst b/umn/source/storage/local_persistent_volumes_local_pvs/importing_a_pv_to_a_storage_pool.rst new file mode 100644 index 0000000..966872c --- /dev/null +++ b/umn/source/storage/local_persistent_volumes_local_pvs/importing_a_pv_to_a_storage_pool.rst @@ -0,0 +1,41 @@ +:original_name: cce_10_0642.html + +.. _cce_10_0642: + +Importing a PV to a Storage Pool +================================ + +CCE allows you to use LVM to combine data volumes on nodes into a storage pool (VolumeGroup) and create LVs for containers to mount. Before creating a local PV, import the data disk of the node to the storage pool. + +Constraints +----------- + +- Local PVs are supported only when the cluster version is v1.21.2-r0 or later and the everest add-on version is 2.1.23 or later. Version 2.1.23 or later is recommended. + +- The first data disk (used by container runtime and the kubelet component) on a node cannot be imported as a storage pool. +- Storage pools in striped mode do not support scale-out. 
After scale-out, fragmented space may be generated and the storage pool cannot be used.
+- Storage pools cannot be scaled in or deleted.
+- If disks in a storage pool on a node are deleted, the storage pool will malfunction.
+
+Importing a Storage Pool
+------------------------
+
+**Imported during node creation**
+
+When creating a node, you can add a data disk to the node in **Storage Settings** and import the data disk to the storage pool as a PV. For details, see :ref:`Creating a Node `.
+
+**Imported manually**
+
+If no PV is imported during node creation, or the capacity of the current storage volume is insufficient, you can manually import a storage pool.
+
+#. Go to the ECS console and add a SCSI disk to the node.
+#. Log in to the CCE console and click the cluster name to access the cluster console.
+#. In the navigation pane, choose **Storage** and switch to the **Storage Pool** tab.
+#. View the node to which the disk has been added and select **Import as PV**. You can select a write mode during the import.
+
+   .. note::
+
+      If the manually attached disk is not displayed in the storage pool, wait for 1 minute and refresh the list.
+
+   - **Linear**: A linear logical volume integrates one or more physical volumes. Data is written to the next physical volume when the previous one is used up.
+   - **Striped**: A striped logical volume stripes data into blocks of the same size and stores them in multiple physical volumes in sequence, allowing data to be concurrently read and written. Select this option only when the storage pool contains multiple physical volumes.
diff --git a/umn/source/storage/local_persistent_volumes_local_pvs/index.rst b/umn/source/storage/local_persistent_volumes_local_pvs/index.rst
new file mode 100644
index 0000000..22d371f
--- /dev/null
+++ b/umn/source/storage/local_persistent_volumes_local_pvs/index.rst
@@ -0,0 +1,20 @@
+:original_name: cce_10_0391.html
+
+.. _cce_10_0391:
+
+Local Persistent Volumes (Local PVs)
+====================================
+
+- :ref:`Overview `
+- :ref:`Importing a PV to a Storage Pool `
+- :ref:`Using a Local PV Through a Dynamic PV `
+- :ref:`Dynamically Mounting a Local PV to a StatefulSet `
+
+.. toctree::
+   :maxdepth: 1
+   :hidden:
+
+   overview
+   importing_a_pv_to_a_storage_pool
+   using_a_local_pv_through_a_dynamic_pv
+   dynamically_mounting_a_local_pv_to_a_statefulset
diff --git a/umn/source/storage/local_persistent_volumes_local_pvs/overview.rst b/umn/source/storage/local_persistent_volumes_local_pvs/overview.rst
new file mode 100644
index 0000000..4a5ae62
--- /dev/null
+++ b/umn/source/storage/local_persistent_volumes_local_pvs/overview.rst
@@ -0,0 +1,33 @@
+:original_name: cce_10_0633.html
+
+.. _cce_10_0633:
+
+Overview
+========
+
+Introduction
+------------
+
+CCE allows you to use LVM to combine data volumes on nodes into a storage pool (VolumeGroup) and create LVs for containers to mount. A PV that uses a local persistent volume as its medium is considered a local PV.
+
+Compared with HostPath volumes, local PVs can be used in a persistent and portable manner. In addition, the PV of a local PV carries a node affinity configuration, based on which the pod that mounts the local PV is automatically scheduled. You do not need to manually schedule the pod to a specific node. An example of this node affinity is shown below.
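+
+The following snippet is a simplified, illustrative sketch of the node affinity that a provisioned local PV typically carries. The topology key (**kubernetes.io/hostname**) and the node name placeholder are examples only; the actual key and values are written automatically by the everest provisioner. You can view the real configuration by running **kubectl get pv <pv-name> -o yaml** after a local PV is created.
+
+.. code-block::
+
+   # nodeAffinity section of a dynamically created local PV (illustrative sketch only).
+   nodeAffinity:
+     required:
+       nodeSelectorTerms:
+       - matchExpressions:
+         - key: kubernetes.io/hostname    # Example topology key; the actual key is set by the everest provisioner.
+           operator: In
+           values:
+           - <node-name>                  # Node that hosts the storage pool backing this PV.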
+
+Mounting Modes
+--------------
+
+Local PVs can be mounted only in the following modes:
+
+- :ref:`Using a Local PV Through a Dynamic PV `: dynamic creation mode, where you specify a StorageClass during PVC creation, and a PV and the underlying logical volume will be automatically created.
+- :ref:`Dynamically Mounting a Local PV to a StatefulSet `: Only StatefulSets support this mode. Each pod is associated with a unique PVC and PV. After a pod is rescheduled, the original data can still be mounted to it based on the PVC name. This mode applies to StatefulSets with multiple pods.
+
+.. note::
+
+   Local PVs cannot be used through static PVs. That is, local PVs cannot be manually created and then mounted to workloads through PVCs.
+
+Constraints
+-----------
+
+- Local PVs are supported only when the cluster version is v1.21.2-r0 or later and the everest add-on version is 2.1.23 or later. Version 2.1.23 or later is recommended.
+- Deleting, removing, resetting, or scaling in a node will cause the PVC/PV data of the local PV associated with the node to be lost, which cannot be restored or used again. For details, see :ref:`Removing a Node `, :ref:`Deleting a Node `, :ref:`Resetting a Node `, and :ref:`Scaling In a Node `. In these scenarios, the pod that uses the local PV is evicted from the node. A new pod will be created and stay in the pending state. This is because the PVC used by the pod has a node label, due to which the pod cannot be scheduled. After the node is reset, the pod may be scheduled to the reset node. In this case, the pod remains in the creating state because the underlying logical volume corresponding to the PVC does not exist.
+- Do not manually delete the corresponding storage pool or detach data disks from the node. Otherwise, exceptions such as data loss may occur.
+- A local PV cannot be mounted to multiple workloads or jobs at the same time.
diff --git a/umn/source/storage/local_persistent_volumes_local_pvs/using_a_local_pv_through_a_dynamic_pv.rst b/umn/source/storage/local_persistent_volumes_local_pvs/using_a_local_pv_through_a_dynamic_pv.rst
new file mode 100644
index 0000000..73714a3
--- /dev/null
+++ b/umn/source/storage/local_persistent_volumes_local_pvs/using_a_local_pv_through_a_dynamic_pv.rst
@@ -0,0 +1,306 @@
+:original_name: cce_10_0634.html
+
+.. _cce_10_0634:
+
+Using a Local PV Through a Dynamic PV
+=====================================
+
+Prerequisites
+-------------
+
+- You have created a cluster and installed the CSI add-on (:ref:`everest `) in the cluster.
+- If you want to create a cluster using commands, use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `.
+- You have imported a data disk of a node to the local PV storage pool. For details, see :ref:`Importing a PV to a Storage Pool `.
+
+Constraints
+-----------
+
+- Local PVs are supported only when the cluster version is v1.21.2-r0 or later and the everest add-on version is 2.1.23 or later. Version 2.1.23 or later is recommended.
+- Deleting, removing, resetting, or scaling in a node will cause the PVC/PV data of the local PV associated with the node to be lost, which cannot be restored or used again. For details, see :ref:`Removing a Node `, :ref:`Deleting a Node `, :ref:`Resetting a Node `, and :ref:`Scaling In a Node `. In these scenarios, the pod that uses the local PV is evicted from the node. A new pod will be created and stay in the pending state. This is because the PVC used by the pod has a node label, due to which the pod cannot be scheduled.
After the node is reset, the pod may be scheduled to the reset node. In this case, the pod remains in the creating state because the underlying logical volume corresponding to the PVC does not exist. +- Do not manually delete the corresponding storage pool or detach data disks from the node. Otherwise, exceptions such as data loss may occur. +- A local PV cannot be mounted to multiple workloads or jobs at the same time. + +Automatically Creating a Local PV on the Console +------------------------------------------------ + +#. Log in to the CCE console and click the cluster name to access the cluster console. +#. Dynamically create a PVC and PV. + + a. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. Click **Create PVC** in the upper right corner. In the dialog box displayed, configure the PVC parameters. + + +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +=================+=============================================================================================================================================================================================================+ + | PVC Type | In this section, select **Local PV**. | + +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PVC Name | Enter the PVC name, which must be unique in the same namespace. | + +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Creation Method | You can only select **Dynamically provision** to create a PVC, PV, and underlying storage on the console in cascading mode. | + +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Storage Classes | The storage class of local PVs is **csi-local-topology**. | + +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Access Mode | Local PVs support only **ReadWriteOnce**, indicating that a storage volume can be mounted to one node in read/write mode. For details, see :ref:`Volume Access Modes `. | + +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Storage Pool | View the imported storage pool. | + +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Capacity (GiB) | Capacity of the requested storage volume. 
| + +-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + b. Click **Create** to create a PVC and a PV. + + You can choose **Storage** in the navigation pane and view the created PVC and PV on the **PersistentVolumeClaims (PVCs)** and **PersistentVolumes (PVs)** tab pages. + + .. note:: + + The volume binding mode of the local storage class (named **csi-local-topology**) is late binding (that is, the value of **volumeBindingMode** is **WaitForFirstConsumer**). In this mode, PV creation and binding are delayed. The corresponding PV is created and bound only when the PVC is used during workload creation. + +#. Create an application. + + a. In the navigation pane on the left, click **Workloads**. In the right pane, click the **Deployments** tab. + + b. Click **Create Workload** in the upper right corner. On the displayed page, click **Data Storage** in the **Container Settings** area and click **Add Volume** to select **PVC**. + + Mount and use storage volumes, as shown in :ref:`Table 1 `. For details about other parameters, see :ref:`Workloads `. + + .. _cce_10_0634__table2529244345: + + .. table:: **Table 1** Mounting a storage volume + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=============================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | PVC | Select an existing local PV. | + | | | + | | A local PV cannot be repeatedly mounted to multiple workloads. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Mount Path | Enter a mount path, for example, **/tmp**. | + | | | + | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**. Otherwise, containers will be malfunctional. Mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, causing container startup failures or workload creation failures. | + | | | + | | .. 
important:: | + | | | + | | NOTICE: | + | | If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Subpath | Enter a subpath, for example, **tmp**, indicating that data in the mount path of the container will be stored in the **tmp** folder of the volume. | + | | | + | | A subpath is used to mount a local volume so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Permission | - **Read-only**: You can only read the data in the mounted volumes. | + | | - **Read/Write**: You can modify the data volumes mounted to the path. Newly written data is not migrated if the container is migrated, which may cause data loss. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + In this example, the disk is mounted to the **/data** path of the container. The container data generated in this path is stored in the local PV. + + c. After the configuration, click **Create Workload**. + + After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to :ref:`Verifying Data Persistence `. + +(kubectl) Automatically Creating a Local PV +------------------------------------------- + +#. Use kubectl to connect to the cluster. +#. Use **StorageClass** to dynamically create a PVC and PV. + + a. Create the **pvc-local.yaml** file. + + .. code-block:: + + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: pvc-local + namespace: default + spec: + accessModes: + - ReadWriteOnce # The local PV must adopt ReadWriteOnce. + resources: + requests: + storage: 10Gi # Size of the local PV. + storageClassName: csi-local-topology # StorageClass is local PV. + + .. 
table:: **Table 2** Key parameters + + +------------------+-----------+-----------------------------------------------------------------------------------+ + | Parameter | Mandatory | Description | + +==================+===========+===================================================================================+ + | storage | Yes | Requested capacity in the PVC, in Gi. | + +------------------+-----------+-----------------------------------------------------------------------------------+ + | storageClassName | Yes | Storage class name. The storage class name of local PV is **csi-local-topology**. | + +------------------+-----------+-----------------------------------------------------------------------------------+ + + b. Run the following command to create a PVC: + + .. code-block:: + + kubectl apply -f pvc-local.yaml + +#. Create an application. + + a. Create a file named **web-demo.yaml**. In this example, the local PV is mounted to the **/data** path. + + .. code-block:: + + apiVersion: apps/v1 + kind: StatefulSet + metadata: + name: web-local + namespace: default + spec: + replicas: 1 + selector: + matchLabels: + app: web-local + serviceName: web-local # Headless Service name. + template: + metadata: + labels: + app: web-local + spec: + containers: + - name: container-1 + image: nginx:latest + volumeMounts: + - name: pvc-disk #Volume name, which must be the same as the volume name in the volumes field. + mountPath: /data #Location where the storage volume is mounted. + imagePullSecrets: + - name: default-secret + volumes: + - name: pvc-disk #Volume name, which can be customized. + persistentVolumeClaim: + claimName: pvc-local #Name of the created PVC. + --- + apiVersion: v1 + kind: Service + metadata: + name: web-local # Headless Service name. + namespace: default + labels: + app: web-local + spec: + selector: + app: web-local + clusterIP: None + ports: + - name: web-local + targetPort: 80 + nodePort: 0 + port: 80 + protocol: TCP + type: ClusterIP + + b. Run the following command to create an application to which the local PV is mounted: + + .. code-block:: + + kubectl apply -f web-local.yaml + + After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to :ref:`Verifying Data Persistence `. + +.. _cce_10_0634__section11593165910013: + +Verifying Data Persistence +-------------------------- + +#. View the deployed application and local files. + + a. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-local + + Expected output: + + .. code-block:: + + web-local-0 1/1 Running 0 38s + + b. Run the following command to check whether the local PV has been mounted to the **/data** path: + + .. code-block:: + + kubectl exec web-local-0 -- df | grep data + + Expected output: + + .. code-block:: + + /dev/mapper/vg--everest--localvolume--persistent-pvc-local 10255636 36888 10202364 0% /data + + c. Run the following command to view the files in the **/data** path: + + .. code-block:: + + kubectl exec web-local-0 -- ls /data + + Expected output: + + .. code-block:: + + lost+found + +#. Run the following command to create a file named **static** in the **/data** path: + + .. code-block:: + + kubectl exec web-local-0 -- touch /data/static + +#. Run the following command to view the files in the **/data** path: + + .. code-block:: + + kubectl exec web-local-0 -- ls /data + + Expected output: + + .. code-block:: + + lost+found + static + +#. 
Run the following command to delete the pod named **web-local-0**: + + .. code-block:: + + kubectl delete pod web-local-0 + + Expected output: + + .. code-block:: + + pod "web-local-0" deleted + +#. After the deletion, the StatefulSet controller automatically creates a replica with the same name. Run the following command to check whether the files in the **/data** path have been modified: + + .. code-block:: + + kubectl exec web-local-0 -- ls /data + + Expected output: + + .. code-block:: + + lost+found + static + + If the **static** file still exists, the data in the local PV can be stored persistently. + +Related Operations +------------------ + +You can also perform the operations listed in :ref:`Table 3 `. + +.. _cce_10_0634__table1619535674020: + +.. table:: **Table 3** Related operations + + +-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Operation | Description | Procedure | + +=======================+====================================================================================================================================================+==============================================================================================================================================================+ + | Viewing events | You can view event names, event types, number of occurrences, Kubernetes events, first occurrence time, and last occurrence time of the PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View Events** in the **Operation** column of the target PVC or PV to view events generated within one hour (event data is retained for one hour). | + +-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Viewing a YAML file | You can view, copy, and download the YAML files of a PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View YAML** in the **Operation** column of the target PVC or PV to view or download the YAML. | + +-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/storage/object_storage_service_obs/configuring_obs_mount_options.rst b/umn/source/storage/object_storage_service_obs/configuring_obs_mount_options.rst new file mode 100644 index 0000000..fa0584e --- /dev/null +++ b/umn/source/storage/object_storage_service_obs/configuring_obs_mount_options.rst @@ -0,0 +1,171 @@ +:original_name: cce_10_0631.html + +.. _cce_10_0631: + +Configuring OBS Mount Options +============================= + +This section describes how to configure OBS volume mount options. 
You can configure mount options in a PV and bind the PV to a PVC. Alternatively, configure mount options in a StorageClass and use the StorageClass to create a PVC. In this way, PVs can be dynamically created and inherit mount options configured in the StorageClass by default. + +Prerequisites +------------- + +The everest add-on version must be **1.2.8 or later**. The add-on identifies the mount options and transfers them to the underlying storage resources, which determine whether the specified options are valid. + +Constraints +----------- + +Mount options cannot be configured for Kata containers. + +.. _cce_10_0631__section1254912109811: + +OBS Mount Options +----------------- + +When mounting an OBS volume, the everest add-on presets the options described in :ref:`Table 1 ` and :ref:`Table 2 ` by default. The options in :ref:`Table 1 ` are mandatory. + +.. _cce_10_0631__table1688593020213: + +.. table:: **Table 1** Mandatory mount options configured by default + + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Value | Description | + +=======================+=======================+==========================================================================================================================================+ + | use_ino | Leave it blank. | If enabled, obsfs allocates the **inode** number. Enabled by default in read/write mode. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------+ + | big_writes | Leave it blank. | If configured, the maximum size of the cache can be modified. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------+ + | nonempty | Leave it blank. | Allows non-empty mount paths. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------+ + | allow_other | Leave it blank. | Allows other users to access the parallel file system. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------+ + | no_check_certificate | Leave it blank. | Disables server certificate verification. | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------+ + | enable_noobj_cache | Leave it blank. | Enables cache entries for objects that do not exist, which can improve performance. Enabled by default in object bucket read/write mode. | + | | | | + | | | **This option is no longer configured by default since everest 1.2.40.** | + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------+ + | sigv2 | Leave it blank. | Specifies the signature version. Used by default in object buckets. 
| + +-----------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------+ + +.. _cce_10_0631__table9886123010217: + +.. table:: **Table 2** Optional mount options configured by default + + +---------------------+-----------------+--------------------------------------------------------------------------------------------------------------------+ + | Parameter | Value | Description | + +=====================+=================+====================================================================================================================+ + | max_write | 131072 | This parameter is valid only when **big_writes** is configured. The recommended value is **128 KB**. | + +---------------------+-----------------+--------------------------------------------------------------------------------------------------------------------+ + | ssl_verify_hostname | 0 | Disables SSL certificate verification based on the host name. | + +---------------------+-----------------+--------------------------------------------------------------------------------------------------------------------+ + | max_background | 100 | Allows setting the maximum number of waiting requests in the background. Used by default in parallel file systems. | + +---------------------+-----------------+--------------------------------------------------------------------------------------------------------------------+ + | public_bucket | 1 | If set to **1**, public buckets are mounted anonymously. Enabled by default in object bucket read/write mode. | + +---------------------+-----------------+--------------------------------------------------------------------------------------------------------------------+ + | umask | Leave it blank. | Mask of the configuration file permission. | + +---------------------+-----------------+--------------------------------------------------------------------------------------------------------------------+ + +Configuring Mount Options in a PV +--------------------------------- + +You can use the **mountOptions** field to configure mount options in a PV. The options you can configure in **mountOptions** are listed in :ref:`OBS Mount Options `. + +#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. + +#. Configure mount options in a PV. Example: + + .. code-block:: + + apiVersion: v1 + kind: PersistentVolume + metadata: + annotations: + pv.kubernetes.io/provisioned-by: everest-csi-provisioner + everest.io/reclaim-policy: retain-volume-only # (Optional) The PV is deleted while the underlying volume is retained. + name: pv-obs # PV name. + spec: + accessModes: + - ReadWriteMany # Access mode. The value must be ReadWriteMany for OBS. + capacity: + storage: 1Gi # OBS volume capacity. + csi: + driver: obs.csi.everest.io # Dependent storage driver for the mounting. + fsType: obsfs # Instance type. + volumeHandle: # Name of the OBS volume. + volumeAttributes: + storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner + everest.io/obs-volume-type: STANDARD + everest.io/region: # Region where the OBS volume is. + nodePublishSecretRef: # Custom secret of the OBS volume. + name: # Custom secret name. + namespace: # Namespace of the custom secret. + persistentVolumeReclaimPolicy: Retain # Reclaim policy. + storageClassName: csi-obs # Storage class name. + mountOptions: # Mount options. + - umask=0027 + +#. 
After a PV is created, you can create a PVC and bind it to the PV, and then mount the PV to the container in the workload. For details, see :ref:`Using an Existing OBS Bucket Through a Static PV `. + +#. Check whether the mount options take effect. + + In this example, the PVC is mounted to the workload that uses the **nginx:latest** image. You can log in to the node where the pod to which the OBS volume is mounted resides and view the progress details. + + Run the following command: + + - Object bucket: **ps -ef \| grep s3fs** + + .. code-block:: + + root 22142 1 0 Jun03 ? 00:00:00 /usr/bin/s3fs {your_obs_name} /mnt/paas/kubernetes/kubelet/pods/{pod_uid}/volumes/kubernetes.io~csi/{your_pv_name}/mount -o url=https://{endpoint}:443 -o endpoint={region} -o passwd_file=/opt/everest-host-connector/***_obstmpcred/{your_obs_name} -o nonempty -o big_writes -o sigv2 -o allow_other -o no_check_certificate -o ssl_verify_hostname=0 -o umask=0027 -o max_write=131072 -o multipart_size=20 + + - Parallel file system: **ps -ef \| grep obsfs** + + .. code-block:: + + root 1355 1 0 Jun03 ? 00:03:16 /usr/bin/obsfs {your_obs_name} /mnt/paas/kubernetes/kubelet/pods/{pod_uid}/volumes/kubernetes.io~csi/{your_pv_name}/mount -o url=https://{endpoint}:443 -o endpoint={region} -o passwd_file=/opt/everest-host-connector/***_obstmpcred/{your_obs_name} -o allow_other -o nonempty -o big_writes -o use_ino -o no_check_certificate -o ssl_verify_hostname=0 -o max_background=100 -o umask=0027 -o max_write=131072 + +Configuring Mount Options in a StorageClass +------------------------------------------- + +You can use the **mountOptions** field to configure mount options in a StorageClass. The options you can configure in **mountOptions** are listed in :ref:`OBS Mount Options `. + +#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. + +#. Create a customized StorageClass. Example: + + .. code-block:: + + kind: StorageClass + apiVersion: storage.k8s.io/v1 + metadata: + name: csi-obs-mount-option + provisioner: everest-csi-provisioner + parameters: + csi.storage.k8s.io/csi-driver-name: obs.csi.everest.io + csi.storage.k8s.io/fstype: s3fs + everest.io/obs-volume-type: STANDARD + reclaimPolicy: Delete + volumeBindingMode: Immediate + mountOptions: # Mount options. + - umask=0027 + +#. After the StorageClass is configured, you can use it to create a PVC. By default, the dynamically created PVs inherit the mount options configured in the StorageClass. For details, see :ref:`Using an OBS Bucket Through a Dynamic PV `. + +#. Check whether the mount options take effect. + + In this example, the PVC is mounted to the workload that uses the **nginx:latest** image. You can log in to the node where the pod to which the OBS volume is mounted resides and view the progress details. + + Run the following command: + + - Object bucket: **ps -ef \| grep s3fs** + + .. code-block:: + + root 22142 1 0 Jun03 ? 00:00:00 /usr/bin/s3fs {your_obs_name} /mnt/paas/kubernetes/kubelet/pods/{pod_uid}/volumes/kubernetes.io~csi/{your_pv_name}/mount -o url=https://{endpoint}:443 -o endpoint={region} -o passwd_file=/opt/everest-host-connector/***_obstmpcred/{your_obs_name} -o nonempty -o big_writes -o sigv2 -o allow_other -o no_check_certificate -o ssl_verify_hostname=0 -o umask=0027 -o max_write=131072 -o multipart_size=20 + + - Parallel file system: **ps -ef \| grep obsfs** + + .. code-block:: + + root 1355 1 0 Jun03 ? 
00:03:16 /usr/bin/obsfs {your_obs_name} /mnt/paas/kubernetes/kubelet/pods/{pod_uid}/volumes/kubernetes.io~csi/{your_pv_name}/mount -o url=https://{endpoint}:443 -o endpoint={region} -o passwd_file=/opt/everest-host-connector/***_obstmpcred/{your_obs_name} -o allow_other -o nonempty -o big_writes -o use_ino -o no_check_certificate -o ssl_verify_hostname=0 -o max_background=100 -o umask=0027 -o max_write=131072 diff --git a/umn/source/storage/object_storage_service_obs/index.rst b/umn/source/storage/object_storage_service_obs/index.rst new file mode 100644 index 0000000..99f7a14 --- /dev/null +++ b/umn/source/storage/object_storage_service_obs/index.rst @@ -0,0 +1,22 @@ +:original_name: cce_10_0160.html + +.. _cce_10_0160: + +Object Storage Service (OBS) +============================ + +- :ref:`Overview ` +- :ref:`Using an Existing OBS Bucket Through a Static PV ` +- :ref:`Using an OBS Bucket Through a Dynamic PV ` +- :ref:`Configuring OBS Mount Options ` +- :ref:`Using a Custom Access Key (AK/SK) to Mount an OBS Volume ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + overview + using_an_existing_obs_bucket_through_a_static_pv + using_an_obs_bucket_through_a_dynamic_pv + configuring_obs_mount_options + using_a_custom_access_key_ak_sk_to_mount_an_obs_volume diff --git a/umn/source/storage/object_storage_service_obs/overview.rst b/umn/source/storage/object_storage_service_obs/overview.rst new file mode 100644 index 0000000..00f9d51 --- /dev/null +++ b/umn/source/storage/object_storage_service_obs/overview.rst @@ -0,0 +1,36 @@ +:original_name: cce_10_0628.html + +.. _cce_10_0628: + +Overview +======== + +Introduction +------------ + +Object Storage Service (OBS) provides massive, secure, and cost-effective data storage capabilities for you to store data of any type and size. You can use it in enterprise backup/archiving, video on demand (VoD), video surveillance, and many other scenarios. + +- **Standard APIs**: With HTTP RESTful APIs, OBS allows you to use client tools or third-party tools to access object storage. +- **Data sharing**: Servers, embedded devices, and IoT devices can use the same path to access shared object data in OBS. +- **Public/Private networks**: OBS allows data to be accessed from public networks to meet Internet application requirements. +- **Capacity and performance**: No capacity limit; high performance (read/write I/O latency within 10 ms). +- **Use cases**: Deployments/StatefulSets in the **ReadOnlyMany** mode and jobs created for big data analysis, static website hosting, online VOD, gene sequencing, intelligent video surveillance, backup and archiving, and enterprise cloud boxes (web disks). You can create object storage by using the OBS console, tools, and SDKs. + +OBS Specifications +------------------ + +OBS provides multiple storage classes to meet customers' requirements on storage performance and costs. + +- Parallel File System (PFS, **recommended**): It is an optimized high-performance file system provided by OBS. It provides millisecond-level access latency, TB/s-level bandwidth, and million-level IOPS, and can quickly process HPC workloads. PFS outperforms OBS buckets. +- Object bucket (**not recommended**): + + - Standard: features low latency and high throughput. It is therefore good for storing frequently (multiple times per month) accessed files or small files (less than 1 MB). Its application scenarios include big data analytics, mobile apps, hot videos, and social apps. 
+ - OBS Infrequent Access: applicable to storing semi-frequently accessed (less than 12 times a year) data requiring quick response. Its application scenarios include file synchronization or sharing, and enterprise-level backup. This storage class has the same durability, low latency, and high throughput as the Standard storage class, with a lower cost, but its availability is slightly lower than the Standard storage class. + +Application Scenarios +--------------------- + +OBS supports the following mounting modes based on application scenarios: + +- :ref:`Using an Existing OBS Bucket Through a Static PV `: static creation mode, where you use an existing OBS volume to create a PV and then mount storage to the workload through a PVC. This mode applies to scenarios where the underlying storage is available. +- :ref:`Using an OBS Bucket Through a Dynamic PV `: dynamic creation mode, where you do not need to create OBS volumes in advance. Instead, specify a StorageClass during PVC creation and an OBS volume and a PV will be automatically created. This mode applies to scenarios where no underlying storage is available. diff --git a/umn/source/storage/using_a_custom_ak_sk_to_mount_an_obs_volume.rst b/umn/source/storage/object_storage_service_obs/using_a_custom_access_key_ak_sk_to_mount_an_obs_volume.rst similarity index 88% rename from umn/source/storage/using_a_custom_ak_sk_to_mount_an_obs_volume.rst rename to umn/source/storage/object_storage_service_obs/using_a_custom_access_key_ak_sk_to_mount_an_obs_volume.rst index 5a399b0..5292292 100644 --- a/umn/source/storage/using_a_custom_ak_sk_to_mount_an_obs_volume.rst +++ b/umn/source/storage/object_storage_service_obs/using_a_custom_access_key_ak_sk_to_mount_an_obs_volume.rst @@ -2,13 +2,13 @@ .. _cce_10_0336: -Using a Custom AK/SK to Mount an OBS Volume -=========================================== +Using a Custom Access Key (AK/SK) to Mount an OBS Volume +======================================================== Scenario -------- -You can solve this issue by using Everest 1.2.8 and later versions to use custom access keys for different IAM users. +You can solve this issue by using everest 1.2.8 or later to use custom access keys for different IAM users. Prerequisites ------------- @@ -19,14 +19,15 @@ Prerequisites Constraints ----------- -Custom access keys cannot be configured for secure containers. +- When an OBS volume is mounted using a custom access key (AK/SK), the access key cannot be deleted or disabled. Otherwise, the service container cannot access the mounted OBS volume. +- Custom access keys cannot be configured for Kata containers. Disabling Auto Key Mounting --------------------------- The key you uploaded is used by default when mounting an OBS volume. That is, all IAM users under your account will use the same key to mount OBS buckets, and they have the same permissions on buckets. This setting does not allow you to configure differentiated permissions for different IAM users. -If you have uploaded the AK/SK, you are advised to disable the automatic mounting of access keys by enabling the **disable_auto_mount_secret** parameter in the everest add-on to prevent IAM users from performing unauthorized operations. In this way, the access keys uploaded on the console will not be used when creating OBS volumes. +If you have uploaded the AK/SK, disable the automatic mounting of access keys by enabling the **disable_auto_mount_secret** parameter in the everest add-on to prevent IAM users from performing unauthorized operations. 
In this way, the access keys uploaded on the console will not be used when creating OBS volumes. .. note:: @@ -41,6 +42,8 @@ Search for **disable-auto-mount-secret** and set it to **true**. Run **:wq** to save the settings and exit. Wait until the pod is restarted. +.. _cce_10_0336__section4633162355911: + Obtaining an Access Key ----------------------- @@ -48,7 +51,7 @@ Obtaining an Access Key #. Hover the cursor over the username in the upper right corner and choose **My Credentials** from the drop-down list. #. In the navigation pane, choose **Access Keys**. #. Click **Create Access Key**. The **Create Access Key** dialog box is displayed. -#. Click **OK** to download the AK/SK. +#. Click **OK** to download the access key. Creating a Secret Using an Access Key ------------------------------------- @@ -81,23 +84,23 @@ Creating a Secret Using an Access Key Specifically: - +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+================================================================================================================================+ - | access.key | Base64-encoded AK. | - +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------+ - | secret.key | Base64-encoded SK. | - +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------+ - | name | Secret name. | - +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------+ - | namespace | Namespace of the secret. | - +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------+ - | secret.kubernetes.io/used-by: csi | You need to add this label in the YAML file if you want to make it available on the CCE console when you create an OBS PV/PVC. | - +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------+ - | type | Secret type. The value must be **cfe/secure-opaque**. | - | | | - | | When this type is used, the data entered by users is automatically encrypted. | - +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------+ + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+====================================================================================================================+ + | access.key | Base64-encoded AK. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------+ + | secret.key | Base64-encoded SK. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------+ + | name | Secret name. 
| + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------+ + | namespace | Namespace of the secret. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------+ + | secret.kubernetes.io/used-by: csi | Add this label in the YAML file if you want to make it available on the CCE console when you create an OBS PV/PVC. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------+ + | type | Secret type. The value must be **cfe/secure-opaque**. | + | | | + | | When this type is used, the data entered by users is automatically encrypted. | + +-----------------------------------+--------------------------------------------------------------------------------------------------------------------+ #. Create the secret. @@ -152,7 +155,7 @@ After a secret is created using the AK/SK, you can associate the secret with the | volumeHandle | OBS bucket name. | +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -#. Create the PV. +#. Create a PV. **kubectl create -f pv-example.yaml** @@ -191,7 +194,7 @@ After a secret is created using the AK/SK, you can associate the secret with the csi.storage.k8s.io/node-publish-secret-namespace Namespace of the secret ================================================ ======================= -#. Create the PVC. +#. Create a PVC. **kubectl create -f pvc-example.yaml** @@ -231,7 +234,7 @@ When dynamically creating an OBS volume, you can use the following method to spe csi.storage.k8s.io/node-publish-secret-namespace Namespace of the secret ================================================ ======================= -#. Create the PVC. +#. Create a PVC. **kubectl create -f pvc-example.yaml** @@ -256,7 +259,7 @@ You can use a secret of an IAM user to mount an OBS volume. Assume that a worklo **kubectl exec obs-secret-5cd558f76f-vxslv -- ls -l /temp/** -#. Write data into the mount path. In this example, the write operation fails. +#. Write data into the mount path. In this example, the write operation failed. **kubectl exec obs-secret-5cd558f76f-vxslv -- touch /temp/test** @@ -271,7 +274,7 @@ You can use a secret of an IAM user to mount an OBS volume. Assume that a worklo |image2| -#. Write data into the mouth path again. In this example, the write operation succeeded. +#. Write data into the mount path again. In this example, the write operation succeeded. **kubectl exec obs-secret-5cd558f76f-vxslv -- touch /temp/test** @@ -285,5 +288,5 @@ You can use a secret of an IAM user to mount an OBS volume. Assume that a worklo -rwxrwxrwx 1 root root 0 Jun 7 01:52 test -.. |image1| image:: /_static/images/en-us_image_0000001715987941.png -.. |image2| image:: /_static/images/en-us_image_0000001569022933.png +.. |image1| image:: /_static/images/en-us_image_0000001695896633.png +.. 
|image2| image:: /_static/images/en-us_image_0000001695737357.png diff --git a/umn/source/storage/object_storage_service_obs/using_an_existing_obs_bucket_through_a_static_pv.rst b/umn/source/storage/object_storage_service_obs/using_an_existing_obs_bucket_through_a_static_pv.rst new file mode 100644 index 0000000..8fd2262 --- /dev/null +++ b/umn/source/storage/object_storage_service_obs/using_an_existing_obs_bucket_through_a_static_pv.rst @@ -0,0 +1,530 @@ +:original_name: cce_10_0379.html + +.. _cce_10_0379: + +Using an Existing OBS Bucket Through a Static PV +================================================ + +This section describes how to use an existing Object Storage Service (OBS) bucket to statically create PVs and PVCs and implement data persistence and sharing in workloads. + +Prerequisites +------------- + +- You have created a cluster and installed the CSI add-on (:ref:`everest `) in the cluster. +- If you want to create a cluster using commands, use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. +- You have created an OBS bucket. An OBS bucket of the parallel file system type can be selected only when it is in the same region as the cluster. + +Constraints +----------- + +- Kata containers do not support OBS volumes. + +- When parallel file systems and object buckets are used, the group and permission of the mount point cannot be modified. + +- CCE allows you to use OBS parallel file systems by calling the OBS SDK or mounting a PVC through the **obsfs** tool provided by OBS. Each time an OBS parallel file system is mounted, an obsfs resident process is generated, as shown in the following figure. + + + .. figure:: /_static/images/en-us_image_0000001647417468.png + :alt: **Figure 1** obsfs resident process + + **Figure 1** obsfs resident process + + Reserve 1 GiB of memory for each obsfs process. For example, for a node with 4 vCPUs and 8 GiB of memory, the obsfs parallel file system should be mounted to **no more than** eight pods. + + .. note:: + + An obsfs resident process runs on a node. If the consumed memory exceeds the upper limit of the node, the node malfunctions. On a node with 4 vCPUs and 8 GiB of memory, if more than 100 pods are mounted to parallel file systems, the node will be unavailable. Control the number of pods mounted to parallel file systems on a single node. + +- Multiple PVs can use the same OBS storage volume with the following restrictions: + + - If multiple PVCs/PVs use the same underlying object storage volume, when you attempt to mount the volume to the same pod, the operation will fail because the **volumeHandle** values of these PVs are the same. + - The **persistentVolumeReclaimPolicy** parameter in the PVs must be set to **Retain**. Otherwise, when a PV is deleted, the associated underlying volume may be deleted. In this case, other PVs associated with the underlying volume malfunction. + - When the underlying volume is repeatedly used, enable isolation and protection for ReadWriteMany at the application layer to prevent data overwriting and loss. + +Using an Existing OBS Bucket on the Console +------------------------------------------- + +#. Log in to the CCE console and click the cluster name to access the cluster console. +#. Statically create a PVC and PV. + + a. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. Click **Create PVC** in the upper right corner. In the dialog box displayed, configure the PVC parameters. 
+ + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=====================================================================================================================================================================================================================+ + | PVC Type | In this section, select **OBS**. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PVC Name | Enter the PVC name, which must be unique in the same namespace. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Creation Method | - If underlying storage is available, create a storage volume or use an existing storage volume to statically create a PVC based on whether a PV has been created. | + | | - If no underlying storage is available, select **Dynamically provision**. For details, see :ref:`Using an OBS Bucket Through a Dynamic PV `. | + | | | + | | In this example, select **Create new** to create a PV and PVC at the same time on the console. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PV\ :sup:`a` | Select an existing PV volume in the cluster. Create a PV in advance. For details, see "Creating a storage volume" in :ref:`Related Operations `. | + | | | + | | You do not need to specify this parameter in this example. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | OBS\ :sup:`b` | Click **Select OBS**. On the displayed page, select the OBS bucket that meets your requirements and click **OK**. | + | | | + | | .. note:: | + | | | + | | Currently, only parallel file systems are supported. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PV Name\ :sup:`b` | Enter the PV name, which must be unique in the same cluster. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Access Mode\ :sup:`b` | OBS volumes support only **ReadWriteMany**, indicating that a storage volume can be mounted to multiple nodes in read/write mode. For details, see :ref:`Volume Access Modes `. 
| + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Reclaim Policy\ :sup:`b` | You can select **Delete** or **Retain** to specify the reclaim policy of the underlying storage when the PVC is deleted. For details, see :ref:`PV Reclaim Policy `. | + | | | + | | .. note:: | + | | | + | | If multiple PVs use the same OBS volume, use **Retain** to avoid cascading deletion of underlying volumes. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Secret\ :sup:`b` | **Custom**: Customize a secret if you want to assign different user permissions to different OBS storage devices. For details, see :ref:`Using a Custom Access Key (AK/SK) to Mount an OBS Volume `. | + | | | + | | Only secrets with the **secret.kubernetes.io/used-by = csi** label can be selected. The secret type is cfe/secure-opaque. If no secret is available, click **Create Secret** to create one. | + | | | + | | - **Name**: Enter a secret name. | + | | - **Namespace**: Select the namespace where the secret is. | + | | - **Access Key (AK/SK)**: Upload a key file in .csv format. For details, see :ref:`Obtaining an Access Key `. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Mount Options\ :sup:`b` | Enter the mounting parameter key-value pairs. For details, see :ref:`Configuring OBS Mount Options `. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + .. note:: + + a: The parameter is available when **Creation Method** is set to **Use existing**. + + b: The parameter is available when **Creation Method** is set to **Create new**. + + b. Click **Create** to create a PVC and a PV. + + You can choose **Storage** in the navigation pane and view the created PVC and PV on the **PersistentVolumeClaims (PVCs)** and **PersistentVolumes (PVs)** tab pages. + +#. Create an application. + + a. In the navigation pane on the left, click **Workloads**. In the right pane, click the **Deployments** tab. + + b. Click **Create Workload** in the upper right corner. On the displayed page, click **Data Storage** in the **Container Settings** area and click **Add Volume** to select **PVC**. + + Mount and use storage volumes, as shown in :ref:`Table 1 `. For details about other parameters, see :ref:`Workloads `. + + .. _cce_10_0379__table2529244345: + + .. 
table:: **Table 1** Mounting a storage volume + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=============================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | PVC | Select an existing object storage volume. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Mount Path | Enter a mount path, for example, **/tmp**. | + | | | + | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**. Otherwise, containers will be malfunctional. Mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, causing container startup failures or workload creation failures. | + | | | + | | .. important:: | + | | | + | | NOTICE: | + | | If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Subpath | Enter a subpath, for example, **tmp**, indicating that data in the mount path of the container will be stored in the **tmp** folder of the volume. | + | | | + | | A subpath is used to mount a local volume so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. 
| + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Permission | - **Read-only**: You can only read the data in the mounted volumes. | + | | - **Read/Write**: You can modify the data volumes mounted to the path. Newly written data is not migrated if the container is migrated, which may cause data loss. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + In this example, the disk is mounted to the **/data** path of the container. The container data generated in this path is stored in the OBS volume. + + c. After the configuration, click **Create Workload**. + + After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to :ref:`Verifying Data Persistence and Sharing `. + +(kubectl) Using an Existing OBS Bucket +-------------------------------------- + +#. Use kubectl to connect to the cluster. +#. Create a PV. + + a. .. _cce_10_0379__li162841212145314: + + Create the **pv-obs.yaml** file. + + .. code-block:: + + apiVersion: v1 + kind: PersistentVolume + metadata: + annotations: + pv.kubernetes.io/provisioned-by: everest-csi-provisioner + everest.io/reclaim-policy: retain-volume-only # (Optional) The PV is deleted while the underlying volume is retained. + name: pv-obs # PV name. + spec: + accessModes: + - ReadWriteMany # Access mode. The value must be ReadWriteMany for OBS. + capacity: + storage: 1Gi # OBS volume capacity. + csi: + driver: obs.csi.everest.io # Dependent storage driver for the mounting. + fsType: obsfs # Instance type. The value can be obsfs or s3fs. + volumeHandle: # Name of the OBS volume. + volumeAttributes: + storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner + everest.io/obs-volume-type: STANDARD + everest.io/region: # Region where the OBS volume is. + nodePublishSecretRef: # Custom secret of the OBS volume. + name: # Custom secret name. + namespace: # Namespace of the custom secret. + persistentVolumeReclaimPolicy: Retain # Reclaim policy. + storageClassName: csi-obs # Storage class name. + mountOptions: [] # Mount options. + + ..
table:: **Table 2** Key parameters + + +-----------------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Description | + +===============================================+=======================+=====================================================================================================================================================================================================================================================================================================+ + | everest.io/reclaim-policy: retain-volume-only | No | Optional. | + | | | | + | | | Currently, only **retain-volume-only** is supported. | + | | | | + | | | This field is valid only when the everest version is 1.2.9 or later and the reclaim policy is **Delete**. If the reclaim policy is **Delete** and the current value is **retain-volume-only**, the associated PV is deleted while the underlying storage volume is retained, when a PVC is deleted. | + +-----------------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | fsType | Yes | Instance type. The value can be **obsfs** or **s3fs**. | + | | | | + | | | - **obsfs**: Parallel file system, which is mounted using obsfs (recommended). | + | | | - **s3fs**: Object bucket, which is mounted using s3fs. | + +-----------------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | volumeHandle | Yes | OBS volume name. | + +-----------------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | everest.io/obs-volume-type | Yes | OBS storage class. | + | | | | + | | | - If **fsType** is set to **s3fs**, **STANDARD** (standard bucket) and **WARM** (infrequent access bucket) are supported. | + | | | - This parameter is invalid when **fsType** is set to **obsfs**. | + +-----------------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | everest.io/region | Yes | Region where the OBS bucket is deployed. | + | | | | + | | | For details about the value of **region**, see `Regions and Endpoints `__. 
| + +-----------------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | nodePublishSecretRef | No | Access key (AK/SK) used for mounting the object storage volume. You can use the AK/SK to create a secret and mount it to the PV. For details, see :ref:`Using a Custom Access Key (AK/SK) to Mount an OBS Volume `. | + | | | | + | | | An example is as follows: | + | | | | + | | | .. code-block:: | + | | | | + | | | nodePublishSecretRef: | + | | | name: secret-demo | + | | | namespace: default | + +-----------------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | mountOptions | No | Mount options. For details, see :ref:`Configuring OBS Mount Options `. | + +-----------------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | persistentVolumeReclaimPolicy | Yes | A reclaim policy is supported when the cluster version is or later than 1.19.10 and the everest version is or later than 1.2.9. | + | | | | + | | | The **Delete** and **Retain** reclaim policies are supported. For details, see :ref:`PV Reclaim Policy `. If multiple PVs use the same OBS volume, use **Retain** to avoid cascading deletion of underlying volumes. | + | | | | + | | | **Delete**: | + | | | | + | | | - If **everest.io/reclaim-policy** is not specified, both the PV and storage resources are deleted when a PVC is deleted. | + | | | - If **everest.io/reclaim-policy** is set to **retain-volume-only**, when a PVC is deleted, the PV is deleted but the storage resources are retained. | + | | | | + | | | **Retain**: When a PVC is deleted, the PV and underlying storage resources are not deleted. Instead, you must manually delete these resources. After that, the PV is in the **Released** status and cannot be bound to the PVC again. | + +-----------------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | storage | Yes | Storage capacity, in Gi. | + | | | | + | | | For OBS buckets, this field is used only for verification (cannot be empty or 0). Its value is fixed at **1**, and any value you set does not take effect for OBS buckets. 
| + +-----------------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | storageClassName | Yes | The storage class name of OBS volumes is **csi-obs**. | + +-----------------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + b. Run the following command to create a PV: + + .. code-block:: + + kubectl apply -f pv-obs.yaml + +#. Create a PVC. + + a. Create the **pvc-obs.yaml** file. + + .. code-block:: + + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: pvc-obs + namespace: default + annotations: + volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner + everest.io/obs-volume-type: STANDARD + csi.storage.k8s.io/fstype: obsfs + csi.storage.k8s.io/node-publish-secret-name: # Custom secret name. + csi.storage.k8s.io/node-publish-secret-namespace: # Namespace of the custom secret. + spec: + accessModes: + - ReadWriteMany # The value must be ReadWriteMany for OBS. + resources: + requests: + storage: 1Gi + storageClassName: csi-obs # Storage class name, which must be the same as that of the PV. + volumeName: pv-obs # PV name. + + .. table:: **Table 3** Key parameters + + +--------------------------------------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Description | + +==================================================+=======================+============================================================================================================================================================+ + | csi.storage.k8s.io/node-publish-secret-name | No | Name of the custom secret specified in the PV. | + +--------------------------------------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | csi.storage.k8s.io/node-publish-secret-namespace | No | Namespace of the custom secret specified in the PV. | + +--------------------------------------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | storage | Yes | Requested capacity in the PVC, in Gi. | + | | | | + | | | For OBS, this field is used only for verification (cannot be empty or 0). Its value is fixed at **1**, and any value you set does not take effect for OBS. | + +--------------------------------------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | storageClassName | Yes | Storage class name, which must be the same as the storage class of the PV in :ref:`1 `. 
| + | | | | + | | | The storage class name of OBS volumes is **csi-obs**. | + +--------------------------------------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | volumeName | Yes | PV name, which must be the same as the PV name in :ref:`1 `. | + +--------------------------------------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + b. Run the following command to create a PVC: + + .. code-block:: + + kubectl apply -f pvc-obs.yaml + +#. Create an application. + + a. Create a file named **web-demo.yaml**. In this example, the OBS volume is mounted to the **/data** path. + + .. code-block:: + + apiVersion: apps/v1 + kind: Deployment + metadata: + name: web-demo + namespace: default + spec: + replicas: 2 + selector: + matchLabels: + app: web-demo + template: + metadata: + labels: + app: web-demo + spec: + containers: + - name: container-1 + image: nginx:latest + volumeMounts: + - name: pvc-obs-volume #Volume name, which must be the same as the volume name in the volumes field. + mountPath: /data #Location where the storage volume is mounted. + imagePullSecrets: + - name: default-secret + volumes: + - name: pvc-obs-volume #Volume name, which can be customized. + persistentVolumeClaim: + claimName: pvc-obs #Name of the created PVC. + + b. Run the following command to create an application to which the OBS volume is mounted: + + .. code-block:: + + kubectl apply -f web-demo.yaml + + After the workload is created, you can try :ref:`Verifying Data Persistence and Sharing `. + +.. _cce_10_0379__section11593165910013: + +Verifying Data Persistence and Sharing +-------------------------------------- + +#. View the deployed applications and files. + + a. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-demo + + Expected output: + + .. code-block:: + + web-demo-846b489584-mjhm9 1/1 Running 0 46s + web-demo-846b489584-wvv5s 1/1 Running 0 46s + + b. Run the following commands in sequence to view the files in the **/data** path of the pods: + + .. code-block:: + + kubectl exec web-demo-846b489584-mjhm9 -- ls /data + kubectl exec web-demo-846b489584-wvv5s -- ls /data + + If no result is returned for both pods, no file exists in the **/data** path. + +#. Run the following command to create a file named **static** in the **/data** path: + + .. code-block:: + + kubectl exec web-demo-846b489584-mjhm9 -- touch /data/static + +#. Run the following command to view the files in the **/data** path: + + .. code-block:: + + kubectl exec web-demo-846b489584-mjhm9 -- ls /data + + Expected output: + + .. code-block:: + + static + +#. **Verify data persistence.** + + a. Run the following command to delete the pod named **web-demo-846b489584-mjhm9**: + + .. code-block:: + + kubectl delete pod web-demo-846b489584-mjhm9 + + Expected output: + + .. code-block:: + + pod "web-demo-846b489584-mjhm9" deleted + + After the deletion, the Deployment controller automatically creates a replica. + + b. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-demo + + The expected output is as follows, in which **web-demo-846b489584-d4d4j** is the newly created pod: + + .. 
code-block:: + + web-demo-846b489584-d4d4j 1/1 Running 0 110s + web-demo-846b489584-wvv5s 1/1 Running 0 7m50s + + c. Run the following command to check whether the files in the **/data** path of the new pod have been modified: + + .. code-block:: + + kubectl exec web-demo-846b489584-d4d4j -- ls /data + + Expected output: + + .. code-block:: + + static + + If the **static** file still exists, the data can be stored persistently. + +#. **Verify data sharing.** + + a. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-demo + + Expected output: + + .. code-block:: + + web-demo-846b489584-d4d4j 1/1 Running 0 7m + web-demo-846b489584-wvv5s 1/1 Running 0 13m + + b. Run the following command to create a file named **share** in the **/data** path of either pod. In this example, select the pod named **web-demo-846b489584-d4d4j**. + + .. code-block:: + + kubectl exec web-demo-846b489584-d4d4j -- touch /data/share + + Check the files in the **/data** path of the pod. + + .. code-block:: + + kubectl exec web-demo-846b489584-d4d4j -- ls /data + + Expected output: + + .. code-block:: + + share + static + + c. Check whether the **share** file exists in the **/data** path of the other pod (**web-demo-846b489584-wvv5s**) as well to verify data sharing. + + .. code-block:: + + kubectl exec web-demo-846b489584-wvv5s -- ls /data + + Expected output: + + .. code-block:: + + share + static + + If a file created in the **/data** path of one pod is also present in the **/data** path of the other pod, the two pods share the same volume. + +.. _cce_10_0379__section16505832153318: + +Related Operations +------------------ + +You can also perform the operations listed in :ref:`Table 4 `. + +.. _cce_10_0379__table1619535674020: + +.. table:: **Table 4** Related operations + + +--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Operation | Description | Procedure | + +================================+====================================================================================================================================================+============================================================================================================================================================================================================================================+ + | Creating a storage volume (PV) | Create a PV on the CCE console. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumes (PVs)** tab. Click **Create Volume** in the upper right corner. In the dialog box displayed, configure the parameters. | + | | | | + | | | - **Volume Type**: Select **OBS**. | + | | | | + | | | - **OBS**: Click **Select OBS**. On the displayed page, select the OBS storage that meets your requirements and click **OK**. | + | | | | + | | | - **PV Name**: Enter the PV name, which must be unique in the same cluster. | + | | | | + | | | - **Access Mode**: OBS volumes support only **ReadWriteMany**, indicating that a storage volume can be mounted to multiple nodes in read/write mode. For details, see :ref:`Volume Access Modes `.
| + | | | | + | | | - **Reclaim Policy**: **Delete** or **Retain**. For details, see :ref:`PV Reclaim Policy `. | + | | | | + | | | .. note:: | + | | | | + | | | If multiple PVs use the same underlying storage volume, use **Retain** to prevent the underlying volume from being deleted with a PV. | + | | | | + | | | - **Custom**: Customize a secret if you want to assign different user permissions to different OBS storage devices. For details, see :ref:`Using a Custom Access Key (AK/SK) to Mount an OBS Volume `. | + | | | | + | | | Only secrets with the **secret.kubernetes.io/used-by = csi** label can be selected. The secret type is cfe/secure-opaque. If no secret is available, click **Create Secret** to create one. | + | | | | + | | | - **Mount Options**: Enter the mounting parameter key-value pairs. For details, see :ref:`Configuring OBS Mount Options `. | + | | | | + | | | #. Click **Create**. | + +--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Updating an access key | Update the access key of object storage on the CCE console. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. Click **More** > **Update Access Key** in the **Operation** column of the PVC. | + | | | #. Upload a key file in .csv format. For details, see :ref:`Obtaining an Access Key `. Click **OK**. | + | | | | + | | | .. note:: | + | | | | + | | | After a global access key is updated, all pods mounted with the object storage that uses this access key can be accessed only after being restarted. | + +--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Viewing events | You can view event names, event types, number of occurrences, Kubernetes events, first occurrence time, and last occurrence time of the PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View Events** in the **Operation** column of the target PVC or PV to view events generated within one hour (event data is retained for one hour). | + +--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Viewing a YAML file | You can view, copy, and download the YAML files of a PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. 
Click **View YAML** in the **Operation** column of the target PVC or PV to view or download the YAML. | + +--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/storage/object_storage_service_obs/using_an_obs_bucket_through_a_dynamic_pv.rst b/umn/source/storage/object_storage_service_obs/using_an_obs_bucket_through_a_dynamic_pv.rst new file mode 100644 index 0000000..fa6d0aa --- /dev/null +++ b/umn/source/storage/object_storage_service_obs/using_an_obs_bucket_through_a_dynamic_pv.rst @@ -0,0 +1,385 @@ +:original_name: cce_10_0630.html + +.. _cce_10_0630: + +Using an OBS Bucket Through a Dynamic PV +======================================== + +This section describes how to automatically create an OBS bucket. It is applicable when no underlying storage volume is available. + +Constraints +----------- + +- Kata containers do not support OBS volumes. + +- When parallel file systems and object buckets are used, the group and permission of the mount point cannot be modified. + +- CCE allows you to use OBS parallel file systems by calling the OBS SDK or mounting a PVC through the **obsfs** tool provided by OBS. Each time an OBS parallel file system is mounted, an obsfs resident process is generated, as shown in the following figure. + + + .. figure:: /_static/images/en-us_image_0000001647417468.png + :alt: **Figure 1** obsfs resident process + + **Figure 1** obsfs resident process + + Reserve 1 GiB of memory for each obsfs process. For example, for a node with 4 vCPUs and 8 GiB of memory, the obsfs parallel file system should be mounted to **no more than** eight pods. + + .. note:: + + An obsfs resident process runs on a node. If the consumed memory exceeds the upper limit of the node, the node malfunctions. On a node with 4 vCPUs and 8 GiB of memory, if more than 100 pods are mounted to parallel file systems, the node will be unavailable. Control the number of pods mounted to parallel file systems on a single node. + +- OBS allows a single user to create a maximum of 100 buckets. If a large number of dynamic PVCs are created, the number of buckets may exceed the upper limit. As a result, no more OBS buckets can be created. In this scenario, use OBS through the OBS API or SDK and do not mount OBS buckets to the workload on the console. + +Automatically Creating an OBS Volume on the Console +--------------------------------------------------- + +#. Log in to the CCE console and click the cluster name to access the cluster console. +#. Dynamically create a PVC and PV. + + a. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. Click **Create PVC** in the upper right corner. In the dialog box displayed, configure the PVC parameters. 
+ + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=============================================================================================================================================================================================================================================================+ + | PVC Type | In this section, select **OBS**. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PVC Name | Enter the PVC name, which must be unique in the same namespace. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Creation Method | - If no underlying storage is available, select **Dynamically provision** to create a PVC, PV, and underlying storage on the console in cascading mode. | + | | - If underlying storage is available, create a storage volume or use an existing storage volume to statically create a PVC based on whether a PV has been created. For details, see :ref:`Using an Existing OBS Bucket Through a Static PV `. | + | | | + | | In this example, select **Dynamically provision**. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Storage Classes | The storage class of OBS volumes is **csi-obs**. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Instance Type | - **Parallel file system**: a high-performance file system provided by OBS. It provides millisecond-level access latency, TB/s-level bandwidth, and million-level IOPS. **Parallel file systems are recommended.** | + | | - **Object bucket**: a container that stores objects in OBS. All objects in a bucket are at the same logical level. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | OBS Class | You can select the following object bucket types: | + | | | + | | - **Standard**: Applicable when a large number of hotspot files or small-sized files need to be accessed frequently (multiple times per month on average) and require fast access response. 
| + | | - **Infrequent access**: Applicable when data is not frequently accessed (fewer than 12 times per year on average) but requires fast access response. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Access Mode | OBS volumes support only **ReadWriteMany**, indicating that a storage volume can be mounted to multiple nodes in read/write mode. For details, see :ref:`Volume Access Modes `. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Secret | **Custom**: Customize a secret if you want to assign different user permissions to different OBS storage devices. For details, see :ref:`Using a Custom Access Key (AK/SK) to Mount an OBS Volume `. | + | | | + | | Only secrets with the **secret.kubernetes.io/used-by = csi** label can be selected. The secret type is cfe/secure-opaque. If no secret is available, click **Create Secret** to create one. | + | | | + | | - **Name**: Enter a secret name. | + | | - **Namespace**: Select the namespace where the secret is. | + | | - **Access Key (AK/SK)**: Upload a key file in .csv format. For details, see :ref:`Obtaining an Access Key `. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + b. Click **Create** to create a PVC and a PV. + + You can choose **Storage** in the navigation pane and view the created PVC and PV on the **PersistentVolumeClaims (PVCs)** and **PersistentVolumes (PVs)** tab pages. + +#. Create an application. + + a. In the navigation pane on the left, click **Workloads**. In the right pane, click the **Deployments** tab. + + b. Click **Create Workload** in the upper right corner. On the displayed page, click **Data Storage** in the **Container Settings** area and click **Add Volume** to select **PVC**. + + Mount and use storage volumes, as shown in :ref:`Table 1 `. For details about other parameters, see :ref:`Workloads `. + + .. _cce_10_0630__cce_10_0379_table2529244345: + + .. 
table:: **Table 1** Mounting a storage volume + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=============================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | PVC | Select an existing object storage volume. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Mount Path | Enter a mount path, for example, **/tmp**. | + | | | + | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**. Otherwise, containers will be malfunctional. Mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, causing container startup failures or workload creation failures. | + | | | + | | .. important:: | + | | | + | | NOTICE: | + | | If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Subpath | Enter a subpath, for example, **tmp**, indicating that data in the mount path of the container will be stored in the **tmp** folder of the volume. | + | | | + | | A subpath is used to mount a local volume so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Permission | - **Read-only**: You can only read the data in the mounted volumes. | + | | - **Read/Write**: You can modify the data volumes mounted to the path. Newly written data is not migrated if the container is migrated, which may cause data loss. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + In this example, the disk is mounted to the **/data** path of the container. The container data generated in this path is stored in the OBS volume. + + c. After the configuration, click **Create Workload**. + + After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to :ref:`PV Reclaim Policy `. + +(kubectl) Automatically Creating an OBS Volume +---------------------------------------------- + +#. Use kubectl to connect to the cluster. +#. Use **StorageClass** to dynamically create a PVC and PV. + + a. Create the **pvc-obs-auto.yaml** file. + + .. code-block:: + + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: pvc-obs-auto + namespace: default + annotations: + everest.io/obs-volume-type: STANDARD # Object storage type. + csi.storage.k8s.io/fstype: obsfs # Instance type. + csi.storage.k8s.io/node-publish-secret-name: # Custom secret name. + csi.storage.k8s.io/node-publish-secret-namespace: # Namespace of the custom secret. + spec: + accessModes: + - ReadWriteMany # For object storage, the value must be ReadWriteMany. + resources: + requests: + storage: 1Gi # OBS volume capacity. + storageClassName: csi-obs # The storage class type is OBS. + + .. table:: **Table 2** Key parameters + + +--------------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Description | + +==================================================+=======================+=====================================================================================================================================================================================================================+ + | everest.io/obs-volume-type | Yes | OBS storage class. | + | | | | + | | | - If **fsType** is set to **s3fs**, **STANDARD** (standard bucket) and **WARM** (infrequent access bucket) are supported. | + | | | - This parameter is invalid when **fsType** is set to **obsfs**. 
| + +--------------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | csi.storage.k8s.io/fstype | Yes | Instance type. The value can be **obsfs** or **s3fs**. | + | | | | + | | | - **obsfs**: Parallel file system, which is mounted using obsfs (recommended). | + | | | - **s3fs**: Object bucket, which is mounted using s3fs. | + +--------------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | csi.storage.k8s.io/node-publish-secret-name | No | Custom secret name. | + | | | | + | | | (Recommended) Select this option if you want to assign different user permissions to different OBS storage devices. For details, see :ref:`Using a Custom Access Key (AK/SK) to Mount an OBS Volume `. | + +--------------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | csi.storage.k8s.io/node-publish-secret-namespace | No | Namespace of a custom secret. | + +--------------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | storage | Yes | Requested capacity in the PVC, in Gi. | + | | | | + | | | For OBS buckets, this field is used only for verification (cannot be empty or 0). Its value is fixed at **1**, and any value you set does not take effect for OBS buckets. | + +--------------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | storageClassName | Yes | Storage class name. The storage class name of OBS volumes is **csi-obs**. | + +--------------------------------------------------+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + b. Run the following command to create a PVC: + + .. code-block:: + + kubectl apply -f pvc-obs-auto.yaml + +#. Create an application. + + a. Create a file named **web-demo.yaml**. In this example, the OBS volume is mounted to the **/data** path. + + .. code-block:: + + apiVersion: apps/v1 + kind: Deployment + metadata: + name: web-demo + namespace: default + spec: + replicas: 2 + selector: + matchLabels: + app: web-demo + template: + metadata: + labels: + app: web-demo + spec: + containers: + - name: container-1 + image: nginx:latest + volumeMounts: + - name: pvc-obs-volume #Volume name, which must be the same as the volume name in the volumes field. + mountPath: /data #Location where the storage volume is mounted. 
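+ # (Optional, illustrative only) A subPath field could be added under this volume mount to keep the container data in a specific folder of the OBS volume, matching the Subpath parameter described in Table 1, for example: + # subPath: tmp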
+ imagePullSecrets: + - name: default-secret + volumes: + - name: pvc-obs-volume #Volume name, which can be customized. + persistentVolumeClaim: + claimName: pvc-obs-auto #Name of the created PVC. + + b. Run the following command to create an application to which the OBS volume is mounted: + + .. code-block:: + + kubectl apply -f web-demo.yaml + + After the workload is created, you can try :ref:`Verifying Data Persistence and Sharing `. + +.. _cce_10_0630__section11593165910013: + +Verifying Data Persistence and Sharing +-------------------------------------- + +#. View the deployed applications and files. + + a. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-demo + + Expected output: + + .. code-block:: + + web-demo-846b489584-mjhm9 1/1 Running 0 46s + web-demo-846b489584-wvv5s 1/1 Running 0 46s + + b. Run the following commands in sequence to view the files in the **/data** path of the pods: + + .. code-block:: + + kubectl exec web-demo-846b489584-mjhm9 -- ls /data + kubectl exec web-demo-846b489584-wvv5s -- ls /data + + If no result is returned for both pods, no file exists in the **/data** path. + +#. Run the following command to create a file named **static** in the **/data** path: + + .. code-block:: + + kubectl exec web-demo-846b489584-mjhm9 -- touch /data/static + +#. Run the following command to view the files in the **/data** path: + + .. code-block:: + + kubectl exec web-demo-846b489584-mjhm9 -- ls /data + + Expected output: + + .. code-block:: + + static + +#. **Verify data persistence.** + + a. Run the following command to delete the pod named **web-demo-846b489584-mjhm9**: + + .. code-block:: + + kubectl delete pod web-demo-846b489584-mjhm9 + + Expected output: + + .. code-block:: + + pod "web-demo-846b489584-mjhm9" deleted + + After the deletion, the Deployment controller automatically creates a replica. + + b. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-demo + + The expected output is as follows, in which **web-demo-846b489584-d4d4j** is the newly created pod: + + .. code-block:: + + web-demo-846b489584-d4d4j 1/1 Running 0 110s + web-demo-846b489584-wvv5s 1/1 Running 0 7m50s + + c. Run the following command to check whether the files in the **/data** path of the new pod have been modified: + + .. code-block:: + + kubectl exec web-demo-846b489584-d4d4j -- ls /data + + Expected output: + + .. code-block:: + + static + + If the **static** file still exists, the data can be stored persistently. + +#. **Verify data sharing.** + + a. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-demo + + Expected output: + + .. code-block:: + + web-demo-846b489584-d4d4j 1/1 Running 0 7m + web-demo-846b489584-wvv5s 1/1 Running 0 13m + + b. Run the following command to create a file named **share** in the **/data** path of either pod: In this example, select the pod named **web-demo-846b489584-d4d4j**. + + .. code-block:: + + kubectl exec web-demo-846b489584-d4d4j -- touch /data/share + + Check the files in the **/data** path of the pod. + + .. code-block:: + + kubectl exec web-demo-846b489584-d4d4j -- ls /data + + Expected output: + + .. code-block:: + + share + static + + c. Check whether the **share** file exists in the **/data** path of another pod (**web-demo-846b489584-wvv5s**) as well to verify data sharing. + + .. code-block:: + + kubectl exec web-demo-846b489584-wvv5s -- ls /data + + Expected output: + + .. 
code-block:: + + share + static + + After you create a file in the **/data** path of a pod, if the file is also created in the **/data** path of another pods, the two pods share the same volume. + +Related Operations +------------------ + +You can also perform the operations listed in :ref:`Table 3 `. + +.. _cce_10_0630__table1619535674020: + +.. table:: **Table 3** Related operations + + +------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Operation | Description | Procedure | + +========================+====================================================================================================================================================+=====================================================================================================================================================================================+ + | Updating an access key | Update the access key of object storage on the CCE console. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. Click **More** > **Update Access Key** in the **Operation** column of the PVC. | + | | | #. Upload a key file in .csv format. For details, see :ref:`Obtaining an Access Key `. Click **OK**. | + | | | | + | | | .. note:: | + | | | | + | | | After a global access key is updated, all pods mounted with the object storage that uses this access key can be accessed only after being restarted. | + +------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Events | You can view event names, event types, number of occurrences, Kubernetes events, first occurrence time, and last occurrence time of the PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View Events** in the **Operation** column of the target PVC or PV to view events generated within one hour (event data is retained for one hour). | + +------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Viewing a YAML file | You can view, copy, and download the YAML files of a PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View YAML** in the **Operation** column of the target PVC or PV to view or download the YAML. 
| + +------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/storage/overview.rst b/umn/source/storage/overview.rst index 4aaa3a2..a9f4f04 100644 --- a/umn/source/storage/overview.rst +++ b/umn/source/storage/overview.rst @@ -5,187 +5,56 @@ Overview ======== -Volume ------- +Container Storage +----------------- -On-disk files in a container are ephemeral, which will be lost when the container crashes and are difficult to be shared between containers running together in a pod. The Kubernetes volume abstraction solves both of these problems. Volumes cannot be independently created, but defined in the pod spec. All containers in a pod can access its volumes, but the volumes must have been mounted to any directory in a container. - -The following figure shows how a storage volume is used between containers in a pod. - -|image1| - -A volume will no longer exist if the pod to which it is mounted does not exist. However, files in the volume may outlive the volume, depending on the volume type. - -.. _cce_10_0307__section16559121287: - -Volume Types ------------- - -Volumes can be classified into local volumes and cloud volumes. - -- Local volumes - - CCE supports the following five types of local volumes. For details about how to use them, see :ref:`Using Local Disks as Storage Volumes `. - - - emptyDir: an empty volume used for temporary storage - - hostPath: mounts a directory on a host (node) to your container for reading data from the host. - - ConfigMap: references the data stored in a ConfigMap for use by containers. - - Secret: references the data stored in a secret for use by containers. - -- Cloud volumes - - CCE supports the following types of cloud volumes: - - - EVS - - SFS Turbo - - OBS - - SFS - -CSI ---- - -You can use Kubernetes Container Storage Interface (CSI) to develop plug-ins to support specific storage volumes. - -CCE developed the storage add-on :ref:`everest ` for you to use cloud storage services, such as EVS and OBS. You can install this add-on when creating a cluster. - -PV and PVC ----------- - -Kubernetes provides PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs) to abstract details of how storage is provided from how it is consumed. You can request specific size of storage when needed, just like pods can request specific levels of resources (CPU and memory). - -- PV: A PV is a persistent storage volume in a cluster. Same as a node, a PV is a cluster-level resource. -- PVC: A PVC describes a workload's request for storage resources. This request consumes existing PVs in the cluster. If there is no PV available, underlying storage and PVs are dynamically created. When creating a PVC, you need to describe the attributes of the requested persistent storage, such as the size of the volume and the read/write permissions. - -You can bind PVCs to PVs in a pod so that the pod can use storage resources. The following figure shows the relationship between PVs and PVCs. +CCE container storage is implemented based on Kubernetes container storage APIs (:ref:`CSI `). CCE integrates multiple types of cloud storage and covers different application scenarios. 
CCE is fully compatible with Kubernetes native storage services, such as emptyDir, hostPath, secret, and ConfigMap. -.. figure:: /_static/images/en-us_image_0000001518222608.png - :alt: **Figure 1** PVC-to-PV binding +.. figure:: /_static/images/en-us_image_0000001647576484.png + :alt: **Figure 1** Container storage type - **Figure 1** PVC-to-PV binding + **Figure 1** Container storage type -PVs describes storage resources in the cluster. PVCs are requests for those resources. The following sections will describe how to use kubectl to connect to storage resources. +CCE allows you to mount cloud storage volumes to your pods. Their features are described below. -If you do not want to create storage resources or PVs manually, you can use :ref:`StorageClasses `. +.. table:: **Table 1** Cloud storage comparison -.. _cce_10_0307__section19926174743310: + +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Dimension | EVS | SFS | SFS Turbo | OBS | + +======================+=======================================================================================================================================================================================================================================================================+========================================================================================================================================================================================================================================+================================================================================================================================================================================================================================================================================================================================================+========================================================================================================================================================================================================================================================+ + | Definition | EVS offers scalable block storage for cloud servers. With high reliability, high performance, and rich specifications, EVS disks can be used for distributed file systems, dev/test environments, data warehouses, and high-performance computing (HPC) applications. 
| Expandable to petabytes, SFS provides fully hosted shared file storage, highly available and stable to handle data- and bandwidth-intensive applications in HPC, media processing, file sharing, content management, and web services. | Expandable to 320 TB, SFS Turbo provides a fully hosted shared file storage, which is highly available and stable, to support small files and applications requiring low latency and high IOPS. You can use SFS Turbo in high-traffic websites, log storage, compression/decompression, DevOps, enterprise OA, and containerized applications. | Object Storage Service (OBS) provides massive, secure, and cost-effective data storage for you to store data of any type and size. You can use it in enterprise backup/archiving, video on demand (VoD), video surveillance, and many other scenarios. | + +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Data storage logic | Stores binary data and cannot directly store files. To store files, format the file system first. | Stores files and sorts and displays data in the hierarchy of files and folders. | Stores files and sorts and displays data in the hierarchy of files and folders. | Stores objects. Files directly stored automatically generate the system metadata, which can also be customized by users. | + +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Access mode | Accessible only after being mounted to ECSs or BMSs and initialized. 
| Mounted to ECSs or BMSs using network protocols. A network address must be specified or mapped to a local directory for access. | Supports the Network File System (NFS) protocol (NFSv3 only). You can seamlessly integrate existing applications and tools with SFS Turbo. | Accessible through the Internet or Direct Connect (DC). Specify the bucket address and use transmission protocols such as HTTP or HTTPS. | + +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Static provisioning | Supported. For details, see :ref:`Using an Existing EVS Disk Through a Static PV `. | Supported. For details, see :ref:`Using an Existing SFS File System Through a Static PV `. | Supported. For details, see :ref:`Using an Existing SFS Turbo File System Through a Static PV `. | Supported. For details, see :ref:`Using an Existing OBS Bucket Through a Static PV `. | + +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Dynamic provisioning | Supported. For details, see :ref:`Using an EVS Disk Through a Dynamic PV `. | Supported. For details, see :ref:`Using an SFS File System Through a Dynamic PV `. | Not supported | Supported. For details, see :ref:`Using an OBS Bucket Through a Dynamic PV `. 
| + +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Features | Non-shared storage. Each volume can be mounted to only one node. | Shared storage featuring high performance and throughput | Shared storage featuring high performance and bandwidth | Shared, user-mode file system | + +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Usage | HPC, enterprise core cluster applications, enterprise application systems, and dev/test | HPC, media processing, content management, web services, big data, and analysis applications | High-traffic websites, log storage, DevOps, and enterprise OA | Big data analytics, static website hosting, online video on demand (VoD), gene sequencing, intelligent video surveillance, backup and archiving, and enterprise cloud boxes (web disks) | + | | | | | | + | | .. note:: | .. note:: | | | + | | | | | | + | | HPC apps here require high-speed and high-IOPS storage, such as industrial design and energy exploration. | HPC apps here require high bandwidth and shared file storage, such as gene sequencing and image rendering. 
| | | + +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Capacity | TB | SFS 1.0: PB | General-purpose: TB | EB | + +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Latency | 1-2 ms | SFS 1.0: 3-20 ms | General-purpose: 1-5 ms | 10 ms | + +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | IOPS/TPS | 33,000 for a single disk | SFS 1.0: 2,000 | General-purpose: up to 
100,000 | Tens of millions | + +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Bandwidth | MB/s | SFS 1.0: GB/s | General-purpose: up to GB/s | TB/s | + +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -StorageClass ------------- +Documentation +------------- -StorageClass describes the storage class used in the cluster. You need to specify StorageClass when creating a PVC or PV. As of now, CCE provides storage classes such as csi-disk, csi-nas, and csi-obs by default. When defining a PVC, you can use a StorageClassName to create a PV of the corresponding type and automatically create underlying storage resources. - -You can run the following command to query the storage classes that CCE supports. You can use the CSI plug-in provided by CCE to customize a storage class, which functions similarly as the default storage classes in CCE. - -.. code-block:: - - # kubectl get sc - NAME PROVISIONER AGE - csi-disk everest-csi-provisioner 17d # Storage class for EVS disks - csi-disk-topology everest-csi-provisioner 17d # Storage class for EVS disks with delayed binding - csi-nas everest-csi-provisioner 17d # Storage class for SFS file systems - csi-obs everest-csi-provisioner 17d # Storage class for OBS buckets - -After a StorageClass is set, PVs can be automatically created and maintained. You only need to specify the StorageClass when creating a PVC, which greatly reduces the workload. 
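As a minimal sketch of the StorageClass customization described above (the class name ``csi-disk-sas`` and the exact parameter set are illustrative assumptions based on the everest conventions used elsewhere in this document, not CCE defaults), a user-defined StorageClass points at the everest provisioner and pre-sets the EVS parameters, so that any PVC referencing it is dynamically provisioned without manual PV creation:

.. code-block::

   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: csi-disk-sas                                         # Illustrative name for the custom class
   provisioner: everest-csi-provisioner                         # Same everest CSI provisioner as the default classes
   parameters:
     csi.storage.k8s.io/csi-driver-name: disk.csi.everest.io    # EVS CSI driver (assumed parameter key; verify against the everest add-on documentation)
     csi.storage.k8s.io/fstype: ext4                            # File system created on the provisioned disk
     everest.io/disk-volume-type: SAS                           # EVS disk type to provision
   reclaimPolicy: Delete
   volumeBindingMode: Immediate

A PVC then only needs to set ``storageClassName: csi-disk-sas``; everest creates the PV and the underlying EVS disk automatically, just as it does for the built-in classes.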
- -Cloud Services for Container Storage ------------------------------------- - -CCE allows you to mount local and cloud storage volumes listed in :ref:`Volume Types ` to your pods. Their features are described below. - - -.. figure:: /_static/images/en-us_image_0000001568902557.png - :alt: **Figure 2** Volume types supported by CCE - - **Figure 2** Volume types supported by CCE - -.. table:: **Table 1** Detailed description of cloud storage services - - +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Dimension | EVS | SFS | OBS | SFS Turbo | - +======================+=======================================================================================================================================================================================================================================================================+========================================================================================================================================================================================================================================+====================================================================================================================================================================================================================================================+======================================================================================================================================================================================================================================================================================================================================+ - | Definition | EVS offers scalable block storage for cloud servers. With high reliability, high performance, and rich specifications, EVS disks can be used for distributed file systems, dev/test environments, data warehouses, and high-performance computing (HPC) applications. | Expandable to petabytes, SFS provides fully hosted shared file storage, highly available and stable to handle data- and bandwidth-intensive applications in HPC, media processing, file sharing, content management, and web services. | OBS is a stable, secure, and easy-to-use object storage service that lets you inexpensively store data of any format and size. You can use it in enterprise backup/archiving, video on demand (VoD), video surveillance, and many other scenarios. 
| Expandable to 320 TB, SFS Turbo provides a fully hosted shared file storage, highly available and stable to support small files and applications requiring low latency and high IOPS. You can use SFS Turbo in high-traffic websites, log storage, compression/decompression, DevOps, enterprise OA, and containerized applications. | - +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Data storage logic | Stores binary data and cannot directly store files. To store files, you need to format the file system first. | Stores files and sorts and displays data in the hierarchy of files and folders. | Stores objects. Files directly stored automatically generate the system metadata, which can also be customized by users. | Stores files and sorts and displays data in the hierarchy of files and folders. | - +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Services | Accessible only after being mounted to ECSs or BMSs and initialized. | Mounted to ECSs or BMSs using network protocols. A network address must be specified or mapped to a local directory for access. | Accessible through the Internet or Direct Connect (DC). You need to specify the bucket address and use transmission protocols such as HTTP and HTTPS. | Supports the Network File System (NFS) protocol (NFSv3 only). You can seamlessly integrate existing applications and tools with SFS Turbo. 
| - +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Static provisioning | Supported | Supported | Supported | Supported | - +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Dynamic provisioning | Supported | Supported | Supported | Not supported | - +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Features | Non-shared storage. Each volume can be mounted to only one node. 
| Shared storage featuring high performance and throughput | Shared, user-mode file system | Shared storage featuring high performance and bandwidth | - +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Usage | HPC, enterprise core cluster applications, enterprise application systems, and dev/test | HPC, media processing, content management, web services, big data, and analysis applications | Big data analysis, static website hosting, online video on demand (VoD), gene sequencing, intelligent video surveillance, backup and archiving, and enterprise cloud boxes (web disks) | High-traffic websites, log storage, DevOps, and enterprise OA | - | | | | | | - | | .. note:: | .. note:: | | | - | | | | | | - | | HPC apps here require high-speed and high-IOPS storage, such as industrial design and energy exploration. | HPC apps here require high bandwidth and shared file storage, such as gene sequencing and image rendering. 
| | | - +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Capacity | TB | SFS 1.0: PB | EB | TB | - +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Latency | 1-2 ms | SFS 1.0: 3-20 ms | 10 ms | 1-2 ms | - +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | IOPS/TPS | 33,000 for a single disk | SFS 1.0: 2K | Tens of millions | 100K | - 
+----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Bandwidth | MiB/s | SFS 1.0: GiB/s | TB/s | GiB/s | - +----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -Constraints ------------ - -Secure containers do not support OBS volumes. - -- A single user can create a maximum of 100 OBS buckets on the console. If you have a large number of CCE workloads and you want to mount an OBS bucket to every workload, you may easily run out of buckets. In this scenario, you are advised to use OBS through the OBS API or SDK and do not mount OBS buckets to the workload on the console. - -- For clusters earlier than v1.19.10, if an HPA policy is used to scale out a workload with EVS volumes mounted, the existing pods cannot be read or written when a new pod is scheduled to another node. - - For clusters of v1.19.10 and later, if an HPA policy is used to scale out a workload with EVS volume mounted, a new pod cannot be started because EVS disks cannot be attached. - -- When you uninstall a subpath in a cluster of v1.19 or earlier, all folders in the subpath are traversed. If there are a large number of folders, the traversal takes a long time, so does the volume unmount. You are advised not to create too many folders in the subpath. - -- The maximum size of a single file in OBS mounted to a CCE cluster is far smaller than that defined by obsfs. 
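The EVS-related constraints above follow from EVS being non-shared, ReadWriteOnce storage: a disk can be attached to only one node at a time, so scaling out a workload that mounts a single EVS volume cannot work. One commonly used Kubernetes pattern (a sketch under assumed names, not a recommendation made by this document) is a StatefulSet with **volumeClaimTemplates**, so that every replica gets its own EVS volume through the csi-disk storage class:

.. code-block::

   apiVersion: apps/v1
   kind: StatefulSet
   metadata:
     name: web-evs-example                      # Illustrative workload name
   spec:
     serviceName: web-evs-example
     replicas: 2
     selector:
       matchLabels:
         app: web-evs-example
     template:
       metadata:
         labels:
           app: web-evs-example
       spec:
         containers:
         - name: container-1
           image: nginx:latest                  # Illustrative image
           volumeMounts:
           - name: data
             mountPath: /data
     volumeClaimTemplates:                      # One EVS PVC is created per replica.
     - metadata:
         name: data
         annotations:
           everest.io/disk-volume-type: SAS     # EVS disk type
       spec:
         accessModes:
         - ReadWriteOnce                        # EVS volumes support ReadWriteOnce only.
         storageClassName: csi-disk
         resources:
           requests:
             storage: 10Gi

Because each replica binds to its own disk, adding replicas does not require re-attaching an existing EVS disk to another node.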
- -Notice on Using Add-ons ------------------------ - -- To use the CSI plug-in (the :ref:`everest ` add-on in CCE), your cluster must be using **Kubernetes 1.15 or later**. This add-on is installed by default when you create a cluster of v1.15 or later. The FlexVolume plug-in (the :ref:`storage-driver ` add-on in CCE) is installed by default when you create a cluster of v1.13 or earlier. -- If your cluster is upgraded from v1.13 to v1.15, :ref:`storage-driver ` is replaced by everest (v1.1.6 or later) for container storage. The takeover does not affect the original storage functions. -- In version 1.2.0 of the everest add-on, **key authentication** is optimized when OBS is used. After the everest add-on is upgraded from a version earlier than 1.2.0, you need to restart all workloads that use OBS in the cluster. Otherwise, workloads may not be able to use OBS. - -Differences Between CSI and FlexVolume Plug-ins ------------------------------------------------ - -.. table:: **Table 2** CSI and FlexVolume - - +---------------------+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Kubernetes Solution | CCE Add-on | Feature | Recommendation | - +=====================+=================+================================================================================================================================================================================================================================================================================================================================================================================================================================================+================================================================================================================================================================================================================================================================================+ - | CSI | Everest | CSI was developed as a standard for exposing arbitrary block and file storage storage systems to containerized workloads. Using CSI, third-party storage providers can deploy plugins exposing new storage systems in Kubernetes without having to touch the core Kubernetes code. In CCE, the everest add-on is installed by default in clusters of Kubernetes v1.15 and later to connect to storage services (EVS, OBS, SFS, and SFS Turbo). | The :ref:`everest ` add-on is installed by default in clusters of **v1.15 and later**. CCE will mirror the Kubernetes community by providing continuous support for updated CSI capabilities. 
| - | | | | | - | | | The everest add-on consists of two parts: | | - | | | | | - | | | - **everest-csi-controller** for storage volume creation, deletion, capacity expansion, and cloud disk snapshots | | - | | | - **everest-csi-driver** for mounting, unmounting, and formatting storage volumes on nodes | | - | | | | | - | | | For details, see :ref:`everest `. | | - +---------------------+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Flexvolume | storage-driver | FlexVolume is an out-of-tree plugin interface that has existed in Kubernetes since version 1.2 (before CSI). CCE provided FlexVolume volumes through the storage-driver add-on installed in clusters of Kubernetes v1.13 and earlier versions. This add-on connects clusters to storage services (EVS, OBS, SFS, and SFS Turbo). | For the created clusters of **v1.13 or earlier**, the installed FlexVolume plug-in (CCE add-on :ref:`storage-driver `) can still be used. CCE stops providing update support for this add-on, and you are advised to :ref:`upgrade these clusters `. | - | | | | | - | | | For details, see :ref:`storage-driver `. | | - +---------------------+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -.. note:: - - - A cluster can use only one type of storage plug-ins. - - The FlexVolume plug-in cannot be replaced by the CSI plug-in in clusters of v1.13 or earlier. You can only upgrade these clusters. For details, see :ref:`Cluster Upgrade `. - -Checking Storage Add-ons ------------------------- - -#. Log in to the CCE console. -#. In the navigation tree on the left, click **Add-ons**. -#. Click the **Add-on Instance** tab. -#. Select a cluster in the upper right corner. The default storage add-on installed during cluster creation is displayed. - -.. 
|image1| image:: /_static/images/en-us_image_0000001517903088.png +- :ref:`Storage Basics ` +- :ref:`Elastic Volume Service (EVS) ` +- :ref:`Scalable File Service (SFS) ` +- :ref:`SFS Turbo File Systems ` +- :ref:`Object Storage Service (OBS) ` diff --git a/umn/source/storage/pvcs.rst b/umn/source/storage/pvcs.rst deleted file mode 100644 index b467578..0000000 --- a/umn/source/storage/pvcs.rst +++ /dev/null @@ -1,317 +0,0 @@ -:original_name: cce_10_0378.html - -.. _cce_10_0378: - -PVCs -==== - -PersistentVolumeClaims (PVCs) describe a workload's request for storage resources. This request consumes existing PVs in the cluster. If there is no PV available, underlying storage and PVs are dynamically created. When creating a PVC, you need to describe the attributes of the requested persistent storage, such as the size of the volume and the read/write permissions. - -Constraints ------------ - -When a PVC is created, the system checks whether there is an available PV with the same configuration in the cluster. If yes, the PVC binds the available PV to the cluster. If no PV meets the matching conditions, the system dynamically creates a storage volume. - -+---------------+-------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+ -| Description | PVC Field | PV Field | Matching Logic | -+===============+=================================================================================================+================================================================================================+=======================================================================================+ -| region | pvc.metadata.labels (failure-domain.beta.kubernetes.io/region or topology.kubernetes.io/region) | pv.metadata.labels (failure-domain.beta.kubernetes.io/region or topology.kubernetes.io/region) | Defined or not defined at the same time. If defined, the settings must be consistent. | -+---------------+-------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+ -| zone | pvc.metadata.labels (failure-domain.beta.kubernetes.io/zone or topology.kubernetes.io/zone) | pv.metadata.labels (failure-domain.beta.kubernetes.io/zone or topology.kubernetes.io/zone) | Defined or not defined at the same time. If defined, the settings must be consistent. | -+---------------+-------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+ -| EVS disk type | pvc.metadata.annotations (everest.io/disk-volume-type) | pv.spec.csi.volumeAttributes (everest.io/disk-volume-type) | Defined or not defined at the same time. If defined, the settings must be consistent. 
| -+---------------+-------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+ -| Key ID | pvc.metadata.annotations (everest.io/crypt-key-id) | pv.spec.csi.volumeAttributes (everest.io/crypt-key-id) | Defined or not defined at the same time. If defined, the settings must be consistent. | -+---------------+-------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+ -| accessMode | accessMode | accessMode | The settings must be consistent. | -+---------------+-------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+ -| Storage class | storageclass | storageclass | The settings must be consistent. | -+---------------+-------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+ - -.. _cce_10_0378__section43881411172418: - -Volume Access Modes -------------------- - -PVs can be mounted to the host system only in the mode supported by underlying storage resources. For example, a file storage system can be read and written by multiple nodes, but an EVS disk can be read and written by only one node. - -- ReadWriteOnce: A volume can be mounted as read-write by a single node. This access mode is supported by EVS. -- ReadWriteMany: A volume can be mounted as read-write by multiple nodes. This access mode is supported by SFS, SFS Turbo and OBS. - -.. table:: **Table 1** Supported access modes - - ============ ============= ============= - Storage Type ReadWriteOnce ReadWriteMany - ============ ============= ============= - EVS Y x - SFS x Y - OBS x Y - SFS Turbo x Y - ============ ============= ============= - -Using a Storage Class to Create a PVC -------------------------------------- - -StorageClass describes the storage class used in the cluster. You need to specify StorageClass to dynamically create PVs and underlying storage resources when creating a PVC. - -**Using the CCE Console** - -#. Log in to the CCE console. -#. Click the cluster name and go to the cluster console. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. -#. Click **Create PVC** in the upper right corner. In the dialog box displayed, set the PVC parameters. - - - **Storage Volume Claim Type**: Select a storage type as required. - - **PVC Name**: Enter a PVC name. - - **Creation Method**: Select **Dynamic creation**. - - **Storage Classes**: Select the required storage class. The following storage resources can be dynamically provisioned: - - - **csi-disk**: EVS disk. - - **csi-nas**: SFS Capacity-Oriented file storage. - - **csi-obs**: OBS bucket. - - - **AZ** (supported only by EVS): Select the AZ where the EVS disk is located. 
- - **Disk Type** (supported only by EVS disks): Select an EVS disk type as required. EVS disk types vary in different regions. - - - Common I/O - - High I/O - - Ultra-high I/O - - - **Access Mode**: **ReadWriteOnce** and **ReadWriteMany** are supported. For details, see :ref:`Volume Access Modes `. - - **Capacity (GiB)** (supported only for EVS, SFS): storage capacity. This parameter is not available for OBS. - - **Encryption** (supported only for EVS and SFS): Select **Encryption**. After selecting this option, you need to select a key. - - **Secret** (supported only for OBS): Select an access key for OBS. For details, see :ref:`Using a Custom AK/SK to Mount an OBS Volume `. - -#. Click **Create**. - -**Using YAML** - -Example YAML for EVS - -- **failure-domain.beta.kubernetes.io/region**: region where the cluster is located. - - For details about the value of **Region**, see `Regions and Endpoints `__. - -- **failure-domain.beta.kubernetes.io/zone**: AZ where the EVS volume is created. It must be the same as the AZ planned for the workload. - - For details about the value of **zone**, see `Regions and Endpoints `__. - -.. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: pvc-evs-auto-example - namespace: default - annotations: - everest.io/disk-volume-type: SSD # EVS disk type. - everest.io/crypt-key-id: 0992dbda-6340-470e-a74e-4f0db288ed82 # (Optional) Key ID, which is used to encrypt EVS disks - - labels: - failure-domain.beta.kubernetes.io/region: eu-de - failure-domain.beta.kubernetes.io/zone: - spec: - accessModes: - - ReadWriteOnce # The value must be ReadWriteOnce for EVS. - resources: - requests: - storage: 10Gi # EVS disk capacity, ranging from 1 to 32768. - storageClassName: csi-disk # The storage class type is EVS. - -Example YAML for file storage: - -.. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: pvc-sfs-auto-example - namespace: default - annotations: - everest.io/crypt-key-id: 0992dbda-6340-470e-a74e-4f0db288ed82 # (Optional) Key ID, which is used to encrypt file systems - everest.io/crypt-alias: sfs/default # (Optional) Key name, which is mandatory for encrypted volumes - everest.io/crypt-domain-id: 2cd7ebd02e4743eba4e6342c09e49344 # (Optional) ID of the tenant to which the encrypted volume belongs. Mandatory for encrypted volumes. - spec: - accessModes: - - ReadWriteMany # The value must be ReadWriteMany for SFS. - resources: - requests: - storage: 10Gi # SFS file system size. - storageClassName: csi-nas # The storage class type is SFS. - -Example YAML for OBS: - -.. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: obs-warm-provision-pvc - namespace: default - annotations: - everest.io/obs-volume-type: STANDARD # OBS bucket type. Currently, standard (STANDARD) and infrequent access (WARM) are supported. - csi.storage.k8s.io/fstype: obsfs # File type. obsfs indicates to create a parallel file system (recommended), and s3fs indicates to create an OBS bucket. - - spec: - accessModes: - - ReadWriteMany # The value must be ReadWriteMany for OBS. - resources: - requests: - storage: 1Gi # This field is valid only for verification (fixed to 1, cannot be empty or 0). The value setting does not take effect for OBS buckets. - storageClassName: csi-obs # The storage class type is OBS. - -Using a PV to Create a PVC --------------------------- - -If a PV has been created, you can create a PVC to apply for PV resources. - -**Using the CCE Console** - -#. Log in to the CCE console. -#. 
Click the cluster name and go to the cluster console. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. -#. Click **Create PVC** in the upper right corner. In the dialog box displayed, set the PVC parameters. - - - **Storage Volume Claim Type**: Select a storage type as required. - - **PVC Name**: name of a PVC. - - **Creation Method**: Select **Existing storage volume**. - - **PV**: Select the volume to be associated, that is, the PV. - -#. Click **Create**. - -**Using YAML** - -Example YAML for EVS - -- **failure-domain.beta.kubernetes.io/region**: region where the cluster is located. - - For details about the value of **Region**, see `Regions and Endpoints `__. - -- **failure-domain.beta.kubernetes.io/zone**: AZ where the EVS volume is created. It must be the same as the AZ planned for the workload. - - For details about the value of **zone**, see `Regions and Endpoints `__. - -.. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: pvc-test - namespace: default - annotations: - everest.io/disk-volume-type: SAS # EVS disk type. - everest.io/crypt-key-id: fe0757de-104c-4b32-99c5-ee832b3bcaa3 # (Optional) Key ID, which is used to encrypt EVS disks - volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner - - labels: - failure-domain.beta.kubernetes.io/region: eu-de - failure-domain.beta.kubernetes.io/zone: - spec: - accessModes: - - ReadWriteOnce # The value must be ReadWriteOnce for EVS. - resources: - requests: - storage: 10Gi - storageClassName: csi-disk # Storage class name. The value is csi-disk for EVS. - volumeName: cce-evs-test # PV name. - -Example YAML for SFS: - -.. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: pvc-sfs-test - namespace: default - annotations: - volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner - spec: - accessModes: - - ReadWriteMany # The value must be ReadWriteMany for SFS. - resources: - requests: - storage: 100Gi # Requested PVC capacity - storageClassName: csi-nas # Storage class name - volumeName: cce-sfs-test # PV name - -Example YAML for OBS: - -.. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: pvc-obs-test - namespace: default - annotations: - everest.io/obs-volume-type: STANDARD # OBS bucket type. Currently, standard (STANDARD) and infrequent access (WARM) are supported. - csi.storage.k8s.io/fstype: s3fs # File type. obsfs indicates to create a parallel file system (recommended), and s3fs indicates to create an OBS bucket. - csi.storage.k8s.io/node-publish-secret-name: test-user - csi.storage.k8s.io/node-publish-secret-namespace: default - volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner - - spec: - accessModes: - - ReadWriteMany # The value must be ReadWriteMany for OBS. - resources: - requests: - storage: 1Gi # Requested PVC capacity. This field is valid only for verification (fixed to 1, cannot be empty or 0). The value setting does not take effect for OBS buckets. - storageClassName: csi-obs # Storage class name. The value is csi-obs for OBS. - volumeName: cce-obs-test # PV name. - -Example YAML for SFS Turbo: - -.. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: pvc-test - namespace: default - annotations: - volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner - spec: - accessModes: - - ReadWriteMany # The value must be ReadWriteMany for SFS Turbo. 
- resources: - requests: - storage: 100Gi # Requested PVC capacity. - storageClassName: csi-sfsturbo # Storage class name. The value is csi-sfsturbo for SFS Turbo. - volumeName: pv-sfsturbo-test # PV name. - -Using a Snapshot to Creating a PVC ----------------------------------- - -The disk type, encryption setting, and disk mode of the created EVS PVC are consistent with those of the snapshot's source EVS disk. - -**Using the CCE Console** - -#. Log in to the CCE console. -#. Click the cluster name and go to the cluster console. Choose **Storage** from the navigation pane, and click the **Snapshots and Backups** tab. -#. Locate the snapshot that you want to use for creating a PVC, click **Create PVC**, and specify the PVC name in the displayed dialog box. -#. Click **Create**. - -**Using YAML** - -.. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: pvc-test - namespace: default - annotations: - everest.io/disk-volume-type: SSD # EVS disk type, which must be the same as that of the source EVS disk of the snapshot. - labels: - failure-domain.beta.kubernetes.io/region: eu-de - failure-domain.beta.kubernetes.io/zone: - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: '10' - storageClassName: csi-disk - dataSource: - name: cce-disksnap-test # Snapshot name - kind: VolumeSnapshot - apiGroup: snapshot.storage.k8s.io diff --git a/umn/source/storage/pvs.rst b/umn/source/storage/pvs.rst deleted file mode 100644 index 0c2b472..0000000 --- a/umn/source/storage/pvs.rst +++ /dev/null @@ -1,428 +0,0 @@ -:original_name: cce_10_0379.html - -.. _cce_10_0379: - -PVs -=== - -PersistentVolumes (PVs) are persistent storage volumes in a cluster. Same as a node, a PV is a cluster-level resource. - -Constraints ------------ - -- On the new CCE console (the cluster needs to be **upgraded to v1.19.10 or later** and **the everest add-on needs to be upgraded to v1.2.10 or later**), PVs are open to you for management. On the old CCE console, PVs can only be imported or dynamically created. You cannot manage the PV lifecycle on the console. -- Multiple PVs can use the same SFS or SFS Turbo file system with the following restrictions: - - - An error may occur if multiple PVCs/PVs that use the same underlying SFS or SFS Turbo file system are mounted to the same pod. - - The **persistentVolumeReclaimPolicy** parameter in the PVs must be set to **Retain**. Otherwise, when a PV is deleted, the associated underlying volume may be deleted. In this case, other PVs associated with the underlying volume may be abnormal. - - When the underlying volume is repeatedly used, it is recommended that ReadWriteMany be implemented at the application layer to prevent data overwriting and loss. - -Volume Access Modes -------------------- - -PVs can be mounted to the host system only in the mode supported by underlying storage resources. For example, a file storage system can be read and written by multiple nodes, but an EVS disk can be read and written by only one node. - -- ReadWriteOnce: A volume can be mounted as read-write by a single node. This access mode is supported by EVS. -- ReadWriteMany: A volume can be mounted as read-write by multiple nodes. This access mode is supported by SFS, OBS, and SFS Turbo. - -.. 
table:: **Table 1** Access modes supported by cloud storage - - ============ ============= ============= - Storage Type ReadWriteOnce ReadWriteMany - ============ ============= ============= - EVS Y x - SFS x Y - OBS x Y - SFS Turbo x Y - ============ ============= ============= - -.. _cce_10_0379__section19999142414413: - -PV Reclaim Policy ------------------ - -A PV reclaim policy is used to delete or reclaim underlying volumes when a PVC is deleted. The value can be **Delete** or **Retain**. - -- **Delete**: When a PVC is deleted, the PV and underlying storage resources are deleted. -- **Retain**: When a PVC is deleted, the PV and underlying storage resources are not deleted. Instead, you must manually delete these resources. After a PVC is deleted, the PV resource is in the Released state and cannot be bound to the PVC again. - -Everest also allows you to delete a PVC without deleting underlying storage resources. This function can be achieved only by using a YAML file. Set the PV reclaim policy to **Delete** and add **annotations"everest.io/reclaim-policy: retain-volume-only"**. In this way, when the PVC is deleted, the PV resource is deleted, but the underlying storage resources are retained. - -Creating an EVS Volume ----------------------- - -.. note:: - - The requirements for creating an EVS volume are as follows: - - - System disks, DSS disks, and shared disks cannot be used. - - The EVS disk is one of the supported types (common I/O, high I/O, and ultra-high I/O), and the EVS disk device type is SCSI. - - The EVS disk is not frozen or used, and the status is available. - - If the EVS disk is encrypted, the key must be available. - -**Using the CCE Console** - -#. Log in to the CCE console. -#. Click the cluster name and access the cluster console. Choose **Storage** from the navigation pane, and click the **PersistentVolumes (PVs)** tab. -#. Click **Create Volume** in the upper right corner. In the dialog box displayed, set the volume parameters. - - - **Volume Type**: Select **EVS**. - - **EVS**: - - **PV Name**: Enter a PV name. - - **Access Mode**: ReadWriteOnce - - **Reclaim Policy**: Select **Delete** or **Retain** as required. For details, see :ref:`PV Reclaim Policy `. - -#. Click **Create**. - -**Using YAML** - -.. code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - annotations: - pv.kubernetes.io/provisioned-by: everest-csi-provisioner - everest.io/reclaim-policy: retain-volume-only # (Optional) The PV is deleted while the underlying volume is retained. - name: cce-evs-test - labels: - failure-domain.beta.kubernetes.io/region: eu-de - failure-domain.beta.kubernetes.io/zone: - spec: - accessModes: - - ReadWriteOnce # Access mode. The value is fixed to ReadWriteOnce for EVS. - capacity: - storage: 10Gi # EVS disk capacity, in the unit of Gi. The value ranges from 1 to 32768. - csi: - driver: disk.csi.everest.io # Dependent storage driver for the mounting. - fsType: ext4 - volumeHandle: 459581af-e78c-4356-9e78-eaf9cd8525eb # Volume ID of the EVS disk. - volumeAttributes: - everest.io/disk-mode: SCSI # Device type of the EVS disk. Only SCSI is supported. - everest.io/disk-volume-type: SAS # EVS disk type. - storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner - everest.io/crypt-key-id: 0992dbda-6340-470e-a74e-4f0db288ed82 # (Optional) Encryption key ID. Mandatory for an encrypted disk. - persistentVolumeReclaimPolicy: Delete # Reclaim policy. - storageClassName: csi-disk # Storage class name. The value must be csi-disk. - -.. 
table:: **Table 2** Key parameters - - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===============================================+=============================================================================================================================================================================================================================================================================================+ - | everest.io/reclaim-policy: retain-volume-only | This field is optional. | - | | | - | | Currently, only **retain-volume-only** is supported. | - | | | - | | This field is valid only when the everest version is 1.2.9 or later and the reclaim policy is Delete. If the reclaim policy is Delete and the current value is **retain-volume-only**, the associated PV is deleted while the underlying storage volume is retained, when a PVC is deleted. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | failure-domain.beta.kubernetes.io/region | Region where the cluster is located. | - | | | - | | For details about the value of **region**, see `Regions and Endpoints `__. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | failure-domain.beta.kubernetes.io/zone | AZ where the EVS volume is created. It must be the same as the AZ planned for the workload. | - | | | - | | For details about the value of **zone**, see `Regions and Endpoints `__. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeHandle | Volume ID of the EVS disk. | - | | | - | | To obtain the volume ID, log in to the **Cloud Server Console**. In the navigation pane, choose **Elastic Volume Service** > **Disks**. Click the name of the target EVS disk to go to its details page. On the **Summary** tab page, click the copy button after **ID**. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | everest.io/disk-volume-type | EVS disk type. All letters are in uppercase. 
| - | | | - | | - **SATA**: common I/O | - | | - **SAS**: high I/O | - | | - **SSD**: ultra-high I/O | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | everest.io/crypt-key-id | Encryption key ID. This field is mandatory when the volume is an encrypted volume. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | persistentVolumeReclaimPolicy | A reclaim policy is supported when the cluster version is equal to or later than 1.19.10 and the everest version is equal to or later than 1.2.9. | - | | | - | | The Delete and Retain policies are supported. | - | | | - | | **Delete**: | - | | | - | | - If **everest.io/reclaim-policy** is not specified, both the PV and EVS disk are deleted when a PVC is deleted. | - | | - If **everest.io/reclaim-policy** is set to **retain-volume-only**, when a PVC is deleted, the PV is deleted but the EVS resources are retained. | - | | | - | | **Retain**: When a PVC is deleted, the PV and underlying storage resources are not deleted. Instead, you must manually delete these resources. After that, the PV resource is in the Released state and cannot be bound to the PVC again. | - | | | - | | If high data security is required, you are advised to select **Retain** to prevent data from being deleted by mistake. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -Creating an SFS Volume ----------------------- - -.. note:: - - - The SFS file system and the cluster must be in the same VPC. - -**Using the CCE Console** - -#. Log in to the CCE console. -#. Click the cluster name and access the cluster console. Choose **Storage** from the navigation pane, and click the **PersistentVolumes (PVs)** tab. -#. Click **Create Volume** in the upper right corner. In the dialog box displayed, set the volume parameters. - - - **Volume Type**: Select **SFS**. - - Select SFS resources. - - **PV Name**: Enter a PV name. - - **Access Mode**: ReadWriteMany - - **Reclaim Policy**: Select **Delete** or **Retain** as required. For details, see :ref:`PV Reclaim Policy `. - - **Mount Options**: mount options. For details about the options, see :ref:`Setting Mount Options `. - -#. Click **Create**. - -**Using YAML** - -.. code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - annotations: - pv.kubernetes.io/provisioned-by: everest-csi-provisioner - everest.io/reclaim-policy: retain-volume-only # (Optional) The PV is deleted while the underlying volume is retained. - name: cce-sfs-test - spec: - accessModes: - - ReadWriteMany # Access mode. The value must be ReadWriteMany for SFS. - capacity: - storage: 1Gi # File storage capacity. - csi: - driver: disk.csi.everest.io # Mount the dependent storage driver. 
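# fsType is nfs because the SFS share is mounted over NFS. If mountOptions is not specified, the defaults listed in Table 3 (vers=3, timeo=600, nolock, hard) are used.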
- fsType: nfs - volumeHandle: 30b3d92a-0bc7-4610-b484-534660db81be # SFS file system ID. - volumeAttributes: - everest.io/share-export-location: # Path to shared file storage - storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner - persistentVolumeReclaimPolicy: Retain # Reclaim policy. - storageClassName: csi-nas # Storage class name - mountOptions: [] # Mount options - -.. table:: **Table 3** Key parameters - - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===============================================+=============================================================================================================================================================================================================================================================================================+ - | everest.io/reclaim-policy: retain-volume-only | This field is optional. | - | | | - | | Currently, only **retain-volume-only** is supported. | - | | | - | | This field is valid only when the everest version is 1.2.9 or later and the reclaim policy is Delete. If the reclaim policy is Delete and the current value is **retain-volume-only**, the associated PV is deleted while the underlying storage volume is retained, when a PVC is deleted. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeHandle | - If SFS Capacity-Oriented file storage is used, enter the file storage ID. | - | | | - | | On the management console, choose **Service List** > **Storage** > **Scalable File Service**. In the SFS file system list, click the name of the target file system and copy the content following **ID** on the page displayed. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | everest.io/share-export-location | Shared path of the file system. | - | | | - | | On the management console, choose **Service List** > **Storage** > **Scalable File Service**. You can obtain the shared path of the file system from the **Mount Address** column. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | mountOptions | Mount options. | - | | | - | | If not specified, the following configurations are used by default. For details, see :ref:`SFS Volume Mount Options `. | - | | | - | | .. 
code-block:: | - | | | - | | mountOptions: | - | | - vers=3 | - | | - timeo=600 | - | | - nolock | - | | - hard | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | everest.io/crypt-key-id | Encryption key ID. This field is mandatory when the volume is an encrypted volume. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | persistentVolumeReclaimPolicy | A reclaim policy is supported when the cluster version is equal to or later than 1.19.10 and the everest version is equal to or later than 1.2.9. | - | | | - | | The options are as follows: | - | | | - | | **Delete**: | - | | | - | | - If **everest.io/reclaim-policy** is not specified, both the PV and SFS volume are deleted when a PVC is deleted. | - | | - If **everest.io/reclaim-policy** is set to **retain-volume-only**, when a PVC is deleted, the PV is deleted but the SFS volume resources are retained. | - | | | - | | **Retain**: When a PVC is deleted, the PV and underlying storage resources are not deleted. Instead, you must manually delete these resources. After that, the PV resource is in the Released state and cannot be bound to the PVC again. | - | | | - | | If high data security is required, you are advised to select **Retain** to prevent data from being deleted by mistake. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -Creating an OBS Volume ----------------------- - -.. note:: - - Secure containers do not support OBS volumes. - - A single user can create a maximum of 100 OBS buckets on the console. If you have a large number of CCE workloads and you want to mount an OBS bucket to every workload, you may easily run out of buckets. In this scenario, you are advised to use OBS through the OBS API or SDK and do not mount OBS buckets to the workload on the console. - -**Using the CCE Console** - -#. Log in to the CCE console. -#. Click the cluster name and access the cluster console. Choose **Storage** from the navigation pane, and click the **PersistentVolumes (PVs)** tab. -#. Click **Create Volume** in the upper right corner. In the dialog box displayed, set the volume parameters. - - - **Volume Type**: Select **OBS**. - - Select OBS resources. - - **PV Name**: Enter a PV name. - - **Access Mode**: ReadWriteMany - - **Reclaim Policy**: Select **Delete** or **Retain** as required. For details, see :ref:`PV Reclaim Policy `. - - **Secret**: You can customize the access key (AK/SK) for mounting an OBS volume. You can use the AK/SK to create a secret and mount the secret to the PV. For details, see :ref:`Using a Custom AK/SK to Mount an OBS Volume `. - - **Mount Options**: mount options. For details about the options, see :ref:`Setting Mount Options `. - -#. 
Click **Create**. - -**Using YAML** - -.. code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - annotations: - pv.kubernetes.io/provisioned-by: everest-csi-provisioner - everest.io/reclaim-policy: retain-volume-only # (Optional) The PV is deleted while the underlying volume is retained. - name: cce-obs-test - spec: - accessModes: - - ReadWriteMany # Access mode. The value must be ReadWriteMany for OBS. - capacity: - storage: 1Gi # Storage capacity. This parameter is set only to meet the PV format requirements. It can be set to any value. The actual OBS space size is not limited by this value. - csi: - driver: obs.csi.everest.io # Dependent storage driver for the mounting. - fsType: obsfs # OBS file type. - volumeHandle: cce-obs-bucket # OBS bucket name. - volumeAttributes: - everest.io/obs-volume-type: STANDARD - everest.io/region: eu-de - - storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner - nodePublishSecretRef: - name: test-user - namespace: default - persistentVolumeReclaimPolicy: Retain # Reclaim policy. - storageClassName: csi-obs # Storage class name. The value must be csi-obs for OBS. - mountOptions: [] # Mount options. - -.. table:: **Table 4** Key parameters - - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===============================================+=============================================================================================================================================================================================================================================================================================+ - | everest.io/reclaim-policy: retain-volume-only | This field is optional. | - | | | - | | Currently, only **retain-volume-only** is supported. | - | | | - | | This field is valid only when the everest version is 1.2.9 or later and the reclaim policy is Delete. If the reclaim policy is Delete and the current value is **retain-volume-only**, the associated PV is deleted while the underlying storage volume is retained, when a PVC is deleted. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | fsType | File type. The value can be **obsfs** or **s3fs**. If the value is **s3fs**, an OBS bucket is created and mounted using s3fs. If the value is **obsfs**, an OBS parallel file system is created and mounted using obsfs. You are advised to set this field to **obsfs**. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeHandle | OBS bucket name. 
| - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | everest.io/obs-volume-type | Storage class, including **STANDARD** (standard bucket) and **WARM** (infrequent access bucket). | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | everest.io/region | Region where the OBS bucket is deployed. | - | | | - | | For details about the value of **region**, see `Regions and Endpoints `__. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | nodePublishSecretRef | Access key (AK/SK) used for mounting the object storage volume. You can use the AK/SK to create a secret and mount it to the PV. For details, see :ref:`Using a Custom AK/SK to Mount an OBS Volume `. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | mountOptions | Mount options. For details, see :ref:`OBS Volume Mount Options `. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | persistentVolumeReclaimPolicy | A reclaim policy is supported when the cluster version is equal to or later than 1.19.10 and the everest version is equal to or later than 1.2.9. | - | | | - | | The Delete and Retain policies are supported. | - | | | - | | **Delete**: | - | | | - | | - If **everest.io/reclaim-policy** is not specified, both the PV and OBS volume are deleted when a PVC is deleted. | - | | - If **everest.io/reclaim-policy** is set to **retain-volume-only**, when a PVC is deleted, the PV is deleted but the object storage resources are retained. | - | | | - | | **Retain**: When a PVC is deleted, the PV and underlying storage resources are not deleted. Instead, you must manually delete these resources. After that, the PV resource is in the Released state and cannot be bound to the PVC again. | - | | | - | | If high data security is required, you are advised to select **Retain** to prevent data from being deleted by mistake. 
| - +-----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -Creating an SFS Turbo Volume ----------------------------- - -.. note:: - - SFS Turbo and the cluster must be in the same VPC. - -**Using the CCE Console** - -#. Log in to the CCE console. -#. Click the cluster name and access the cluster console. Choose **Storage** from the navigation pane, and click the **PersistentVolumes (PVs)** tab. -#. Click **Create Volume** in the upper right corner. In the dialog box displayed, set the volume parameters. - - - **Volume Type**: Select **SFS Turbo**. - - **SFS Turbo**: Select SFS Turbo resources. - - **PV Name**: Enter a PV name. - - **Access Mode**: ReadWriteMany - - **Reclaim Policy**: Select **Retain**. For details, see :ref:`PV Reclaim Policy `. - - **Mount Options**: mount options. For details about the options, see :ref:`Setting Mount Options `. - -#. Click **Create**. - -**Using YAML** - -.. code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - annotations: - pv.kubernetes.io/provisioned-by: everest-csi-provisioner - name: cce-sfsturbo-test - spec: - accessModes: - - ReadWriteMany # Access mode. The value must be ReadWriteMany for SFS Turbo. - capacity: - storage: 100.00Gi # SFS Turbo volume capacity. - csi: - driver: sfsturbo.csi.everest.io # Dependent storage driver for the mounting. - fsType: nfs - volumeHandle: 6674bd0a-d760-49de-bb9e-805c7883f047 # SFS Turbo volume ID. - volumeAttributes: - everest.io/share-export-location: 192.168.0.85:/ # Shared path of the SFS Turbo volume. - storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner - persistentVolumeReclaimPolicy: Retain # Reclaim policy. - storageClassName: csi-sfsturbo # Storage class name. The value must be csi-sfsturbo for SFS Turbo. - mountOptions: [] # Mount options. - -.. table:: **Table 5** Key parameters - - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+===========================================================================================================================================================================================================================================+ - | volumeHandle | SFS Turbo volume ID. | - | | | - | | You can obtain the ID on the SFS Turbo storage instance details page on the SFS console. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | everest.io/share-export-location | Shared path of the SFS Turbo volume. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | mountOptions | Mount options. 
| - | | | - | | If not specified, the following configurations are used by default. For details, see :ref:`SFS Volume Mount Options `. | - | | | - | | .. code-block:: | - | | | - | | mountOptions: | - | | - vers=3 | - | | - timeo=600 | - | | - nolock | - | | - hard | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | persistentVolumeReclaimPolicy | A reclaim policy is supported when the cluster version is equal to or later than 1.19.10 and the everest version is equal to or later than 1.2.9. | - | | | - | | The Delete and Retain policies are supported. | - | | | - | | **Delete**: | - | | | - | | - If **everest.io/reclaim-policy** is not specified, both the PV and SFS Turbo volume are deleted when a PVC is deleted. | - | | - If **everest.io/reclaim-policy** is set to **retain-volume-only**, when a PVC is deleted, the PV is deleted but the SFS Turbo resources are retained. | - | | | - | | **Retain**: When a PVC is deleted, the PV and underlying storage resources are not deleted. Instead, you must manually delete these resources. After that, the PV resource is in the Released state and cannot be bound to the PVC again. | - | | | - | | If high data security is required, you are advised to select **Retain** to prevent data from being deleted by mistake. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/storage/scalable_file_service_sfs/configuring_sfs_volume_mount_options.rst b/umn/source/storage/scalable_file_service_sfs/configuring_sfs_volume_mount_options.rst new file mode 100644 index 0000000..b97b6a6 --- /dev/null +++ b/umn/source/storage/scalable_file_service_sfs/configuring_sfs_volume_mount_options.rst @@ -0,0 +1,180 @@ +:original_name: cce_10_0337.html + +.. _cce_10_0337: + +Configuring SFS Volume Mount Options +==================================== + +This section describes how to configure SFS volume mount options. You can configure mount options in a PV and bind the PV to a PVC. Alternatively, configure mount options in a StorageClass and use the StorageClass to create a PVC. In this way, PVs can be dynamically created and inherit mount options configured in the StorageClass by default. + +Prerequisites +------------- + +The everest add-on version must be **1.2.8 or later**. The add-on identifies the mount options and transfers them to the underlying storage resources, which determine whether the specified options are valid. + +Constraints +----------- + +Mount options cannot be configured for secure containers. + +.. _cce_10_0337__section14888047833: + +SFS Volume Mount Options +------------------------ + +The everest add-on in CCE presets the options described in :ref:`Table 1 ` for mounting SFS volumes. + +.. _cce_10_0337__table128754351546: + +.. 
table:: **Table 1** SFS volume mount options + + +-------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Value | Description | + +=========================+=======================+===============================================================================================================================================================================================+ + | keep-original-ownership | Leave it blank. | Whether to retain the ownership of the file mount point. If this option is used, the everest add-on must be v1.2.63 or v2.1.2 or later. | + | | | | + | | | - By default, this option is not added. and the mount point ownership is **root:root** when SFS is mounted. | + | | | | + | | | - If this option is added, the original ownership of the file system is retained when SFS is mounted. | + +-------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | vers | 3 | File system version. Currently, only NFSv3 is supported. Value: **3** | + +-------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | nolock | Leave it blank. | Whether to lock files on the server using the NLM protocol. If **nolock** is selected, the lock is valid for applications on one host. For applications on another host, the lock is invalid. | + +-------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | timeo | 600 | Waiting time before the NFS client retransmits a request. The unit is 0.1 seconds. Recommended value: **600** | + +-------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | hard/soft | Leave it blank. | Mounting mode. | + | | | | + | | | - **hard**: If the NFS request times out, the client keeps resending the request until the request is successful. | + | | | - **soft**: If the NFS request times out, the client returns an error to the invoking program. | + | | | | + | | | The default value is **hard**. | + +-------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +You can set other mount options if needed. For details, see `Mounting an NFS File System to ECSs (Linux) `__. + +Setting Mount Options in a PV +----------------------------- + +You can use the **mountOptions** field to set mount options in a PV. The options you can configure in **mountOptions** are listed in :ref:`SFS Volume Mount Options `. + +#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. + +#. Set mount options in a PV. Example: + + .. 
code-block:: + + apiVersion: v1 + kind: PersistentVolume + metadata: + annotations: + pv.kubernetes.io/provisioned-by: everest-csi-provisioner + everest.io/reclaim-policy: retain-volume-only # (Optional) The PV is deleted while the underlying volume is retained. + name: pv-sfs + spec: + accessModes: + - ReadWriteMany # Access mode. The value must be ReadWriteMany for SFS. + capacity: + storage: 1Gi # SFS volume capacity. + csi: + driver: disk.csi.everest.io # Dependent storage driver for the mounting. + fsType: nfs + volumeHandle: # ID of the SFS Capacity-Oriented volume. + volumeAttributes: + everest.io/share-export-location: # Shared path of the SFS volume. + storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner + persistentVolumeReclaimPolicy: Retain # Reclaim policy. + storageClassName: csi-nas # Storage class name. + mountOptions: # Mount options. + - vers=3 + - nolock + - timeo=600 + - hard + +#. After a PV is created, you can create a PVC and bind it to the PV, and then mount the PV to the container in the workload. For details, see :ref:`Using an Existing SFS File System Through a Static PV `. + +#. Check whether the mount options take effect. + + In this example, the PVC is mounted to the workload that uses the **nginx:latest** image. You can run the **mount -l** command to check whether the mount options take effect. + + a. View the pod to which the SFS volume has been mounted. In this example, the workload name is **web-sfs**. + + .. code-block:: + + kubectl get pod | grep web-sfs + + Command output: + + .. code-block:: + + web-sfs-*** 1/1 Running 0 23m + + b. Run the following command to check the mount options (**web-sfs-**\*** is an example pod): + + .. code-block:: + + kubectl exec -it web-sfs-*** -- mount -l | grep nfs + + If the mounting information in the command output is consistent with the configured mount options, the mount options are set successfully. + + .. code-block:: + + on /data type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=**.**.**.**,mountvers=3,mountport=2050,mountproto=tcp,local_lock=all,addr=**.**.**.**) + +Setting Mount Options in a StorageClass +--------------------------------------- + +You can use the **mountOptions** field to set mount options in a StorageClass. The options you can configure in **mountOptions** are listed in :ref:`SFS Volume Mount Options `. + +#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. + +#. Create a customized StorageClass. Example: + + .. code-block:: + + apiVersion: storage.k8s.io/v1 + kind: StorageClass + metadata: + name: csi-sfs-mount-option + provisioner: everest-csi-provisioner + parameters: + csi.storage.k8s.io/csi-driver-name: nas.csi.everest.io + csi.storage.k8s.io/fstype: nfs + everest.io/share-access-to: # VPC ID of the cluster. + reclaimPolicy: Delete + volumeBindingMode: Immediate + mountOptions: # Mount options + - vers=3 + - nolock + - timeo=600 + - hard + +#. After the StorageClass is configured, you can use it to create a PVC. By default, the dynamically created PVs inherit the mount options configured in the StorageClass. For details, see :ref:`Using an SFS File System Through a Dynamic PV `. + +#. Check whether the mount options take effect. + + In this example, the PVC is mounted to the workload that uses the **nginx:latest** image. You can run the **mount -l** command to check whether the mount options take effect. + + a. 
View the pod to which the SFS volume has been mounted. In this example, the workload name is **web-sfs**. + + .. code-block:: + + kubectl get pod | grep web-sfs + + Command output: + + .. code-block:: + + web-sfs-*** 1/1 Running 0 23m + + b. Run the following command to check the mount options (**web-sfs-**\*** is an example pod): + + .. code-block:: + + kubectl exec -it web-sfs-*** -- mount -l | grep nfs + + If the mounting information in the command output is consistent with the configured mount options, the mount options are set successfully. + + .. code-block:: + + on /data type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=**.**.**.**,mountvers=3,mountport=2050,mountproto=tcp,local_lock=all,addr=**.**.**.**) diff --git a/umn/source/storage/scalable_file_service_sfs/index.rst b/umn/source/storage/scalable_file_service_sfs/index.rst new file mode 100644 index 0000000..faec562 --- /dev/null +++ b/umn/source/storage/scalable_file_service_sfs/index.rst @@ -0,0 +1,20 @@ +:original_name: cce_10_0111.html + +.. _cce_10_0111: + +Scalable File Service (SFS) +=========================== + +- :ref:`Overview ` +- :ref:`Using an Existing SFS File System Through a Static PV ` +- :ref:`Using an SFS File System Through a Dynamic PV ` +- :ref:`Configuring SFS Volume Mount Options ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + overview + using_an_existing_sfs_file_system_through_a_static_pv + using_an_sfs_file_system_through_a_dynamic_pv + configuring_sfs_volume_mount_options diff --git a/umn/source/storage/scalable_file_service_sfs/overview.rst b/umn/source/storage/scalable_file_service_sfs/overview.rst new file mode 100644 index 0000000..59adb03 --- /dev/null +++ b/umn/source/storage/scalable_file_service_sfs/overview.rst @@ -0,0 +1,27 @@ +:original_name: cce_10_0617.html + +.. _cce_10_0617: + +Overview +======== + +Introduction +------------ + +CCE allows you to mount a volume created from a Scalable File Service (SFS) file system to a container to store data persistently. SFS volumes are commonly used in ReadWriteMany scenarios for large-capacity expansion and cost-sensitive services, such as media processing, content management, big data analysis, and workload process analysis. For services with massive volume of small files, SFS Turbo file systems are recommended. + +Expandable to petabytes, SFS provides fully hosted shared file storage, highly available and stable to handle data- and bandwidth-intensive applications + +- **Standard file protocols**: You can mount file systems as volumes to servers, the same as using local directories. +- **Data sharing**: The same file system can be mounted to multiple servers, so that data can be shared. +- **Private network**: Users can access data only in private networks of data centers. +- **Capacity and performance**: The capacity of a single file system is high (PB level) and the performance is excellent (ms-level read/write I/O latency). 
+- **Use cases**: Deployments/StatefulSets in the ReadWriteMany mode and jobs created for high-performance computing (HPC), media processing, content management, web services, big data analysis, and workload process analysis + +Application Scenarios +--------------------- + +SFS supports the following mounting modes based on application scenarios: + +- :ref:`Using an Existing SFS File System Through a Static PV `: static creation mode, where you use an existing SFS volume to create a PV and then mount storage to the workload through a PVC. This mode applies to scenarios where the underlying storage is available. +- :ref:`Using an SFS File System Through a Dynamic PV `: dynamic creation mode, where you do not need to create SFS volumes in advance. Instead, specify a StorageClass during PVC creation and an SFS volume and a PV will be automatically created. This mode applies to scenarios where no underlying storage is available. diff --git a/umn/source/storage/scalable_file_service_sfs/using_an_existing_sfs_file_system_through_a_static_pv.rst b/umn/source/storage/scalable_file_service_sfs/using_an_existing_sfs_file_system_through_a_static_pv.rst new file mode 100644 index 0000000..f227d4a --- /dev/null +++ b/umn/source/storage/scalable_file_service_sfs/using_an_existing_sfs_file_system_through_a_static_pv.rst @@ -0,0 +1,465 @@ +:original_name: cce_10_0619.html + +.. _cce_10_0619: + +Using an Existing SFS File System Through a Static PV +===================================================== + +SFS is a network-attached storage (NAS) that provides shared, scalable, and high-performance file storage. It applies to large-capacity expansion and cost-sensitive services. This section describes how to use an existing SFS file system to statically create PVs and PVCs and implement data persistence and sharing in workloads. + +Prerequisites +------------- + +- You have created a cluster and installed the CSI add-on (:ref:`everest `) in the cluster. +- If you want to create a cluster using commands, use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. +- You have created an SFS file system that is in the same VPC as the cluster. + +Constraints +----------- + +- Multiple PVs can use the same SFS or SFS Turbo file system with the following restrictions: + + - If multiple PVCs/PVs use the same underlying SFS or SFS Turbo file system, when you attempt to mount these PVCs/PVs to the same pod, all PVCs cannot be mounted to the pod and the pod startup fails. This is because the **volumeHandle** values of these PVs are the same. + - The **persistentVolumeReclaimPolicy** parameter in the PVs must be set to **Retain**. Otherwise, when a PV is deleted, the associated underlying volume may be deleted. In this case, other PVs associated with the underlying volume malfunction. + - When the underlying volume is repeatedly used, enable isolation and protection for ReadWriteMany at the application layer to prevent data overwriting and loss. + +Using an Existing SFS File System on the Console +------------------------------------------------ + +#. Log in to the CCE console and click the cluster name to access the cluster console. +#. Statically create a PVC and PV. + + a. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. Click **Create PVC** in the upper right corner. In the dialog box displayed, configure the PVC parameters. 
+ + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=====================================================================================================================================================================================================================+ + | PVC Type | In this example, select **SFS**. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PVC Name | Enter the PVC name, which must be unique in the same namespace. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Creation Method | - If underlying storage is available, create a storage volume or use an existing storage volume to statically create a PVC based on whether a PV has been created. | + | | - If no underlying storage is available, select **Dynamically provision**. For details, see :ref:`Using an SFS File System Through a Dynamic PV `. | + | | | + | | In this example, select **Create new** to create a PV and PVC at the same time on the console. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PV\ :sup:`a` | Select an existing PV in the cluster. Create a PV in advance. For details, see "Creating a storage volume " in :ref:`Related Operations `. | + | | | + | | In this example, you do not need to set this parameter. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | SFS\ :sup:`b` | Click **Select SFS**. On the displayed page, select the SFS file system that meets your requirements and click **OK**. | + | | | + | | .. note:: | + | | | + | | Currently, only SFS 3.0 Capacity-Oriented is supported. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PV Name\ :sup:`b` | Enter the PV name, which must be unique in the same cluster. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Access Mode\ :sup:`b` | SFS volumes support only **ReadWriteMany**, indicating that a storage volume can be mounted to multiple nodes in read/write mode. For details, see :ref:`Volume Access Modes `. 
| + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Reclaim Policy\ :sup:`b` | You can select **Delete** or **Retain** to specify the reclaim policy of the underlying storage when the PVC is deleted. For details, see :ref:`PV Reclaim Policy `. | + | | | + | | .. note:: | + | | | + | | If multiple PVs use the same underlying storage volume, use **Retain** to avoid cascading deletion of underlying volumes. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Mount Options\ :sup:`b` | Enter the mounting parameter key-value pairs. For details, see :ref:`Configuring SFS Volume Mount Options `. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + .. note:: + + a: The parameter is available when **Creation Method** is set to **Use existing**. + + b: The parameter is available when **Creation Method** is set to **Create new**. + + b. Click **Create** to create a PVC and a PV. + + You can choose **Storage** in the navigation pane and view the created PVC and PV on the **PersistentVolumeClaims (PVCs)** and **PersistentVolumes (PVs)** tab pages. + +#. Create an application. + + a. In the navigation pane on the left, click **Workloads**. In the right pane, click the **Deployments** tab. + + b. Click **Create Workload** in the upper right corner. On the displayed page, click **Data Storage** in the **Container Settings** area and click **Add Volume** to select **PVC**. + + Mount and use storage volumes, as shown in :ref:`Table 1 `. For details about other parameters, see :ref:`Workloads `. + + .. _cce_10_0619__table2529244345: + + .. table:: **Table 1** Mounting a storage volume + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=============================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | PVC | Select an existing SFS volume. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Mount Path | Enter a mount path, for example, **/tmp**. | + | | | + | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**. Otherwise, containers will be malfunctional. Mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, causing container startup failures or workload creation failures. | + | | | + | | .. important:: | + | | | + | | NOTICE: | + | | If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Subpath | Enter a subpath, for example, **tmp**, indicating that data in the mount path of the container will be stored in the **tmp** folder of the volume. | + | | | + | | A subpath is used to mount a local volume so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Permission | - **Read-only**: You can only read the data in the mounted volumes. | + | | - **Read/Write**: You can modify the data volumes mounted to the path. Newly written data is not migrated if the container is migrated, which may cause data loss. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + In this example, the disk is mounted to the **/data** path of the container. The container data generated in this path is stored in the SFS file system. + + c. After the configuration, click **Create Workload**. 
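You can also confirm the binding from the CLI before mounting the volume. This is an optional quick check; replace the placeholder names below with the PVC and PV names you entered on the console:

.. code-block::

   # Both objects should report the Bound status once the PVC is associated with the PV.
   # Adjust the namespace if the PVC was not created in the default namespace.
   kubectl get pvc <pvc-name> -n default
   kubectl get pv <pv-name>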
+ + After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to :ref:`Verifying Data Persistence and Sharing `. + +(kubectl) Using an Existing SFS File System +------------------------------------------- + +#. Use kubectl to connect to the cluster. +#. Create a PV. + + a. .. _cce_10_0619__li162841212145314: + + Create the **pv-sfs.yaml** file. + + SFS Capacity-Oriented: + + .. code-block:: + + apiVersion: v1 + kind: PersistentVolume + metadata: + annotations: + pv.kubernetes.io/provisioned-by: everest-csi-provisioner + everest.io/reclaim-policy: retain-volume-only # (Optional) The PV is deleted while the underlying volume is retained. + name: pv-sfs # PV name. + spec: + accessModes: + - ReadWriteMany # Access mode. The value must be ReadWriteMany for SFS. + capacity: + storage: 1Gi # SFS volume capacity. + csi: + driver: disk.csi.everest.io # Dependent storage driver for the mounting. + fsType: nfs + volumeHandle: # SFS Capacity-Oriented volume ID. + volumeAttributes: + everest.io/share-export-location: # Shared path of the SFS volume. + storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner + persistentVolumeReclaimPolicy: Retain # Reclaim policy. + storageClassName: csi-nas # Storage class name. csi-nas indicates that SFS Capacity-Oriented is used. + mountOptions: [] # Mount options. + + .. table:: **Table 2** Key parameters + + +-----------------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Description | + +===============================================+=======================+=====================================================================================================================================================================================================================================================================================================+ + | everest.io/reclaim-policy: retain-volume-only | No | Optional. | + | | | | + | | | Currently, only **retain-volume-only** is supported. | + | | | | + | | | This field is valid only when the everest version is 1.2.9 or later and the reclaim policy is **Delete**. If the reclaim policy is **Delete** and the current value is **retain-volume-only**, the associated PV is deleted while the underlying storage volume is retained, when a PVC is deleted. | + +-----------------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | volumeHandle | Yes | - If an SFS Capacity-Oriented volume is used, enter the volume ID. | + | | | | + | | | Log in to the console, choose **Service List** > **Storage** > **Scalable File Service**, and select **SFS Turbo**. In the list, click the name of the target SFS file system. On the details page, copy the content following **ID**. 
| + +-----------------------------------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------+ + | everest.io/share-export-location | Yes | Shared path of the file system. | + | | | | + | | | - For an SFS Capacity-Oriented file system, log in to the console, choose **Service List** > **Storage** > **Scalable File Service**, and obtain the shared path from the **Mount Address** column. | + +-----------------------------------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------+ + | mountOptions | Yes | Mount options. | + | | | | + | | | If not specified, the following configurations are used by default. For details, see :ref:`Configuring SFS Volume Mount Options `. | + | | | | + | | | .. code-block:: | + | | | | + | | | mountOptions: | + | | | - vers=3 | + | | | - timeo=600 | + | | | - nolock | + | | | - hard | + +-----------------------------------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------+ + | persistentVolumeReclaimPolicy | Yes | A reclaim policy is supported when the cluster version is 1.19.10 or later and the everest version is 1.2.9 or later. | + | | | | + | | | The **Delete** and **Retain** reclaim policies are supported. For details, see :ref:`PV Reclaim Policy `. If multiple PVs use the same SFS volume, use **Retain** to avoid cascading deletion of underlying volumes. | + | | | | + | | | **Delete**: | + | | | | + | | | - If **everest.io/reclaim-policy** is not specified, both the PV and SFS volume are deleted when a PVC is deleted. | + | | | - If **everest.io/reclaim-policy** is set to **retain-volume-only**, when a PVC is deleted, the PV is deleted but the SFS volume resources are retained. | + | | | | + | | | **Retain**: When a PVC is deleted, the PV and underlying storage resources are not deleted. Instead, you must manually delete these resources. After that, the PV is in the **Released** status and cannot be bound to the PVC again. | + +-----------------------------------------------+-----------------------+------------------------------------------------------------------------------------------------------------------------+ + | storage | Yes | Requested capacity in the PVC, in Gi. | + | | | | + | | | For SFS, this field is used only for verification (cannot be empty or **0**). Its value is fixed at **1**, and any value you set does not take effect for SFS file systems. 
| + +-----------------------------------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + b. Run the following command to create a PV: + + .. code-block:: + + kubectl apply -f pv-sfs.yaml + +#. Create a PVC. + + a. Create the **pvc-sfs.yaml** file. + + .. code-block:: + + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: pvc-sfs + namespace: default + annotations: + volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner + spec: + accessModes: + - ReadWriteMany # The value must be ReadWriteMany for SFS. + resources: + requests: + storage: 1Gi # SFS volume capacity. + storageClassName: csi-nas # Storage class name, which must be the same as the PV's storage class. + volumeName: pv-sfs # PV name. + + .. table:: **Table 3** Key parameters + + +-----------------------+-----------------------+----------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Description | + +=======================+=======================+==============================================================================================+ + | storage | Yes | Requested capacity in the PVC, in Gi. | + | | | | + | | | The value must be the same as the storage size of the existing PV. | + +-----------------------+-----------------------+----------------------------------------------------------------------------------------------+ + | volumeName | Yes | PV name, which must be the same as the PV name in :ref:`1 `. | + +-----------------------+-----------------------+----------------------------------------------------------------------------------------------+ + + b. Run the following command to create a PVC: + + .. code-block:: + + kubectl apply -f pvc-sfs.yaml + +#. Create an application. + + a. Create a file named **web-demo.yaml**. In this example, the SFS volume is mounted to the **/data** path. + + .. code-block:: + + apiVersion: apps/v1 + kind: Deployment + metadata: + name: web-demo + namespace: default + spec: + replicas: 2 + selector: + matchLabels: + app: web-demo + template: + metadata: + labels: + app: web-demo + spec: + containers: + - name: container-1 + image: nginx:latest + volumeMounts: + - name: pvc-sfs-volume # Volume name, which must be the same as the volume name in the volumes field. + mountPath: /data # Location where the storage volume is mounted. + imagePullSecrets: + - name: default-secret + volumes: + - name: pvc-sfs-volume # Volume name, which can be customized. + persistentVolumeClaim: + claimName: pvc-sfs # Name of the created PVC. + + b. Run the following command to create an application to which the SFS volume is mounted: + + .. code-block:: + + kubectl apply -f web-demo.yaml + + After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to :ref:`Verifying Data Persistence and Sharing `. + +.. _cce_10_0619__section11593165910013: + +Verifying Data Persistence and Sharing +-------------------------------------- + +#. View the deployed applications and files. + + a. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-demo + + Expected output: + + .. 
code-block:: + + web-demo-846b489584-mjhm9 1/1 Running 0 46s + web-demo-846b489584-wvv5s 1/1 Running 0 46s + + b. Run the following commands in sequence to view the files in the **/data** path of the pods: + + .. code-block:: + + kubectl exec web-demo-846b489584-mjhm9 -- ls /data + kubectl exec web-demo-846b489584-wvv5s -- ls /data + + If no result is returned for either pod, no file exists in the **/data** path. + +#. Run the following command to create a file named **static** in the **/data** path: + + .. code-block:: + + kubectl exec web-demo-846b489584-mjhm9 -- touch /data/static + +#. Run the following command to view the files in the **/data** path: + + .. code-block:: + + kubectl exec web-demo-846b489584-mjhm9 -- ls /data + + Expected output: + + .. code-block:: + + static + +#. **Verify data persistence.** + + a. Run the following command to delete the pod named **web-demo-846b489584-mjhm9**: + + .. code-block:: + + kubectl delete pod web-demo-846b489584-mjhm9 + + Expected output: + + .. code-block:: + + pod "web-demo-846b489584-mjhm9" deleted + + After the deletion, the Deployment controller automatically creates a replica. + + b. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-demo + + The expected output is as follows, in which **web-demo-846b489584-d4d4j** is the newly created pod: + + .. code-block:: + + web-demo-846b489584-d4d4j 1/1 Running 0 110s + web-demo-846b489584-wvv5s 1/1 Running 0 7m50s + + c. Run the following command to check whether the files in the **/data** path of the new pod are retained: + + .. code-block:: + + kubectl exec web-demo-846b489584-d4d4j -- ls /data + + Expected output: + + .. code-block:: + + static + + If the **static** file still exists, the data is persistently stored. + +#. **Verify data sharing.** + + a. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-demo + + Expected output: + + .. code-block:: + + web-demo-846b489584-d4d4j 1/1 Running 0 7m + web-demo-846b489584-wvv5s 1/1 Running 0 13m + + b. Run the following command to create a file named **share** in the **/data** path of either pod. In this example, the pod named **web-demo-846b489584-d4d4j** is used. + + .. code-block:: + + kubectl exec web-demo-846b489584-d4d4j -- touch /data/share + + Check the files in the **/data** path of the pod. + + .. code-block:: + + kubectl exec web-demo-846b489584-d4d4j -- ls /data + + Expected output: + + .. code-block:: + + share + static + + c. Check whether the **share** file also exists in the **/data** path of the other pod (**web-demo-846b489584-wvv5s**) to verify data sharing. + + .. code-block:: + + kubectl exec web-demo-846b489584-wvv5s -- ls /data + + Expected output: + + .. code-block:: + + share + static + + If a file created in the **/data** path of one pod also appears in the **/data** path of the other pod, the two pods share the same volume. + +.. _cce_10_0619__section16505832153318: + +Related Operations +------------------ + +You can also perform the operations listed in :ref:`Table 4 `. + +.. _cce_10_0619__table1619535674020: + + ..
table:: **Table 4** Related operations + + +--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Operation | Description | Procedure | + +================================+====================================================================================================================================================+============================================================================================================================================================================================================================================+ + | Creating a storage volume (PV) | Create a PV on the CCE console. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumes (PVs)** tab. Click **Create Volume** in the upper right corner. In the dialog box displayed, configure the parameters. | + | | | | + | | | - **Volume Type**: Select **SFS**. | + | | | - **SFS**: Click **Select SFS**. On the displayed page, select the SFS file system that meets your requirements and click **OK**. | + | | | - PV Name: Enter the PV name. The PV name must be unique in the same cluster. | + | | | - **Access Mode**: SFS volumes support only **ReadWriteMany**, indicating that a storage volume can be mounted to multiple nodes in read/write mode. For details, see :ref:`Volume Access Modes `. | + | | | - **Reclaim Policy**: **Delete** or **Retain**. For details, see :ref:`PV Reclaim Policy `. | + | | | | + | | | .. note:: | + | | | | + | | | If multiple PVs use the same underlying storage volume, use **Retain** to avoid cascading deletion of underlying volumes. | + | | | | + | | | - **Mount Options**: Enter the mounting parameter key-value pairs. For details, see :ref:`Configuring SFS Volume Mount Options `. | + | | | | + | | | #. Click **Create**. | + +--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Viewing events | You can view event names, event types, number of occurrences, Kubernetes events, first occurrence time, and last occurrence time of the PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View Events** in the **Operation** column of the target PVC or PV to view events generated within one hour (event data is retained for one hour). 
| + +--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Viewing a YAML file | You can view, copy, and download the YAML files of a PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View YAML** in the **Operation** column of the target PVC or PV to view or download the YAML. | + +--------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/storage/scalable_file_service_sfs/using_an_sfs_file_system_through_a_dynamic_pv.rst b/umn/source/storage/scalable_file_service_sfs/using_an_sfs_file_system_through_a_dynamic_pv.rst new file mode 100644 index 0000000..fe108ef --- /dev/null +++ b/umn/source/storage/scalable_file_service_sfs/using_an_sfs_file_system_through_a_dynamic_pv.rst @@ -0,0 +1,332 @@ +:original_name: cce_10_0620.html + +.. _cce_10_0620: + +Using an SFS File System Through a Dynamic PV +============================================= + +This section describes how to use storage classes to dynamically create PVs and PVCs and implement data persistence and sharing in workloads. + +Automatically Creating an SFS File System on the Console +-------------------------------------------------------- + +#. Log in to the CCE console and click the cluster name to access the cluster console. +#. Dynamically create a PVC and PV. + + a. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. Click **Create PVC** in the upper right corner. In the dialog box displayed, configure the PVC parameters. + + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+==================================================================================================================================================================================================================================================================+ + | PVC Type | In this example, select **SFS**. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PVC Name | Enter the PVC name, which must be unique in the same namespace. 
| + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Creation Method | - If no underlying storage is available, select **Dynamically provision** to create a PVC, PV, and underlying storage on the console in cascading mode. | + | | - If underlying storage is available, create a storage volume or use an existing storage volume to statically create a PVC based on whether a PV has been created. For details, see :ref:`Using an Existing SFS File System Through a Static PV `. | + | | | + | | In this example, select **Dynamically provision**. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Storage Classes | The storage class for SFS volumes is **csi-sfs**. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Access Mode | SFS volumes support only **ReadWriteMany**, indicating that a storage volume can be mounted to multiple nodes in read/write mode. For details, see :ref:`Volume Access Modes `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + b. Click **Create** to create a PVC and a PV. + + You can choose **Storage** in the navigation pane and view the created PVC and PV on the **PersistentVolumeClaims (PVCs)** and **PersistentVolumes (PVs)** tab pages. + +#. Create an application. + + a. In the navigation pane on the left, click **Workloads**. In the right pane, click the **Deployments** tab. + + b. Click **Create Workload** in the upper right corner. On the displayed page, click **Data Storage** in the **Container Settings** area and click **Add Volume** to select **PVC**. + + Mount and use storage volumes, as shown in :ref:`Table 1 `. For details about other parameters, see :ref:`Workloads `. + + .. _cce_10_0620__cce_10_0619_table2529244345: + + .. 
table:: **Table 1** Mounting a storage volume + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=============================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | PVC | Select an existing SFS volume. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Mount Path | Enter a mount path, for example, **/tmp**. | + | | | + | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**. Otherwise, containers will be malfunctional. Mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, causing container startup failures or workload creation failures. | + | | | + | | .. important:: | + | | | + | | NOTICE: | + | | If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Subpath | Enter a subpath, for example, **tmp**, indicating that data in the mount path of the container will be stored in the **tmp** folder of the volume. | + | | | + | | A subpath is used to mount a local volume so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Permission | - **Read-only**: You can only read the data in the mounted volumes. | + | | - **Read/Write**: You can modify the data volumes mounted to the path. Newly written data is not migrated if the container is migrated, which may cause data loss. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + In this example, the disk is mounted to the **/data** path of the container. The container data generated in this path is stored in the SFS file system. + + c. After the configuration, click **Create Workload**. + + After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to :ref:`Verifying Data Persistence and Sharing `. + +(kubectl) Automatically Creating an SFS File System +--------------------------------------------------- + +#. Use kubectl to connect to the cluster. +#. Use **StorageClass** to dynamically create a PVC and PV. + + a. Create the **pvc-sfs-auto.yaml** file. + + .. code-block:: + + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: pvc-sfs-auto + namespace: default + annotations: + everest.io/crypt-key-id: # (Optional) ID of the key for encrypting file systems + everest.io/crypt-alias: sfs/default # (Optional) Key name. Mandatory for encrypting volumes. + everest.io/crypt-domain-id: # (Optional) ID of the tenant to which an encrypted volume belongs. Mandatory for encrypting volumes. + spec: + accessModes: + - ReadWriteMany # The value must be ReadWriteMany for SFS. + resources: + requests: + storage: 1Gi # SFS volume capacity. + storageClassName: csi-nas # The storage class type is SFS. + + .. table:: **Table 2** Key parameters + + +----------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Description | + +============================+=======================+==================================================================================================================================================================================================+ + | storage | Yes | Requested capacity in the PVC, in Gi. | + | | | | + | | | For SFS, this field is used only for verification (cannot be empty or **0**). Its value is fixed at **1**, and any value you set does not take effect for SFS file systems. 
| + +----------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | everest.io/crypt-key-id | No | This parameter is mandatory when an SFS system is encrypted. Enter the encryption key ID selected during SFS system creation. You can use a custom key or the default key named **sfs/default**. | + | | | | + | | | To obtain a key ID, log in to the DEW console, locate the key to be encrypted, and copy the key ID. | + +----------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | everest.io/crypt-alias | No | Key name, which is mandatory when you create an encrypted volume. | + | | | | + | | | To obtain a key name, log in to the DEW console, locate the key to be encrypted, and copy the key name. | + +----------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | everest.io/crypt-domain-id | No | ID of the tenant to which the encrypted volume belongs. This parameter is mandatory for creating an encrypted volume. | + | | | | + | | | To obtain a tenant ID, hover the cursor over the username in the upper right corner of the ECS console, choose **My Credentials**, and copy the account ID. | + +----------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + b. Run the following command to create a PVC: + + .. code-block:: + + kubectl apply -f pvc-sfs-auto.yaml + +#. Create an application. + + a. Create a file named **web-demo.yaml**. In this example, the SFS volume is mounted to the **/data** path. + + .. code-block:: + + apiVersion: apps/v1 + kind: Deployment + metadata: + name: web-demo + namespace: default + spec: + replicas: 2 + selector: + matchLabels: + app: web-demo + template: + metadata: + labels: + app: web-demo + spec: + containers: + - name: container-1 + image: nginx:latest + volumeMounts: + - name: pvc-sfs-volume # Volume name, which must be the same as the volume name in the volumes field. + mountPath: /data # Location where the storage volume is mounted. + imagePullSecrets: + - name: default-secret + volumes: + - name: pvc-sfs-volume # Volume name, which can be customized. + persistentVolumeClaim: + claimName: pvc-sfs-auto # Name of the created PVC. + + b. Run the following command to create an application to which the SFS volume is mounted: + + .. code-block:: + + kubectl apply -f web-demo.yaml + + After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to :ref:`Verifying Data Persistence and Sharing `. + +.. _cce_10_0620__section11593165910013: + +Verifying Data Persistence and Sharing +-------------------------------------- + +#. View the deployed applications and files. + + a. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-demo + + Expected output: + + .. 
code-block:: + + web-demo-846b489584-mjhm9 1/1 Running 0 46s + web-demo-846b489584-wvv5s 1/1 Running 0 46s + + b. Run the following commands in sequence to view the files in the **/data** path of the pods: + + .. code-block:: + + kubectl exec web-demo-846b489584-mjhm9 -- ls /data + kubectl exec web-demo-846b489584-wvv5s -- ls /data + + If no result is returned for either pod, no file exists in the **/data** path. + +#. Run the following command to create a file named **static** in the **/data** path: + + .. code-block:: + + kubectl exec web-demo-846b489584-mjhm9 -- touch /data/static + +#. Run the following command to view the files in the **/data** path: + + .. code-block:: + + kubectl exec web-demo-846b489584-mjhm9 -- ls /data + + Expected output: + + .. code-block:: + + static + +#. **Verify data persistence.** + + a. Run the following command to delete the pod named **web-demo-846b489584-mjhm9**: + + .. code-block:: + + kubectl delete pod web-demo-846b489584-mjhm9 + + Expected output: + + .. code-block:: + + pod "web-demo-846b489584-mjhm9" deleted + + After the deletion, the Deployment controller automatically creates a replica. + + b. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-demo + + The expected output is as follows, in which **web-demo-846b489584-d4d4j** is the newly created pod: + + .. code-block:: + + web-demo-846b489584-d4d4j 1/1 Running 0 110s + web-demo-846b489584-wvv5s 1/1 Running 0 7m50s + + c. Run the following command to check whether the files in the **/data** path of the new pod are retained: + + .. code-block:: + + kubectl exec web-demo-846b489584-d4d4j -- ls /data + + Expected output: + + .. code-block:: + + static + + If the **static** file still exists, the data is persistently stored. + +#. **Verify data sharing.** + + a. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-demo + + Expected output: + + .. code-block:: + + web-demo-846b489584-d4d4j 1/1 Running 0 7m + web-demo-846b489584-wvv5s 1/1 Running 0 13m + + b. Run the following command to create a file named **share** in the **/data** path of either pod. In this example, the pod named **web-demo-846b489584-d4d4j** is used. + + .. code-block:: + + kubectl exec web-demo-846b489584-d4d4j -- touch /data/share + + Check the files in the **/data** path of the pod. + + .. code-block:: + + kubectl exec web-demo-846b489584-d4d4j -- ls /data + + Expected output: + + .. code-block:: + + share + static + + c. Check whether the **share** file also exists in the **/data** path of the other pod (**web-demo-846b489584-wvv5s**) to verify data sharing. + + .. code-block:: + + kubectl exec web-demo-846b489584-wvv5s -- ls /data + + Expected output: + + .. code-block:: + + share + static + + If a file created in the **/data** path of one pod also appears in the **/data** path of the other pod, the two pods share the same volume. + +Related Operations +------------------ + +You can also perform the operations listed in :ref:`Table 3 `. + +.. _cce_10_0620__table1619535674020: + + ..
table:: **Table 3** Related operations + + +-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Operation | Description | Procedure | + +=======================+====================================================================================================================================================+==============================================================================================================================================================+ + | Viewing events | You can view event names, event types, number of occurrences, Kubernetes events, first occurrence time, and last occurrence time of the PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View Events** in the **Operation** column of the target PVC or PV to view events generated within one hour (event data is retained for one hour). | + +-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Viewing a YAML file | You can view, copy, and download the YAML files of a PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View YAML** in the **Operation** column of the target PVC or PV to view or download the YAML. | + +-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/storage/setting_mount_options.rst b/umn/source/storage/setting_mount_options.rst deleted file mode 100644 index 744454a..0000000 --- a/umn/source/storage/setting_mount_options.rst +++ /dev/null @@ -1,178 +0,0 @@ -:original_name: cce_10_0337.html - -.. _cce_10_0337: - -Setting Mount Options -===================== - -Scenario --------- - -You can mount cloud storage volumes to your containers and use these volumes as local directories. - -This section describes how to set mount options when mounting SFS and OBS volumes. You can set mount options in a PV and bind the PV to a PVC. Alternatively, set mount options in a StorageClass and use the StorageClass to create a PVC. In this way, PVs can be dynamically created and inherit mount options configured in the StorageClass by default. - -.. _cce_10_0337__section14888047833: - -SFS Volume Mount Options ------------------------- - -The everest add-on in CCE presets the options described in :ref:`Table 1 ` for mounting SFS volumes. You can set other mount options if needed. For details, see `Mounting an NFS File System to ECSs (Linux) `__. - -.. _cce_10_0337__table128754351546: - -.. 
table:: **Table 1** SFS volume mount options - - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Option | Description | - +===================================+===============================================================================================================================================================================================+ - | vers=3 | File system version. Currently, only NFSv3 is supported, Value: **3** | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | nolock | Whether to lock files on the server using the NLM protocol. If **nolock** is selected, the lock is valid for applications on one host. For applications on another host, the lock is invalid. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | timeo=600 | Waiting time before the NFS client retransmits a request. The unit is 0.1 seconds. Recommended value: **600** | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | hard/soft | Mounting mode. | - | | | - | | - **hard**: If the NFS request times out, the client keeps resending the request until the request is successful. | - | | - **soft**: If the NFS request times out, the client returns an error to the invoking program. | - | | | - | | The default value is **hard**. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -.. _cce_10_0337__section1254912109811: - -OBS Volume Mount Options ------------------------- - -When mounting file storage, the everest add-on presets the options described in :ref:`Table 2 ` and :ref:`Table 3 ` by default. The options in :ref:`Table 2 ` are mandatory. - -.. _cce_10_0337__table1688593020213: - -.. table:: **Table 2** Mandatory mount options configured by default - - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+ - | Option | Description | - +===================================+==========================================================================================================================================+ - | use_ino | If enabled, obsfs allocates the **inode** number. Enabled by default in read/write mode. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+ - | big_writes | If configured, the maximum size of the cache can be modified. 
| - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+ - | nonempty | Allows non-empty mount paths. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+ - | allow_other | Allows other users to access the parallel file system. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+ - | no_check_certificate | Disables server certificate verification. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+ - | enable_noobj_cache | Enables cache entries for objects that do not exist, which can improve performance. Enabled by default in object bucket read/write mode. | - | | | - | | **This option is no longer set by default since everest 1.2.40.** | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+ - | sigv2 | Specifies the signature version. Used by default in object buckets. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+ - -.. _cce_10_0337__table9886123010217: - -.. table:: **Table 3** Optional mount options configured by default - - +-----------------------+--------------------------------------------------------------------------------------------------------------------+ - | Option | Description | - +=======================+====================================================================================================================+ - | max_write=131072 | This parameter is valid only when **big_writes** is configured. The recommended value is **128 KB**. | - +-----------------------+--------------------------------------------------------------------------------------------------------------------+ - | ssl_verify_hostname=0 | Disables verifying the SSL certificate based on the host name. | - +-----------------------+--------------------------------------------------------------------------------------------------------------------+ - | max_background=100 | Allows setting the maximum number of waiting requests in the background. Used by default in parallel file systems. | - +-----------------------+--------------------------------------------------------------------------------------------------------------------+ - | public_bucket=1 | If set to **1**, public buckets are mounted anonymously. Enabled by default in object bucket read/write mode. | - +-----------------------+--------------------------------------------------------------------------------------------------------------------+ - -You can log in to the node to which the pod is scheduled and view all mount options used for mounting the OBS volume in the process details. - -- Object bucket: ps -ef \| grep s3fs - - .. code-block:: - - root 22142 1 0 Jun03 ? 
00:00:00 /usr/bin/s3fs pvc-82fe2cbe-3838-43a2-8afb-f994e402fb9d /mnt/paas/kubernetes/kubelet/pods/0b13ff68-4c8e-4a1c-b15c-724fd4d64389/volumes/kubernetes.io~csi/pvc-82fe2cbe-3838-43a2-8afb-f994e402fb9d/mount -o url=https://{{endpoint}}:443 -o endpoint=xxxxxx -o passwd_file=/opt/everest-host-connector/1622707954357702943_obstmpcred/pvc-82fe2cbe-3838-43a2-8afb-f994e402fb9d -o nonempty -o big_writes -o enable_noobj_cache -o sigv2 -o allow_other -o no_check_certificate -o ssl_verify_hostname=0 -o max_write=131072 -o multipart_size=20 -o umask=0 - -- Parallel file system: ps -ef \| grep obsfs - - .. code-block:: - - root 1355 1 0 Jun03 ? 00:03:16 /usr/bin/obsfs pvc-86720bb9-5aa8-4cde-9231-5253994f8468 /mnt/paas/kubernetes/kubelet/pods/c959a91d-eced-4b41-91c6-96cbd65324f9/volumes/kubernetes.io~csi/pvc-86720bb9-5aa8-4cde-9231-5253994f8468/mount -o url=https://{{endpoint}}:443 -o endpoint=xxxxxx -o passwd_file=/opt/everest-host-connector/1622714415305160399_obstmpcred/pvc-86720bb9-5aa8-4cde-9231-5253994f8468 -o allow_other -o nonempty -o big_writes -o use_ino -o no_check_certificate -o ssl_verify_hostname=0 -o umask=0027 -o max_write=131072 -o max_background=100 -o uid=10000 -o gid=10000 - -Prerequisites -------------- - -- The everest add-on version must be **1.2.8 or later**. -- The add-on identifies the mount options and transfers them to the underlying storage resources, which determine whether the specified options are valid. - -Constraints ------------ - -Mount options cannot be configured for secure containers. - -Setting Mount Options in a PV ------------------------------ - -You can use the **mountOptions** field to set mount options in a PV. The options you can configure in **mountOptions** are listed in :ref:`SFS Volume Mount Options ` and :ref:`OBS Volume Mount Options `. - -.. code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - name: pv-obs-example - annotations: - pv.kubernetes.io/provisioned-by: everest-csi-provisioner - spec: - mountOptions: - - umask=0027 - - uid=10000 - - gid=10000 - accessModes: - - ReadWriteMany - capacity: - storage: 1Gi - claimRef: - apiVersion: v1 - kind: PersistentVolumeClaim - name: pvc-obs-example - namespace: default - csi: - driver: obs.csi.everest.io - fsType: obsfs - volumeAttributes: - everest.io/obs-volume-type: STANDARD - everest.io/region: eu-de - storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner - volumeHandle: obs-normal-static-pv - persistentVolumeReclaimPolicy: Delete - storageClassName: csi-obs - -After a PV is created, you can create a PVC and bind it to the PV, and then mount the PV to the container in the workload. - -Setting Mount Options in a StorageClass ---------------------------------------- - -You can use the **mountOptions** field to set mount options in a StorageClass. The options you can configure in **mountOptions** are listed in :ref:`SFS Volume Mount Options ` and :ref:`OBS Volume Mount Options `. - -.. code-block:: - - apiVersion: storage.k8s.io/v1 - kind: StorageClass - metadata: - name: csi-obs-mount-option - mountOptions: - - umask=0027 - - uid=10000 - - gid=10000 - parameters: - csi.storage.k8s.io/csi-driver-name: obs.csi.everest.io - csi.storage.k8s.io/fstype: s3fs - everest.io/obs-volume-type: STANDARD - provisioner: everest-csi-provisioner - reclaimPolicy: Delete - volumeBindingMode: Immediate - -After the StorageClass is configured, you can use it to create a PVC. By default, the dynamically created PVs inherit the mount options set in the StorageClass. 
diff --git a/umn/source/storage/sfs_turbo_file_systems/configuring_sfs_turbo_mount_options.rst b/umn/source/storage/sfs_turbo_file_systems/configuring_sfs_turbo_mount_options.rst new file mode 100644 index 0000000..e69c248 --- /dev/null +++ b/umn/source/storage/sfs_turbo_file_systems/configuring_sfs_turbo_mount_options.rst @@ -0,0 +1,116 @@ +:original_name: cce_10_0626.html + +.. _cce_10_0626: + +Configuring SFS Turbo Mount Options +=================================== + +This section describes how to configure SFS Turbo mount options. For SFS Turbo, you can only set mount options in a PV and bind the PV by creating a PVC. + +Prerequisites +------------- + +The everest add-on version must be **1.2.8 or later**. The add-on identifies the mount options and transfers them to the underlying storage resources, which determine whether the specified options are valid. + +Constraints +----------- + +Mount options cannot be configured for Kata containers. + +.. _cce_10_0626__section14888047833: + +SFS Turbo Mount Options +----------------------- + +The everest add-on in CCE presets the options described in :ref:`Table 1 ` for mounting SFS Turbo volumes. + +.. _cce_10_0626__table128754351546: + +.. table:: **Table 1** SFS Turbo mount options + + +-----------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Value | Description | + +=======================+=======================+===============================================================================================================================================================================================+ + | vers | 3 | File system version. Currently, only NFSv3 is supported. Value: **3** | + +-----------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | nolock | Leave it blank. | Whether to lock files on the server using the NLM protocol. If **nolock** is selected, the lock is valid for applications on one host. For applications on another host, the lock is invalid. | + +-----------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | timeo | 600 | Waiting time before the NFS client retransmits a request. The unit is 0.1 seconds. Recommended value: **600** | + +-----------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | hard/soft | Leave it blank. | Mount mode. | + | | | | + | | | - **hard**: If the NFS request times out, the client keeps resending the request until the request is successful. | + | | | - **soft**: If the NFS request times out, the client returns an error to the invoking program. | + | | | | + | | | The default value is **hard**. 
| + +-----------------------+-----------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +You can set other mount options if needed. For details, see `Mounting an NFS File System to ECSs (Linux) `__. + +Configuring Mount Options in a PV +--------------------------------- + +You can use the **mountOptions** field to configure mount options in a PV. The options you can configure in **mountOptions** are listed in :ref:`SFS Turbo Mount Options `. + +#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. + +#. Set mount options in a PV. Example: + + .. code-block:: + + apiVersion: v1 + kind: PersistentVolume + metadata: + annotations: + pv.kubernetes.io/provisioned-by: everest-csi-provisioner + name: pv-sfsturbo # PV name. + spec: + accessModes: + - ReadWriteMany # Access mode. The value must be ReadWriteMany for SFS Turbo. + capacity: + storage: 500Gi # SFS Turbo volume capacity. + csi: + driver: sfsturbo.csi.everest.io # Dependent storage driver for the mounting. + fsType: nfs + volumeHandle: {your_volume_id} # SFS Turbo volume ID + volumeAttributes: + everest.io/share-export-location: {your_location} # Shared path of the SFS Turbo volume. + everest.io/enterprise-project-id: {your_project_id} # Project ID of the SFS Turbo volume. + storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner + persistentVolumeReclaimPolicy: Retain # Reclaim policy. + storageClassName: csi-sfsturbo # SFS Turbo storage class name. + mountOptions: # Mount options. + - vers=3 + - nolock + - timeo=600 + - hard + +#. After a PV is created, you can create a PVC and bind it to the PV, and then mount the PV to the container in the workload. For details, see :ref:`Using an Existing SFS Turbo File System Through a Static PV `. + +#. Check whether the mount options take effect. + + In this example, the PVC is mounted to the workload that uses the **nginx:latest** image. You can run the **mount -l** command to check whether the mount options take effect. + + a. View the pod to which the SFS Turbo volume has been mounted. In this example, the workload name is **web-sfsturbo**. + + .. code-block:: + + kubectl get pod | grep web-sfsturbo + + Command output: + + .. code-block:: + + web-sfsturbo-*** 1/1 Running 0 23m + + b. Run the following command to check the mount options (**web-sfsturbo-\**\*** is an example pod): + + .. code-block:: + + kubectl exec -it web-sfsturbo-*** -- mount -l | grep nfs + + If the mounting information in the command output is consistent with the configured mount options, the mount options have been configured. + + .. code-block:: + + on /data type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=**.**.**.**,mountvers=3,mountport=20048,mountproto=tcp,local_lock=all,addr=**.**.**.**) diff --git a/umn/source/storage/sfs_turbo_file_systems/dynamically_creating_and_mounting_subdirectories_of_an_sfs_turbo_file_system.rst b/umn/source/storage/sfs_turbo_file_systems/dynamically_creating_and_mounting_subdirectories_of_an_sfs_turbo_file_system.rst new file mode 100644 index 0000000..f230dc5 --- /dev/null +++ b/umn/source/storage/sfs_turbo_file_systems/dynamically_creating_and_mounting_subdirectories_of_an_sfs_turbo_file_system.rst @@ -0,0 +1,248 @@ +:original_name: cce_bestpractice_00253_0.html + +.. 
_cce_bestpractice_00253_0: + +Dynamically Creating and Mounting Subdirectories of an SFS Turbo File System +============================================================================ + +Background +---------- + +The minimum capacity of an SFS Turbo file system is 500 GiB, and SFS Turbo file systems cannot be billed by usage. By default, the root directory of an SFS Turbo file system is mounted to a container, which in most cases does not require such a large capacity. + +The everest add-on allows you to dynamically create subdirectories in an SFS Turbo file system and mount these subdirectories to containers. In this way, an SFS Turbo file system can be shared by multiple containers to increase storage efficiency. + +Constraints +----------- + +- Only clusters of v1.15 or later are supported. +- The cluster must use the everest add-on of version 1.1.13 or later. +- Kata containers are not supported. +- If the everest version is earlier than 1.2.69 or 2.1.11, a maximum of 10 PVCs can be created concurrently using the subdirectory function. everest 1.2.69 or later, or 2.1.11 or later, is recommended. + +Creating an SFS Turbo Volume of the subpath Type +------------------------------------------------ + +.. caution:: + + Do not expand, disassociate, or delete a **subpath** volume. + +#. Create an SFS Turbo file system in the same VPC and subnet as the cluster. + +#. Create a StorageClass YAML file, for example, **sfsturbo-subpath-sc.yaml**. + + The following is an example: + + .. code-block:: + + apiVersion: storage.k8s.io/v1 + allowVolumeExpansion: true + kind: StorageClass + metadata: + name: sfsturbo-subpath-sc + mountOptions: + - lock + parameters: + csi.storage.k8s.io/csi-driver-name: sfsturbo.csi.everest.io + csi.storage.k8s.io/fstype: nfs + everest.io/archive-on-delete: "true" + everest.io/share-access-to: 7ca2dba2-1234-1234-1234-626371a8fb3a + everest.io/share-expand-type: bandwidth + everest.io/share-export-location: 192.168.1.1:/sfsturbo/ + everest.io/share-source: sfs-turbo + everest.io/share-volume-type: STANDARD + everest.io/volume-as: subpath + everest.io/volume-id: 0d773f2e-1234-1234-1234-de6a35074696 + provisioner: everest-csi-provisioner + reclaimPolicy: Delete + volumeBindingMode: Immediate + + In this example: + + - **name**: indicates the name of the StorageClass. + - **mountOptions**: indicates the mount options. This field is optional. + + - In versions later than everest 1.1.13 and earlier than everest 1.2.8, only the **nolock** parameter can be configured. By default, the **nolock** parameter is used for the mount operation and does not need to be configured. If **nolock** is set to **false**, the **lock** field is used. + + - More options are available in everest 1.2.8 or a later version. For details, see `Setting Mount Options `__. **Do not set nolock to true. Otherwise, the mount operation will fail.** + + .. code-block:: + + mountOptions: + - vers=3 + - timeo=600 + - nolock + - hard + + - **everest.io/volume-as**: This parameter is set to **subpath** to use the **subpath** volume. + - **everest.io/share-access-to**: This parameter is optional. In a **subpath** volume, set this parameter to the ID of the VPC where the SFS Turbo file system is located. + - **everest.io/share-expand-type**: This parameter is optional. If the type of the SFS Turbo file system is **SFS Turbo Standard - Enhanced** or **SFS Turbo Performance - Enhanced**, set this parameter to **bandwidth**. 
+ - **everest.io/share-export-location**: This parameter indicates the mount directory. It consists of the SFS Turbo shared path and sub-directory. The shared path can be obtained on the SFS Turbo console. The sub-directory is user-defined. The PVCs created using the StorageClass are located in this sub-directory. + - **everest.io/share-volume-type**: This parameter is optional. It specifies the SFS Turbo file system type. The value can be **STANDARD** or **PERFORMANCE**. For enhanced types, this parameter must be used together with **everest.io/share-expand-type** (whose value should be **bandwidth**). + - **everest.io/zone**: This parameter is optional. Set it to the AZ where the SFS Turbo file system is located. + - **everest.io/volume-id**: This parameter indicates the ID of the SFS Turbo volume. You can obtain the volume ID on the SFS Turbo page. + - **everest.io/archive-on-delete**: If this parameter is set to **true** and **Delete** is selected for **Reclaim Policy**, the original documents of the PV will be archived to the directory named **archived-**\ *{$PV name.timestamp}* before the PVC is deleted. If this parameter is set to **false**, the SFS Turbo subdirectory of the corresponding PV will be deleted. The default value is **true**, indicating that the original documents of the PV will be archived to the directory named **archived-**\ *{$PV name.timestamp}* before the PVC is deleted. + +3. Run **kubectl create -f sfsturbo-subpath-sc.yaml**. + +4. Create a PVC YAML file named **sfs-turbo-test.yaml**. + + The following is an example: + + .. code-block:: + + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: sfs-turbo-test + namespace: default + spec: + accessModes: + - ReadWriteMany + resources: + requests: + storage: 50Gi + storageClassName: sfsturbo-subpath-sc + volumeMode: Filesystem + + In this example: + + - **name**: indicates the name of the PVC. + - **storageClassName**: specifies the name of the StorageClass. + - **storage**: In the subpath mode, it is useless to specify this parameter. The storage capacity is limited by the total capacity of the SFS Turbo file system. If the total capacity of the SFS Turbo file system is insufficient, expand the capacity on the SFS Turbo page in a timely manner. + +5. Run **kubectl create -f sfs-turbo-test.yaml**. + +.. note:: + + It is meaningless to conduct capacity expansion on an SFS Turbo volume created in the subpath mode. This operation does not expand the capacity of the SFS Turbo file system. Ensure that the total capacity of the SFS Turbo file system is not used up. + +Creating a Deployment and Mounting an Existing Volume to the Deployment +----------------------------------------------------------------------- + +#. Create a YAML file for the Deployment, for example, **deployment-test.yaml**. + + The following is an example: + + .. code-block:: + + apiVersion: apps/v1 + kind: Deployment + metadata: + name: test-turbo-subpath-example + namespace: default + generation: 1 + labels: + appgroup: '' + spec: + replicas: 1 + selector: + matchLabels: + app: test-turbo-subpath-example + template: + metadata: + labels: + app: test-turbo-subpath-example + spec: + containers: + - image: nginx:latest + name: container-0 + volumeMounts: + - mountPath: /tmp + name: pvc-sfs-turbo-example + restartPolicy: Always + imagePullSecrets: + - name: default-secret + volumes: + - name: pvc-sfs-turbo-example + persistentVolumeClaim: + claimName: sfs-turbo-test + + In this example: + + - **name**: indicates the name of the Deployment. 
+ - **image**: specifies the image used by the Deployment. + - **mountPath**: indicates the mount path of the container. In this example, the volume is mounted to the **/tmp** directory. + - **claimName**: indicates the name of an existing PVC. + +2. Create the Deployment. + + **kubectl create -f deployment-test.yaml** + +Dynamically Creating a subpath Volume for a StatefulSet +------------------------------------------------------- + +#. Create a YAML file for a StatefulSet, for example, **statefulset-test.yaml**. + + The following is an example: + + .. code-block:: + + apiVersion: apps/v1 + kind: StatefulSet + metadata: + name: test-turbo-subpath + namespace: default + generation: 1 + labels: + appgroup: '' + spec: + replicas: 2 + selector: + matchLabels: + app: test-turbo-subpath + template: + metadata: + labels: + app: test-turbo-subpath + annotations: + metrics.alpha.kubernetes.io/custom-endpoints: '[{"api":"","path":"","port":"","names":""}]' + pod.alpha.kubernetes.io/initialized: 'true' + spec: + containers: + - name: container-0 + image: 'nginx:latest' + resources: {} + volumeMounts: + - name: sfs-turbo-160024548582479676 + mountPath: /tmp + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + imagePullPolicy: IfNotPresent + restartPolicy: Always + terminationGracePeriodSeconds: 30 + dnsPolicy: ClusterFirst + securityContext: {} + imagePullSecrets: + - name: default-secret + affinity: {} + schedulerName: default-scheduler + volumeClaimTemplates: + - metadata: + name: sfs-turbo-160024548582479676 + namespace: default + annotations: {} + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi + storageClassName: sfsturbo-subpath-sc + serviceName: wwww + podManagementPolicy: OrderedReady + updateStrategy: + type: RollingUpdate + revisionHistoryLimit: 10 + + In this example: + + - **name**: indicates the name of the StatefulSet. + - **image**: specifies the image used by the StatefulSet. + - **mountPath**: indicates the mount path of the container. In this example, the volume is mounted to the **/tmp** directory. + - **spec.template.spec.containers.volumeMounts.name** and **spec.volumeClaimTemplates.metadata.name**: must be consistent because they are mapped to each other. + - **storageClassName**: indicates the name of the StorageClass. + +2. Create the StatefulSet. + + **kubectl create -f statefulset-test.yaml** diff --git a/umn/source/storage/sfs_turbo_file_systems/index.rst b/umn/source/storage/sfs_turbo_file_systems/index.rst new file mode 100644 index 0000000..cc2d0ec --- /dev/null +++ b/umn/source/storage/sfs_turbo_file_systems/index.rst @@ -0,0 +1,20 @@ +:original_name: cce_10_0125.html + +.. _cce_10_0125: + +SFS Turbo File Systems +====================== + +- :ref:`Overview ` +- :ref:`Using an Existing SFS Turbo File System Through a Static PV ` +- :ref:`Configuring SFS Turbo Mount Options ` +- :ref:`Dynamically Creating and Mounting Subdirectories of an SFS Turbo File System ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + overview + using_an_existing_sfs_turbo_file_system_through_a_static_pv + configuring_sfs_turbo_mount_options + dynamically_creating_and_mounting_subdirectories_of_an_sfs_turbo_file_system diff --git a/umn/source/storage/sfs_turbo_file_systems/overview.rst b/umn/source/storage/sfs_turbo_file_systems/overview.rst new file mode 100644 index 0000000..51129d9 --- /dev/null +++ b/umn/source/storage/sfs_turbo_file_systems/overview.rst @@ -0,0 +1,27 @@ +:original_name: cce_10_0624.html + +.. 
_cce_10_0624:
+
+Overview
+========
+
+Introduction
+------------
+
+CCE allows you to mount storage volumes created by SFS Turbo file systems to a path of a container to meet data persistence requirements. SFS Turbo file systems are fast, on-demand, and scalable, making them suitable for scenarios with a massive number of small files, such as DevOps, containerized microservices, and enterprise office applications.
+
+Expandable to 320 TB, SFS Turbo provides fully hosted shared file storage, which is highly available and stable, for small files and applications requiring low latency and high IOPS.
+
+- **Standard file protocols**: You can mount file systems as volumes to servers, the same as using local directories.
+- **Data sharing**: The same file system can be mounted to multiple servers so that data can be shared.
+- **Private network**: Users can access data only in private networks of data centers.
+- **Data isolation**: The on-cloud storage service provides exclusive cloud file storage, which delivers data isolation and ensures IOPS performance.
+- **Use cases**: Deployments/StatefulSets in the ReadWriteMany mode, DaemonSets, and jobs created for high-traffic websites, log storage, DevOps, and enterprise OA applications.
+
+Application Scenarios
+---------------------
+
+SFS Turbo supports the following mounting modes:
+
+- :ref:`Using an Existing SFS Turbo File System Through a Static PV `: static creation mode, where you use an existing SFS Turbo volume to create a PV and then mount storage to the workload through a PVC.
+- :ref:`Dynamically Creating and Mounting Subdirectories of an SFS Turbo File System `: SFS Turbo allows you to dynamically create subdirectories and mount them to containers so that an SFS Turbo file system can be shared by multiple workloads and its capacity can be used more efficiently.
diff --git a/umn/source/storage/sfs_turbo_file_systems/using_an_existing_sfs_turbo_file_system_through_a_static_pv.rst b/umn/source/storage/sfs_turbo_file_systems/using_an_existing_sfs_turbo_file_system_through_a_static_pv.rst
new file mode 100644
index 0000000..7872c37
--- /dev/null
+++ b/umn/source/storage/sfs_turbo_file_systems/using_an_existing_sfs_turbo_file_system_through_a_static_pv.rst
@@ -0,0 +1,444 @@
+:original_name: cce_10_0625.html
+
+.. _cce_10_0625:
+
+Using an Existing SFS Turbo File System Through a Static PV
+===========================================================
+
+SFS Turbo is a shared file system with high availability and durability. It is suitable for applications with massive small files that require low latency and high IOPS. This section describes how to use an existing SFS Turbo file system to statically create PVs and PVCs and implement data persistence and sharing in workloads.
+
+Prerequisites
+-------------
+
+- You have created a cluster and installed the CSI add-on (:ref:`everest `) in the cluster.
+- To perform the operations using commands, use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `.
+- You have created an available SFS Turbo file system, and the SFS Turbo file system and the cluster are in the same VPC.
+
+Constraints
+-----------
+
+- Multiple PVs can use the same SFS or SFS Turbo file system with the following restrictions:
+
+  - If multiple PVCs/PVs use the same underlying SFS or SFS Turbo file system and you attempt to mount them to the same pod, none of the PVCs can be mounted to the pod, and the pod fails to start.
This is because the **volumeHandle** values of these PVs are the same. + - The **persistentVolumeReclaimPolicy** parameter in the PVs must be set to **Retain**. Otherwise, when a PV is deleted, the associated underlying volume may be deleted. In this case, other PVs associated with the underlying volume malfunction. + - When the underlying volume is repeatedly used, enable isolation and protection for ReadWriteMany at the application layer to prevent data overwriting and loss. + +Using an Existing SFS Turbo File System on the Console +------------------------------------------------------ + +#. Log in to the CCE console and click the cluster name to access the cluster console. +#. Statically create a PVC and PV. + + a. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. Click **Create PVC** in the upper right corner. In the dialog box displayed, configure the PVC parameters. + + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+===========================================================================================================================================================================================================================+ + | PVC Type | In this section, select **SFS Turbo**. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PVC Name | Enter the PVC name, which must be unique in the same namespace. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Creation Method | You can create a storage volume or use an existing storage volume to statically create a PVC based on whether a PV has been created. | + | | | + | | In this example, select **Create new** to create a PV and PVC at the same time on the console. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PV\ :sup:`a` | Select an existing PV volume in the cluster. Create a PV in advance. For details, see "Creating a storage volume" in :ref:`Related Operations `. | + | | | + | | You do not need to specify this parameter in this example. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | SFS Turbo\ :sup:`b` | Click **Select SFS Turbo**. On the displayed page, select the SFS Turbo file system that meets your requirements and click **OK**. 
| + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | PV Name\ :sup:`b` | Enter the PV name, which must be unique in the same cluster. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Access Mode\ :sup:`b` | SFS Turbo volumes support only **ReadWriteMany**, indicating that a storage volume can be mounted to multiple nodes in read/write mode. For details, see :ref:`Volume Access Modes `. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Reclaim Policy\ :sup:`b` | Only **Retain** is supported, indicating that the PV is not deleted when the PVC is deleted. For details, see :ref:`PV Reclaim Policy `. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Mount Options\ :sup:`b` | Enter the mounting parameter key-value pairs. For details, see :ref:`Configuring SFS Turbo Mount Options `. | + +-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + .. note:: + + a: The parameter is available when **Creation Method** is set to **Use existing**. + + b: The parameter is available when **Creation Method** is set to **Create new**. + + b. Click **Create** to create a PVC and a PV. + + You can choose **Storage** in the navigation pane and view the created PVC and PV on the **PersistentVolumeClaims (PVCs)** and **PersistentVolumes (PVs)** tab pages. + +#. Create an application. + + a. In the navigation pane on the left, click **Workloads**. In the right pane, click the **Deployments** tab. + + b. Click **Create Workload** in the upper right corner. On the displayed page, click **Data Storage** in the **Container Settings** area and click **Add Volume** to select **PVC**. + + Mount and use storage volumes, as shown in :ref:`Table 1 `. For details about other parameters, see :ref:`Workloads `. + + .. _cce_10_0625__table2529244345: + + .. 
table:: **Table 1** Mounting a storage volume + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=============================================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | PVC | Select an existing SFS Turbo volume. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Mount Path | Enter a mount path, for example, **/tmp**. | + | | | + | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**. Otherwise, containers will be malfunctional. Mount the volume to an empty directory. If the directory is not empty, ensure that there are no files that affect container startup. Otherwise, the files will be replaced, causing container startup failures or workload creation failures. | + | | | + | | .. important:: | + | | | + | | NOTICE: | + | | If a volume is mounted to a high-risk directory, use an account with minimum permissions to start the container. Otherwise, high-risk files on the host may be damaged. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Subpath | Enter a subpath, for example, **tmp**, indicating that data in the mount path of the container will be stored in the **tmp** folder of the volume. | + | | | + | | A subpath is used to mount a local volume so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. 
| + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Permission | - **Read-only**: You can only read the data in the mounted volumes. | + | | - **Read/Write**: You can modify the data volumes mounted to the path. Newly written data is not migrated if the container is migrated, which may cause data loss. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + In this example, the disk is mounted to the **/data** path of the container. The container data generated in this path is stored in the SFS Turbo file system. + + c. After the configuration, click **Create Workload**. + + After the workload is created, the data in the container mount directory will be persistently stored. Verify the storage by referring to :ref:`Verifying Data Persistence and Sharing `. + +(kubectl) Using an Existing SFS File System +------------------------------------------- + +#. Use kubectl to connect to the cluster. +#. Create a PV. + + a. .. _cce_10_0625__li162841212145314: + + Create the **pv-sfsturbo.yaml** file. + + .. code-block:: + + apiVersion: v1 + kind: PersistentVolume + metadata: + annotations: + pv.kubernetes.io/provisioned-by: everest-csi-provisioner + name: pv-sfsturbo # PV name. + spec: + accessModes: + - ReadWriteMany # Access mode. The value must be ReadWriteMany for SFS Turbo. + capacity: + storage: 500Gi # SFS Turbo volume capacity. + csi: + driver: sfsturbo.csi.everest.io # Dependent storage driver for the mounting. + fsType: nfs + volumeHandle: # SFS Turbo volume ID. + volumeAttributes: + everest.io/share-export-location: # Shared path of the SFS Turbo volume. + storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner + persistentVolumeReclaimPolicy: Retain # Reclaim policy. + storageClassName: csi-sfsturbo # Storage class name of the SFS Turbo file system. + mountOptions: [] # Mount options. + + .. table:: **Table 2** Key parameters + + +----------------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Description | + +==================================+=======================+========================================================================================================================================================================================================================================================+ + | volumeHandle | Yes | SFS Turbo volume ID. 
| + | | | | + | | | How to obtain: Log in to the console, choose **Service List** > **Storage** > **Scalable File Service**, and select **SFS Turbo**. In the list, click the name of the target SFS Turbo volume. On the details page, copy the content following **ID**. | + +----------------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | everest.io/share-export-location | Yes | Shared path of the SFS Turbo volume. | + | | | | + | | | Log in to the console, choose **Service List** > **Storage** > **Scalable File Service**, and select **SFS Turbo**. You can obtain the shared path of the file system from the **Mount Address** column. | + +----------------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | mountOptions | No | Mount options. | + | | | | + | | | If not specified, the following configurations are used by default. For details, see :ref:`Configuring SFS Turbo Mount Options `. | + | | | | + | | | .. code-block:: | + | | | | + | | | mountOptions: | + | | | - vers=3 | + | | | - timeo=600 | + | | | - nolock | + | | | - hard | + +----------------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | persistentVolumeReclaimPolicy | Yes | A reclaim policy is supported when the cluster version is or later than 1.19.10 and the everest version is or later than 1.2.9. | + | | | | + | | | Only the **Retain** reclaim policy is supported. For details, see :ref:`Verifying Data Persistence and Sharing `. | + | | | | + | | | **Retain**: When a PVC is deleted, the PV and underlying storage resources are not deleted. Instead, you must manually delete these resources. After that, the PV is in the **Released** status and cannot be bound to the PVC again. | + +----------------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | storage | Yes | Requested capacity in the PVC, in Gi. | + +----------------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | storageClassName | Yes | The storage class name of SFS Turbo volumes is **csi-sfsturbo**. | + +----------------------------------+-----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + b. Run the following command to create a PV: + + .. 
code-block:: + + kubectl apply -f pv-sfsturbo.yaml + +#. Create a PVC. + + a. Create the **pvc-sfsturbo.yaml** file. + + .. code-block:: + + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: pvc-sfsturbo + namespace: default + annotations: + volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner + spec: + accessModes: + - ReadWriteMany # The value must be ReadWriteMany for SFS Turbo. + resources: + requests: + storage: 500Gi # SFS Turbo volume capacity. + storageClassName: csi-sfsturbo # Storage class of the SFS Turbo volume, which must be the same as that of the PV. + volumeName: pv-sfsturbo # PV name. + + .. table:: **Table 3** Key parameters + + +-----------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Mandatory | Description | + +=======================+=======================+=========================================================================================================================+ + | storage | Yes | Requested capacity in the PVC, in Gi. | + | | | | + | | | The value must be the same as the storage size of the existing PV. | + +-----------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------+ + | storageClassName | Yes | Storage class name, which must be the same as the storage class of the PV in :ref:`1 `. | + | | | | + | | | The storage class name of SFS Turbo volumes is **csi-sfsturbo**. | + +-----------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------+ + | volumeName | Yes | PV name, which must be the same as the PV name in :ref:`1 `. | + +-----------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------+ + + b. Run the following command to create a PVC: + + .. code-block:: + + kubectl apply -f pvc-sfsturbo.yaml + +#. Create an application. + + a. Create a file named **web-demo.yaml**. In this example, the SFS Turbo volume is mounted to the **/data** path. + + .. code-block:: + + apiVersion: apps/v1 + kind: Deployment + metadata: + name: web-demo + namespace: default + spec: + replicas: 2 + selector: + matchLabels: + app: web-demo + template: + metadata: + labels: + app: web-demo + spec: + containers: + - name: container-1 + image: nginx:latest + volumeMounts: + - name: pvc-sfsturbo-volume #Volume name, which must be the same as the volume name in the volumes field. + mountPath: /data #Location where the storage volume is mounted. + imagePullSecrets: + - name: default-secret + volumes: + - name: pvc-sfsturbo-volume #Volume name, which can be customized. + persistentVolumeClaim: + claimName: pvc-sfsturbo #Name of the created PVC. + + b. Run the following command to create an application to which the SFS Turbo volume is mounted: + + .. code-block:: + + kubectl apply -f web-demo.yaml + + After the workload is created, you can try :ref:`Verifying Data Persistence and Sharing `. + +.. _cce_10_0625__section11593165910013: + +Verifying Data Persistence and Sharing +-------------------------------------- + +#. View the deployed applications and files. + + a. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-demo + + Expected output: + + .. 
code-block:: + + web-demo-846b489584-mjhm9 1/1 Running 0 46s + web-demo-846b489584-wvv5s 1/1 Running 0 46s + + b. Run the following commands in sequence to view the files in the **/data** path of the pods: + + .. code-block:: + + kubectl exec web-demo-846b489584-mjhm9 -- ls /data + kubectl exec web-demo-846b489584-wvv5s -- ls /data + + If no result is returned for both pods, no file exists in the **/data** path. + +#. Run the following command to create a file named **static** in the **/data** path: + + .. code-block:: + + kubectl exec web-demo-846b489584-mjhm9 -- touch /data/static + +#. Run the following command to view the files in the **/data** path: + + .. code-block:: + + kubectl exec web-demo-846b489584-mjhm9 -- ls /data + + Expected output: + + .. code-block:: + + static + +#. **Verify data persistence.** + + a. Run the following command to delete the pod named **web-demo-846b489584-mjhm9**: + + .. code-block:: + + kubectl delete pod web-demo-846b489584-mjhm9 + + Expected output: + + .. code-block:: + + pod "web-demo-846b489584-mjhm9" deleted + + After the deletion, the Deployment controller automatically creates a replica. + + b. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-demo + + The expected output is as follows, in which **web-demo-846b489584-d4d4j** is the newly created pod: + + .. code-block:: + + web-demo-846b489584-d4d4j 1/1 Running 0 110s + web-demo-846b489584-wvv5s 1/1 Running 0 7m50s + + c. Run the following command to check whether the files in the **/data** path of the new pod have been modified: + + .. code-block:: + + kubectl exec web-demo-846b489584-d4d4j -- ls /data + + Expected output: + + .. code-block:: + + static + + If the **static** file still exists, the data can be stored persistently. + +#. **Verify data sharing.** + + a. Run the following command to view the created pod: + + .. code-block:: + + kubectl get pod | grep web-demo + + Expected output: + + .. code-block:: + + web-demo-846b489584-d4d4j 1/1 Running 0 7m + web-demo-846b489584-wvv5s 1/1 Running 0 13m + + b. Run the following command to create a file named **share** in the **/data** path of either pod: In this example, select the pod named **web-demo-846b489584-d4d4j**. + + .. code-block:: + + kubectl exec web-demo-846b489584-d4d4j -- touch /data/share + + Check the files in the **/data** path of the pod. + + .. code-block:: + + kubectl exec web-demo-846b489584-d4d4j -- ls /data + + Expected output: + + .. code-block:: + + share + static + + c. Check whether the **share** file exists in the **/data** path of another pod (**web-demo-846b489584-wvv5s**) as well to verify data sharing. + + .. code-block:: + + kubectl exec web-demo-846b489584-wvv5s -- ls /data + + Expected output: + + .. code-block:: + + share + static + + After you create a file in the **/data** path of a pod, if the file is also created in the **/data** path of another pods, the two pods share the same volume. + +.. _cce_10_0625__section16505832153318: + +Related Operations +------------------ + +You can also perform the operations listed in :ref:`Table 4 `. + +.. _cce_10_0625__table1619535674020: + +.. 
table:: **Table 4** Related operations + + +-----------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Operation | Description | Procedure | + +===============================================+====================================================================================================================================================+============================================================================================================================================================================================================================================+ + | Creating a storage volume (PV) | Create a PV on the CCE console. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumes (PVs)** tab. Click **Create Volume** in the upper right corner. In the dialog box displayed, configure the parameters. | + | | | | + | | | - **Volume Type**: Select **SFS Turbo**. | + | | | - **SFS Turbo**: Click **Select SFS Turbo**. On the page displayed, select the SFS Turbo volume that meets the requirements and click **OK**. | + | | | - **PV Name**: Enter the PV name, which must be unique in the same cluster. | + | | | - **Access Mode**: SFS volumes support only **ReadWriteMany**, indicating that a storage volume can be mounted to multiple nodes in read/write mode. For details, see :ref:`Volume Access Modes `. | + | | | - **Reclaim Policy**: Only **Retain** is supported. For details, see :ref:`PV Reclaim Policy `. | + | | | - **Mount Options**: Enter the mounting parameter key-value pairs. For details, see :ref:`Configuring SFS Turbo Mount Options `. | + | | | | + | | | #. Click **Create**. | + +-----------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Expanding the capacity of an SFS Turbo volume | Quickly expand the capacity of a mounted SFS Turbo volume on the CCE console. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** tab. Click **More** in the **Operation** column of the target PVC and select **Scale-out**. | + | | | #. Enter the capacity to be added and click **OK**. | + +-----------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Viewing events | You can view event names, event types, number of occurrences, Kubernetes events, first occurrence time, and last occurrence time of the PVC or PV. | #. 
Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View Events** in the **Operation** column of the target PVC or PV to view events generated within one hour (event data is retained for one hour). | + +-----------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Viewing a YAML file | You can view, copy, and download the YAML files of a PVC or PV. | #. Choose **Storage** from the navigation pane, and click the **PersistentVolumeClaims (PVCs)** or **PersistentVolumes (PVs)** tab. | + | | | #. Click **View YAML** in the **Operation** column of the target PVC or PV to view or download the YAML. | + +-----------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/storage/storage_basics.rst b/umn/source/storage/storage_basics.rst new file mode 100644 index 0000000..cb8e140 --- /dev/null +++ b/umn/source/storage/storage_basics.rst @@ -0,0 +1,188 @@ +:original_name: cce_10_0378.html + +.. _cce_10_0378: + +Storage Basics +============== + +Volumes +------- + +On-disk files in a container are ephemeral, which presents the following problems to important applications running in the container: + +#. When a container is rebuilt, files in the container will be lost. +#. When multiple containers run in a pod at the same time, files need to be shared among the containers. + +Kubernetes volumes resolve both of these problems. Volumes, as part of a pod, cannot be created independently and can only be defined in pods. All containers in a pod can access its volumes, but the volumes must have been mounted to any directory in a container. + +The following figure shows how a storage volume is used between containers in a pod. + +|image1| + +The basic principles for using volumes are as follows: + +- Multiple volumes can be mounted to a pod. However, do not mount too many volumes to a pod. +- Multiple types of volumes can be mounted to a pod. +- Each volume mounted to a pod can be shared among containers in the pod. +- You are advised to use PVCs and PVs to mount volumes for Kubernetes. + +.. note:: + + The lifecycle of a volume is the same as that of the pod to which the volume is mounted. When the pod is deleted, the volume is also deleted. However, files in the volume may outlive the volume, depending on the volume type. + +Kubernetes provides various volume types, which can be classified as in-tree and out-of-tree. 
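+
+For example, the following minimal Pod manifest (a sketch for illustration only; the image and resource names are placeholders) defines an **emptyDir** volume, an in-tree volume type, and mounts it into two containers so that both containers can read and write the same files:
+
+.. code-block::
+
+   apiVersion: v1
+   kind: Pod
+   metadata:
+     name: volume-demo
+   spec:
+     containers:
+     - name: writer
+       image: nginx:latest
+       volumeMounts:
+       - name: shared-data      # The same volume is mounted into both containers.
+         mountPath: /data
+     - name: reader
+       image: nginx:latest
+       volumeMounts:
+       - name: shared-data
+         mountPath: /data
+     volumes:                   # Volumes are defined as part of the pod, not created independently.
+     - name: shared-data
+       emptyDir: {}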
+ ++-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Volume Classification | Description | ++===================================+===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ +| In-tree | Maintained through the Kubernetes code repository and built, edited, and released with Kubernetes binary files. Kubernetes does not accept this volume type anymore. | +| | | +| | Kubernetes-native volumes such as HostPath, EmptyDir, Secret, and ConfigMap are all the in-tree type. | +| | | +| | PVCs are a special in-tree volume. Kubernetes uses this type of volume to convert from in-tree to out-of-tree. PVCs allow you to request for PVs created using the underlying storage resources provided by different storage vendors. | ++-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Out-of-tree | Out-of-tree volumes include container storage interfaces (CSIs) and FlexVolumes (deprecated). Storage vendors only need to comply with certain specifications to create custom storage add-ons and PVs that can be used by Kubernetes, without adding add-on source code to the Kubernetes code repository. Cloud storage such as SFS and OBS is used by installing storage drivers in a cluster. You need to create PVs in the cluster and mount the PVs to pods using PVCs. | ++-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +PV and PVC +---------- + +Kubernetes provides PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs) to abstract details of how storage is provided from how it is consumed. You can request specific size of storage when needed, just like pods can request specific levels of resources (CPU and memory). + +- PV: describes a persistent storage volume in a cluster. A PV is a cluster-level resource just like a node. It applies to the entire Kubernetes cluster. 
A PV has a lifecycle independent of any individual Pod that uses the PV. +- PVC: describes a request for storage by a user. When configuring storage for an application, claim a storage request (that is, PVC). Kubernetes selects a PV that best meets the request and binds the PV to the PVC. A PVC to PV binding is a one-to-one mapping. When creating a PVC, describe the attributes of the requested persistent storage, such as the storage size and read/write permission. + +You can bind PVCs to PVs in a pod so that the pod can use storage resources. The following figure shows the relationship between PVs and PVCs. + + +.. figure:: /_static/images/en-us_image_0000001695896709.png + :alt: **Figure 1** PVC-to-PV binding + + **Figure 1** PVC-to-PV binding + +.. _cce_10_0378__section79711433131110: + +CSI +--- + +CSI is a standard for container storage interfaces and a storage plug-in implementation solution recommended by the Kubernetes community. :ref:`everest ` is a storage add-on developed based on CSI. It provides different types of persistent storage for containers. + +.. _cce_10_0378__section43881411172418: + +Volume Access Modes +------------------- + +Storage volumes can be mounted to the host system only in the mode supported by underlying storage resources. For example, a file storage system can be read and written by multiple nodes, but an EVS disk can be read and written by only one node. + +- **ReadWriteOnce**: A storage volume can be mounted to a single node in read-write mode. +- **ReadWriteMany**: A storage volume can be mounted to multiple nodes in read-write mode. + +.. table:: **Table 1** Access modes supported by storage volumes + + ============ ============= ============= + Storage Type ReadWriteOnce ReadWriteMany + ============ ============= ============= + EVS Y x + SFS x Y + OBS x Y + SFS Turbo x Y + ============ ============= ============= + +Mounting a Storage Volume +------------------------- + +You can mount volumes in the following ways: + +Use PVs to describe existing storage resources, and then create PVCs to use the storage resources in pods. You can also use the dynamic creation mode. That is, specify the :ref:`StorageClass ` when creating a PVC and use the provisioner in the StorageClass to automatically create a PV and bind the PV to the PVC. + +.. 
table:: **Table 2** Modes of mounting volumes + + +-----------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+--------------------------------+ + | Mounting Mode | Description | Supported Volume Type | Other Constraints | + +=======================================================================+========================================================================================================================================================================================================================================================================================================================================================================================+=============================+================================+ + | Statically creating storage volume (using existing storage) | Use existing storage (such as EVS disks and SFS file systems) to create PVs and mount the PVs to the workload through PVCs. Kubernetes binds PVCs to the matching PVs so that workloads can access storage services. | All volumes | None | + +-----------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+--------------------------------+ + | Dynamically creating storage volumes (automatically creating storage) | Specify a :ref:`StorageClass ` for a PVC. The storage provisioner creates underlying storage media as required to automatically create PVs and directly bind the PV to the PVC. | EVS, OBS, SFS, and local PV | None | + +-----------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+--------------------------------+ + | Dynamic mounting (VolumeClaimTemplate) | Achieved by using the `volumeClaimTemplates `__ field and depends on the dynamic PV creation capability of StorageClass. In this mode, each pod is associated with a unique PVC and PV. After a pod is rescheduled, the original data can still be mounted to it based on the PVC name. 
| EVS and local PV | Supported only by StatefulSets | + +-----------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------+--------------------------------+ + +.. _cce_10_0378__section19999142414413: + +PV Reclaim Policy +----------------- + +A PV reclaim policy is used to delete or reclaim underlying volumes when a PVC is deleted. The value can be **Delete** or **Retain**. + +- **Delete**: Deleting a PVC will remove the PV from Kubernetes, so the associated underlying storage assets from the external infrastructure. + +- **Retain**: When a PVC is deleted, the PV and underlying storage resources are not deleted. Instead, you must manually delete these resources. After that, the PV resources are in the **Released** state and cannot be directly bound to the PVC. + + You can manually delete and reclaim volumes by performing the following operations: + + #. Delete the PV. + #. Clear data on the associated underlying storage resources as required. + #. Delete the associated underlying storage resources. + + To reuse the underlying storage resources, create a PV. + +CCE also allows you to delete a PVC without deleting underlying storage resources. This function can be achieved only by using a YAML file: Set the PV reclaim policy to **Delete** and add **everest.io/reclaim-policy: retain-volume-only** to **annotations**. In this way, when the PVC is deleted, the PV is deleted, but the underlying storage resources are retained. + +The following YAML file takes EVS as an example: + +.. code-block:: + + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: test + namespace: default + annotations: + volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner + everest.io/disk-volume-type: SAS + labels: + failure-domain.beta.kubernetes.io/region: # Region of the node where the application is to be deployed + failure-domain.beta.kubernetes.io/zone: # AZ of the node where the application is to be deployed + spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi + storageClassName: csi-disk + volumeName: pv-evs-test + + --- + apiVersion: v1 + kind: PersistentVolume + metadata: + annotations: + pv.kubernetes.io/provisioned-by: everest-csi-provisioner + everest.io/reclaim-policy: retain-volume-only + name: pv-evs-test + labels: + failure-domain.beta.kubernetes.io/region: # Region of the node where the application is to be deployed + failure-domain.beta.kubernetes.io/zone: # AZ of the node where the application is to be deployed + spec: + accessModes: + - ReadWriteOnce + capacity: + storage: 10Gi + csi: + driver: disk.csi.everest.io + fsType: ext4 + volumeHandle: 2af98016-6082-4ad6-bedc-1a9c673aef20 + volumeAttributes: + storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner + everest.io/disk-mode: SCSI + everest.io/disk-volume-type: SAS + persistentVolumeReclaimPolicy: Delete + storageClassName: csi-disk + +Documentation +------------- + +- For more information about Kubernetes storage, see `Storage `__. +- For more information about CCE container storage, see :ref:`Overview `. + +.. 
|image1| image:: /_static/images/en-us_image_0000001647417776.png diff --git a/umn/source/storage/storageclass.rst b/umn/source/storage/storageclass.rst index 3739a38..0c8b8e6 100644 --- a/umn/source/storage/storageclass.rst +++ b/umn/source/storage/storageclass.rst @@ -5,26 +5,84 @@ StorageClass ============ -StorageClass describes the storage class used in the cluster. You need to specify StorageClass when creating a PVC or PV. As of now, CCE provides storage classes such as csi-disk, csi-nas, and csi-obs by default. When defining a PVC, you can use a StorageClassName to automatically create a PV of the corresponding type and automatically create underlying storage resources. +Introduction +------------ -You can run the following command to query the storage classes that CCE supports. You can use the CSI plug-in provided by CCE to customize a storage class, which functions similarly as the default storage classes in CCE. +StorageClass describes the classification of storage types in a cluster and can be represented as a configuration template for creating PVs. When creating a PVC or PV, specify StorageClass. + +As a user, you only need to specify **storageClassName** when defining a PVC to automatically create a PV and underlying storage, significantly reducing the workload of creating and maintaining a PV. + +In addition to the :ref:`default storage classes ` provided by CCE, you can also customize storage classes. + +- :ref:`Application Scenarios of Custom Storage ` +- :ref:`Custom Storage Class ` +- :ref:`Specifying a Default StorageClass ` + +.. _cce_10_0380__section77737156273: + +CCE Default Storage Classes +--------------------------- + +As of now, CCE provides storage classes such as csi-disk, csi-nas, and csi-obs by default. When defining a PVC, you can use a **storageClassName** to automatically create a PV of the corresponding type and automatically create underlying storage resources. + +Run the following kubectl command to obtain the storage classes that CCE supports. Use the CSI add-on provided by CCE to create a storage class. .. code-block:: # kubectl get sc NAME PROVISIONER AGE - csi-disk everest-csi-provisioner 17d # Storage class for EVS disks - csi-nas everest-csi-provisioner 17d # Storage class for SFS 1.0 file systems - csi-obs everest-csi-provisioner 17d # Storage class for OBS buckets + csi-disk everest-csi-provisioner 17d # EVS disk + csi-disk-topology everest-csi-provisioner 17d # EVS disks created with delayed + csi-nas everest-csi-provisioner 17d # SFS 1.0 + csi-obs everest-csi-provisioner 17d # OBS + csi-sfsturbo everest-csi-provisioner 17d # SFS Turbo -After a StorageClass is set, PVs can be automatically created and maintained. You only need to specify the StorageClass when creating a PVC, which greatly reduces the workload. +Each storage class contains the default parameters used for dynamically creating a PV. The following is an example of storage class for EVS disks: -In addition to the predefined storage classes provided by CCE, you can also customize storage classes. The following sections describe the application status, solutions, and methods of customizing storage classes. +.. 
code-block:: -Challenges ----------- + kind: StorageClass + apiVersion: storage.k8s.io/v1 + metadata: + name: csi-disk + provisioner: everest-csi-provisioner + parameters: + csi.storage.k8s.io/csi-driver-name: disk.csi.everest.io + csi.storage.k8s.io/fstype: ext4 + everest.io/disk-volume-type: SAS + everest.io/passthrough: 'true' + reclaimPolicy: Delete + allowVolumeExpansion: true + volumeBindingMode: Immediate -When using storage resources in CCE, the most common method is to specify **storageClassName** to define the type of storage resources to be created when creating a PVC. The following configuration shows how to use a PVC to apply for an SAS (high I/O) EVS disk (block storage). ++-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Parameter | Description | ++===================================+=======================================================================================================================================================================================================================================+ +| provisioner | Specifies the storage resource provider, which is the everest add-on for CCE. Set this parameter to **everest-csi-provisioner**. | ++-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| parameters | Specifies the storage parameters, which vary with storage types. | ++-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| reclaimPolicy | Specifies the value of **persistentVolumeReclaimPolicy** for creating a PV. The value can be **Delete** or **Retain**. If **reclaimPolicy** is not specified when a StorageClass object is created, the value defaults to **Delete**. | +| | | +| | - **Delete**: indicates that a dynamically created PV will be automatically destroyed. | +| | - **Retain**: indicates that a dynamically created PV will not be automatically destroyed. | ++-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| allowVolumeExpansion | Specifies whether the PV of this storage class supports dynamic capacity expansion. The default value is **false**. Dynamic capacity expansion is implemented by the underlying storage add-on. This is only a switch. | ++-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| volumeBindingMode | Specifies the volume binding mode, that is, the time when a PV is dynamically created. The value can be **Immediate** or **WaitForFirstConsumer**. | +| | | +| | - **Immediate**: PV binding and dynamic creation are completed when a PVC is created. 
| +| | - **WaitForFirstConsumer**: PV binding and creation are delayed. The PV creation and binding processes are executed only when the PVC is used in the workload. | ++-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| mountOptions | This field must be supported by the underlying storage. If this field is not supported but is specified, the PV creation will fail. | ++-----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +.. _cce_10_0380__section18703932139: + +Application Scenarios of Custom Storage +--------------------------------------- + +When using storage resources in CCE, the most common method is to specify **storageClassName** to define the type of storage resources to be created when creating a PVC. The following configuration shows how to use a PVC to apply for a SAS (high I/O) EVS disk (block storage). .. code-block:: @@ -43,88 +101,73 @@ When using storage resources in CCE, the most common method is to specify **stor storage: 10Gi storageClassName: csi-disk -If you need to specify the EVS disk type, you can set the **everest.io/disk-volume-type** field. The value **SAS** is used as an example here, indicating the high I/O EVS disk type. Or you can choose **SATA** (common I/O) and **SSD** (ultra-high I/O). +To specify the EVS disk type on CCE, use the **everest.io/disk-volume-type** field. SAS indicates the EVS disk type. -This configuration method may not work if you want to: +The preceding is a basic method of using StorageClass. In real-world scenarios, you can use StorageClass to perform other operations. -- Set **storageClassName** only, which is simpler than specifying the EVS disk type by using **everest.io/disk-volume-type**. -- Avoid modifying YAML files or Helm charts. Some users switch from self-built or other Kubernetes services to CCE and have written YAML files of many applications. In these YAML files, different types of storage resources are specified by different StorageClassNames. When using CCE, they need to modify a large number of YAML files or Helm charts to use storage resources, which is labor-consuming and error-prone. -- Set the default **storageClassName** for all applications to use the default storage class. In this way, you can create storage resources of the default type without needing to specify **storageClassName** in the YAML file. 
++-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------+ +| Application Scenario | Solution | Procedure | ++===========================================================================================================================================================================================================================================================================================================================================================+=============================================================================================================================================================================================================================================================================================+===========================================================================+ +| When **annotations** is used to specify storage configuration, the configuration is complex. For example, the **everest.io/disk-volume-type** field is used to specify the EVS disk type. | Define PVC annotations in the **parameters** field of StorageClass. When compiling a YAML file, you only need to specify **storageClassName**. | :ref:`Custom Storage Class ` | +| | | | +| | For example, you can define SAS EVS disk and SSD EVS disk as a storage class, respectively. If a storage class named **csi-disk-sas** is defined, it is used to create SAS storage. | | ++-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------+ +| When a user migrates services from a self-built Kubernetes cluster or other Kubernetes services to CCE, the storage class used in the original application YAML file is different from that used in CCE. As a result, a large number of YAML files or Helm chart packages need to be modified when the storage is used, which is complex and error-prone. | Create a storage class with the same name as that in the original application YAML file in the CCE centralization. After the migration, you do not need to modify the **storageClassName** in the application YAML file. | | +| | | | +| | For example, the EVS disk storage class used before the migration is **disk-standard**. 
After migrating services to a CCE cluster, you can copy the YAML file of the **csi-disk** storage class in the CCE cluster, change its name to **disk-standard**, and create another storage class. | | ++-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------+ +| **storageClassName** must be specified in the YAML file to use the storage. If not, the storage cannot be created. | If you set the default StorageClass in the cluster, you can create storage without specifying the **storageClassName** in the YAML file. | :ref:`Specifying a Default StorageClass ` | ++-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------+ -Solution --------- - -This section describes how to set a custom storage class in CCE and how to set the default storage class. You can specify different types of storage resources by setting **storageClassName**. - -- For the first scenario, you can define custom storageClassNames for SAS and SSD EVS disks. For example, define a storage class named **csi-disk-sas** for creating SAS disks. The following figure shows the differences before and after you use a custom storage class. - - |image1| - -- For the second scenario, you can define a storage class with the same name as that in the existing YAML file without needing to modify **storageClassName** in the YAML file. - -- For the third scenario, you can set the default storage class as described below to create storage resources without specifying **storageClassName** in YAML files. - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: pvc-evs-example - namespace: default - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi +.. _cce_10_0380__section92221021258: Custom Storage Class -------------------- -You can customize a high I/O storage class in a YAML file. For example, the name **csi-disk-sas** indicates that the disk type is SAS (high I/O). +This section uses the custom storage class of EVS disks as an example to describe how to define SAS EVS disk and SSD EVS disk as a storage class, respectively. For example, if you define a storage class named **csi-disk-sas**, which is used to create SAS storage, the differences are shown in the following figure. 
When compiling a YAML file, you only need to specify **storageClassName**. -.. code-block:: +|image1| - apiVersion: storage.k8s.io/v1 - kind: StorageClass - metadata: - name: csi-disk-sas # Name of the high I/O storage class, which can be customized. - parameters: - csi.storage.k8s.io/csi-driver-name: disk.csi.everest.io - csi.storage.k8s.io/fstype: ext4 - everest.io/disk-volume-type: SAS # High I/O EVS disk type, which cannot be customized. - everest.io/passthrough: "true" - provisioner: everest-csi-provisioner - reclaimPolicy: Delete - volumeBindingMode: Immediate - allowVolumeExpansion: true # true indicates that capacity expansion is allowed. +- You can customize a high I/O storage class in a YAML file. For example, the name **csi-disk-sas** indicates that the disk type is SAS (high I/O). -For an ultra-high I/O storage class, you can set the class name to **csi-disk-ssd** to create SSD EVS disk (ultra-high I/O). + .. code-block:: -.. code-block:: + apiVersion: storage.k8s.io/v1 + kind: StorageClass + metadata: + name: csi-disk-sas # Name of the high I/O storage class, which can be customized. + parameters: + csi.storage.k8s.io/csi-driver-name: disk.csi.everest.io + csi.storage.k8s.io/fstype: ext4 + everest.io/disk-volume-type: SAS # High I/O EVS disk type, which cannot be customized. + everest.io/passthrough: "true" + provisioner: everest-csi-provisioner + reclaimPolicy: Delete + volumeBindingMode: Immediate + allowVolumeExpansion: true # true indicates that capacity expansion is allowed. - apiVersion: storage.k8s.io/v1 - kind: StorageClass - metadata: - name: csi-disk-ssd # Name of the ultra-high I/O storage class, which can be customized. - parameters: - csi.storage.k8s.io/csi-driver-name: disk.csi.everest.io - csi.storage.k8s.io/fstype: ext4 - everest.io/disk-volume-type: SSD # Ultra-high I/O EVS disk type, which cannot be customized. - everest.io/passthrough: "true" - provisioner: everest-csi-provisioner - reclaimPolicy: Delete - volumeBindingMode: Immediate - allowVolumeExpansion: true +- For an ultra-high I/O storage class, you can set the class name to **csi-disk-ssd** to create SSD EVS disk (ultra-high I/O). + + .. code-block:: + + apiVersion: storage.k8s.io/v1 + kind: StorageClass + metadata: + name: csi-disk-ssd # Name of the ultra-high I/O storage class, which can be customized. + parameters: + csi.storage.k8s.io/csi-driver-name: disk.csi.everest.io + csi.storage.k8s.io/fstype: ext4 + everest.io/disk-volume-type: SSD # Ultra-high I/O EVS disk type, which cannot be customized. + everest.io/passthrough: "true" + provisioner: everest-csi-provisioner + reclaimPolicy: Delete + volumeBindingMode: Immediate + allowVolumeExpansion: true **reclaimPolicy**: indicates the reclaim policies of the underlying cloud storage. The value can be **Delete** or **Retain**. - **Delete**: When a PVC is deleted, both the PV and the EVS disk are deleted. -- **Retain**: When a PVC is deleted, the PV and underlying storage resources are not deleted. Instead, you must manually delete these resources. After a PVC is deleted, the PV resource is in the Released state and cannot be bound to the PVC again. - -.. note:: - - The reclamation policy set here has no impact on the SFS Turbo storage. +- **Retain**: When a PVC is deleted, the PV and underlying storage resources are not deleted. Instead, you must manually delete these resources. After that, the PV is in the **Released** status and cannot be bound to the PVC again. 
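For illustration, the following sketch shows the same high I/O storage class with the **Retain** policy. Only the class name (an example) and **reclaimPolicy** differ from the **csi-disk-sas** example above; all other parameters are unchanged.

.. code-block::

   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: csi-disk-sas-retain        # Example name for a class that keeps the underlying disk
   parameters:
     csi.storage.k8s.io/csi-driver-name: disk.csi.everest.io
     csi.storage.k8s.io/fstype: ext4
     everest.io/disk-volume-type: SAS
     everest.io/passthrough: "true"
   provisioner: everest-csi-provisioner
   reclaimPolicy: Retain              # The PV and EVS disk are kept after the PVC is deleted and must be deleted manually.
   volumeBindingMode: Immediate
   allowVolumeExpansion: true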
If high data security is required, you are advised to select **Retain** to prevent data from being deleted by mistake. @@ -137,7 +180,7 @@ After the definition is complete, run the **kubectl create** commands to create # kubectl create -f ssd.yaml storageclass.storage.k8s.io/csi-disk-ssd created -Query the storage class again. Two more types of storage classes are displayed in the command output, as shown below. +Query **StorageClass** again. The command output is as follows: .. code-block:: @@ -151,48 +194,10 @@ Query the storage class again. Two more types of storage classes are displayed i csi-obs everest-csi-provisioner 17d csi-sfsturbo everest-csi-provisioner 17d -Other types of storage resources can be defined in the similar way. You can use kubectl to obtain the YAML file and modify it as required. +.. _cce_10_0380__section9720192252: -- File Storage - - .. code-block:: - - # kubectl get sc csi-nas -oyaml - kind: StorageClass - apiVersion: storage.k8s.io/v1 - metadata: - name: csi-nas - provisioner: everest-csi-provisioner - parameters: - csi.storage.k8s.io/csi-driver-name: nas.csi.everest.io - csi.storage.k8s.io/fstype: nfs - everest.io/share-access-level: rw - everest.io/share-access-to: 5e3864c6-e78d-4d00-b6fd-de09d432c632 # ID of the VPC to which the cluster belongs - everest.io/share-is-public: 'false' - everest.io/zone: xxxxx # AZ - reclaimPolicy: Delete - allowVolumeExpansion: true - volumeBindingMode: Immediate - -- Object storage - - .. code-block:: - - # kubectl get sc csi-obs -oyaml - kind: StorageClass - apiVersion: storage.k8s.io/v1 - metadata: - name: csi-obs - provisioner: everest-csi-provisioner - parameters: - csi.storage.k8s.io/csi-driver-name: obs.csi.everest.io - csi.storage.k8s.io/fstype: s3fs # Object storage type. s3fs indicates an object bucket, and obsfs indicates a parallel file system. - everest.io/obs-volume-type: STANDARD # Storage class of the OBS bucket - reclaimPolicy: Delete - volumeBindingMode: Immediate - -Setting a Default Storage Class -------------------------------- +Specifying a Default StorageClass +--------------------------------- You can specify a storage class as the default class. In this way, if you do not specify **storageClassName** when creating a PVC, the PVC is created using the default storage class. @@ -300,4 +305,4 @@ Verification View the PVC details on the CCE console. On the PV details page, you can see that the disk type is ultra-high I/O. -.. |image1| image:: /_static/images/en-us_image_0000001517903252.png +.. |image1| image:: /_static/images/en-us_image_0000001695737417.png diff --git a/umn/source/storage/using_local_disks_as_storage_volumes.rst b/umn/source/storage/using_local_disks_as_storage_volumes.rst deleted file mode 100644 index b3e1059..0000000 --- a/umn/source/storage/using_local_disks_as_storage_volumes.rst +++ /dev/null @@ -1,349 +0,0 @@ -:original_name: cce_10_0377.html - -.. _cce_10_0377: - -Using Local Disks as Storage Volumes -==================================== - -You can mount a file directory of the host where a container is located to a specified container path (the hostPath mode in Kubernetes) for persistent data storage. Alternatively, you can leave the source path empty (the emptyDir mode in Kubernetes), and a temporary directory of the host will be mounted to the mount point of the container for temporary storage. - -Using Local Volumes -------------------- - -CCE supports four types of local volumes. 
- -- :ref:`hostPath `: mounts a file directory of the host where the container is located to the specified mount point of the container. For example, if the container needs to access **/etc/hosts**, you can use a hostPath volume to map **/etc/hosts**. -- :ref:`emptyDir `: stores data temporarily. An emptyDir volume is first created when a pod is assigned to a node, and exists as long as that pod is running on that node. When a container pod is terminated, **EmptyDir** will be deleted and the data is permanently lost. -- :ref:`ConfigMap `: A ConfigMap can be mounted as a volume, and all contents stored in its key are mounted onto the specified container directory. A ConfigMap is a type of resource that stores configuration information required by a workload. Its content is user-defined. For details about how to create a ConfigMap, see :ref:`Creating a ConfigMap `. For details about how to use a ConfigMap, see :ref:`Using a ConfigMap `. -- :ref:`Secret mounting `: Data in the secret is mounted to a path of the container. A secret is a type of resource that holds sensitive data, such as authentication and key information. All content is user-defined. For details about how to create a secret, see :ref:`Creating a Secret `. For details about how to use a secret, see :ref:`Using a Secret `. - -The following describes how to mount these four types of volumes. - -.. _cce_10_0377__section196700523438: - -hostPath --------- - -You can mount a path on the host to a specified container path. A hostPath volume is usually used to **store workload logs permanently** or used by workloads that need to **access internal data structure of the Docker engine on the host**. - -#. Log in to the CCE console. - -#. When creating a workload, click **Data Storage** in the **Container Settings**. Click **Add Volume** and choose **hostPath** from the drop-down list. - -#. Set parameters for adding a local volume, as listed in :ref:`Table 1 `. - - .. _cce_10_0377__table14312815449: - - .. table:: **Table 1** Setting parameters for mounting a hostPath volume - - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ - | Storage Type | Select **hostPath**. 
| - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Host Path | Path of the host to which the local volume is to be mounted, for example, **/etc/hosts**. | - | | | - | | .. note:: | - | | | - | | **Host Path** cannot be set to the root directory **/**. Otherwise, the mounting fails. Mount paths can be as follows: | - | | | - | | - /opt/xxxx (excluding **/opt/cloud**) | - | | - /mnt/xxxx (excluding **/mnt/paas**) | - | | - /tmp/xxx | - | | - /var/xxx (excluding key directories such as **/var/lib**, **/var/script**, and **/var/paas**) | - | | - /xxxx (It cannot conflict with the system directory, such as bin, lib, home, root, boot, dev, etc, lost+found, mnt, proc, sbin, srv, tmp, var, media, opt, selinux, sys, and usr.) | - | | | - | | Do not set this parameter to **/home/paas**, **/var/paas**, **/var/lib**, **/var/script**, **/mnt/paas**, or **/opt/cloud**. Otherwise, the system or node installation will fail. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Add Container Path | Configure the following parameters: | - | | | - | | a. **subPath**: Enter a subpath, for example, **tmp**. | - | | | - | | A subpath is used to mount a local disk so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. | - | | | - | | b. **Container Path**: Enter the path of the container, for example, **/tmp**. | - | | | - | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**; this action may cause container errors. You are advised to mount the container to an empty directory. If the directory is not empty, ensure that there are no files affecting container startup in the directory. Otherwise, such files will be replaced, resulting in failures to start the container and create the workload. | - | | | - | | .. important:: | - | | | - | | NOTICE: | - | | When the container is mounted to a high-risk directory, you are advised to use an account with minimum permissions to start the container; otherwise, high-risk files on the host machine may be damaged. | - | | | - | | c. Permission | - | | | - | | - **Read-only**: You can only read the data volumes mounted to the path. | - | | - **Read/Write**: You can modify the data volumes mounted to the path. Newly written data is not migrated if the container is migrated, which may cause a data loss. | - | | | - | | You can click |image1| to add multiple paths and subpaths. 
| - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -.. _cce_10_0377__section550555216467: - -emptyDir --------- - -emptyDir applies to temporary data storage, disaster recovery, and runtime data sharing. It will be deleted upon deletion or transfer of workload pods. - -#. Log in to the CCE console. - -#. When creating a workload, click **Data Storage** in the **Container Settings**. Click **Add Volume** and choose **emptyDir** from the drop-down list. - -#. Set the local volume type to **emptyDir** and set parameters for adding a local volume, as described in :ref:`Table 2 `. - - .. _cce_10_0377__table1867417102475: - - .. table:: **Table 2** Setting parameters for mounting an emptyDir volume - - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ - | Storage Type | Select **emptyDir**. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Storage Medium | - **Default**: Data is stored in hard disks, which is applicable to a large amount of data with low requirements on reading and writing efficiency. | - | | - **Memory**: Selecting this option can improve the running speed, but the storage capacity is subject to the memory size. This mode applies to scenarios where the data volume is small and the read and write efficiency is high. | - | | | - | | .. note:: | - | | | - | | - If you select **Memory**, any files you write will count against your container's memory limit. Pay attention to the memory quota. If the memory usage exceeds the threshold, OOM may occur. 
| - | | - If **Memory** is selected, the size of an emptyDir volume is 50% of the pod specifications and cannot be changed. | - | | - If **Memory** is not selected, emptyDir volumes will not occupy the system memory. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Add Container Path | Configure the following parameters: | - | | | - | | a. **subPath**: Enter a subpath, for example, **tmp**. | - | | | - | | A subpath is used to mount a local disk so that the same data volume is used in a single pod. If this parameter is left blank, the root path is used by default. | - | | | - | | b. **Container Path**: Enter the path of the container, for example, **/tmp**. | - | | | - | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**; this action may cause container errors. You are advised to mount the container to an empty directory. If the directory is not empty, ensure that there are no files affecting container startup in the directory. Otherwise, such files will be replaced, resulting in failures to start the container and create the workload. | - | | | - | | .. important:: | - | | | - | | NOTICE: | - | | When the container is mounted to a high-risk directory, you are advised to use an account with minimum permissions to start the container; otherwise, high-risk files on the host machine may be damaged. | - | | | - | | c. Permission | - | | | - | | - **Read-only**: You can only read the data volumes mounted to the path. | - | | - **Read/Write**: You can modify the data volumes mounted to the path. Newly written data is not migrated if the container is migrated, which may cause a data loss. | - | | | - | | You can click |image2| to add multiple paths and subpaths. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -.. _cce_10_0377__section18638191594712: - -ConfigMap ---------- - -The data stored in a ConfigMap can be referenced in a volume of type ConfigMap. You can mount such a volume to a specified container path. The platform supports the separation of workload codes and configuration files. ConfigMap volumes are used to store workload configuration parameters. Before that, you need to create ConfigMaps in advance. For details, see :ref:`Creating a ConfigMap `. - -#. Log in to the CCE console. - -#. When creating a workload, click **Data Storage** in the **Container Settings**. Click **Add Volume** and choose **ConfigMap** from the drop-down list. - -#. 
Set the local volume type to **ConfigMap** and set parameters for adding a local volume, as shown in :ref:`Table 3 `. - - .. _cce_10_0377__table1776324831114: - - .. table:: **Table 3** Setting parameters for mounting a ConfigMap volume - - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ - | Storage Type | Select **ConfigMap**. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Option | Select the desired ConfigMap name. | - | | | - | | A ConfigMap must be created in advance. For details, see :ref:`Creating a ConfigMap `. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Add Container Path | Configure the following parameters: | - | | | - | | a. **subPath**: Enter a subpath, for example, **tmp**. | - | | | - | | - A subpath is used to mount a local volume so that the same data volume is used in a single pod. | - | | - The subpath can be the key and value of a ConfigMap or secret. If the subpath is a key-value pair that does not exist, the data import does not take effect. | - | | - The data imported by specifying a subpath will not be updated along with the ConfigMap/secret updates. | - | | | - | | b. **Container Path**: Enter the path of the container, for example, **/tmp**. | - | | | - | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**; this action may cause container errors. You are advised to mount the container to an empty directory. 
If the directory is not empty, ensure that there are no files affecting container startup in the directory. Otherwise, such files will be replaced, resulting in failures to start the container and create the workload. | - | | | - | | .. important:: | - | | | - | | NOTICE: | - | | When the container is mounted to a high-risk directory, you are advised to use an account with minimum permissions to start the container; otherwise, high-risk files on the host machine may be damaged. | - | | | - | | c. Set the permission to **Read-only**. Data volumes in the path are read-only. | - | | | - | | You can click |image3| to add multiple paths and subpaths. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -.. _cce_10_0377__section10197243134710: - -Secret ------- - -You can mount a secret as a volume to the specified container path. Contents in a secret are user-defined. Before that, you need to create a secret. For details, see :ref:`Creating a Secret `. - -#. Log in to the CCE console. - -#. When creating a workload, click **Data Storage** in the **Container Settings**. Click **Add Volume** and choose **Secret** from the drop-down list. - -#. Set the local volume type to **Secret** and set parameters for adding a local volume, as shown in :ref:`Table 4 `. - - .. _cce_10_0377__table861818920109: - - .. table:: **Table 4** Setting parameters for mounting a secret volume - - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ - | Storage Type | Select **Secret**. 
| - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Secret | Select the desired secret name. | - | | | - | | A secret must be created in advance. For details, see :ref:`Creating a Secret `. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Add Container Path | Configure the following parameters: | - | | | - | | a. **subPath**: Enter a subpath, for example, **tmp**. | - | | | - | | - A subpath is used to mount a local volume so that the same data volume is used in a single pod. | - | | - The subpath can be the key and value of a ConfigMap or secret. If the subpath is a key-value pair that does not exist, the data import does not take effect. | - | | - The data imported by specifying a subpath will not be updated along with the ConfigMap/secret updates. | - | | | - | | b. **Container Path**: Enter the path of the container, for example, **/tmp**. | - | | | - | | This parameter indicates the container path to which a data volume will be mounted. Do not mount the volume to a system directory such as **/** or **/var/run**; this action may cause container errors. You are advised to mount the container to an empty directory. If the directory is not empty, ensure that there are no files affecting container startup in the directory. Otherwise, such files will be replaced, resulting in failures to start the container and create the workload. | - | | | - | | .. important:: | - | | | - | | NOTICE: | - | | When the container is mounted to a high-risk directory, you are advised to use an account with minimum permissions to start the container; otherwise, high-risk files on the host machine may be damaged. | - | | | - | | c. Set the permission to **Read-only**. Data volumes in the path are read-only. | - | | | - | | You can click |image4| to add multiple paths and subpaths. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -Mounting a hostPath Volume Using kubectl ----------------------------------------- - -You can use kubectl to mount a file directory of the host where the container is located to a specified mount path of the container. - -#. Use kubectl to connect to the cluster. 
For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Run the following commands to configure the **hostPath-pod-example.yaml** file, which is used to create a pod. - - **touch hostPath-pod-example.yaml** - - **vi hostPath-pod-example.yaml** - - Mount the hostPath volume for the Deployment. The following is an example: - - .. code-block:: - - apiVersion: apps/v1 - kind: Deployment - metadata: - name: hostpath-pod-example - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: hostpath-pod-example - template: - metadata: - labels: - app: hostpath-pod-example - spec: - containers: - - image: nginx - name: container-0 - volumeMounts: - - mountPath: /tmp - name: hostpath-example - imagePullSecrets: - - name: default-secret - restartPolicy: Always - volumes: - - name: hostpath-example - hostPath: - path: /tmp/test - - .. table:: **Table 5** Local disk storage dependency parameters - - +-----------+------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===========+================================================================================================+ - | mountPath | Mount path of the container. In this example, the volume is mounted to the **/tmp** directory. | - +-----------+------------------------------------------------------------------------------------------------+ - | hostPath | Host path. In this example, the host path is **/tmp/test**. | - +-----------+------------------------------------------------------------------------------------------------+ - - .. note:: - - **spec.template.spec.containers.volumeMounts.name** and **spec.template.spec.volumes.name** must be consistent because they have a mapping relationship. - -#. Run the following command to create the pod: - - **kubectl create -f hostPath-pod-example.yaml** - -#. Verify the mounting. - - a. Query the pod name of the workload (**hostpath-pod-example** is used as an example). - - .. code-block:: - - kubectl get po|grep hostpath-pod-example - - Expected outputs: - - .. code-block:: - - hostpath-pod-example-55c8d4dc59-md5d9 1/1 Running 0 35s - - b. Create the **test1** file in the container mount path **/tmp**. - - .. code-block:: - - kubectl exec hostpath-pod-example-55c8d4dc59-md5d9 -- touch /tmp/test1 - - c. Verify that the file is created in the host path **/tmp/test/**. - - .. code-block:: - - ll /tmp/test/ - - Expected outputs: - - .. code-block:: - - -rw-r--r-- 1 root root 0 Jun 1 16:12 test1 - - d. Create the **test2** file in the host path **/tmp/test/**. - - .. code-block:: - - touch /tmp/test/test2 - - e. Verify that the file is created in the container mount path. - - .. code-block:: - - kubectl exec hostpath-pod-example-55c8d4dc59-md5d9 -- ls -l /tmp - - Expected outputs: - - .. code-block:: - - -rw-r--r-- 1 root root 0 Jun 1 08:12 test1 - -rw-r--r-- 1 root root 0 Jun 1 08:14 test2 - -.. |image1| image:: /_static/images/en-us_image_0000001568902637.png -.. |image2| image:: /_static/images/en-us_image_0000001517903168.png -.. |image3| image:: /_static/images/en-us_image_0000001517743600.png -.. 
|image4| image:: /_static/images/en-us_image_0000001569023013.png diff --git a/umn/source/storage_management_flexvolume_deprecated/flexvolume_overview.rst b/umn/source/storage_management_flexvolume_deprecated/flexvolume_overview.rst deleted file mode 100644 index c8a8dc2..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/flexvolume_overview.rst +++ /dev/null @@ -1,58 +0,0 @@ -:original_name: cce_10_0306.html - -.. _cce_10_0306: - -FlexVolume Overview -=================== - -In container storage, you can use different types of volumes and mount them to containers in pods as many as you want. - -In CCE, container storage is backed both by Kubernetes-native objects, such as emptyDir, hostPath, secret, and ConfigMap, and by cloud storage services. - -CCE clusters of **1.13 and earlier versions** use the :ref:`storage-driver ` add-on to connect to cloud storage services to support Kubernetes FlexVolume driver for container storage. The FlexVolume driver has been deprecated in favor of the Container Storage Interface (CSI). **The everest add-on for CSI is installed in CCE clusters of 1.15 and later versions by default.** For details, see :ref:`Overview `. - -.. note:: - - - In CCE clusters earlier than Kubernetes 1.13, end-to-end capacity expansion of container storage is not supported, and the PVC capacity is inconsistent with the storage capacity. - - **In a cluster of v1.13 or earlier**, when an upgrade or bug fix is available for storage functionalities, you only need to install or upgrade the storage-driver add-on. Upgrading the cluster or creating a cluster is not required. - -Notes and Constraints ---------------------- - -- For clusters created in CCE, Kubernetes v1.15.11 is a transitional version in which the FlexVolume plug-in (:ref:`storage-driver `) is compatible with the CSI plug-in (:ref:`everest `). Clusters of v1.17 and later versions do not support FlexVolume anymore. You need to use the everest add-on. -- The FlexVolume plug-in will be maintained by Kubernetes developers, but new functionality will only be added to CSI. You are advised not to create storage that connects to the FlexVolume plug-in (storage-driver) in CCE anymore. Otherwise, the storage resources may not function normally. - -Checking Storage Add-ons ------------------------- - -#. Log in to the CCE console. -#. In the navigation tree on the left, click **Add-ons**. -#. Click the **Add-on Instance** tab. -#. Select a cluster in the upper right corner. The default storage add-on installed during cluster creation is displayed. - -Differences Between CSI and FlexVolume Plug-ins ------------------------------------------------ - -.. 
table:: **Table 1** CSI and FlexVolume - - +---------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Kubernetes Solution | CCE Add-on | Feature | Recommendation | - +=====================+=================+=========================================================================================================================================================================================================================================================================================================================================================================================================================================+================================================================================================================================================================================================================================================================================+ - | CSI | everest | CSI was developed as a standard for exposing arbitrary block and file storage systems to containerized workloads. Using CSI, third-party storage providers can deploy plug-ins exposing new storage systems in Kubernetes without having to touch the core Kubernetes code. In CCE, the everest add-on is installed by default in clusters of Kubernetes v1.15 and later to connect to storage services (EVS, OBS, SFS, and SFS Turbo). | The :ref:`everest ` add-on is installed by default in clusters of **v1.15 and later**. CCE will mirror the Kubernetes community by providing continuous support for updated CSI capabilities. | - | | | | | - | | | The everest add-on consists of two parts: | | - | | | | | - | | | - **everest-csi-controller** for storage volume creation, deletion, capacity expansion, and cloud disk snapshots | | - | | | - **everest-csi-driver** for mounting, unmounting, and formatting storage volumes on nodes | | - | | | | | - | | | For details, see :ref:`everest `. | | - +---------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | FlexVolume | storage-driver | FlexVolume is an out-of-tree plug-in interface that has existed in Kubernetes since the early stage. 
CCE provided FlexVolume volumes through the storage-driver add-on installed in clusters of Kubernetes v1.13 and earlier versions. This add-on connects clusters to storage services (EVS, OBS, SFS, and SFS Turbo). | For the created clusters of **v1.13 or earlier**, the installed FlexVolume plug-in (CCE add-on :ref:`storage-driver `) can still be used. CCE stops providing update support for this add-on, and you are advised to :ref:`upgrade these clusters `. | - | | | | | - | | | For details, see :ref:`storage-driver `. | | - +---------------------+-----------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -.. note:: - - - A cluster can only use either CSI or FlexVolume. - - The FlexVolume plug-in cannot be replaced by the CSI plug-in in clusters of v1.13 or earlier. You can only upgrade clusters of v1.13. For details, see :ref:`Cluster Upgrade `. diff --git a/umn/source/storage_management_flexvolume_deprecated/how_do_i_change_the_storage_class_used_by_a_cluster_of_v1.15_from_flexvolume_to_csi_everest.rst b/umn/source/storage_management_flexvolume_deprecated/how_do_i_change_the_storage_class_used_by_a_cluster_of_v1.15_from_flexvolume_to_csi_everest.rst deleted file mode 100644 index fff3b1a..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/how_do_i_change_the_storage_class_used_by_a_cluster_of_v1.15_from_flexvolume_to_csi_everest.rst +++ /dev/null @@ -1,595 +0,0 @@ -:original_name: cce_10_0343.html - -.. _cce_10_0343: - -How Do I Change the Storage Class Used by a Cluster of v1.15 from FlexVolume to CSI Everest? -============================================================================================ - -In clusters later than v1.15.11-r1, CSI (the everest add-on) has taken over all functions of fuxi FlexVolume (the storage-driver add-on) for managing container storage. You are advised to use CSI Everest. - -To migrate your storage volumes, create a static PV to associate with the original underlying storage, and then create a PVC to associate with this static PV. When you upgrade your application, mount the new PVC to the original mounting path to migrate the storage volumes. - -.. warning:: - - Services will be interrupted during the migration. Therefore, properly plan the migration and back up data. - -Procedure ---------- - -#. (Optional) Back up data to prevent data loss in case of exceptions. - -#. .. _cce_10_0343__en-us_topic_0285037038_li1219802032512: - - Configure a YAML file of the PV in the CSI format according to the PV in the FlexVolume format and associate the PV with the existing storage. - - To be specific, run the following commands to configure the pv-example.yaml file, which is used to create a PV. - - **touch pv-example.yaml** - - **vi pv-example.yaml** - - Configuration example of **a PV for an EVS volume**: - - .. 
code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - labels: - failure-domain.beta.kubernetes.io/region: eu-de - failure-domain.beta.kubernetes.io/zone: - annotations: - pv.kubernetes.io/provisioned-by: everest-csi-provisioner - name: pv-evs-example - spec: - accessModes: - - ReadWriteOnce - capacity: - storage: 10Gi - csi: - driver: disk.csi.everest.io - fsType: ext4 - volumeAttributes: - everest.io/disk-mode: SCSI - everest.io/disk-volume-type: SAS - storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner - volumeHandle: 0992dbda-6340-470e-a74e-4f0db288ed82 - persistentVolumeReclaimPolicy: Delete - storageClassName: csi-disk - - Pay attention to the fields in bold and red. The parameters are described as follows: - - .. table:: **Table 1** EVS volume configuration parameters - - +------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +==========================================+====================================================================================================================================================+ - | failure-domain.beta.kubernetes.io/region | Region where the EVS disk is located. Use the same value as that of the FlexVolume PV. | - +------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+ - | failure-domain.beta.kubernetes.io/zone | AZ where the EVS disk is located. Use the same value as that of the FlexVolume PV. | - +------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+ - | name | Name of the PV, which must be unique in the cluster. | - +------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | EVS volume capacity in the unit of Gi. Use the value of **spec.capacity.storage** of the FlexVolume PV. | - +------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+ - | driver | Storage driver used to attach the volume. Set the driver to **disk.csi.everest.io** for the EVS volume. | - +------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeHandle | Volume ID of the EVS disk. Use the value of **spec.flexVolume.options.volumeID** of the FlexVolume PV. | - +------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+ - | everest.io/disk-mode | EVS disk mode. Use the value of **spec.flexVolume.options.disk-mode** of the FlexVolume PV. | - +------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+ - | everest.io/disk-volume-type | EVS disk type. 
Use the value of **kubernetes.io/volumetype** in the storage class corresponding to **spec.storageClassName** of the FlexVolume PV. | - +------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+ - | storageClassName | Name of the Kubernetes storage class associated with the storage volume. Set this field to **csi-disk** for EVS disks. | - +------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+ - - Configuration example of **a PV for an SFS volume**: - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - name: pv-sfs-example - annotations: - pv.kubernetes.io/provisioned-by: everest-csi-provisioner - spec: - accessModes: - - ReadWriteMany - capacity: - storage: 10Gi - csi: - driver: nas.csi.everest.io - fsType: nfs - volumeAttributes: - everest.io/share-export-location: # Path to shared file storage - storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner - volumeHandle: 682f00bb-ace0-41d8-9b3e-913c9aa6b695 - persistentVolumeReclaimPolicy: Delete - storageClassName: csi-nas - - Pay attention to the fields in bold and red. The parameters are described as follows: - - .. table:: **Table 2** SFS volume configuration parameters - - +----------------------------------+--------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +==================================+====================================================================================================================+ - | name | Name of the PV, which must be unique in the cluster. | - +----------------------------------+--------------------------------------------------------------------------------------------------------------------+ - | storage | File storage size in the unit of Gi. Use the value of **spec.capacity.storage** of the FlexVolume PV. | - +----------------------------------+--------------------------------------------------------------------------------------------------------------------+ - | driver | Storage driver used to attach the volume. Set the driver to **nas.csi.everest.io** for the file system. | - +----------------------------------+--------------------------------------------------------------------------------------------------------------------+ - | everest.io/share-export-location | Shared path of the file system. Use the value of **spec.flexVolume.options.deviceMountPath** of the FlexVolume PV. | - +----------------------------------+--------------------------------------------------------------------------------------------------------------------+ - | volumeHandle | File system ID. Use the value of **spec.flexVolume.options.volumeID** of the FlexVolume PV. | - +----------------------------------+--------------------------------------------------------------------------------------------------------------------+ - | storageClassName | Name of the Kubernetes storage class. Set this field to **csi-nas**. | - +----------------------------------+--------------------------------------------------------------------------------------------------------------------+ - - Configuration example of **a PV for an OBS volume**: - - .. 
code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - name: pv-obs-example - annotations: - pv.kubernetes.io/provisioned-by: everest-csi-provisioner - spec: - accessModes: - - ReadWriteMany - capacity: - storage: 1Gi - csi: - driver: obs.csi.everest.io - fsType: s3fs - volumeAttributes: - everest.io/obs-volume-type: STANDARD - everest.io/region: eu-de - storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner - volumeHandle: obs-normal-static-pv - persistentVolumeReclaimPolicy: Delete - storageClassName: csi-obs - - Pay attention to the fields in bold and red. The parameters are described as follows: - - .. table:: **Table 3** OBS volume configuration parameters - - +----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +============================+===========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+ - | name | Name of the PV, which must be unique in the cluster. | - +----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | Storage capacity, in the unit of Gi. Set this parameter to the fixed value **1Gi**. | - +----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | driver | Storage driver used to attach the volume. Set the driver to **obs.csi.everest.io** for the OBS volume. 
| - +----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | fsType | File type. Value options are **obsfs** or **s3fs**. If the value is **s3fs**, an OBS bucket is created and mounted using s3fs. If the value is **obsfs**, an OBS parallel file system is created and mounted using obsfs. Set this parameter according to the value of **spec.flexVolume.options.posix** of the FlexVolume PV. If the value of **spec.flexVolume.options.posix** is **true**, set this parameter to **obsfs**. If the value is **false**, set this parameter to **s3fs**. | - +----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | everest.io/obs-volume-type | Storage class, including **STANDARD** (standard bucket) and **WARM** (infrequent access bucket). Set this parameter according to the value of **spec.flexVolume.options.storage_class** of the FlexVolume PV. If the value of **spec.flexVolume.options.storage_class** is **standard**, set this parameter to **STANDARD**. If the value is **standard_ia**, set this parameter to **WARM**. | - +----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | everest.io/region | Region where the OBS bucket is located. Use the value of **spec.flexVolume.options.region** of the FlexVolume PV. | - +----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeHandle | OBS bucket name. Use the value of **spec.flexVolume.options.volumeID** of the FlexVolume PV. 
| - +----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storageClassName | Name of the Kubernetes storage class. Set this field to **csi-obs**. | - +----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - - Configuration example of **a PV for an SFS Turbo volume**: - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - name: pv-efs-example - annotations: - pv.kubernetes.io/provisioned-by: everest-csi-provisioner - spec: - accessModes: - - ReadWriteMany - capacity: - storage: 10Gi - csi: - driver: sfsturbo.csi.everest.io - fsType: nfs - volumeAttributes: - everest.io/share-export-location: 192.168.0.169:/ - storage.kubernetes.io/csiProvisionerIdentity: everest-csi-provisioner - volumeHandle: 8962a2a2-a583-4b7f-bb74-fe76712d8414 - persistentVolumeReclaimPolicy: Delete - storageClassName: csi-sfsturbo - - Pay attention to the fields in bold and red. The parameters are described as follows: - - .. table:: **Table 4** SFS Turbo volume configuration parameters - - +----------------------------------+-------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +==================================+=========================================================================================================================+ - | name | Name of the PV, which must be unique in the cluster. | - +----------------------------------+-------------------------------------------------------------------------------------------------------------------------+ - | storage | File system size. Use the value of **spec.capacity.storage** of the FlexVolume PV. | - +----------------------------------+-------------------------------------------------------------------------------------------------------------------------+ - | driver | Storage driver used to attach the volume. Set it to **sfsturbo.csi.everest.io**. | - +----------------------------------+-------------------------------------------------------------------------------------------------------------------------+ - | everest.io/share-export-location | Shared path of the SFS Turbo volume. Use the value of **spec.flexVolume.options.deviceMountPath** of the FlexVolume PV. | - +----------------------------------+-------------------------------------------------------------------------------------------------------------------------+ - | volumeHandle | SFS Turbo volume ID. Use the value of **spec.flexVolume.options.volumeID** of the FlexVolume PV. 
| - +----------------------------------+-------------------------------------------------------------------------------------------------------------------------+ - | storageClassName | Name of the Kubernetes storage class. Set this field to **csi-sfsturbo** for SFS Turbo volumes. | - +----------------------------------+-------------------------------------------------------------------------------------------------------------------------+ - -#. .. _cce_10_0343__en-us_topic_0285037038_li1710710385418: - - Configure a YAML file of the PVC in the CSI format according to the PVC in the FlexVolume format and associate the PVC with the PV created in :ref:`2 `. - - To be specific, run the following commands to configure the pvc-example.yaml file, which is used to create a PVC. - - **touch pvc-example.yaml** - - **vi pvc-example.yaml** - - Configuration example of **a PVC for an EVS volume**: - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - labels: - failure-domain.beta.kubernetes.io/region: eu-de - failure-domain.beta.kubernetes.io/zone: - annotations: - everest.io/disk-volume-type: SAS - volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner - name: pvc-evs-example - namespace: default - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi - volumeName: pv-evs-example - storageClassName: csi-disk - - Pay attention to the fields in bold and red. The parameters are described as follows: - - .. table:: **Table 5** PVC configuration parameters for an EVS volume - - +------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +==========================================+============================================================================================================================================================================================================================================+ - | failure-domain.beta.kubernetes.io/region | Region where the cluster is located. Use the same value as that of the FlexVolume PVC. | - +------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | failure-domain.beta.kubernetes.io/zone | AZ where the EVS disk is deployed. Use the same value as that of the FlexVolume PVC. | - +------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | everest.io/disk-volume-type | Storage class of the EVS disk. The value can be **SAS** or **SSD**. Set this parameter to the same value as that of the PV created in :ref:`2 `. | - +------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | name | PVC name, which must be unique in the namespace. 
The value must be unique in the namespace. (If the PVC is dynamically created by a stateful application, the value of this parameter must be the same as the name of the FlexVolume PVC.) | - +------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | namespace | Namespace to which the PVC belongs. Use the same value as that of the FlexVolume PVC. | - +------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | Requested capacity of the PVC, which must be the same as the storage size of the existing PV. | - +------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeName | Name of the PV. Set this parameter to the name of the static PV in :ref:`2 `. | - +------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storageClassName | Name of the Kubernetes storage class. Set this field to **csi-disk** for EVS disks. | - +------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - - Configuration example of **a PVC for an SFS volume**: - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - annotations: - volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner - name: pvc-sfs-example - namespace: default - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 10Gi - storageClassName: csi-nas - volumeName: pv-sfs-example - - Pay attention to the fields in bold and red. The parameters are described as follows: - - .. table:: **Table 6** PVC configuration parameters for an SFS volume - - +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +==================+============================================================================================================================================================================================================================================+ - | name | PVC name, which must be unique in the namespace. The value must be unique in the namespace. (If the PVC is dynamically created by a stateful application, the value of this parameter must be the same as the name of the FlexVolume PVC.) 
| - +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | namespace | Namespace to which the PVC belongs. Use the same value as that of the FlexVolume PVC. | - +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | Storage capacity, in the unit of Gi. The value must be the same as the storage size of the existing PV. | - +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storageClassName | Set this field to **csi-nas**. | - +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeName | Name of the PV. Set this parameter to the name of the static PV in :ref:`2 `. | - +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - - Configuration example of **a PVC for an OBS volume**: - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - annotations: - volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner - everest.io/obs-volume-type: STANDARD - csi.storage.k8s.io/fstype: s3fs - name: pvc-obs-example - namespace: default - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 1Gi - storageClassName: csi-obs - volumeName: pv-obs-example - - Pay attention to the fields in bold and red. The parameters are described as follows: - - .. table:: **Table 7** PVC configuration parameters for an OBS volume - - +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +============================+============================================================================================================================================================================================================================================+ - | everest.io/obs-volume-type | OBS volume type, which can be **STANDARD** (standard bucket) and **WARM** (infrequent access bucket). Set this parameter to the same value as that of the PV created in :ref:`2 `. | - +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | csi.storage.k8s.io/fstype | File type, which can be **obsfs** or **s3fs**. The value must be the same as that of **fsType** of the static OBS volume PV. 
| - +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | name | PVC name, which must be unique in the namespace. The value must be unique in the namespace. (If the PVC is dynamically created by a stateful application, the value of this parameter must be the same as the name of the FlexVolume PVC.) | - +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | namespace | Namespace to which the PVC belongs. Use the same value as that of the FlexVolume PVC. | - +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | Storage capacity, in the unit of Gi. Set this parameter to the fixed value **1Gi**. | - +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storageClassName | Name of the Kubernetes storage class. Set this field to **csi-obs**. | - +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeName | Name of the PV. Set this parameter to the name of the static PV created in :ref:`2 `. | - +----------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - - Configuration example of **a PVC for an SFS Turbo volume**: - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - annotations: - volume.beta.kubernetes.io/storage-provisioner: everest-csi-provisioner - name: pvc-efs-example - namespace: default - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 10Gi - storageClassName: csi-sfsturbo - volumeName: pv-efs-example - - Pay attention to the fields in bold and red. The parameters are described as follows: - - .. table:: **Table 8** PVC configuration parameters for an SFS Turbo volume - - +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +==================+============================================================================================================================================================================================================================================+ - | name | PVC name, which must be unique in the namespace. The value must be unique in the namespace. 
(If the PVC is dynamically created by a stateful application, the value of this parameter must be the same as the name of the FlexVolume PVC.) | - +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | namespace | Namespace to which the PVC belongs. Use the same value as that of the FlexVolume PVC. | - +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storageClassName | Name of the Kubernetes storage class. Set this field to **csi-sfsturbo**. | - +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | Storage capacity, in the unit of Gi. The value must be the same as the storage size of the existing PV. | - +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeName | Name of the PV. Set this parameter to the name of the static PV created in :ref:`2 `. | - +------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -#. .. _cce_10_0343__en-us_topic_0285037038_li487255772614: - - Upgrade the workload to use a new PVC. - - **For Deployments** - - a. Run the **kubectl create -f** commands to create a PV and PVC. - - **kubectl create -f pv-example.yaml** - - **kubectl create -f pvc-example.yaml** - - .. note:: - - Replace the example file name **pvc-example.yaml** in the preceding commands with the names of the YAML files configured in :ref:`2 ` and :ref:`3 `. - - b. Go to the CCE console. On the workload upgrade page, click **Upgrade** > **Advanced Settings** > **Data Storage** > **Cloud Storage**. - - c. Uninstall the old storage and add the PVC in the CSI format. Retain the original mounting path in the container. - - d. Click **Submit**. - - e. Wait until the pods are running. - - **For StatefulSets that use existing storage** - - a. Run the **kubectl create -f** commands to create a PV and PVC. - - **kubectl create -f pv-example.yaml** - - **kubectl create -f pvc-example.yaml** - - .. note:: - - Replace the example file name **pvc-example.yaml** in the preceding commands with the names of the YAML files configured in :ref:`2 ` and :ref:`3 `. - - b. Run the **kubectl edit** command to edit the StatefulSet and use the newly created PVC. - - **kubectl edit sts sts-example -n** xxx - - |image1| - - .. note:: - - Replace **sts-example** in the preceding command with the actual name of the StatefulSet to upgrade. **xxx** indicates the namespace to which the StatefulSet belongs. - - c. Wait until the pods are running. - - .. note:: - - The current console does not support the operation of adding new cloud storage for StatefulSets. 
Use the kubectl commands to replace the storage with the newly created PVC. - - **For StatefulSets that use dynamically allocated storage** - - a. Back up the PV and PVC in the flexVolume format used by the StatefulSet. - - **kubectl get pvc xxx -n {namespaces} -oyaml > pvc-backup.yaml** - - **kubectl get pv xxx -n {namespaces} -oyaml > pv-backup.yaml** - - b. Change the number of pods to **0**. - - c. On the storage page, disassociate the flexVolume PVC used by the StatefulSet. - - d. Run the **kubectl create -f** commands to create a PV and PVC. - - **kubectl create -f pv-example.yaml** - - **kubectl create -f pvc-example.yaml** - - .. note:: - - Replace the example file name **pvc-example.yaml** in the preceding commands with the names of the YAML files configured in :ref:`2 ` and :ref:`3 `. - - e. Change the number of pods back to the original value and wait until the pods are running. - - .. note:: - - The dynamic allocation of storage for StatefulSets is achieved by using **volumeClaimTemplates**. This field cannot be modified by Kubernetes. Therefore, data cannot be migrated by using a new PVC. - - The PVC naming rule of the **volumeClaimTemplates** is fixed. When a PVC that meets the naming rule exists, this PVC is used. - - Therefore, disassociate the original PVC first, and then create a PVC with the same name in the CSI format. - - 6. (Optional) Recreate the stateful application to ensure that a CSI PVC is used when the application is scaled out. Otherwise, FlexVolume PVCs are used in scaling out. - - - Run the following command to obtain the YAML file of the StatefulSet: - - **kubectl get sts xxx -n {namespaces} -oyaml > sts.yaml** - - - Run the following command to back up the YAML file of the StatefulSet: - - **cp sts.yaml sts-backup.yaml** - - - Modify the definition of **volumeClaimTemplates** in the YAML file of the StatefulSet. - - **vi sts.yaml** - - Configuration example of **volumeClaimTemplates for an EVS volume**: - - .. code-block:: - - volumeClaimTemplates: - - metadata: - name: pvc-161070049798261342 - namespace: default - creationTimestamp: null - annotations: - everest.io/disk-volume-type: SAS - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi - storageClassName: csi-disk - - The parameter value must be the same as the PVC of the EVS volume created in :ref:`3 `. - - Configuration example of **volumeClaimTemplates for an SFS volume**: - - .. code-block:: - - volumeClaimTemplates: - - metadata: - name: pvc-161063441560279697 - namespace: default - creationTimestamp: null - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 10Gi - storageClassName: csi-nas - - The parameter value must be the same as the PVC of the SFS volume created in :ref:`3 `. - - Configuration example of **volumeClaimTemplates for an OBS volume**: - - .. code-block:: - - volumeClaimTemplates: - - metadata: - name: pvc-161070100417416148 - namespace: default - creationTimestamp: null - annotations: - csi.storage.k8s.io/fstype: s3fs - everest.io/obs-volume-type: STANDARD - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 1Gi - storageClassName: csi-obs - - The parameter value must be the same as the PVC of the OBS volume created in :ref:`3 `. - - - Delete the StatefulSet. - - **kubectl delete sts xxx -n** {namespaces} - - - Create the StatefulSet. - - **kubectl create -f sts.yaml** - -#. Check service functions. - - a. Check whether the application is running properly. - b. 
Checking whether the data storage is normal. - - .. note:: - - If a rollback is required, perform :ref:`4 `. Select the PVC in FlexVolume format and upgrade the application. - -#. Uninstall the PVC in the FlexVolume format. - - If the application functions normally, unbind the PVC in the FlexVolume format on the storage management page. - - You can also run the kubectl command to delete the PVC and PV of the FlexVolume format. - - .. caution:: - - Before deleting a PV, change the persistentVolumeReclaimPolicy of the PV to **Retain**. Otherwise, the underlying storage will be reclaimed after the PV is deleted. - - If the cluster has been upgraded before the storage migration, PVs may fail to be deleted. You can remove the PV protection field **finalizers** to delete PVs. - - kubectl patch pv {pv_name} -p '{"metadata":{"finalizers":null}}' - -.. |image1| image:: /_static/images/en-us_image_0000001518062756.png diff --git a/umn/source/storage_management_flexvolume_deprecated/index.rst b/umn/source/storage_management_flexvolume_deprecated/index.rst deleted file mode 100644 index d9419d8..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/index.rst +++ /dev/null @@ -1,24 +0,0 @@ -:original_name: cce_10_0305.html - -.. _cce_10_0305: - -Storage Management: FlexVolume (Deprecated) -=========================================== - -- :ref:`FlexVolume Overview ` -- :ref:`How Do I Change the Storage Class Used by a Cluster of v1.15 from FlexVolume to CSI Everest? ` -- :ref:`Using EVS Disks as Storage Volumes ` -- :ref:`Using SFS Turbo File Systems as Storage Volumes ` -- :ref:`Using OBS Buckets as Storage Volumes ` -- :ref:`Using SFS File Systems as Storage Volumes ` - -.. toctree:: - :maxdepth: 1 - :hidden: - - flexvolume_overview - how_do_i_change_the_storage_class_used_by_a_cluster_of_v1.15_from_flexvolume_to_csi_everest - using_evs_disks_as_storage_volumes/index - using_sfs_turbo_file_systems_as_storage_volumes/index - using_obs_buckets_as_storage_volumes/index - using_sfs_file_systems_as_storage_volumes/index diff --git a/umn/source/storage_management_flexvolume_deprecated/using_evs_disks_as_storage_volumes/index.rst b/umn/source/storage_management_flexvolume_deprecated/using_evs_disks_as_storage_volumes/index.rst deleted file mode 100644 index e2eedae..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_evs_disks_as_storage_volumes/index.rst +++ /dev/null @@ -1,20 +0,0 @@ -:original_name: cce_10_0309.html - -.. _cce_10_0309: - -Using EVS Disks as Storage Volumes -================================== - -- :ref:`Overview ` -- :ref:`(kubectl) Automatically Creating an EVS Disk ` -- :ref:`(kubectl) Creating a PV from an Existing EVS Disk ` -- :ref:`(kubectl) Creating a Pod Mounted with an EVS Volume ` - -.. toctree:: - :maxdepth: 1 - :hidden: - - overview - kubectl_automatically_creating_an_evs_disk - kubectl_creating_a_pv_from_an_existing_evs_disk - kubectl_creating_a_pod_mounted_with_an_evs_volume diff --git a/umn/source/storage_management_flexvolume_deprecated/using_evs_disks_as_storage_volumes/kubectl_automatically_creating_an_evs_disk.rst b/umn/source/storage_management_flexvolume_deprecated/using_evs_disks_as_storage_volumes/kubectl_automatically_creating_an_evs_disk.rst deleted file mode 100644 index 950b26c..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_evs_disks_as_storage_volumes/kubectl_automatically_creating_an_evs_disk.rst +++ /dev/null @@ -1,67 +0,0 @@ -:original_name: cce_10_0312.html - -.. 
_cce_10_0312: - -(kubectl) Automatically Creating an EVS Disk -============================================ - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.13 or earlier. - -Procedure ---------- - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Run the following commands to configure the **pvc-evs-auto-example.yaml** file, which is used to create a PVC. - - **touch pvc-evs-auto-example.yaml** - - **vi pvc-evs-auto-example.yaml** - - **Example YAML file for clusters of v1.9, v1.11, and v1.13:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: pvc-evs-auto-example - namespace: default - annotations: - volume.beta.kubernetes.io/storage-class: sas - labels: - failure-domain.beta.kubernetes.io/region: eu-de - failure-domain.beta.kubernetes.io/zone: eu-de-01 - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi - - .. table:: **Table 1** Key parameters - - +------------------------------------------+------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +==========================================+============================================================================================================+ - | volume.beta.kubernetes.io/storage-class | EVS disk type. The value is in lowercase. | - +------------------------------------------+------------------------------------------------------------------------------------------------------------+ - | failure-domain.beta.kubernetes.io/region | Region where the cluster is located. | - +------------------------------------------+------------------------------------------------------------------------------------------------------------+ - | failure-domain.beta.kubernetes.io/zone | AZ where the EVS volume is created. It must be the same as the AZ planned for the workload. | - +------------------------------------------+------------------------------------------------------------------------------------------------------------+ - | storage | Storage capacity in the unit of Gi. | - +------------------------------------------+------------------------------------------------------------------------------------------------------------+ - | accessModes | Read/write mode of the volume. | - | | | - | | You can set this parameter to **ReadWriteMany** (shared volume) and **ReadWriteOnce** (non-shared volume). | - +------------------------------------------+------------------------------------------------------------------------------------------------------------+ - -#. Run the following command to create a PVC. - - **kubectl create -f pvc-evs-auto-example.yaml** - - After the command is executed, an EVS disk is created in the partition where the cluster is located. Choose **Storage** > **EVS** to view the EVS disk. Alternatively, you can view the EVS disk based on the volume name on the EVS console. 
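For reference, the PVC above relies on the deprecated FlexVolume storage class (**sas**). On clusters that use the CSI (Everest) add-on, the same dynamically provisioned EVS claim is written against the **csi-disk** storage class with the **everest.io/disk-volume-type** annotation, as in the migration examples earlier in this section. The sketch below is illustrative only; the PVC name, namespace, AZ, and capacity are placeholders carried over from the preceding example.

.. code-block::

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: pvc-evs-auto-example              # illustrative name
     namespace: default
     annotations:
       everest.io/disk-volume-type: SAS      # EVS disk type, in uppercase for CSI
     labels:
       failure-domain.beta.kubernetes.io/region: eu-de
       failure-domain.beta.kubernetes.io/zone: eu-de-01
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 10Gi
     storageClassName: csi-disk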
diff --git a/umn/source/storage_management_flexvolume_deprecated/using_evs_disks_as_storage_volumes/kubectl_creating_a_pod_mounted_with_an_evs_volume.rst b/umn/source/storage_management_flexvolume_deprecated/using_evs_disks_as_storage_volumes/kubectl_creating_a_pod_mounted_with_an_evs_volume.rst deleted file mode 100644 index a24f45b..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_evs_disks_as_storage_volumes/kubectl_creating_a_pod_mounted_with_an_evs_volume.rst +++ /dev/null @@ -1,150 +0,0 @@ -:original_name: cce_10_0314.html - -.. _cce_10_0314: - -(kubectl) Creating a Pod Mounted with an EVS Volume -=================================================== - -Scenario --------- - -After an EVS volume is created or imported to CCE, you can mount it to a workload. - -.. important:: - - EVS disks cannot be attached across AZs. Before mounting a volume, you can run the **kubectl get pvc** command to query the available PVCs in the AZ where the current cluster is located. - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.13 or earlier. - -Procedure ---------- - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Run the following commands to configure the **evs-deployment-example.yaml** file, which is used to create a Deployment. - - **touch evs-deployment-example.yaml** - - **vi evs-deployment-example.yaml** - - Example of mounting an EVS volume to a Deployment (PVC-based, shared volume): - - .. code-block:: - - apiVersion: apps/v1 - kind: Deployment - metadata: - name: evs-deployment-example - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: evs-deployment-example - template: - metadata: - labels: - app: evs-deployment-example - spec: - containers: - - image: nginx - name: container-0 - volumeMounts: - - mountPath: /tmp - name: pvc-evs-example - imagePullSecrets: - - name: default-secret - restartPolicy: Always - volumes: - - name: pvc-evs-example - persistentVolumeClaim: - claimName: pvc-evs-auto-example - - .. table:: **Table 1** Key parameters - - +--------------------------------------------------+-----------+------------------------------------------------------------------------------------------------+ - | Parent Parameter | Parameter | Description | - +==================================================+===========+================================================================================================+ - | spec.template.spec.containers.volumeMounts | name | Name of the volume mounted to the container. | - +--------------------------------------------------+-----------+------------------------------------------------------------------------------------------------+ - | spec.template.spec.containers.volumeMounts | mountPath | Mount path of the container. In this example, the volume is mounted to the **/tmp** directory. | - +--------------------------------------------------+-----------+------------------------------------------------------------------------------------------------+ - | spec.template.spec.volumes | name | Name of the volume. | - +--------------------------------------------------+-----------+------------------------------------------------------------------------------------------------+ - | spec.template.spec.volumes.persistentVolumeClaim | claimName | Name of an existing PVC. 
| - +--------------------------------------------------+-----------+------------------------------------------------------------------------------------------------+ - - .. note:: - - **spec.template.spec.containers.volumeMounts.name** and **spec.template.spec.volumes.name** must be consistent because they have a mapping relationship. - - Mounting an EVS volume to a StatefulSet (PVC template-based, non-shared volume): - - **Example YAML:** - - .. code-block:: - - apiVersion: apps/v1 - kind: StatefulSet - metadata: - name: deploy-evs-sas-in - spec: - replicas: 1 - selector: - matchLabels: - app: deploy-evs-sata-in - template: - metadata: - labels: - app: deploy-evs-sata-in - failure-domain.beta.kubernetes.io/region: eu-de - failure-domain.beta.kubernetes.io/zone: eu-de-01 - spec: - containers: - - name: container-0 - image: 'nginx:1.12-alpine-perl' - volumeMounts: - - name: bs-sas-mountoptionpvc - mountPath: /tmp - imagePullSecrets: - - name: default-secret - volumeClaimTemplates: - - metadata: - name: bs-sas-mountoptionpvc - annotations: - volume.beta.kubernetes.io/storage-class: sas - volume.beta.kubernetes.io/storage-provisioner: flexvolume-huawei.com/fuxivol - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi - serviceName: wwww - - .. table:: **Table 2** Key parameters - - +-------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | Parent Parameter | Parameter | Description | - +===========================================+=============+====================================================================================================================================+ - | metadata | name | Name of the created workload. | - +-------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec.template.spec.containers | image | Image of the workload. | - +-------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec.template.spec.containers.volumeMount | mountPath | Mount path of the container. In this example, the volume is mounted to the **/tmp** directory. | - +-------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec | serviceName | Service corresponding to the workload. For details about how to create a Service, see :ref:`Creating a StatefulSet `. | - +-------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - - .. note:: - - **spec.template.spec.containers.volumeMounts.name** and **spec.volumeClaimTemplates.metadata.name** must be consistent because they have a mapping relationship. - -#. Run the following command to create the pod: - - **kubectl create -f evs-deployment-example.yaml** - - After the creation is complete, log in to the CCE console. In the navigation pane, choose **Resource Management** > **Storage** > **EVS**. Then, click the PVC name. On the PVC details page, you can view the binding relationship between the EVS volume and the PVC. 
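If you only want to verify that an existing PVC attaches correctly before wiring it into a Deployment or StatefulSet, a bare Pod is the smallest object that exercises the same **volumes**/**volumeMounts** name mapping described above. The following sketch reuses the PVC and mount path from the Deployment example; the Pod name is illustrative. Because EVS disks cannot be attached across AZs, the Pod must be scheduled in the AZ where the disk resides.

.. code-block::

   apiVersion: v1
   kind: Pod
   metadata:
     name: evs-pod-example                    # illustrative Pod name
     namespace: default
   spec:
     containers:
       - name: container-0
         image: nginx
         volumeMounts:
           - name: pvc-evs-example            # must match spec.volumes.name below
             mountPath: /tmp                  # mount path inside the container
     imagePullSecrets:
       - name: default-secret
     volumes:
       - name: pvc-evs-example
         persistentVolumeClaim:
           claimName: pvc-evs-auto-example    # existing PVC from the previous example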
diff --git a/umn/source/storage_management_flexvolume_deprecated/using_evs_disks_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_evs_disk.rst b/umn/source/storage_management_flexvolume_deprecated/using_evs_disks_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_evs_disk.rst deleted file mode 100644 index 46639af..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_evs_disks_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_evs_disk.rst +++ /dev/null @@ -1,437 +0,0 @@ -:original_name: cce_10_0313.html - -.. _cce_10_0313: - -(kubectl) Creating a PV from an Existing EVS Disk -================================================= - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.13 or earlier. - -Procedure ---------- - -#. Log in to the EVS console, create an EVS disk, and record the volume ID, capacity, and disk type of the EVS disk. - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Create two YAML files for creating the PersistentVolume (PV) and PersistentVolumeClaim (PVC). Assume that the file names are **pv-evs-example.yaml** and **pvc-evs-example.yaml**. - - **touch pv-evs-example.yaml** **pvc-evs-example.yaml** - - +-------------------------------+--------------------------------+-----------------------------------------------------+ - | Kubernetes Cluster Version | Description | YAML Example | - +===============================+================================+=====================================================+ - | 1.11.7 <= K8s version <= 1.13 | Clusters from v1.11.7 to v1.13 | :ref:`Example YAML ` | - +-------------------------------+--------------------------------+-----------------------------------------------------+ - | 1.11 <= K8s version < 1.11.7 | Clusters from v1.11 to v1.11.7 | :ref:`Example YAML ` | - +-------------------------------+--------------------------------+-----------------------------------------------------+ - | K8s version = 1.9 | Clusters of v1.9 | :ref:`Example YAML ` | - +-------------------------------+--------------------------------+-----------------------------------------------------+ - - **Clusters from v1.11.7 to v1.13** - - - .. _cce_10_0313__li0648350102513: - - **Example YAML file for the PV:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - labels: - failure-domain.beta.kubernetes.io/region: eu-de - failure-domain.beta.kubernetes.io/zone: eu-de-01 - annotations: - pv.kubernetes.io/provisioned-by: flexvolume-huawei.com/fuxivol - name: pv-evs-example - spec: - accessModes: - - ReadWriteOnce - capacity: - storage: 10Gi - claimRef: - apiVersion: v1 - kind: PersistentVolumeClaim - name: pvc-evs-example - namespace: default - flexVolume: - driver: huawei.com/fuxivol - fsType: ext4 - options: - disk-mode: SCSI - fsType: ext4 - volumeID: 0992dbda-6340-470e-a74e-4f0db288ed82 - persistentVolumeReclaimPolicy: Delete - storageClassName: sas - - .. 
table:: **Table 1** Key parameters - - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +==========================================+===========================================================================================================================================================================================================================================================================================================================+ - | failure-domain.beta.kubernetes.io/region | Region where the cluster is located. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | failure-domain.beta.kubernetes.io/zone | AZ where the EVS volume is created. It must be the same as the AZ planned for the workload. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | EVS volume capacity in the unit of Gi. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storageClassName | EVS disk type. Supported values: Common I/O (SATA), High I/O (SAS), and Ultra-high I/O (SSD) | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | driver | Storage driver. | - | | | - | | For EVS disks, set this parameter to **huawei.com/fuxivol**. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeID | Volume ID of the EVS disk. | - | | | - | | To obtain the volume ID, log in to the CCE console, choose **Resource Management** > **Storage**, click the PVC name in the **EVS** tab page, and copy the PVC ID on the PVC details page. 
| - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | disk-mode | Device type of the EVS disk. The value is **VBD** or **SCSI**. | - | | | - | | For CCE clusters earlier than v1.11.7, you do not need to set this field. The value defaults to **VBD**. | - | | | - | | This field is mandatory for CCE clusters from v1.11.7 to v1.13 that use Linux x86. As the EVS volumes dynamically provisioned by a PVC are created from SCSI EVS disks, you are advised to choose **SCSI** when manually creating volumes (static PVs). Volumes in the VBD mode can still be used after cluster upgrades. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | spec.claimRef.apiVersion | The value is fixed at **v1**. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | spec.claimRef.kind | The value is fixed at **PersistentVolumeClaim**. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | spec.claimRef.name | PVC name. The value is the same as the name of the PVC created in the next step. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | spec.claimRef.namespace | Namespace of the PVC. The value is the same as the namespace of the PVC created in the next step. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - - - **Example YAML file for the PVC:** - - .. 
code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - annotations: - volume.beta.kubernetes.io/storage-class: sas - volume.beta.kubernetes.io/storage-provisioner: flexvolume-huawei.com/fuxivol - labels: - failure-domain.beta.kubernetes.io/region: eu-de - failure-domain.beta.kubernetes.io/zone: eu-de-01 - name: pvc-evs-example - namespace: default - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi - volumeName: pv-evs-example - - .. table:: **Table 2** Key parameters - - +-----------------------------------------------+---------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===============================================+=============================================================================================+ - | volume.beta.kubernetes.io/storage-class | Storage class, which must be the same as that of the existing PV. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------+ - | volume.beta.kubernetes.io/storage-provisioner | The field must be set to **flexvolume-huawei.com/fuxivol**. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------+ - | failure-domain.beta.kubernetes.io/region | Region where the cluster is located. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------+ - | failure-domain.beta.kubernetes.io/zone | AZ where the EVS volume is created. It must be the same as the AZ planned for the workload. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------+ - | storage | Requested capacity in the PVC, in Gi. | - | | | - | | The value must be the same as the storage size of the existing PV. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------+ - | volumeName | Name of the PV. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------+ - - **Clusters from v1.11 to v1.11.7** - - - .. _cce_10_0313__li19211184720504: - - **Example YAML file for the PV:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - labels: - failure-domain.beta.kubernetes.io/region: eu-de - failure-domain.beta.kubernetes.io/zone: - name: pv-evs-example - spec: - accessModes: - - ReadWriteOnce - capacity: - storage: 10Gi - flexVolume: - driver: huawei.com/fuxivol - fsType: ext4 - options: - fsType: ext4 - volumeID: 0992dbda-6340-470e-a74e-4f0db288ed82 - persistentVolumeReclaimPolicy: Delete - storageClassName: sas - - .. 
table:: **Table 3** Key parameters - - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +==========================================+===========================================================================================================================================================================================================================================================================================================================+ - | failure-domain.beta.kubernetes.io/region | Region where the cluster is located. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | failure-domain.beta.kubernetes.io/zone | AZ where the EVS volume is created. It must be the same as the AZ planned for the workload. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | EVS volume capacity in the unit of Gi. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storageClassName | EVS disk type. Supported values: Common I/O (SATA), High I/O (SAS), and Ultra-high I/O (SSD) | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | driver | Storage driver. | - | | | - | | For EVS disks, set this parameter to **huawei.com/fuxivol**. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeID | Volume ID of the EVS disk. | - | | | - | | To obtain the volume ID, log in to the CCE console, choose **Resource Management** > **Storage**, click the PVC name in the **EVS** tab page, and copy the PVC ID on the PVC details page. 
| - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | disk-mode | Device type of the EVS disk. The value is **VBD** or **SCSI**. | - | | | - | | For CCE clusters earlier than v1.11.7, you do not need to set this field. The value defaults to **VBD**. | - | | | - | | This field is mandatory for CCE clusters from v1.11.7 to v1.13 that use Linux x86. As the EVS volumes dynamically provisioned by a PVC are created from SCSI EVS disks, you are advised to choose **SCSI** when manually creating volumes (static PVs). Volumes in the VBD mode can still be used after cluster upgrades. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - - - **Example YAML file for the PVC:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - annotations: - volume.beta.kubernetes.io/storage-class: sas - volume.beta.kubernetes.io/storage-provisioner: flexvolume-huawei.com/fuxivol - labels: - failure-domain.beta.kubernetes.io/region: eu-de - failure-domain.beta.kubernetes.io/zone: eu-de-01 - name: pvc-evs-example - namespace: default - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi - volumeName: pv-evs-example - - .. table:: **Table 4** Key parameters - - +-----------------------------------------------+------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===============================================+============================================================================================================+ - | volume.beta.kubernetes.io/storage-class | Storage class. The value can be **sas** or **ssd**. The value must be the same as that of the existing PV. | - +-----------------------------------------------+------------------------------------------------------------------------------------------------------------+ - | volume.beta.kubernetes.io/storage-provisioner | The field must be set to **flexvolume-huawei.com/fuxivol**. | - +-----------------------------------------------+------------------------------------------------------------------------------------------------------------+ - | failure-domain.beta.kubernetes.io/region | Region where the cluster is located. | - +-----------------------------------------------+------------------------------------------------------------------------------------------------------------+ - | failure-domain.beta.kubernetes.io/zone | AZ where the EVS volume is created. It must be the same as the AZ planned for the workload. | - +-----------------------------------------------+------------------------------------------------------------------------------------------------------------+ - | storage | Requested capacity in the PVC, in Gi. | - | | | - | | The value must be the same as the storage size of the existing PV. 
| - +-----------------------------------------------+------------------------------------------------------------------------------------------------------------+ - | volumeName | Name of the PV. | - +-----------------------------------------------+------------------------------------------------------------------------------------------------------------+ - - **Clusters of v1.9** - - - .. _cce_10_0313__li813222310297: - - **Example YAML file for the PV:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - labels: - failure-domain.beta.kubernetes.io/region: eu-de - failure-domain.beta.kubernetes.io/zone: - name: pv-evs-example - namespace: default - spec: - accessModes: - - ReadWriteOnce - capacity: - storage: 10Gi - flexVolume: - driver: huawei.com/fuxivol - fsType: ext4 - options: - fsType: ext4 - kubernetes.io/namespace: default - volumeID: 0992dbda-6340-470e-a74e-4f0db288ed82 - persistentVolumeReclaimPolicy: Delete - storageClassName: sas - - .. table:: **Table 5** Key parameters - - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +==========================================+===========================================================================================================================================================================================================================================================================================================================+ - | failure-domain.beta.kubernetes.io/region | Region where the cluster is located. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | failure-domain.beta.kubernetes.io/zone | AZ where the EVS volume is created. It must be the same as the AZ planned for the workload. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | EVS volume capacity in the unit of Gi. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storageClassName | EVS disk type. 
Supported values: Common I/O (SATA), High I/O (SAS), and Ultra-high I/O (SSD) | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | driver | Storage driver. | - | | | - | | For EVS disks, set this parameter to **huawei.com/fuxivol**. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeID | Volume ID of the EVS disk. | - | | | - | | To obtain the volume ID, log in to the CCE console, choose **Resource Management** > **Storage**, click the PVC name in the **EVS** tab page, and copy the PVC ID on the PVC details page. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | disk-mode | Device type of the EVS disk. The value is **VBD** or **SCSI**. | - | | | - | | For CCE clusters earlier than v1.11.7, you do not need to set this field. The value defaults to **VBD**. | - | | | - | | This field is mandatory for CCE clusters from v1.11.7 to v1.13 that use Linux x86. As the EVS volumes dynamically provisioned by a PVC are created from SCSI EVS disks, you are advised to choose **SCSI** when manually creating volumes (static PVs). Volumes in the VBD mode can still be used after cluster upgrades. | - +------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - - - **Example YAML file for the PVC:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - annotations: - volume.beta.kubernetes.io/storage-class: sas - volume.beta.kubernetes.io/storage-provisioner: flexvolume-huawei.com/fuxivol - labels: - failure-domain.beta.kubernetes.io/region: eu-de - failure-domain.beta.kubernetes.io/zone: - name: pvc-evs-example - namespace: default - spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 10Gi - volumeName: pv-evs-example - volumeNamespace: default - - .. table:: **Table 6** Key parameters - - +-----------------------------------------------+---------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===============================================+=============================================================================================+ - | volume.beta.kubernetes.io/storage-class | Storage class, which must be the same as that of the existing PV. 
| - +-----------------------------------------------+---------------------------------------------------------------------------------------------+ - | volume.beta.kubernetes.io/storage-provisioner | The field must be set to **flexvolume-huawei.com/fuxivol**. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------+ - | failure-domain.beta.kubernetes.io/region | Region where the cluster is located. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------+ - | failure-domain.beta.kubernetes.io/zone | AZ where the EVS volume is created. It must be the same as the AZ planned for the workload. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------+ - | storage | Requested capacity in the PVC, in Gi. | - | | | - | | The value must be the same as the storage size of the existing PV. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------+ - | volumeName | Name of the PV. | - +-----------------------------------------------+---------------------------------------------------------------------------------------------+ - -#. Create the PV. - - **kubectl create -f pv-evs-example.yaml** - -#. Create the PVC. - - **kubectl create -f pvc-evs-example.yaml** - - After the operation is successful, choose **Resource Management** > **Storage** to view the created PVC. You can also view the EVS disk by name on the EVS console. - -#. (Optional) Add the metadata associated with the cluster to ensure that the EVS disk associated with the mounted static PV is not deleted when the node or cluster is deleted. - - .. caution:: - - If you skip this step in this example or when creating a static PV or PVC, ensure that the EVS disk associated with the static PV has been unbound from the node before you delete the node. - - a. .. _cce_10_0313__li6891526204113: - - Obtain the tenant token. For details, see `Obtaining a User Token `__. - - b. .. _cce_10_0313__li17017349418: - - Obtain the EVS access address **EVS_ENDPOINT**. For details, see `Regions and Endpoints `__. - - c. Add the metadata associated with the cluster to the EVS disk backing the static PV. - - .. code-block:: - - curl -X POST ${EVS_ENDPOINT}/v2/${project_id}/volumes/${volume_id}/metadata --insecure \ - -d '{"metadata":{"cluster_id": "${cluster_id}", "namespace": "${pvc_namespace}"}}' \ - -H 'Accept:application/json' -H 'Content-Type:application/json;charset=utf8' \ - -H 'X-Auth-Token:${TOKEN}' - - .. table:: **Table 7** Key parameters - - +---------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===============+========================================================================================================================================================================================================================================================+ - | EVS_ENDPOINT | EVS access address. Set this parameter to the value obtained in :ref:`6.b `. 
| - +---------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | project_id | Project ID. | - +---------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volume_id | ID of the associated EVS disk. Set this parameter to **volume_id** of the static PV to be created. You can also log in to the EVS console, click the name of the EVS disk to be imported, and obtain the ID from **Summary** on the disk details page. | - +---------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | cluster_id | ID of the cluster where the EVS PV is to be created. On the CCE console, choose **Resource Management** > **Clusters**. Click the name of the cluster to be associated. On the cluster details page, obtain the cluster ID. | - +---------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | pvc_namespace | Namespace where the PVC is to be bound. | - +---------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | TOKEN | User token. Set this parameter to the value obtained in :ref:`6.a `. | - +---------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - - For example, run the following commands: - - .. code-block:: - - curl -X POST https://evs.eu-de.otc.t-systems.com:443/v2/060576866680d5762f52c0150e726aa7/volumes/69c9619d-174c-4c41-837e-31b892604e14/metadata --insecure \ - -d '{"metadata":{"cluster_id": "71e8277e-80c7-11ea-925c-0255ac100442", "namespace": "default"}}' \ - -H 'Accept:application/json' -H 'Content-Type:application/json;charset=utf8' \ - -H 'X-Auth-Token:MIIPe******IsIm1ldG - - After the request is executed, run the following commands to check whether the EVS disk has been associated with the metadata of the cluster: - - .. code-block:: - - curl -X GET ${EVS_ENDPOINT}/v2/${project_id}/volumes/${volume_id}/metadata --insecure \ - -H 'X-Auth-Token:${TOKEN}' - - For example, run the following commands: - - .. code-block:: - - curl -X GET https://evs.eu-de.otc.t-systems.com/v2/060576866680d5762f52c0150e726aa7/volumes/69c9619d-174c-4c41-837e-31b892604e14/metadata --insecure \ - -H 'X-Auth-Token:MIIPeAYJ***9t1c31ASaQ==' - - The command output displays the current metadata of the EVS disk. - - .. 
code-block:: - - { - "metadata": { - "namespace": "default", - "cluster_id": "71e8277e-80c7-11ea-925c-0255ac100442", - "hw:passthrough": "true" - } - } diff --git a/umn/source/storage_management_flexvolume_deprecated/using_evs_disks_as_storage_volumes/overview.rst b/umn/source/storage_management_flexvolume_deprecated/using_evs_disks_as_storage_volumes/overview.rst deleted file mode 100644 index ea9a6ee..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_evs_disks_as_storage_volumes/overview.rst +++ /dev/null @@ -1,24 +0,0 @@ -:original_name: cce_10_0310.html - -.. _cce_10_0310: - -Overview -======== - -To achieve persistent storage, CCE allows you to mount storage volumes created from Elastic Volume Service (EVS) disks to a path in a container. When the container is migrated, the mounted EVS volumes are migrated along with it. By using EVS volumes, you can mount the remote file directory of a storage system into a container so that data in the volume is preserved even after the container is deleted. - - -.. figure:: /_static/images/en-us_image_0000001517903060.png - :alt: **Figure 1** Mounting EVS volumes to CCE - - **Figure 1** Mounting EVS volumes to CCE - -Description ------------ - -- **User-friendly**: You can format block storage (disks) attached to cloud servers and create file systems on them, just as you would format disks for on-premises servers. -- **Data isolation**: Each server uses an independent block storage device (disk). -- **Private network**: Data can be accessed only over private networks within data centers. -- **Capacity and performance**: The capacity of a single volume is limited (TB-level), but the performance is excellent (ms-level read/write I/O latency). -- **Restriction**: EVS disks that contain partitions or use non-ext4 file systems cannot be imported. -- **Applications**: HPC, enterprise core applications running in clusters, enterprise application systems, and development and testing. These volumes are often used by single-pod Deployments and jobs, or exclusively by each pod in a StatefulSet. EVS disks are non-shared storage and cannot be attached to multiple nodes at the same time. If two pods are configured to use the same EVS disk and are scheduled to different nodes, one pod cannot start because the EVS disk cannot be attached to that pod's node. diff --git a/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/index.rst b/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/index.rst deleted file mode 100644 index e279a1a..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/index.rst +++ /dev/null @@ -1,22 +0,0 @@ -:original_name: cce_10_0322.html - -.. _cce_10_0322: - -Using OBS Buckets as Storage Volumes -==================================== - -- :ref:`Overview ` -- :ref:`(kubectl) Automatically Creating an OBS Volume ` -- :ref:`(kubectl) Creating a PV from an Existing OBS Bucket ` -- :ref:`(kubectl) Creating a Deployment Mounted with an OBS Volume ` -- :ref:`(kubectl) Creating a StatefulSet Mounted with an OBS Volume ` - ..
toctree:: - :maxdepth: 1 - :hidden: - - overview - kubectl_automatically_creating_an_obs_volume - kubectl_creating_a_pv_from_an_existing_obs_bucket - kubectl_creating_a_deployment_mounted_with_an_obs_volume - kubectl_creating_a_statefulset_mounted_with_an_obs_volume diff --git a/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/kubectl_automatically_creating_an_obs_volume.rst b/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/kubectl_automatically_creating_an_obs_volume.rst deleted file mode 100644 index 0fbff41..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/kubectl_automatically_creating_an_obs_volume.rst +++ /dev/null @@ -1,65 +0,0 @@ -:original_name: cce_10_0325.html - -.. _cce_10_0325: - -(kubectl) Automatically Creating an OBS Volume -============================================== - -Scenario --------- - -During the use of OBS, expected OBS buckets can be automatically created and mounted as volumes. Currently, standard and infrequent access OBS buckets are supported, which correspond to **obs-standard** and **obs-standard-ia**, respectively. - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.13 or earlier. - -Procedure ---------- - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Run the following commands to configure the **pvc-obs-auto-example.yaml** file, which is used to create a PVC. - - **touch pvc-obs-auto-example.yaml** - - **vi pvc-obs-auto-example.yaml** - - **Example YAML:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - annotations: - volume.beta.kubernetes.io/storage-class: obs-standard # OBS bucket type. The value can be obs-standard (standard) or obs-standard-ia (infrequent access). - name: pvc-obs-auto-example # PVC name - namespace: default - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 1Gi # Storage capacity in the unit of Gi. For OBS buckets, this parameter is used only for verification (fixed to 1, cannot be empty or 0). Any value you set does not take effect for OBS buckets. - - .. table:: **Table 1** Key parameters - - +-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +=========================================+================================================================================================================================================================================================================+ - | volume.beta.kubernetes.io/storage-class | Bucket type. Currently, **obs-standard** and **obs-standard-ia** are supported. | - +-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | name | Name of the PVC to be created. 
| - +-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | accessModes | Only **ReadWriteMany** is supported. **ReadWriteOnly** is not supported. | - +-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | Storage capacity in the unit of Gi. For OBS buckets, this field is used only for verification (cannot be empty or 0). Its value is fixed at **1**, and any value you set does not take effect for OBS buckets. | - +-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -#. Run the following command to create a PVC: - - **kubectl create -f pvc-obs-auto-example.yaml** - - After the command is executed, an OBS bucket is created in the VPC to which the cluster belongs. You can click the bucket name in **Storage** > **OBS** to view the bucket or view it on the OBS console. diff --git a/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/kubectl_creating_a_deployment_mounted_with_an_obs_volume.rst b/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/kubectl_creating_a_deployment_mounted_with_an_obs_volume.rst deleted file mode 100644 index 10e71ef..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/kubectl_creating_a_deployment_mounted_with_an_obs_volume.rst +++ /dev/null @@ -1,168 +0,0 @@ -:original_name: cce_10_0327.html - -.. _cce_10_0327: - -(kubectl) Creating a Deployment Mounted with an OBS Volume -========================================================== - -Scenario --------- - -After an OBS volume is created or imported to CCE, you can mount the volume to a workload. - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.13 or earlier. - -Procedure ---------- - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Run the following commands to configure the **obs-deployment-example.yaml** file, which is used to create a pod. - - **touch obs-deployment-example.yaml** - - **vi obs-deployment-example.yaml** - - Example of mounting an OBS volume to a Deployment (PVC-based, shared volume): - - .. code-block:: - - apiVersion: apps/v1 - kind: Deployment - metadata: - name: obs-deployment-example # Workload name - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: obs-deployment-example - template: - metadata: - labels: - app: obs-deployment-example - spec: - containers: - - image: nginx - name: container-0 - volumeMounts: - - mountPath: /tmp # Mount path - name: pvc-obs-example - restartPolicy: Always - imagePullSecrets: - - name: default-secret - volumes: - - name: pvc-obs-example - persistentVolumeClaim: - claimName: pvc-obs-auto-example # PVC name - - .. 
table:: **Table 1** Key parameters - - ========= =========================================== - Parameter Description - ========= =========================================== - name Name of the pod to be created. - app Name of the application running in the pod. - mountPath Mount path in the container. - ========= =========================================== - - .. note:: - - **spec.template.spec.containers.volumeMounts.name** and **spec.template.spec.volumes.name** must be consistent because they have a mapping relationship. - - Example of mounting an OBS volume to a StatefulSet (PVC template-based, dedicated volume): - - **Example YAML:** - - .. code-block:: - - apiVersion: apps/v1 - kind: StatefulSet - metadata: - name: deploy-obs-standard-in - namespace: default - generation: 1 - labels: - appgroup: '' - spec: - replicas: 1 - selector: - matchLabels: - app: deploy-obs-standard-in - template: - metadata: - labels: - app: deploy-obs-standard-in - annotations: - metrics.alpha.kubernetes.io/custom-endpoints: '[{"api":"","path":"","port":"","names":""}]' - pod.alpha.kubernetes.io/initialized: 'true' - spec: - containers: - - name: container-0 - image: 'nginx:1.12-alpine-perl' - env: - - name: PAAS_APP_NAME - value: deploy-obs-standard-in - - name: PAAS_NAMESPACE - value: default - - name: PAAS_PROJECT_ID - value: a2cd8e998dca42e98a41f596c636dbda - resources: {} - volumeMounts: - - name: obs-bs-standard-mountoptionpvc - mountPath: /tmp - terminationMessagePath: /dev/termination-log - terminationMessagePolicy: File - imagePullPolicy: IfNotPresent - restartPolicy: Always - terminationGracePeriodSeconds: 30 - dnsPolicy: ClusterFirst - securityContext: {} - imagePullSecrets: - - name: default-secret - affinity: {} - schedulerName: default-scheduler - volumeClaimTemplates: - - metadata: - name: obs-bs-standard-mountoptionpvc - annotations: - volume.beta.kubernetes.io/storage-class: obs-standard - volume.beta.kubernetes.io/storage-provisioner: flexvolume-huawei.com/fuxiobs - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 1Gi - serviceName: wwww - podManagementPolicy: OrderedReady - updateStrategy: - type: RollingUpdate - revisionHistoryLimit: 10 - - .. table:: **Table 2** Key parameters - - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +=============+====================================================================================================================================+ - | name | Name of the created workload. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | image | Image of the workload. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | mountPath | Mount path in the container. In this example, the volume is mounted to the **/tmp** directory. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | serviceName | Service corresponding to the workload. For details about how to create a Service, see :ref:`Creating a StatefulSet `. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - - .. 
note:: - - **spec.template.spec.containers.volumeMounts.name** and **spec.volumeClaimTemplates.metadata.name** must be consistent because they have a mapping relationship. - -#. Run the following command to create the pod: - - **kubectl create -f obs-deployment-example.yaml** - - After the creation is complete, choose **Storage** > **OBS** on the CCE console and click the PVC name. On the PVC details page, you can view the binding relationship between the OBS service and the PVC. diff --git a/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_obs_bucket.rst b/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_obs_bucket.rst deleted file mode 100644 index 5efd49b..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_obs_bucket.rst +++ /dev/null @@ -1,225 +0,0 @@ -:original_name: cce_10_0326.html - -.. _cce_10_0326: - -(kubectl) Creating a PV from an Existing OBS Bucket -=================================================== - -Scenario --------- - -CCE allows you to use an existing OBS bucket to create a PersistentVolume (PV). You can create a PersistentVolumeClaim (PVC) and bind it to the PV. - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.13 or earlier. - -Procedure ---------- - -#. Log in to the OBS console, create an OBS bucket, and record the bucket name and storage class. - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Create two YAML files for creating the PV and PVC. Assume that the file names are **pv-obs-example.yaml** and **pvc-obs-example.yaml**. - - **touch pv-obs-example.yaml** **pvc-obs-example.yaml** - - +-----------------------------+------------------------------+-----------------------------------------------------+ - | Kubernetes Cluster Version | Description | YAML Example | - +=============================+==============================+=====================================================+ - | 1.11 <= K8s version <= 1.13 | Clusters from v1.11 to v1.13 | :ref:`Example YAML ` | - +-----------------------------+------------------------------+-----------------------------------------------------+ - | K8s version = 1.9 | Clusters of v1.9 | :ref:`Example YAML ` | - +-----------------------------+------------------------------+-----------------------------------------------------+ - - **Clusters from v1.11 to v1.13** - - - .. _cce_10_0326__li45671840132016: - - **Example YAML file for the PV:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - name: pv-obs-example - annotations: - pv.kubernetes.io/provisioned-by: flexvolume-huawei.com/fuxiobs - spec: - accessModes: - - ReadWriteMany - capacity: - storage: 1Gi - claimRef: - apiVersion: v1 - kind: PersistentVolumeClaim - name: pvc-obs-example - namespace: default - flexVolume: - driver: huawei.com/fuxiobs - fsType: obs - options: - fsType: obs - region: eu-de - storage_class: STANDARD - volumeID: test-obs - persistentVolumeReclaimPolicy: Delete - storageClassName: obs-standard - - .. 
table:: **Table 1** Key parameters - - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+===============================================================================================================================================================================================+ - | driver | Storage driver used to mount the volume. Set the driver to **huawei.com/fuxiobs** for the OBS volume. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage_class | Storage class, including **STANDARD** (standard bucket) and **STANDARD_IA** (infrequent access bucket). | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | region | Region where the cluster is located. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeID | OBS bucket name. | - | | | - | | To obtain the name, log in to the CCE console, choose **Resource Management** > **Storage**, click the PVC name in the **OBS** tab page, and copy the PV name on the **PV Details** tab page. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | Storage capacity in the unit of Gi. The value is fixed at **1Gi**. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storageClassName | Storage class supported by OBS, including **obs-standard** (standard bucket) and **obs-standard-ia** (infrequent access bucket). | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | spec.claimRef.apiVersion | The value is fixed at **v1**. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | spec.claimRef.kind | The value is fixed at **PersistentVolumeClaim**. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | spec.claimRef.name | The value is the same as the name of the PVC created in the next step. 
| - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | spec.claimRef.namespace | The value is the same as the namespace of the PVC created in the next step. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - - - **Example YAML file for the PVC:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - annotations: - volume.beta.kubernetes.io/storage-class: obs-standard - volume.beta.kubernetes.io/storage-provisioner: flexvolume-huawei.com/fuxiobs - name: pvc-obs-example - namespace: default - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 1Gi - volumeName: pv-obs-example - - .. table:: **Table 2** Key parameters - - +-----------------------------------------------+-------------------------------------------------------------------------------------+ - | Parameter | Description | - +===============================================+=====================================================================================+ - | volume.beta.kubernetes.io/storage-class | Storage class supported by OBS, including **obs-standard** and **obs-standard-ia**. | - +-----------------------------------------------+-------------------------------------------------------------------------------------+ - | volume.beta.kubernetes.io/storage-provisioner | Must be set to **flexvolume-huawei.com/fuxiobs**. | - +-----------------------------------------------+-------------------------------------------------------------------------------------+ - | volumeName | Name of the PV. | - +-----------------------------------------------+-------------------------------------------------------------------------------------+ - | storage | Storage capacity in the unit of Gi. The value is fixed at **1Gi**. | - +-----------------------------------------------+-------------------------------------------------------------------------------------+ - - **Clusters of v1.9** - - - .. _cce_10_0326__li154036581589: - - **Example YAML file for the PV:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - name: pv-obs-example - namespace: default - spec: - accessModes: - - ReadWriteMany - capacity: - storage: 1Gi - flexVolume: - driver: huawei.com/fuxiobs - fsType: obs - options: - fsType: obs - kubernetes.io/namespace: default - region: eu-de - storage_class: STANDARD - volumeID: test-obs - persistentVolumeReclaimPolicy: Delete - storageClassName: obs-standard - - .. table:: **Table 3** Key parameters - - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+===============================================================================================================================================================================================+ - | driver | Storage driver used to mount the volume. Set the driver to **huawei.com/fuxiobs** for the OBS volume. 
| - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage_class | Storage class, including **STANDARD** (standard bucket) and **STANDARD_IA** (infrequent access bucket). | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | region | Region where the cluster is located. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeID | OBS bucket name. | - | | | - | | To obtain the name, log in to the CCE console, choose **Resource Management** > **Storage**, click the PVC name in the **OBS** tab page, and copy the PV name on the **PV Details** tab page. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | Storage capacity in the unit of Gi. The value is fixed at **1Gi**. | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storageClassName | Storage class supported by OBS, including **obs-standard** (standard bucket) and **obs-standard-ia** (infrequent access bucket). | - +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - - - **Example YAML file for the PVC:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - annotations: - volume.beta.kubernetes.io/storage-class: obs-standard - volume.beta.kubernetes.io/storage-provisioner: flexvolume-huawei.com/fuxiobs - name: pvc-obs-example - namespace: default - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 1Gi - volumeName: pv-obs-example - volumeNamespace: default - - .. table:: **Table 4** Key parameters - - +-----------------------------------------------+-------------------------------------------------------------------------------------+ - | Parameter | Description | - +===============================================+=====================================================================================+ - | volume.beta.kubernetes.io/storage-class | Storage class supported by OBS, including **obs-standard** and **obs-standard-ia**. | - +-----------------------------------------------+-------------------------------------------------------------------------------------+ - | volume.beta.kubernetes.io/storage-provisioner | Must be set to **flexvolume-huawei.com/fuxiobs**. | - +-----------------------------------------------+-------------------------------------------------------------------------------------+ - | volumeName | Name of the PV. 
| - +-----------------------------------------------+-------------------------------------------------------------------------------------+ - | storage | Storage capacity in the unit of Gi. The value is fixed at **1Gi**. | - +-----------------------------------------------+-------------------------------------------------------------------------------------+ - -#. Create the PV. - - **kubectl create -f pv-obs-example.yaml** - -#. Create the PVC. - - **kubectl create -f pvc-obs-example.yaml** diff --git a/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/kubectl_creating_a_statefulset_mounted_with_an_obs_volume.rst b/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/kubectl_creating_a_statefulset_mounted_with_an_obs_volume.rst deleted file mode 100644 index 1fb3491..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/kubectl_creating_a_statefulset_mounted_with_an_obs_volume.rst +++ /dev/null @@ -1,90 +0,0 @@ -:original_name: cce_10_0328.html - -.. _cce_10_0328: - -(kubectl) Creating a StatefulSet Mounted with an OBS Volume -=========================================================== - -Scenario --------- - -CCE allows you to use an existing OBS volume to create a StatefulSet through a PersistentVolumeClaim (PVC). - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.13 or earlier. - -Procedure ---------- - -#. Create an OBS volume by referring to :ref:`(kubectl) Automatically Creating an OBS Volume ` and obtain the PVC name. - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Create a YAML file for creating the workload. Assume that the file name is **obs-statefulset-example.yaml**. - - **touch obs-statefulset-example.yaml** - - **vi obs-statefulset-example.yaml** - - **Example YAML:** - - .. code-block:: - - apiVersion: apps/v1 - kind: StatefulSet - metadata: - name: obs-statefulset-example - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: obs-statefulset-example - serviceName: qwqq - template: - metadata: - annotations: - metrics.alpha.kubernetes.io/custom-endpoints: '[{"api":"","path":"","port":"","names":""}]' - pod.alpha.kubernetes.io/initialized: "true" - creationTimestamp: null - labels: - app: obs-statefulset-example - spec: - affinity: {} - containers: - image: nginx:latest - imagePullPolicy: Always - name: container-0 - volumeMounts: - - mountPath: /tmp - name: pvc-obs-example - imagePullSecrets: - - name: default-secret - volumes: - - name: pvc-obs-example - persistentVolumeClaim: - claimName: cce-obs-demo - - .. table:: **Table 1** Key parameters - - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +=============+====================================================================================================================================+ - | replicas | Number of pods. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | name | Name of the created workload. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | image | Image used by the workload. 
| - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | mountPath | Mount path in the container. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | serviceName | Service corresponding to the workload. For details about how to create a Service, see :ref:`Creating a StatefulSet `. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | claimName | Name of an existing PVC. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - -#. Create the StatefulSet. - - **kubectl create -f obs-statefulset-example.yaml** diff --git a/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/overview.rst b/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/overview.rst deleted file mode 100644 index 372cf52..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_obs_buckets_as_storage_volumes/overview.rst +++ /dev/null @@ -1,37 +0,0 @@ -:original_name: cce_10_0323.html - -.. _cce_10_0323: - -Overview -======== - -CCE allows you to mount a volume created from an Object Storage Service (OBS) bucket to a container to store data persistently. Object storage is commonly used in cloud workloads, data analysis, content analysis, and hotspot objects. - - -.. figure:: /_static/images/en-us_image_0000001517743540.png - :alt: **Figure 1** Mounting OBS volumes to CCE - - **Figure 1** Mounting OBS volumes to CCE - -Storage Class -------------- - -Object storage offers three storage classes, Standard, Infrequent Access, and Archive, to satisfy different requirements for storage performance and costs. - -- The Standard storage class features low access latency and high throughput. It is therefore applicable to storing a large number of hot files (frequently accessed every month) or small files (less than 1 MB). The application scenarios include big data analytics, mobile apps, hot videos, and picture processing on social media. -- The Infrequent Access storage class is ideal for storing data that is semi-frequently accessed (less than 12 times a year), with requirements for quick response. The application scenarios include file synchronization or sharing, and enterprise-level backup. It provides the same durability, access latency, and throughput as the Standard storage class but at a lower cost. However, the Infrequent Access storage class has lower availability than the Standard storage class. -- The Archive storage class is suitable for archiving data that is rarely-accessed (averagely once a year). The application scenarios include data archiving and long-term data backup. The Archive storage class is secure and durable at an affordable low cost, which can be used to replace tape libraries. However, it may take hours to restore data from the Archive storage class. - -Description ------------ - -- **Standard APIs**: With HTTP RESTful APIs, OBS allows you to use client tools or third-party tools to access object storage. -- **Data sharing**: Servers, embedded devices, and IoT devices can use the same path to access shared object data in OBS. 
-- **Public/Private networks**: OBS allows data to be accessed from public networks to meet Internet application requirements. -- **Capacity and performance**: No capacity limit; high performance (read/write I/O latency within 10 ms). -- **Use cases**: Deployments/StatefulSets in the ReadOnlyMany mode and jobs created for big data analysis, static website hosting, online video on demand (VoD), gene sequencing, intelligent video surveillance, backup and archiving, and enterprise cloud boxes (web disks). You can create object storage by using the OBS console, tools, and SDKs. - -Reference ---------- - -CCE clusters can also be mounted with OBS buckets of third-party tenants, including OBS parallel file systems (preferred) and OBS object buckets. diff --git a/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/index.rst b/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/index.rst deleted file mode 100644 index d0cc819..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/index.rst +++ /dev/null @@ -1,22 +0,0 @@ -:original_name: cce_10_0315.html - -.. _cce_10_0315: - -Using SFS File Systems as Storage Volumes -========================================= - -- :ref:`Overview ` -- :ref:`(kubectl) Automatically Creating an SFS Volume ` -- :ref:`(kubectl) Creating a PV from an Existing SFS File System ` -- :ref:`(kubectl) Creating a Deployment Mounted with an SFS Volume ` -- :ref:`(kubectl) Creating a StatefulSet Mounted with an SFS Volume ` - -.. toctree:: - :maxdepth: 1 - :hidden: - - overview - kubectl_automatically_creating_an_sfs_volume - kubectl_creating_a_pv_from_an_existing_sfs_file_system - kubectl_creating_a_deployment_mounted_with_an_sfs_volume - kubectl_creating_a_statefulset_mounted_with_an_sfs_volume diff --git a/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/kubectl_automatically_creating_an_sfs_volume.rst b/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/kubectl_automatically_creating_an_sfs_volume.rst deleted file mode 100644 index cb320c6..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/kubectl_automatically_creating_an_sfs_volume.rst +++ /dev/null @@ -1,60 +0,0 @@ -:original_name: cce_10_0318.html - -.. _cce_10_0318: - -(kubectl) Automatically Creating an SFS Volume -============================================== - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.13 or earlier. - -Procedure ---------- - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Run the following commands to configure the **pvc-sfs-auto-example.yaml** file, which is used to create a PVC. - - **touch pvc-sfs-auto-example.yaml** - - **vi pvc-sfs-auto-example.yaml** - - **Example YAML file:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - annotations: - volume.beta.kubernetes.io/storage-class: nfs-rw - name: pvc-sfs-auto-example - namespace: default - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 10Gi - - .. 
table:: **Table 1** Key parameters - - +-----------------------------------------+---------------------------------------------------------------------------------------+ - | Parameter | Description | - +=========================================+=======================================================================================+ - | volume.beta.kubernetes.io/storage-class | File storage class. Currently, the standard file protocol type (nfs-rw) is supported. | - +-----------------------------------------+---------------------------------------------------------------------------------------+ - | name | Name of the PVC to be created. | - +-----------------------------------------+---------------------------------------------------------------------------------------+ - | accessModes | Only **ReadWriteMany** is supported. **ReadWriteOnly** is not supported. | - +-----------------------------------------+---------------------------------------------------------------------------------------+ - | storage | Storage capacity in the unit of Gi. | - +-----------------------------------------+---------------------------------------------------------------------------------------+ - -#. Run the following command to create a PVC: - - **kubectl create -f pvc-sfs-auto-example.yaml** - - After the command is executed, a file system is created in the VPC to which the cluster belongs. Choose **Storage** > **SFS** on the CCE console or log in to the SFS console to view the file system. diff --git a/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_deployment_mounted_with_an_sfs_volume.rst b/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_deployment_mounted_with_an_sfs_volume.rst deleted file mode 100644 index 3470ba8..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_deployment_mounted_with_an_sfs_volume.rst +++ /dev/null @@ -1,145 +0,0 @@ -:original_name: cce_10_0320.html - -.. _cce_10_0320: - -(kubectl) Creating a Deployment Mounted with an SFS Volume -========================================================== - -Scenario --------- - -After an SFS volume is created or imported to CCE, you can mount the volume to a workload. - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.13 or earlier. - -Procedure ---------- - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Run the following commands to configure the **sfs-deployment-example.yaml** file, which is used to create a pod. - - **touch sfs-deployment-example.yaml** - - **vi sfs-deployment-example.yaml** - - Example of mounting an SFS volume to a Deployment (PVC-based, shared volume): - - .. code-block:: - - apiVersion: apps/v1 - kind: Deployment - metadata: - name: sfs-deployment-example # Workload name - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: sfs-deployment-example - template: - metadata: - labels: - app: sfs-deployment-example - spec: - containers: - - image: nginx - name: container-0 - volumeMounts: - - mountPath: /tmp # Mount path - name: pvc-sfs-example - imagePullSecrets: - - name: default-secret - restartPolicy: Always - volumes: - - name: pvc-sfs-example - persistentVolumeClaim: - claimName: pvc-sfs-auto-example # PVC name - - .. 
table:: **Table 1** Key parameters - - +--------------------------------------------------+-----------+---------------------------------------------------------------------------+ - | Parent Parameter | Parameter | Description | - +==================================================+===========+===========================================================================+ - | metadata | name | Name of the pod to be created. | - +--------------------------------------------------+-----------+---------------------------------------------------------------------------+ - | spec.template.spec.containers.volumeMounts | mountPath | Mount path in the container. In this example, the mount path is **/tmp**. | - +--------------------------------------------------+-----------+---------------------------------------------------------------------------+ - | spec.template.spec.volumes.persistentVolumeClaim | claimName | Name of an existing PVC. | - +--------------------------------------------------+-----------+---------------------------------------------------------------------------+ - - .. note:: - - **spec.template.spec.containers.volumeMounts.name** and **spec.template.spec.volumes.name** must be consistent because they have a mapping relationship. - - Example of mounting an SFS volume to a StatefulSet (PVC template-based, dedicated volume): - - **Example YAML:** - - .. code-block:: - - apiVersion: apps/v1 - kind: StatefulSet - metadata: - name: deploy-sfs-nfs-rw-in - namespace: default - labels: - appgroup: '' - spec: - replicas: 2 - selector: - matchLabels: - app: deploy-sfs-nfs-rw-in - template: - metadata: - labels: - app: deploy-sfs-nfs-rw-in - spec: - containers: - - name: container-0 - image: 'nginx:1.12-alpine-perl' - volumeMounts: - - name: bs-nfs-rw-mountoptionpvc - mountPath: /aaa - imagePullSecrets: - - name: default-secret - volumeClaimTemplates: - - metadata: - name: bs-nfs-rw-mountoptionpvc - annotations: - volume.beta.kubernetes.io/storage-class: nfs-rw - volume.beta.kubernetes.io/storage-provisioner: flexvolume-huawei.com/fuxinfs - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 1Gi - serviceName: wwww - - .. table:: **Table 2** Key parameters - - +-------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | Parent Parameter | Parameter | Description | - +===========================================+=============+====================================================================================================================================+ - | metadata | name | Name of the created workload. | - +-------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec.template.spec.containers | image | Image of the workload. | - +-------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec.template.spec.containers.volumeMount | mountPath | Mount path in the container. In this example, the mount path is **/tmp**. 
| - +-------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec | serviceName | Service corresponding to the workload. For details about how to create a Service, see :ref:`Creating a StatefulSet `. | - +-------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - - .. note:: - - **spec.template.spec.containers.volumeMounts.name** and **spec.volumeClaimTemplates.metadata.name** must be consistent because they have a mapping relationship. - -#. Run the following command to create the pod: - - **kubectl create -f sfs-deployment-example.yaml** - - After the creation is complete, log in to the CCE console. In the navigation pane, choose **Resource Management** > **Storage** > **SFS**. Click the PVC name. On the PVC details page, you can view the binding relationship between SFS and PVC. diff --git a/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_sfs_file_system.rst b/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_sfs_file_system.rst deleted file mode 100644 index bed800b..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_sfs_file_system.rst +++ /dev/null @@ -1,222 +0,0 @@ -:original_name: cce_10_0319.html - -.. _cce_10_0319: - -(kubectl) Creating a PV from an Existing SFS File System -======================================================== - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.13 or earlier. - -Procedure ---------- - -#. Log in to the SFS console, create a file system, and record the file system ID, shared path, and capacity. - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Create two YAML files for creating the PV and PVC. Assume that the file names are **pv-sfs-example.yaml** and **pvc-sfs-example.yaml**. - - **touch pv-sfs-example.yaml** **pvc-sfs-example.yaml** - - +----------------------------+------------------------------+-----------------------------------------------------+ - | Kubernetes Cluster Version | Description | YAML Example | - +============================+==============================+=====================================================+ - | 1.11 <= K8s version < 1.13 | Clusters from v1.11 to v1.13 | :ref:`Example YAML ` | - +----------------------------+------------------------------+-----------------------------------------------------+ - | K8s version = 1.9 | Clusters of v1.9 | :ref:`Example YAML ` | - +----------------------------+------------------------------+-----------------------------------------------------+ - - **Clusters from v1.11 to v1.13** - - - .. _cce_10_0319__li1252510101515: - - **Example YAML file for the PV:** - - .. 
code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - name: pv-sfs-example - annotations: - pv.kubernetes.io/provisioned-by: flexvolume-huawei.com/fuxinfs - spec: - accessModes: - - ReadWriteMany - capacity: - storage: 10Gi - claimRef: - apiVersion: v1 - kind: PersistentVolumeClaim - name: pvc-sfs-example - namespace: default - flexVolume: - driver: huawei.com/fuxinfs - fsType: nfs - options: - deviceMountPath: # Shared storage path of your file. - fsType: nfs - volumeID: f6976f9e-2493-419b-97ca-d7816008d91c - persistentVolumeReclaimPolicy: Delete - storageClassName: nfs-rw - - .. table:: **Table 1** Key parameters - - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+=====================================================================================================================================================================================+ - | driver | Storage driver used to mount the volume. Set the driver to **huawei.com/fuxinfs** for the file system. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | deviceMountPath | Shared path of the file system. | - | | | - | | On the management console, choose **Service List** > **Storage** > **Scalable File Service**. You can obtain the shared path of the file system from the **Mount Address** column. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeID | File system ID. | - | | | - | | To obtain the ID, log in to the CCE console, choose **Resource Management** > **Storage**, click the PVC name in the **SFS** tab page, and copy the PVC ID on the PVC details page. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | File system size. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storageClassName | Read/write mode supported by the file system. Currently, **nfs-rw** and **nfs-ro** are supported. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | spec.claimRef.apiVersion | The value is fixed at **v1**. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | spec.claimRef.kind | The value is fixed at **PersistentVolumeClaim**. 
| - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | spec.claimRef.name | The value is the same as the name of the PVC created in the next step. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | spec.claimRef.namespace | Namespace of the PVC. The value is the same as the namespace of the PVC created in the next step. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - - - **Example YAML file for the PVC:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - annotations: - volume.beta.kubernetes.io/storage-class: nfs-rw - volume.beta.kubernetes.io/storage-provisioner: flexvolume-huawei.com/fuxinfs - name: pvc-sfs-example - namespace: default - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 10Gi - volumeName: pv-sfs-example - - .. table:: **Table 2** Key parameters - - +-----------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===============================================+===============================================================================================================================================+ - | volume.beta.kubernetes.io/storage-class | Read/write mode supported by the file system. **nfs-rw** and **nfs-ro** are supported. The value must be the same as that of the existing PV. | - +-----------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ - | volume.beta.kubernetes.io/storage-provisioner | Must be set to **flexvolume-huawei.com/fuxinfs**. | - +-----------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | Storage capacity, in the unit of Gi. The value must be the same as the storage size of the existing PV. | - +-----------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeName | Name of the PV. | - +-----------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ - - **Clusters of v1.9** - - - .. _cce_10_0319__li10858156164514: - - **Example YAML file for the PV:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - name: pv-sfs-example - namespace: default - spec: - accessModes: - - ReadWriteMany - capacity: - storage: 10Gi - flexVolume: - driver: huawei.com/fuxinfs - fsType: nfs - options: - deviceMountPath: # Shared storage path of your file. 
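-        # The shared path can be obtained from the Mount Address column on the SFS console (see the table below).
-        # A hypothetical example value, shown only for illustration (replace it with your own mount address): sfs-nas01.example.com:/share-096978fa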
- fsType: nfs - kubernetes.io/namespace: default - volumeID: f6976f9e-2493-419b-97ca-d7816008d91c - persistentVolumeReclaimPolicy: Delete - storageClassName: nfs-rw - - .. table:: **Table 3** Key parameters - - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+=====================================================================================================================================================================================+ - | driver | Storage driver used to mount the volume. Set the driver to **huawei.com/fuxinfs** for the file system. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | deviceMountPath | Shared path of the file system. | - | | | - | | On the management console, choose **Service List** > **Storage** > **Scalable File Service**. You can obtain the shared path of the file system from the **Mount Address** column. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeID | File system ID. | - | | | - | | To obtain the ID, log in to the CCE console, choose **Resource Management** > **Storage**, click the PVC name in the **SFS** tab page, and copy the PVC ID on the PVC details page. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | File system size. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storageClassName | Read/write mode supported by the file system. Currently, **nfs-rw** and **nfs-ro** are supported. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - - - **Example YAML file for the PVC:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - annotations: - volume.beta.kubernetes.io/storage-class: nfs-rw - volume.beta.kubernetes.io/storage-provisioner: flexvolume-huawei.com/fuxinfs - name: pvc-sfs-example - namespace: default - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 10Gi - volumeName: pv-sfs-example - volumeNamespace: default - - .. table:: **Table 4** Key parameters - - +-----------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===============================================+===============================================================================================================================================+ - | volume.beta.kubernetes.io/storage-class | Read/write mode supported by the file system. 
**nfs-rw** and **nfs-ro** are supported. The value must be the same as that of the existing PV. | - +-----------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ - | volume.beta.kubernetes.io/storage-provisioner | The field must be set to **flexvolume-huawei.com/fuxinfs**. | - +-----------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | Storage capacity, in the unit of Gi. The value must be the same as the storage size of the existing PV. | - +-----------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeName | Name of the PV. | - +-----------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------+ - - .. note:: - - The VPC to which the file system belongs must be the same as the VPC of the ECS VM to which the workload is planned. - -#. Create the PV. - - **kubectl create -f pv-sfs-example.yaml** - -#. Create the PVC. - - **kubectl create -f pvc-sfs-example.yaml** diff --git a/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_statefulset_mounted_with_an_sfs_volume.rst b/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_statefulset_mounted_with_an_sfs_volume.rst deleted file mode 100644 index 2e25ca0..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/kubectl_creating_a_statefulset_mounted_with_an_sfs_volume.rst +++ /dev/null @@ -1,92 +0,0 @@ -:original_name: cce_10_0321.html - -.. _cce_10_0321: - -(kubectl) Creating a StatefulSet Mounted with an SFS Volume -=========================================================== - -Scenario --------- - -CCE allows you to use an existing SFS volume to create a StatefulSet through a PersistentVolumeClaim (PVC). - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.13 or earlier. - -Procedure ---------- - -#. Create an SFS volume by referring to :ref:`(kubectl) Automatically Creating an SFS Volume ` and record the volume name. - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Create a YAML file for creating the workload. Assume that the file name is **sfs-statefulset-example**.\ **yaml**. - - **touch sfs-statefulset-example.yaml** - - **vi sfs-statefulset-example.yaml** - - **Example YAML:** - - .. 
code-block:: - - apiVersion: apps/v1 - kind: StatefulSet - metadata: - name: sfs-statefulset-example - namespace: default - spec: - replicas: 2 - selector: - matchLabels: - app: sfs-statefulset-example - serviceName: qwqq - template: - metadata: - annotations: - metrics.alpha.kubernetes.io/custom-endpoints: '[{"api":"","path":"","port":"","names":""}]' - pod.alpha.kubernetes.io/initialized: "true" - labels: - app: sfs-statefulset-example - spec: - affinity: {} - containers: - - image: nginx:latest - name: container-0 - volumeMounts: - - mountPath: /tmp - name: pvc-sfs-example - imagePullSecrets: - - name: default-secret - volumes: - - name: pvc-sfs-example - persistentVolumeClaim: - claimName: cce-sfs-demo - - .. table:: **Table 1** Key parameters - - +--------------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | Parent Parameter | Parameter | Description | - +==================================================+=============+====================================================================================================================================+ - | spec | replicas | Number of pods. | - +--------------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | metadata | name | Name of the created workload. | - +--------------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec.template.spec.containers | image | Image used by the workload. | - +--------------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec.template.spec.containers.volumeMounts | mountPath | Mount path in the container. | - +--------------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec | serviceName | Service corresponding to the workload. For details about how to create a Service, see :ref:`Creating a StatefulSet `. | - +--------------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | spec.template.spec.volumes.persistentVolumeClaim | claimName | Name of an existing PVC. | - +--------------------------------------------------+-------------+------------------------------------------------------------------------------------------------------------------------------------+ - - .. note:: - - **spec.template.spec.containers.volumeMounts.name** and **spec.template.spec.volumes.name** must be consistent because they have a mapping relationship. - -#. Create the StatefulSet. 
- - **kubectl create -f sfs-statefulset-example.yaml** diff --git a/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/overview.rst b/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/overview.rst deleted file mode 100644 index 58b0917..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_sfs_file_systems_as_storage_volumes/overview.rst +++ /dev/null @@ -1,23 +0,0 @@ -:original_name: cce_10_0316.html - -.. _cce_10_0316: - -Overview -======== - -CCE allows you to mount a volume created from a Scalable File Service (SFS) file system to a container to store data persistently. SFS volumes are commonly used in ReadWriteMany scenarios, such as media processing, content management, big data analysis, and workload process analysis. - - -.. figure:: /_static/images/en-us_image_0000001568822709.png - :alt: **Figure 1** Mounting SFS volumes to CCE - - **Figure 1** Mounting SFS volumes to CCE - -Description ----------- - -- **Standard file protocols**: You can mount file systems as volumes to servers and use them in the same way as local directories. -- **Data sharing**: The same file system can be mounted to multiple servers, so that data can be shared. -- **Private network**: Users can access data only over private networks in data centers. -- **Capacity and performance**: The capacity of a single file system is high (PB level) and the performance is excellent (ms-level read/write I/O latency). -- **Use cases**: Deployments/StatefulSets in the ReadWriteMany mode and jobs created for high-performance computing (HPC), media processing, content management, web services, big data analysis, and workload process analysis. diff --git a/umn/source/storage_management_flexvolume_deprecated/using_sfs_turbo_file_systems_as_storage_volumes/index.rst b/umn/source/storage_management_flexvolume_deprecated/using_sfs_turbo_file_systems_as_storage_volumes/index.rst deleted file mode 100644 index ac171ac..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_sfs_turbo_file_systems_as_storage_volumes/index.rst +++ /dev/null @@ -1,20 +0,0 @@ -:original_name: cce_10_0329.html - -.. _cce_10_0329: - -Using SFS Turbo File Systems as Storage Volumes =============================================== - -- :ref:`Overview ` -- :ref:`(kubectl) Creating a PV from an Existing SFS Turbo File System ` -- :ref:`(kubectl) Creating a Deployment Mounted with an SFS Turbo Volume ` -- :ref:`(kubectl) Creating a StatefulSet Mounted with an SFS Turbo Volume ` - -.. toctree:: - :maxdepth: 1 - :hidden: - - overview - kubectl_creating_a_pv_from_an_existing_sfs_turbo_file_system - kubectl_creating_a_deployment_mounted_with_an_sfs_turbo_volume - kubectl_creating_a_statefulset_mounted_with_an_sfs_turbo_volume diff --git a/umn/source/storage_management_flexvolume_deprecated/using_sfs_turbo_file_systems_as_storage_volumes/kubectl_creating_a_deployment_mounted_with_an_sfs_turbo_volume.rst b/umn/source/storage_management_flexvolume_deprecated/using_sfs_turbo_file_systems_as_storage_volumes/kubectl_creating_a_deployment_mounted_with_an_sfs_turbo_volume.rst deleted file mode 100644 index e9355c9..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_sfs_turbo_file_systems_as_storage_volumes/kubectl_creating_a_deployment_mounted_with_an_sfs_turbo_volume.rst +++ /dev/null @@ -1,82 +0,0 @@ -:original_name: cce_10_0333.html - - ..
_cce_10_0333: - -(kubectl) Creating a Deployment Mounted with an SFS Turbo Volume -================================================================ - -Scenario --------- - -After an SFS Turbo volume is created or imported to CCE, you can mount the volume to a workload. - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.13 or earlier. - -Procedure ---------- - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Run the following commands to configure the **efs-deployment-example.yaml** file, which is used to create a Deployment: - - **touch efs-deployment-example.yaml** - - **vi efs-deployment-example.yaml** - - Example of mounting an SFS Turbo volume to a Deployment (PVC-based, shared volume): - - .. code-block:: - - apiVersion: apps/v1 - kind: Deployment - metadata: - name: efs-deployment-example # Workload name - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: efs-deployment-example - template: - metadata: - labels: - app: efs-deployment-example - spec: - containers: - - image: nginx - name: container-0 - volumeMounts: - - mountPath: /tmp # Mount path - name: pvc-efs-example - restartPolicy: Always - imagePullSecrets: - - name: default-secret - volumes: - - name: pvc-efs-example - persistentVolumeClaim: - claimName: pvc-sfs-auto-example # PVC name - - .. table:: **Table 1** Key parameters - - +-----------+---------------------------------------------------------------------------+ - | Parameter | Description | - +===========+===========================================================================+ - | name | Name of the created Deployment. | - +-----------+---------------------------------------------------------------------------+ - | app | Name of the application running in the Deployment. | - +-----------+---------------------------------------------------------------------------+ - | mountPath | Mount path in the container. In this example, the mount path is **/tmp**. | - +-----------+---------------------------------------------------------------------------+ - - .. note:: - - **spec.template.spec.containers.volumeMounts.name** and **spec.template.spec.volumes.name** must be consistent because they have a mapping relationship. - -#. Run the following command to create the pod: - - **kubectl create -f efs-deployment-example.yaml** - - After the creation is complete, choose **Storage** > **SFS Turbo** on the CCE console and click the PVC name. On the PVC details page, you can view the binding relationship between SFS Turbo and PVC. diff --git a/umn/source/storage_management_flexvolume_deprecated/using_sfs_turbo_file_systems_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_sfs_turbo_file_system.rst b/umn/source/storage_management_flexvolume_deprecated/using_sfs_turbo_file_systems_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_sfs_turbo_file_system.rst deleted file mode 100644 index 7892ff7..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_sfs_turbo_file_systems_as_storage_volumes/kubectl_creating_a_pv_from_an_existing_sfs_turbo_file_system.rst +++ /dev/null @@ -1,129 +0,0 @@ -:original_name: cce_10_0332.html - -.. _cce_10_0332: - -(kubectl) Creating a PV from an Existing SFS Turbo File System -============================================================== - -Scenario --------- - -CCE allows you to use an existing SFS Turbo file system to create a PersistentVolume (PV). 
After the creation is successful, you can create a PersistentVolumeClaim (PVC) and bind it to the PV. - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.13 or earlier. - -Procedure ---------- - -#. Log in to the SFS console, create a file system, and record the file system ID, shared path, and capacity. - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Create two YAML files for creating the PV and PVC. Assume that the file names are **pv-efs-example.yaml** and **pvc-efs-example.yaml**. - - **touch pv-efs-example.yaml** **pvc-efs-example.yaml** - - - **Example YAML file for the PV:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolume - metadata: - name: pv-efs-example - annotations: - pv.kubernetes.io/provisioned-by: flexvolume-huawei.com/fuxiefs - spec: - accessModes: - - ReadWriteMany - capacity: - storage: 100Gi - claimRef: - apiVersion: v1 - kind: PersistentVolumeClaim - name: pvc-efs-example - namespace: default - flexVolume: - driver: huawei.com/fuxiefs - fsType: efs - options: - deviceMountPath: # Shared storage path of your SFS Turbo file. - fsType: efs - volumeID: 8962a2a2-a583-4b7f-bb74-fe76712d8414 - persistentVolumeReclaimPolicy: Delete - storageClassName: efs-standard - - .. table:: **Table 1** Key parameters - - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+=======================================================================================================================================================================================================+ - | driver | Storage driver used to mount the volume. Set it to **huawei.com/fuxiefs**. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | deviceMountPath | Shared path of the SFS Turbo volume. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeID | SFS Turbo volume ID. | - | | | - | | To obtain the ID, log in to the CCE console, choose **Resource Management** > **Storage**, click the PVC name in the **SFS Turbo** tab page, and copy the PVC ID on the PVC details page. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | File system size. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storageClassName | Volume type supported by SFS Turbo. The value can be **efs-standard** and **efs-performance**. Currently, SFS Turbo does not support dynamic creation; therefore, this parameter is not used for now. 
| - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | spec.claimRef.apiVersion | The value is fixed at **v1**. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | spec.claimRef.kind | The value is fixed at **PersistentVolumeClaim**. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | spec.claimRef.name | The value is the same as the name of the PVC created in the next step. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | spec.claimRef.namespace | The value is the same as the namespace of the PVC created in the next step. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - - - **Example YAML file for the PVC:** - - .. code-block:: - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - annotations: - volume.beta.kubernetes.io/storage-class: efs-standard - volume.beta.kubernetes.io/storage-provisioner: flexvolume-huawei.com/fuxiefs - name: pvc-efs-example - namespace: default - spec: - accessModes: - - ReadWriteMany - resources: - requests: - storage: 100Gi - volumeName: pv-efs-example - - .. table:: **Table 2** Key parameters - - +-----------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===============================================+==========================================================================================================================================================+ - | volume.beta.kubernetes.io/storage-class | Read/write mode supported by SFS Turbo. The value can be **efs-standard** or **efs-performance**. The value must be the same as that of the existing PV. | - +-----------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volume.beta.kubernetes.io/storage-provisioner | The field must be set to **flexvolume-huawei.com/fuxiefs**. | - +-----------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+ - | storage | Storage capacity, in the unit of Gi. The value must be the same as the storage size of the existing PV. | - +-----------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+ - | volumeName | Name of the PV. 
| - +-----------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+ - - .. note:: - - The VPC to which the SFS Turbo file system belongs must be the same as the VPC of the ECS VM planned for the workload. Ports 111, 445, 2049, 2051, and 20048 must be enabled in the security groups. - -#. Create the PV. - - **kubectl create -f pv-efs-example.yaml** - -#. Create the PVC. - - **kubectl create -f pvc-efs-example.yaml** diff --git a/umn/source/storage_management_flexvolume_deprecated/using_sfs_turbo_file_systems_as_storage_volumes/kubectl_creating_a_statefulset_mounted_with_an_sfs_turbo_volume.rst b/umn/source/storage_management_flexvolume_deprecated/using_sfs_turbo_file_systems_as_storage_volumes/kubectl_creating_a_statefulset_mounted_with_an_sfs_turbo_volume.rst deleted file mode 100644 index 7d3a0fc..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_sfs_turbo_file_systems_as_storage_volumes/kubectl_creating_a_statefulset_mounted_with_an_sfs_turbo_volume.rst +++ /dev/null @@ -1,117 +0,0 @@ -:original_name: cce_10_0334.html - -.. _cce_10_0334: - -(kubectl) Creating a StatefulSet Mounted with an SFS Turbo Volume -================================================================= - -Scenario --------- - -CCE allows you to use an existing SFS Turbo volume to create a StatefulSet. - -Notes and Constraints ---------------------- - -The following configuration example applies to clusters of Kubernetes 1.13 or earlier. - -Procedure ---------- - -#. Create an SFS Turbo volume and record the volume name. - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Create a YAML file for creating the workload. Assume that the file name is **efs-statefulset-example.yaml**. - - **touch efs-statefulset-example.yaml** - - **vi efs-statefulset-example.yaml** - - **Example YAML:** - - .. code-block:: - - apiVersion: apps/v1 - kind: StatefulSet - metadata: - name: efs-statefulset-example - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: efs-statefulset-example - template: - metadata: - annotations: - metrics.alpha.kubernetes.io/custom-endpoints: '[{"api":"","path":"","port":"","names":""}]' - pod.alpha.kubernetes.io/initialized: 'true' - labels: - app: efs-statefulset-example - spec: - containers: - - image: 'nginx:1.0.0' - name: container-0 - resources: - requests: {} - limits: {} - env: - - name: PAAS_APP_NAME - value: efs-statefulset-example - - name: PAAS_NAMESPACE - value: default - - name: PAAS_PROJECT_ID - value: b18296881cc34f929baa8b9e95abf88b - volumeMounts: - - name: efs-statefulset-example - mountPath: /tmp - readOnly: false - subPath: '' - imagePullSecrets: - - name: default-secret - terminationGracePeriodSeconds: 30 - volumes: - - persistentVolumeClaim: - claimName: cce-efs-import-jnr481gm-3y5o - name: efs-statefulset-example - affinity: {} - tolerations: - - key: node.kubernetes.io/not-ready - operator: Exists - effect: NoExecute - tolerationSeconds: 300 - - key: node.kubernetes.io/unreachable - operator: Exists - effect: NoExecute - tolerationSeconds: 300 - podManagementPolicy: OrderedReady - serviceName: test - updateStrategy: - type: RollingUpdate - - .. 
table:: **Table 1** Key parameters - - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +=============+====================================================================================================================================+ - | replicas | Number of pods. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | name | Name of the created workload. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | image | Image used by the workload. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | mountPath | Mount path in the container. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | serviceName | Service corresponding to the workload. For details about how to create a Service, see :ref:`Creating a StatefulSet `. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - | claimName | Name of an existing PVC. | - +-------------+------------------------------------------------------------------------------------------------------------------------------------+ - - .. note:: - - **spec.template.spec.containers.volumeMounts.name** and **spec.template.spec.volumes.name** must be consistent because they have a mapping relationship. - -#. Create the StatefulSet. - - **kubectl create -f efs-statefulset-example.yaml** diff --git a/umn/source/storage_management_flexvolume_deprecated/using_sfs_turbo_file_systems_as_storage_volumes/overview.rst b/umn/source/storage_management_flexvolume_deprecated/using_sfs_turbo_file_systems_as_storage_volumes/overview.rst deleted file mode 100644 index a99cb85..0000000 --- a/umn/source/storage_management_flexvolume_deprecated/using_sfs_turbo_file_systems_as_storage_volumes/overview.rst +++ /dev/null @@ -1,23 +0,0 @@ -:original_name: cce_10_0330.html - -.. _cce_10_0330: - -Overview -======== - -CCE allows you to mount a volume created from an SFS Turbo file system to a container to store data persistently. Provisioned on demand and fast, SFS Turbo is suitable for DevOps, container microservices, and enterprise OA scenarios. - - -.. figure:: /_static/images/en-us_image_0000001568902669.png - :alt: **Figure 1** Mounting SFS Turbo volumes to CCE - - **Figure 1** Mounting SFS Turbo volumes to CCE - -Description ------------ - -- **Standard file protocols**: You can mount file systems as volumes to servers, the same as using local directories. -- **Data sharing**: The same file system can be mounted to multiple servers, so that data can be shared. -- **Private network**: User can access data only in private networks of data centers. -- **Data isolation**: The on-cloud storage service provides exclusive cloud file storage, which delivers data isolation and ensures IOPS performance. 
-- **Use cases**: Deployments/StatefulSets in the ReadWriteMany mode, DaemonSets, and jobs created for high-traffic websites, log storage, DevOps, and enterprise OA applications diff --git a/umn/source/workloads/accessing_a_container.rst b/umn/source/workloads/accessing_a_container.rst index 6fd23a8..b0327ba 100644 --- a/umn/source/workloads/accessing_a_container.rst +++ b/umn/source/workloads/accessing_a_container.rst @@ -8,7 +8,7 @@ Accessing a Container Scenario -------- -If you encounter unexpected problems when using a container, you can log in to the container for debugging. +If you encounter unexpected problems when using a container, you can log in to the container to debug it. Logging In to a Container Using kubectl --------------------------------------- @@ -28,7 +28,7 @@ Logging In to a Container Using kubectl NAME READY STATUS RESTARTS AGE nginx-59d89cb66f-mhljr 1/1 Running 0 11m -#. Query the name of the container in the pod. +#. Query the container name in the pod. .. code-block:: @@ -40,7 +40,7 @@ Logging In to a Container Using kubectl container-1 -#. Run the following command to log in to the container named **container-1** in **nginx-59d89cb66f-mhljrPod**: +#. Run the following command to log in to the **container-1** container in the **nginx-59d89cb66f-mhljr** pod: .. code-block:: diff --git a/umn/source/workloads/configuring_a_container/configuring_an_image_pull_policy.rst b/umn/source/workloads/configuring_a_container/configuring_an_image_pull_policy.rst index 0eb5389..101f23c 100644 --- a/umn/source/workloads/configuring_a_container/configuring_an_image_pull_policy.rst +++ b/umn/source/workloads/configuring_a_container/configuring_an_image_pull_policy.rst @@ -32,7 +32,7 @@ The image pull policy can also be set to **Always**, indicating that the image i imagePullSecrets: - name: default-secret -You can also set the image pull policy when creating a workload on the CCE console. As shown in the following figure, if you select **Always**, the image is always pulled. If you do not select it, the policy will be **IfNotPresent**, which means that the image is not pulled. +You can also set the image pull policy when creating a workload on the CCE console. If you select **Always**, the image is always pulled. If you do not select it, the policy will be **IfNotPresent**, which means that the image is not pulled. .. important:: diff --git a/umn/source/workloads/configuring_a_container/configuring_the_workload_upgrade_policy.rst b/umn/source/workloads/configuring_a_container/configuring_the_workload_upgrade_policy.rst index 9894d82..fbb088d 100644 --- a/umn/source/workloads/configuring_a_container/configuring_the_workload_upgrade_policy.rst +++ b/umn/source/workloads/configuring_a_container/configuring_the_workload_upgrade_policy.rst @@ -15,35 +15,27 @@ You can set different upgrade policies: Upgrade Parameters ------------------ -- **Max. Surge** (maxSurge) - - Specifies the maximum number of pods that can exist over **spec.replicas**. The default value is 25%. For example, if **spec.replicas** is set to **4**, no more than 5 pods can exist during the upgrade process, that is, the upgrade step is 1. The absolute number is calculated from the percentage by rounding up. The value can also be set to an absolute number. - - This parameter is supported only by Deployments. - -- **Max. Unavailable Pods** (maxUnavailable) - - Specifies the maximum number of pods that can be unavailable during the update process. The default value is 25%. 
For example, if **spec.replicas** is set to **4**, at least 3 pods exist during the upgrade process, that is, the deletion step is 1. The value can also be set to an absolute number. - - This parameter is supported only by Deployments. - -- **Min. Ready Seconds** (minReadySeconds) - - A pod is considered available only when the minimum readiness time is exceeded without any of its containers crashing. The default value is **0** (the pod is considered available immediately after it is ready). - -- **Revision History Limit** (revisionHistoryLimit) - - Specifies the number of old ReplicaSets to retain to allow rollback. These old ReplicaSets consume resources in etcd and crowd the output of **kubectl get rs**. The configuration of each Deployment revision is stored in its ReplicaSets. Therefore, once the old ReplicaSet is deleted, you lose the ability to roll back to that revision of Deployment. By default, 10 old ReplicaSets will be kept, but the ideal value depends on the frequency and stability of the new Deployments. - -- **Max. Upgrade Duration** (progressDeadlineSeconds) - - Specifies the number of seconds that the system waits for a Deployment to make progress before reporting a Deployment progress failure. It is surfaced as a condition with Type=Progressing, Status=False, and Reason=ProgressDeadlineExceeded in the status of the resource. The Deployment controller will keep retrying the Deployment. In the future, once automatic rollback will be implemented, the Deployment controller will roll back a Deployment as soon as it observes such a condition. - - If this parameter is specified, the value of this parameter must be greater than that of **.spec.minReadySeconds**. - -- **Scale-In Time Window** (terminationGracePeriodSeconds) - - Graceful deletion time. The default value is 30 seconds. When a pod is deleted, a SIGTERM signal is sent and the system waits for the applications in the container to terminate. If the application is not terminated within the time specified by **terminationGracePeriodSeconds**, a SIGKILL signal is sent to forcibly terminate the pod. ++------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------+ +| Parameter | Description | Constraint | ++======================================================+=======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================+=================================================================+ +| Max. Surge (maxSurge) | Specifies the maximum number of pods that can exist compared with **spec.replicas**. The default value is **25%**. 
| This parameter is supported only by Deployments and DaemonSets. | +| | | | +| | For example, if **spec.replicas** is set to **4**, a maximum of five pods can exist during the upgrade. That is, the upgrade is performed at a step of 1. During the actual upgrade, the value is converted into a number and rounded up. The value can also be set to an absolute number. | | ++------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------+ +| Max. Unavailable Pods (maxUnavailable) | Specifies the maximum number of pods that can be unavailable compared with **spec.replicas**. The default value is **25%** | This parameter is supported only by Deployments and DaemonSets. | +| | | | +| | For example, if **spec.replicas** is set to **4**, at least three pods exist during the upgrade. That is, the deletion is performed at a step of 1. The value can also be set to an absolute number. | | ++------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------+ +| **Min. Ready Seconds** (minReadySeconds) | A pod is considered available only when the minimum readiness time is exceeded without any of its containers crashing. The default value is **0** (the pod is considered available immediately after it is ready). | None | ++------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------+ +| Revision History Limit (revisionHistoryLimit) | Specifies the number of old ReplicaSets to retain to allow rollback. These old ReplicaSets consume resources in etcd and crowd the output of **kubectl get rs**. The configuration of each Deployment revision is stored in its ReplicaSets. Therefore, once the old ReplicaSet is deleted, you lose the ability to roll back to that revision of Deployment. By default, 10 old ReplicaSets will be kept, but the ideal value depends on the frequency and stability of the new Deployments. 
| None | ++------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------+ +| Max. Upgrade Duration (progressDeadlineSeconds) | Specifies the number of seconds that the system waits for a Deployment to make progress before reporting a Deployment progress failure. It is surfaced as a condition with Type=Progressing, Status=False, and Reason=ProgressDeadlineExceeded in the status of the resource. The Deployment controller will keep retrying the Deployment. In the future, once automatic rollback will be implemented, the Deployment controller will roll back a Deployment as soon as it observes such a condition. | None | +| | | | +| | If this parameter is specified, the value of this parameter must be greater than that of **.spec.minReadySeconds**. | | ++------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------+ +| Scale-In Time Window (terminationGracePeriodSeconds) | Graceful deletion time. The default value is 30 seconds. When a pod is deleted, a SIGTERM signal is sent and the system waits for the applications in the container to terminate. If the application is not terminated within the time specified by **terminationGracePeriodSeconds**, a SIGKILL signal is sent to forcibly terminate the pod. 
| None | ++------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------+ Upgrade Example --------------- diff --git a/umn/source/workloads/configuring_a_container/index.rst b/umn/source/workloads/configuring_a_container/index.rst index 8578dea..3eea92b 100644 --- a/umn/source/workloads/configuring_a_container/index.rst +++ b/umn/source/workloads/configuring_a_container/index.rst @@ -5,30 +5,30 @@ Configuring a Container ======================= -- :ref:`Setting Basic Container Information ` -- :ref:`Using a Third-Party Image ` +- :ref:`Configuring Time Zone Synchronization ` +- :ref:`Configuring an Image Pull Policy ` +- :ref:`Using Third-Party Images ` - :ref:`Setting Container Specifications ` - :ref:`Setting Container Lifecycle Parameters ` - :ref:`Setting Health Check for a Container ` - :ref:`Setting an Environment Variable ` -- :ref:`Enabling ICMP Security Group Rules ` -- :ref:`Configuring an Image Pull Policy ` -- :ref:`Configuring Time Zone Synchronization ` - :ref:`Configuring the Workload Upgrade Policy ` - :ref:`Scheduling Policy (Affinity/Anti-affinity) ` +- :ref:`Taints and Tolerations ` +- :ref:`Labels and Annotations ` .. toctree:: :maxdepth: 1 :hidden: - setting_basic_container_information - using_a_third-party_image + configuring_time_zone_synchronization + configuring_an_image_pull_policy + using_third-party_images setting_container_specifications setting_container_lifecycle_parameters setting_health_check_for_a_container setting_an_environment_variable - enabling_icmp_security_group_rules - configuring_an_image_pull_policy - configuring_time_zone_synchronization configuring_the_workload_upgrade_policy scheduling_policy_affinity_anti-affinity + taints_and_tolerations + labels_and_annotations diff --git a/umn/source/workloads/pod_labels_and_annotations.rst b/umn/source/workloads/configuring_a_container/labels_and_annotations.rst similarity index 76% rename from umn/source/workloads/pod_labels_and_annotations.rst rename to umn/source/workloads/configuring_a_container/labels_and_annotations.rst index 47233eb..fe209db 100644 --- a/umn/source/workloads/pod_labels_and_annotations.rst +++ b/umn/source/workloads/configuring_a_container/labels_and_annotations.rst @@ -2,14 +2,16 @@ .. _cce_10_0386: -Pod Labels and Annotations -========================== +Labels and Annotations +====================== Pod Annotations --------------- CCE allows you to add annotations to a YAML file to realize some advanced pod functions. The following table describes the annotations you can add. +.. _cce_10_0386__table194691458405: + .. 
table:: **Table 1** Pod annotations +----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ @@ -21,15 +23,19 @@ CCE allows you to add annotations to a YAML file to realize some advanced pod fu | | | | | | - Collecting none of the stdout logs: | | | | | | - | | kubernetes.AOM.log.stdout: '[]' | | + | | .. code-block:: | | + | | | | + | | kubernetes.AOM.log.stdout: '[]' | | | | | | | | - Collecting stdout logs of container-1 and container-2: | | | | | | - | | kubernetes.AOM.log.stdout: '["container-1","container-2"]' | | + | | .. code-block:: | | + | | | | + | | kubernetes.AOM.log.stdout: '["container-1","container-2"]' | | +----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ | metrics.alpha.kubernetes.io/custom-endpoints | Parameter for reporting AOM monitoring metrics that you specify. | None | | | | | - | | For details, see :ref:`Custom Monitoring `. | | + | | For details, see :ref:`Monitoring Custom Metrics on AOM `. | | +----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ | kubernetes.io/ingress-bandwidth | Ingress bandwidth of a pod. | None | | | | | @@ -43,9 +49,11 @@ CCE allows you to add annotations to a YAML file to realize some advanced pod fu Pod Labels ---------- -When you create a workload on the console, the following labels are added to the pod by default. The value of **app** is the workload name. You can add labels as required. +When you create a workload on the console, the following labels are added to the pod by default. The value of **app** is the workload name. -The pod labels added here will be added to the **selector.matchLabels** parameter in the workload definition. The following is an example YAML file: +|image1| + +Example YAML: .. code-block:: @@ -62,3 +70,19 @@ The pod labels added here will be added to the **selector.matchLabels** paramete version: v1 spec: ... + +You can also add other labels to the pod for affinity and anti-affinity scheduling. In the following figure, three pod labels (release, env, and role) are defined for workload APP 1, APP 2, and APP 3. The values of these labels vary with workload. + +- APP 1: [release:alpha;env:development;role:frontend] +- APP 2: [release:beta;env:testing;role:frontend] +- APP 3: [release:alpha;env:production;role:backend] + + +.. figure:: /_static/images/en-us_image_0000001647417504.png + :alt: **Figure 1** Label example + + **Figure 1** Label example + +For example, if **key/value** is set to **role/backend**, APP 3 will be selected for affinity scheduling. For details, see :ref:`Workload Affinity (podAffinity) `. + +.. 
|image1| image:: /_static/images/en-us_image_0000001715625689.png diff --git a/umn/source/workloads/configuring_a_container/scheduling_policy_affinity_anti-affinity.rst b/umn/source/workloads/configuring_a_container/scheduling_policy_affinity_anti-affinity.rst index eff8a90..9c1ed40 100644 --- a/umn/source/workloads/configuring_a_container/scheduling_policy_affinity_anti-affinity.rst +++ b/umn/source/workloads/configuring_a_container/scheduling_policy_affinity_anti-affinity.rst @@ -5,14 +5,85 @@ Scheduling Policy (Affinity/Anti-affinity) ========================================== -A nodeSelector provides a very simple way to constrain pods to nodes with particular labels, as mentioned in :ref:`Creating a DaemonSet `. The affinity and anti-affinity feature greatly expands the types of constraints you can express. +Kubernetes supports node affinity and pod affinity/anti-affinity. You can configure custom rules to achieve affinity and anti-affinity scheduling. For example, you can deploy frontend pods and backend pods together, deploy the same type of applications on a specific node, or deploy different applications on different nodes. -Kubernetes supports node-level and pod-level affinity and anti-affinity. You can configure custom rules to achieve affinity and anti-affinity scheduling. For example, you can deploy frontend pods and backend pods together, deploy the same type of applications on a specific node, or deploy different applications on different nodes. +Kubernetes affinity applies to nodes and pods. + +- :ref:`nodeAffinity `: similar to pod nodeSelector, and they both schedule pods only to the nodes with specified labels. The difference between nodeAffinity and nodeSelector lies in that nodeAffinity features stronger expression than nodeSelector and allows you to specify preferentially selected soft constraints. The two types of node affinity are as follows: + + - requiredDuringSchedulingIgnoredDuringExecution: hard constraint that **must be met**. The scheduler can perform scheduling only when the rule is met. This function is similar to nodeSelector, but it features stronger syntax expression. For details, see :ref:`Node Affinity (nodeAffinity) `. + - preferredDuringSchedulingIgnoredDuringExecution: soft constraint that is **met as much as possible**. The scheduler attempts to find the node that meets the rule. If no matching node is found, the scheduler still schedules the pod. For details, see :ref:`Node Preference Rule `. + +- :ref:`Workload Affinity (podAffinity) `/:ref:`Workload Anti-affinity (podAntiAffinity) `: The nodes to which a pod can be scheduled are determined based on the label of the pod running on a node, but not the label of the node. Similar to node affinity, workload affinity and anti-affinity are also of requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution types. + + .. note:: + + Workload affinity and anti-affinity require a certain amount of computing time, which significantly slows down scheduling in large-scale clusters. Do not enable workload affinity and anti-affinity in a cluster that contains hundreds of nodes. + +You can create the preceding affinity policies on the console. For details, see :ref:`Configuring Scheduling Policies `. + +.. _cce_10_0232__section182211754174317: + +Configuring Scheduling Policies +------------------------------- + +#. Log in to the CCE console. + +#. When creating a workload, click **Scheduling** in the **Advanced Settings** area. + + .. 
table:: **Table 1** Node affinity settings + + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+===========================================================================================================================================================+ + | Required | Hard constraint, which corresponds to requiredDuringSchedulingIgnoredDuringExecution for specifying the conditions that must be met. | + | | | + | | If multiple rules **that must be met** are added, scheduling will be performed when only one rule is met. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Preferred | Soft constraint, which corresponds to preferredDuringSchedulingIgnoredDuringExecution for specifying the conditions that must be met as many as possible. | + | | | + | | If multiple rules **that must be met as much as possible** are added, scheduling will be performed even if one or none of the rules is met. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------+ + +#. Click |image1| under **Node Affinity**, **Workload Affinity**, or **Workload Anti-Affinity** to add scheduling policies. In the dialog box that is displayed, directly add policies. Alternatively, you can specify nodes or AZs to be scheduled on the console. + + Specifying nodes and AZs is also implemented through labels. The console frees you from manually entering node labels. The **kubernetes.io/hostname** label is used when you specify a node, and the **failure-domain.beta.kubernetes.io/zone** label is used when you specify an AZ. + + .. table:: **Table 2** Parameters for configuring the scheduling policy + + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=========================================================================================================================================+ + | Label | Node label. You can use the default label or customize a label. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ + | Operator | The following relations are supported: **In**, **NotIn**, **Exists**, **DoesNotExist**, **Gt**, and **Lt**. | + | | | + | | - **In**: The label of the affinity or anti-affinity object is in the label value list (**values** field). | + | | - **NotIn**: The label of the affinity or anti-affinity object is not in the label value list (**values** field). | + | | - **Exists**: The affinity or anti-affinity object has a specified label name. | + | | - **DoesNotExist**: The affinity or anti-affinity object does not have the specified label name. | + | | - **Gt**: (available only for node affinity) The label value of the scheduled node is greater than the list value (string comparison). 
| + | | - **Lt**: (available only for node affinity) The label value of the scheduling node is less than the list value (string comparison). | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ + | Label Value | Label value. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ + | Namespace | This parameter is available only in a workload affinity or anti-affinity scheduling policy. | + | | | + | | Namespace for which the scheduling policy takes effect. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ + | Topology Key | This parameter can be used only in a workload affinity or anti-affinity scheduling policy. | + | | | + | | Select the scope specified by **topologyKey** and then select the content defined by the policy. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ + | Weight | This parameter can be set only in a **Preferred** scheduling policy. | + +-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------+ + +.. _cce_10_0232__section1665272918139: Node Affinity (nodeAffinity) ---------------------------- -Labels are the basis of affinity rules. Check the labels on the nodes in a cluster. +Workload node affinity rules are implemented using node labels. When a node is created in a CCE cluster, certain labels are automatically added. You can run the **kubectl describe node** command to view the labels. The following is an example: .. code-block:: @@ -34,13 +105,13 @@ Labels are the basis of affinity rules. Check the labels on the nodes in a clust os.name=EulerOS_2.0_SP5 os.version=3.10.0-862.14.1.5.h328.eulerosv2r7.x86_64 -These labels are automatically added by CCE during node creation. The following describes a few that are frequently used during scheduling. +In workload scheduling, common node labels are as follows: - **failure-domain.beta.kubernetes.io/region**: region where the node is located. - **failure-domain.beta.kubernetes.io/zone**: availability zone to which the node belongs. - **kubernetes.io/hostname**: host name of the node. -When you deploy pods, you can use a nodeSelector, as described in :ref:`DaemonSet `, to constrain pods to nodes with specific labels. The following example shows how to use a nodeSelector to deploy pods only on the nodes with the **gpu=true** label. +Kubernetes provides the **nodeSelector** field. When creating a workload, you can set this field to specify that the pod can be deployed only on a node with the specific label. The following example shows how to use a nodeSelector to deploy the pod only on the node with the **gpu=true** label. .. code-block:: @@ -53,7 +124,16 @@ When you deploy pods, you can use a nodeSelector, as described in :ref:`DaemonSe gpu: true ... -Node affinity rules can achieve the same results, as shown in the following example. +Node affinity rules can achieve the same results. Compared with nodeSelector, node affinity rules seem more complex, but with a more expressive syntax. 
You can use the **spec.affinity.nodeAffinity** field to set node affinity. There are two types of node affinity: + +- **requiredDuringSchedulingIgnoredDuringExecution**: Kubernetes cannot schedule the pod unless the rule is met. +- **PreferredDuringSchedulingIgnoredDuringExecution**: Kubernetes tries to find a node that meets the rule. If a matching node is not available, Kubernetes still schedules the pod. + +.. note:: + + In these two types of node affinity, **requiredDuringScheduling** or **preferredDuringScheduling** indicates that the pod can be scheduled to a node only when all the defined rules are met (required). **IgnoredDuringExecution** indicates that if the node label changes after Kubernetes schedules the pod, the pod continues to run and will not be rescheduled. + +The following is an example of setting node affinity: .. code-block:: @@ -95,22 +175,7 @@ Node affinity rules can achieve the same results, as shown in the following exam values: - "true" -Even though the node affinity rule requires more lines, it is more expressive, which will be further described later. - -**requiredDuringSchedulingIgnoredDuringExecution** seems to be complex, but it can be easily understood as a combination of two parts. - -- requiredDuringScheduling indicates that pods can be scheduled to the node only when all the defined rules are met (required). -- IgnoredDuringExecution indicates that pods already running on the node do not need to meet the defined rules. That is, a label on the node is ignored, and pods that require the node to contain that label will not be re-scheduled. - -In addition, the value of **operator** is **In**, indicating that the label value must be in the values list. Other available operator values are as follows: - -- **NotIn**: The label value is not in a list. -- **Exists**: A specific label exists. -- **DoesNotExist**: A specific label does not exist. -- **Gt**: The label value is greater than a specified value (string comparison). -- **Lt**: The label value is less than a specified value (string comparison). - -Note that there is no such thing as nodeAntiAffinity because operators **NotIn** and **DoesNotExist** provide the same function. +In this example, the scheduled node must contain a label with the key named **gpu**. The value of **operator** is to **In**, indicating that the label value must be in the **values** list. That is, the key value of the **gpu** label of the node is **true**. For details about other values of **operator**, see :ref:`Operator Value Description `. Note that there is no such thing as nodeAntiAffinity because operators **NotIn** and **DoesNotExist** provide the same function. The following describes how to check whether the rule takes effect. Assume that a cluster has three nodes. @@ -148,6 +213,8 @@ Create the Deployment. You can find that all pods are deployed on the **192.168. gpu-6df65c44cf-jzjvs 1/1 Running 0 15s 172.16.0.36 192.168.0.212 gpu-6df65c44cf-zv5cl 1/1 Running 0 15s 172.16.0.38 192.168.0.212 +.. _cce_10_0232__section168955237561: + Node Preference Rule -------------------- @@ -239,16 +306,22 @@ From the preceding output, you can find that no pods of the Deployment are sched In the preceding example, the node scheduling priority is as follows. Nodes with both **SSD** and **gpu=true** labels have the highest priority. Nodes with the **SSD** label but no **gpu=true** label have the second priority (weight: 80). Nodes with the **gpu=true** label but no **SSD** label have the third priority. 
Nodes without any of these two labels have the lowest priority. -.. figure:: /_static/images/en-us_image_0000001569022881.png +.. figure:: /_static/images/en-us_image_0000001695896365.png :alt: **Figure 1** Scheduling priority **Figure 1** Scheduling priority +.. _cce_10_0232__section3218151791419: + Workload Affinity (podAffinity) ------------------------------- Node affinity rules affect only the affinity between pods and nodes. Kubernetes also supports configuring inter-pod affinity rules. For example, the frontend and backend of an application can be deployed together on one node to reduce access latency. There are also two types of inter-pod affinity rules: **requiredDuringSchedulingIgnoredDuringExecution** and **preferredDuringSchedulingIgnoredDuringExecution**. +.. note:: + + For workload affinity, topologyKey cannot be left blank when requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution are used. + Assume that the backend of an application has been created and has the **app=backend** label. .. code-block:: @@ -314,7 +387,7 @@ Deploy the frontend and you can find that the frontend is deployed on the same n frontend-67ff9b7b97-hxm5t 1/1 Running 0 6s 172.16.0.71 192.168.0.100 frontend-67ff9b7b97-z8pdb 1/1 Running 0 6s 172.16.0.72 192.168.0.100 -The **topologyKey** field specifies the selection range. The scheduler selects nodes within the range based on the affinity rule defined. The effect of **topologyKey** is not fully demonstrated in the preceding example because all the nodes have the **kubernetes.io/hostname** label, that is, all the nodes are within the range. +The **topologyKey** field is used to divide topology domains to specify the selection range. If the label keys and values of nodes are the same, the nodes are considered to be in the same topology domain. Then, the contents defined in the following rules are selected. The effect of **topologyKey** is not fully demonstrated in the preceding example because all the nodes have the **kubernetes.io/hostname** label, that is, all the nodes are within the range. To see how **topologyKey** works, assume that the backend of the application has two pods, which are running on different nodes. @@ -341,7 +414,7 @@ Add the **prefer=true** label to nodes **192.168.0.97** and **192.168.0.94**. 192.168.0.94 Ready 91m v1.15.6-r1-20.3.0.2.B001-15.30.2 true 192.168.0.97 Ready 91m v1.15.6-r1-20.3.0.2.B001-15.30.2 true -Define **topologyKey** in the **podAffinity** section as **prefer**. The node topology domains are divided as shown in :ref:`Figure 2 `. +If the **topologyKey** of **podAffinity** is set to **prefer**, the node topology domains are divided as shown in :ref:`Figure 2 `. .. code-block:: @@ -358,12 +431,12 @@ Define **topologyKey** in the **podAffinity** section as **prefer**. The node to .. _cce_10_0232__fig511152614544: -.. figure:: /_static/images/en-us_image_0000001517903036.png - :alt: **Figure 2** Topology domain example +.. figure:: /_static/images/en-us_image_0000001647576692.png + :alt: **Figure 2** Topology domains - **Figure 2** Topology domain example + **Figure 2** Topology domains -During scheduling, node topology domains are divided based on the **prefer** label. In this example, **192.168.0.97** and **192.168.0.94** are divided into the same topology domain. If pods with the **app=backend** label run in **192.168.0.97**, all frontend pods are deployed in **192.168.0.97** or **192.168.0.94**. 
+During scheduling, node topology domains are divided based on the **prefer** label. In this example, **192.168.0.97** and **192.168.0.94** are divided into the same topology domain. If a pod with the **app=backend** label runs in the topology domain, even if not all nodes in the topology domain run the pod with the **app=backend** label (in this example, only the **192.168.0.97** node has such a pod), **frontend** is also deployed in this topology domain (**192.168.0.97** or **192.168.0.94**). .. code-block:: @@ -378,12 +451,18 @@ During scheduling, node topology domains are divided based on the **prefer** lab frontend-67ff9b7b97-hxm5t 1/1 Running 0 6s 172.16.0.71 192.168.0.97 frontend-67ff9b7b97-z8pdb 1/1 Running 0 6s 172.16.0.72 192.168.0.97 +.. _cce_10_0232__section59542620588: + Workload Anti-Affinity (podAntiAffinity) ---------------------------------------- Unlike the scenarios in which pods are preferred to be scheduled onto the same node, sometimes, it could be the exact opposite. For example, if certain pods are deployed together, they will affect the performance. -In the following example, an anti-affinity rule is defined. This rule indicates that node topology domains are divided based on the **kubernetes.io/hostname** label. If a pod with the **app=frontend** label already exists on a node in the topology domain, pods with the same label cannot be scheduled to other nodes in the topology domain. +.. note:: + + For workload anti-affinity, when requiredDuringSchedulingIgnoredDuringExecution is used, the default access controller LimitPodHardAntiAffinityTopology of Kubernetes requires that topologyKey can only be **kubernetes.io/hostname**. To use other custom topology logic, modify or disable the access controller. + +The following is an example of defining an anti-affinity rule. This rule divides node topology domains by the **kubernetes.io/hostname** label. If a pod with the **app=frontend** label already exists on a node in the topology domain, pods with the same label cannot be scheduled to other nodes in the topology domain. .. code-block:: @@ -418,7 +497,7 @@ In the following example, an anti-affinity rule is defined. This rule indicates affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - - topologyKey: kubernetes.io/hostname # Node topology domain + - topologyKey: kubernetes.io/hostname # Topology domain of the node labelSelector: # Pod label matching rule matchExpressions: - key: app @@ -426,7 +505,7 @@ In the following example, an anti-affinity rule is defined. This rule indicates values: - frontend -Create an anti-affinity rule and view the deployment result. In the example, node topology domains are divided by the **kubernetes.io/hostname** label. Among nodes with the **kubernetes.io/hostname** label, the label value of each node is different. Therefore, there is only one node in a topology domain. If a frontend pod already exists in a topology (a node in this example), the same pods will not be scheduled to this topology. In this example, there are only four nodes. Therefore, another pod is pending and cannot be scheduled. +Create an anti-affinity rule and view the deployment result. In the example, node topology domains are divided by the **kubernetes.io/hostname** label. The label values of nodes with the **kubernetes.io/hostname** label are different, so there is only one node in a topology domain. If a **frontend** pod already exists in a topology domain, pods with the same label will not be scheduled to the topology domain. 
In this example, there are only four nodes. Therefore, there is one pod which is in the **Pending** state and cannot be scheduled. .. code-block:: @@ -441,54 +520,18 @@ Create an anti-affinity rule and view the deployment result. In the example, nod frontend-6f686d8d87-q7cfq 1/1 Running 0 18s 172.16.0.47 192.168.0.212 frontend-6f686d8d87-xl8hx 1/1 Running 0 18s 172.16.0.23 192.168.0.94 -Configuring Scheduling Policies -------------------------------- +.. _cce_10_0232__section333404214910: -#. Log in to the CCE console. +Operator Value Description +-------------------------- -#. When creating a workload, click **Scheduling** in the **Advanced Settings** area. +You can use the **operator** field to set the logical relationship of the usage rule. The value of **operator** can be: - .. table:: **Table 1** Node affinity settings +- **In**: The label of the affinity or anti-affinity object is in the label value list (**values** field). +- **NotIn**: The label of the affinity or anti-affinity object is not in the label value list (**values** field). +- **Exists**: The affinity or anti-affinity object has a specified label name. +- **DoesNotExist**: The affinity or anti-affinity object does not have the specified label name. +- **Gt**: (available only for node affinity) The label value of the scheduled node is greater than the list value (string comparison). +- **Lt**: (available only for node affinity) The label value of the scheduling node is less than the list value (string comparison). - +-----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===========+===========================================================================================================================================================================================================================================================================+ - | Required | This is a hard rule that must be met for scheduling. It corresponds to **requiredDuringSchedulingIgnoredDuringExecution** in Kubernetes. Multiple required rules can be set, and scheduling will be performed if only one of them is met. | - +-----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Preferred | This is a soft rule specifying preferences that the scheduler will try to enforce but will not guarantee. It corresponds to **preferredDuringSchedulingIgnoredDuringExecution** in Kubernetes. Scheduling is performed when one rule is met or none of the rules are met. | - +-----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -#. Under **Node Affinity**, **Workload Affinity**, and **Workload Anti-Affinity**, click |image1| to add scheduling policies. In the dialog box displayed, add a policy directly or by specifying a node or an AZ. - - Specifying a node or an AZ is essentially implemented through labels. 
The **kubernetes.io/hostname** label is used when you specify a node, and the **failure-domain.beta.kubernetes.io/zone** label is used when you specify an AZ. - - .. table:: **Table 2** Scheduling policy configuration - - +-----------------------------------+------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+============================================================================================================+ - | Label | Node label. You can use the default label or customize a label. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------+ - | Operator | The following relations are supported: **In**, **NotIn**, **Exists**, **DoesNotExist**, **Gt**, and **Lt** | - | | | - | | - **In**: A label exists in the label list. | - | | - **NotIn**: A label does not exist in the label list. | - | | - **Exists**: A specific label exists. | - | | - **DoesNotExist**: A specific label does not exist. | - | | - **Gt**: The label value is greater than a specified value (string comparison). | - | | - **Lt**: The label value is less than a specified value (string comparison). | - +-----------------------------------+------------------------------------------------------------------------------------------------------------+ - | Label Value | Label value. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------+ - | Namespace | This parameter is available only in a workload affinity or anti-affinity scheduling policy. | - | | | - | | Namespace for which the scheduling policy takes effect. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------+ - | Topology Key | This parameter can be used only in a workload affinity or anti-affinity scheduling policy. | - | | | - | | Select the scope specified by **topologyKey** and then select the content defined by the policy. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------+ - | Weight | This parameter can be set only in a **Preferred** scheduling policy. | - +-----------------------------------+------------------------------------------------------------------------------------------------------------+ - -.. |image1| image:: /_static/images/en-us_image_0000001518062612.png +.. |image1| image:: /_static/images/en-us_image_0000001647576696.png diff --git a/umn/source/workloads/configuring_a_container/setting_an_environment_variable.rst b/umn/source/workloads/configuring_a_container/setting_an_environment_variable.rst index d2c21ee..3c33fec 100644 --- a/umn/source/workloads/configuring_a_container/setting_an_environment_variable.rst +++ b/umn/source/workloads/configuring_a_container/setting_an_environment_variable.rst @@ -20,22 +20,31 @@ The function of setting environment variables on CCE is the same as that of spec Environment variables can be set in the following modes: -- **Custom** -- **Added from ConfigMap**: Import all keys in a ConfigMap as environment variables. -- **Added from ConfigMap key**: Import a key in a ConfigMap as the value of an environment variable. 
For example, if you import **configmap_value** of **configmap_key** in a ConfigMap as the value of environment variable **key1**, an environment variable named **key1** with its value **is configmap_value** exists in the container. +- **Custom**: Enter the environment variable name and parameter value. +- **Added from ConfigMap key**: Import all keys in a ConfigMap as environment variables. +- **Added from ConfigMap**: Import a key in a ConfigMap as the value of an environment variable. As shown in :ref:`Figure 1 `, if you import **configmap_value** of **configmap_key** in a ConfigMap as the value of environment variable **key1**, an environment variable named **key1** whose value is **configmap_value** exists in the container. - **Added from secret**: Import all keys in a secret as environment variables. -- **Added from secret key**: Import the value of a key in a secret as the value of an environment variable. For example, if you import **secret_value** of **secret_key** in secret **secret-example** as the value of environment variable **key2**, an environment variable named **key2** with its value **secret_value** exists in the container. -- **Variable value/reference**: Use the field defined by a pod as the value of the environment variable, for example, the pod name. -- **Resource Reference**: Use the field defined by a container as the value of the environment variable, for example, the CPU limit of the container. +- **Added from secret key**: Import the value of a key in a secret as the value of an environment variable. As shown in :ref:`Figure 1 `, if you import **secret_value** of **secret_key** in secret **secret-example** as the value of environment variable **key2**, an environment variable named **key2** whose value is **secret_value** exists in the container. +- **Variable value/reference**: Use the field defined by a pod as the value of the environment variable. As shown in :ref:`Figure 1 `, if the pod name is imported as the value of environment variable **key3**, an environment variable named **key3** exists in the container and its value is the pod name. +- **Resource Reference**: The value of **Request** or **Limit** defined by the container is used as the value of the environment variable. As shown in :ref:`Figure 1 `, if you import the CPU limit of container-1 as the value of environment variable **key4**, an environment variable named **key4** exists in the container and its value is the CPU limit of container-1. Adding Environment Variables ---------------------------- -#. Log in to the CCE console. When creating a workload, select **Environment Variables** under **Container Settings**. +#. Log in to the CCE console. -#. Set environment variables. +#. Click the cluster name to go to the cluster console, choose **Workloads** in the navigation pane, and click the **Create Workload** in the upper right corner. - |image1| +#. When creating a workload, modify the container information in the **Container Settings** area and click the **Environment Variables** tab. + +#. Configure environment variables. + + .. _cce_10_0113__fig164568529317: + + .. figure:: /_static/images/en-us_image_0000001695896581.png + :alt: **Figure 1** Configuring environment variables + + **Figure 1** Configuring environment variables YAML Example ------------ @@ -138,5 +147,3 @@ The environment variables in the pod are as follows: key4=1 # limits.cpu defined by container1. The value is rounded up, in unit of cores. configmap_key=configmap_value # Added from ConfigMap. 
The key value in the original ConfigMap key is directly imported. secret_key=secret_value # Added from key. The key value in the original secret is directly imported. - -.. |image1| image:: /_static/images/en-us_image_0000001569022913.png diff --git a/umn/source/workloads/configuring_a_container/setting_basic_container_information.rst b/umn/source/workloads/configuring_a_container/setting_basic_container_information.rst deleted file mode 100644 index 57a934d..0000000 --- a/umn/source/workloads/configuring_a_container/setting_basic_container_information.rst +++ /dev/null @@ -1,46 +0,0 @@ -:original_name: cce_10_0396.html - -.. _cce_10_0396: - -Setting Basic Container Information -=================================== - -A workload is an abstract model of a group of pods. One pod can encapsulate one or more containers. You can click **Add Container** in the upper right corner to add multiple container images and set them separately. - -.. table:: **Table 1** Image parameters - - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Parameter | Description | - +===================================+=====================================================================================================================================================================================================================================================================================+ - | Container Name | Name the container. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Image Name | Click **Select Image** and select the image used by the container. | - | | | - | | If you need to use a third-party image, see :ref:`Using a Third-Party Image `. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Image Tag | Select the image tag to be deployed. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Pull Policy | Image update or pull policy. If you select **Always**, the image is pulled from the image repository each time. If you do not select **Always**, the existing image of the node is preferentially used. If the image does not exist, the image is pulled from the image repository. 
| - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | CPU Quota | - **Request**: minimum number of CPU cores required by a container. The default value is 0.25 cores. | - | | - **Limit**: maximum number of CPU cores available for a container. Do not leave **Limit** unspecified. Otherwise, intensive use of container resources will occur and your workload may exhibit unexpected behavior. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Memory Quota | - **Request**: minimum amount of memory required by a container. The default value is 512 MiB. | - | | - **Limit**: maximum amount of memory available for a container. When memory usage exceeds the specified memory limit, the container will be terminated. | - | | | - | | For more information about **Request** and **Limit**, see :ref:`Setting Container Specifications `. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | GPU Quota | It is configurable only when the cluster contains GPU nodes. | - | | | - | | - **All**: The GPU is not used. | - | | - **Dedicated**: GPU resources are exclusively used by the container. | - | | - **Shared**: percentage of GPU resources used by the container. For example, if this parameter is set to **10%**, the container uses 10% of GPU resources. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Privileged Container | Programs in a privileged container have certain privileges. | - | | | - | | If **Privileged Container** is enabled, the container is assigned privileges. For example, privileged containers can manipulate network devices on the host machine and modify kernel parameters. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Init Container | Indicates whether to use the container as an init container. | - | | | - | | An init container is a special container that run before app containers in a pod. For details, see `Init Container `__. 
| - +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/workloads/configuring_a_container/setting_container_lifecycle_parameters.rst b/umn/source/workloads/configuring_a_container/setting_container_lifecycle_parameters.rst index 1700061..cfd2795 100644 --- a/umn/source/workloads/configuring_a_container/setting_container_lifecycle_parameters.rst +++ b/umn/source/workloads/configuring_a_container/setting_container_lifecycle_parameters.rst @@ -51,7 +51,7 @@ If the commands and arguments used to run a container are set during application +===================================+=============================================================================================================================================+ | Command | Enter an executable command, for example, **/run/server**. | | | | - | | If there are multiple commands, separate them with spaces. If the command contains a space, you need to add a quotation mark (""). | + | | If there are multiple executable commands, write them in different lines. | | | | | | .. note:: | | | | @@ -92,7 +92,7 @@ Post-Start Processing | | | | | - **Path**: (optional) request URL. | | | - **Port**: (mandatory) request port. | - | | - **Host**: (optional) IP address of the request. The default value is the IP address of the node where the container resides. | + | | - **Host**: (optional) requested host IP address. The default value is the IP address of the pod. | +-----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ .. _cce_10_0105__section2334114473712: @@ -125,7 +125,7 @@ Pre-Stop Processing | | | | | - **Path**: (optional) request URL. | | | - **Port**: (mandatory) request port. | - | | - **Host**: (optional) IP address of the request. The default value is the IP address of the node where the container resides. | + | | - **Host**: (optional) requested host IP address. The default value is the IP address of the pod. 
| +-----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ Example YAML @@ -154,19 +154,19 @@ In the following configuration file, the **postStart** command is defined to run containers: - image: nginx command: - - sleep 3600 #Startup command + - sleep 3600 # Startup command imagePullPolicy: Always lifecycle: postStart: exec: command: - /bin/bash - - install.sh #Post-start command + - install.sh # Post-start command preStop: exec: command: - /bin/bash - - uninstall.sh #Pre-stop command + - uninstall.sh # Pre-stop command name: nginx imagePullSecrets: - name: default-secret diff --git a/umn/source/workloads/configuring_a_container/setting_container_specifications.rst b/umn/source/workloads/configuring_a_container/setting_container_specifications.rst index c78d517..f951f3d 100644 --- a/umn/source/workloads/configuring_a_container/setting_container_specifications.rst +++ b/umn/source/workloads/configuring_a_container/setting_container_specifications.rst @@ -8,15 +8,19 @@ Setting Container Specifications Scenario -------- -CCE allows you to set resource limits for added containers during workload creation. You can apply for and limit the CPU and memory quotas used by each pod in a workload. +CCE allows you to set resource requirements and limits, such as CPU and RAM, for added containers during workload creation. Kubernetes also allows using YAML to set requirements of other resource types. -Meanings --------- +Request and Limit +----------------- For **CPU** and **Memory**, the meanings of **Request** and **Limit** are as follows: -- **Request**: Schedules the pod to the node that meets the requirements for workload deployment. -- **Limit**: Limits the resources used by the workload. +- **Request**: The system schedules a pod to the node that meets the requirements for workload deployment based on the request value. +- **Limit**: The system limits the resources used by the workload based on the limit value. + +If a node has sufficient resources, the pod on this node can use more resources than requested, but no more than limited. + +For example, if you set the memory request of a container to 1 GiB and the limit value to 2 GiB, a pod is scheduled to a node with 8 GiB CPUs with no other pod running. In this case, the pod can use more than 1 GiB memory when the load is heavy, but the memory usage cannot exceed 2 GiB. If a process in a container attempts to use more than 2 GiB resources, the system kernel attempts to terminate the process. As a result, an out of memory (OOM) error occurs. .. note:: @@ -25,9 +29,9 @@ For **CPU** and **Memory**, the meanings of **Request** and **Limit** are as fol Configuration Description ------------------------- -In actual production services, the recommended ratio of **Request** to **Limit** is about 1:1.5. For some sensitive services, the recommended ratio is 1:1. If the **Request** is too small and the **Limit** is too large, node resources are overcommitted. During service peaks, the memory or CPU of a node may be used up. As a result, the node is unavailable. +In real-world scenarios, the recommended ratio of **Request** to **Limit** is about 1:1.5. For some sensitive services, the recommended ratio is 1:1. 
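For example, the following pod template snippet keeps the request-to-limit ratio at roughly 1:1.5. This is a minimal sketch; the container name, image, and exact values are only illustrative:

.. code-block::

   spec:
     containers:
     - name: container-1          # illustrative container name
       image: nginx:alpine        # illustrative image
       resources:
         requests:
           cpu: 500m              # 0.5 cores reserved for scheduling
           memory: 1Gi
         limits:
           cpu: 750m              # about 1.5x the CPU request
           memory: 1536Mi         # about 1.5x the memory request
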
If the **Request** is too small and the **Limit** is too large, node resources are oversubscribed. During service peaks, the memory or CPU of a node may be used up. As a result, the node is unavailable. -- CPU quotas: +- CPU quota: The unit of CPU resources is core, which can be expressed by quantity or an integer suffixed with the unit (m). For example, 0.1 core in the quantity expression is equivalent to 100m in the expression. However, Kubernetes does not allow CPU resources whose precision is less than 1m. .. table:: **Table 1** Description of CPU quotas @@ -43,7 +47,7 @@ In actual production services, the recommended ratio of **Request** to **Limit** Actual available CPU of a node >= Sum of CPU limits of all containers on the current node >= Sum of CPU requests of all containers on the current node. You can view the actual available CPUs of a node on the CCE console (**Resource Management** > **Nodes** > **Allocatable**). -- Memory quotas: +- Memory quota: The default unit of memory resources is byte. You can also use an integer with the unit suffix, for example, 100 Mi. Note that the unit is case-sensitive. .. table:: **Table 2** Description of memory quotas @@ -61,19 +65,30 @@ In actual production services, the recommended ratio of **Request** to **Limit** .. note:: - The allocatable resources are calculated based on the resource request value (**Request**), which indicates the upper limit of resources that can be requested by pods on this node, but does not indicate the actual available resources of the node. The calculation formula is as follows: + The allocatable resources are calculated based on the resource request value (**Request**), which indicates the upper limit of resources that can be requested by pods on this node, but does not indicate the actual available resources of the node (for details, see :ref:`Example of CPU and Memory Quota Usage `). The calculation formula is as follows: - Allocatable CPU = Total CPU - Requested CPU of all pods - Reserved CPU for other resources - Allocatable memory = Total memory - Requested memory of all pods - Reserved memory for other resources -Example -------- +.. _cce_10_0163__section17887209103612: -Assume that a cluster contains a node with 4 cores and 8 GB. A workload containing two pods has been deployed on the cluster. The resources of the two pods (pods 1 and 2) are as follows: {CPU request, CPU limit, memory request, memory limit} = {1 core, 2 cores, 2 GB, 2 GB}. +Example of CPU and Memory Quota Usage +------------------------------------- + +Assume that a cluster contains a node with 4 CPU cores and 8 GiB memory. Two pods (pod 1 and pod 2) have been deployed on the cluster. Pod 1 oversubscribes resources (that is **Limit** > **Request**). The specifications of the two pods are as follows. 
+ +===== =========== ========= ============== ============ +Pod CPU Request CPU Limit Memory Request Memory Limit +===== =========== ========= ============== ============ +Pod 1 1 core 2 cores 1 GiB 4 GiB +Pod 2 2 cores 2 cores 2 GiB 2 GiB +===== =========== ========= ============== ============ The CPU and memory usage of the node is as follows: -- Allocatable CPU = 4 cores - (1 core requested by pod 1 + 1 core requested by pod 2) = 2 cores -- Allocatable memory = 8 GB - (2 GB requested by pod 1 + 2 GB requested by pod 2) = 4 GB +- Allocatable CPUs = 4 cores - (1 core requested by pod 1 + 2 cores requested by pod 2) = 1 core +- Allocatable memory = 8 GiB - (1 GiB requested by pod 1 + 2 GiB requested by pod 2) = 5 GiB -Therefore, the remaining 2 cores and 4 GB can be used by the next new pod. +In this case, the remaining 1 core 5 GiB can be used by the next new pod. + +If pod 1 is under heavy load during peak hours, it will use more CPUs and memory within the limit. Therefore, the actual allocatable resources are fewer than 1 core 5 GiB. diff --git a/umn/source/workloads/configuring_a_container/setting_health_check_for_a_container.rst b/umn/source/workloads/configuring_a_container/setting_health_check_for_a_container.rst index 9661a4c..468cfd1 100644 --- a/umn/source/workloads/configuring_a_container/setting_health_check_for_a_container.rst +++ b/umn/source/workloads/configuring_a_container/setting_health_check_for_a_container.rst @@ -14,14 +14,14 @@ Kubernetes provides the following health check probes: - **Liveness probe** (livenessProbe): checks whether a container is still alive. It is similar to the **ps** command that checks whether a process exists. If the liveness check of a container fails, the cluster restarts the container. If the liveness check is successful, no operation is executed. - **Readiness probe** (readinessProbe): checks whether a container is ready to process user requests. Upon that the container is detected unready, service traffic will not be directed to the container. It may take a long time for some applications to start up before they can provide services. This is because that they need to load disk data or rely on startup of an external module. In this case, the application process is running, but the application cannot provide services. To address this issue, this health check probe is used. If the container readiness check fails, the cluster masks all requests sent to the container. If the container readiness check is successful, the container can be accessed. -- **Startup probe** (startupProbe): checks when a container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, ensuring that those probes do not interfere with the application startup. This can be used to perform liveness checks on slow starting containers to prevent them from getting terminated by the kubelet before they are started. +- **Startup probe** (startupProbe): checks when a containerized application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, ensuring that those probes do not interfere with the application startup. This can be used to adopt liveness checks on slow starting containers, avoiding them getting terminated by the kubelet before they are started. Check Method ------------ - **HTTP request** - This health check mode is applicable to containers that provide HTTP/HTTPS services. The cluster periodically initiates an HTTP/HTTPS GET request to such containers. 
If the return code of the HTTP/HTTPS response is within 200-399, the probe is successful. Otherwise, the probe fails. In this health check mode, you must specify a container listening port and an HTTP/HTTPS request path. + This health check mode applies to containers that provide HTTP/HTTPS services. The cluster periodically initiates an HTTP/HTTPS GET request to such containers. If the return code of the HTTP/HTTPS response is within 200-399, the probe is successful. Otherwise, the probe fails. In this health check mode, you must specify a container listening port and an HTTP/HTTPS request path. For example, for a container that provides HTTP services, the HTTP check path is **/health-check**, the port is 80, and the host address is optional (which defaults to the container IP address). Here, 172.16.0.186 is used as an example, and we can get such a request: GET http://172.16.0.186:80/health-check. The cluster periodically initiates this request to the container. You can also add one or more headers to an HTTP request. For example, set the request header name to **Custom-Header** and the corresponding value to **example**. @@ -29,7 +29,7 @@ Check Method For a container that provides TCP communication services, the cluster periodically establishes a TCP connection to the container. If the connection is successful, the probe is successful. Otherwise, the probe fails. In this health check mode, you must specify a container listening port. - For example, if you have a Nginx container with service port 80, after you specify TCP port 80 for container listening, the cluster will periodically initiate a TCP connection to port 80 of the container. If the connection is successful, the probe is successful. Otherwise, the probe fails. + For example, if you have an Nginx container with service port 80, after you specify TCP port 80 for container listening, the cluster will periodically initiate a TCP connection to port 80 of the container. If the connection is successful, the probe is successful. Otherwise, the probe fails. - **CLI** @@ -39,7 +39,7 @@ Check Method - For a TCP port, you can use a program script to connect to a container port. If the connection is successful, the script returns **0**. Otherwise, the script returns **-1**. - - For an HTTP request, you can run the **wget** command to check the container. + - For an HTTP request, you can use the script command to run the **wget** command to detect the container. **wget http://127.0.0.1:80/health-check** @@ -52,7 +52,7 @@ Check Method - **gRPC Check** - gRPC checks can configure startup, liveness, and readiness probes for your gRPC application without exposing any HTTP endpoint, nor do you need an executable. Kubernetes can connect to your workload via gRPC and query its status. + gRPC checks can configure startup, liveness, and readiness probes for your gRPC application without exposing any HTTP endpoint, nor do you need an executable. Kubernetes can connect to your workload via gRPC and obtain its status. .. important:: diff --git a/umn/source/workloads/configuring_a_container/taints_and_tolerations.rst b/umn/source/workloads/configuring_a_container/taints_and_tolerations.rst new file mode 100644 index 0000000..7369bd3 --- /dev/null +++ b/umn/source/workloads/configuring_a_container/taints_and_tolerations.rst @@ -0,0 +1,73 @@ +:original_name: cce_10_0728.html + +.. _cce_10_0728: + +Taints and Tolerations +====================== + +Tolerations allow the scheduler to schedule pods to nodes with target taints. 
Tolerations work with :ref:`node taints `. Each node allows one or more taints. If no toleration is configured for a pod, the scheduler will schedule the pod based on node taint policies to prevent the pod from being scheduled to an inappropriate node. + +The following table shows how taint policies and tolerations affect pod running. + ++-----------------------+-------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| Taint Policy          | No Taint Toleration Configured                                          | Taint Toleration Configured                                                                                                                                                                | ++=======================+=========================================================================+============================================================================================================================================================================================+ +| NoExecute             | - Pods running on the node will be evicted immediately.                | - If the toleration time window **tolerationSeconds** is not specified, pods can run on this node all the time.                                                                           | +|                       | - Inactive pods will not be scheduled to the node.                     | - If the toleration time window **tolerationSeconds** is specified, pods still run on the node with taints within the time window. After the time expires, the pods will be evicted.      | ++-----------------------+-------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| PreferNoSchedule      | - Pods running on the node will not be evicted.                         | Pods can run on this node all the time.                                                                                                                                                    | +|                       | - Inactive pods will not be scheduled to the node **if possible**.      |                                                                                                                                                                                            | ++-----------------------+-------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| NoSchedule            | - Pods running on the node will not be evicted.                         | Pods can run on this node all the time.                                                                                                                                                    | +|                       | - Inactive pods will not be scheduled to the node.                      |                                                                                                                                                                                            | ++-----------------------+-------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +Configuring Toleration Policies on the Console +---------------------------------------------- + +#. Log in to the CCE console. +#. When creating a workload, click **Toleration** in the **Advanced Settings** area. +#. Add a taint toleration policy. + + .. 
table:: **Table 1** Parameters for configuring a taint tolerance policy + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+=======================================================================================================================================================================================================+ + | Taint key | Key of a node taint | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Operator | - **Equal**: **Exact match** for the specified taint key (mandatory) and taint value. If the taint value is left blank, all taints with the key the same as the specified taint key will be matched. | + | | - **Exists**: **matches only** the nodes with the specified taint key. In this case, the taint value cannot be specified. If the taint key is left blank, all taints will be tolerated. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Taint value | Taint value specified if the operator is set to **Equal**. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Taint Policy | - **All**: All taint policies are matched. | + | | - **NoSchedule**: Only the **NoSchedule** taint is matched. | + | | - **PreferNoSchedule**: Only the **PreferNoSchedule** taint is matched. | + | | - **NoExecute**: Only the **NoExecute** taint is matched. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Toleration Time Window | **tolerationSeconds**, which is configurable only when **Taint Policy** is set to **NoExecute**. | + | | | + | | Within the tolerance time window, pods still run on the node with taints. After the time expires, the pods will be evicted. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +Default Tolerance Policy +------------------------ + +Kubernetes automatically adds tolerances for the **node.kubernetes.io/not-ready** and **node.kubernetes.io/unreachable** taints to pods, and sets the tolerance time window (**tolerationSeconds**) to 300s. These default tolerance policies indicate that when either of the preceding taint is added to the node where pods are running, the pods can still run on the node for 5 minutes. + +.. note:: + + When a DaemonSet pod is created, no tolerance time window will be specified for the tolerances automatically added for the preceding taints. 
When either of the preceding taints is added to the node where the DaemonSet pod is running, the DaemonSet pod will never be evicted. + +.. code-block:: + + tolerations: + - key: node.kubernetes.io/not-ready + operator: Exists + effect: NoExecute + tolerationSeconds: 300 + - key: node.kubernetes.io/unreachable + operator: Exists + effect: NoExecute + tolerationSeconds: 300 diff --git a/umn/source/workloads/configuring_a_container/using_a_third-party_image.rst b/umn/source/workloads/configuring_a_container/using_third-party_images.rst similarity index 56% rename from umn/source/workloads/configuring_a_container/using_a_third-party_image.rst rename to umn/source/workloads/configuring_a_container/using_third-party_images.rst index 026dac6..43e94cf 100644 --- a/umn/source/workloads/configuring_a_container/using_a_third-party_image.rst +++ b/umn/source/workloads/configuring_a_container/using_third-party_images.rst @@ -2,15 +2,15 @@ .. _cce_10_0009: -Using a Third-Party Image -========================= +Using Third-Party Images +======================== Scenario -------- CCE allows you to create workloads using images pulled from third-party image repositories. -Generally, a third-party image repository can be accessed only after authentication (using your account and password). CCE uses the secret-based authentication to pull images. Therefore, you need to create a secret for an image repository before pulling images from the repository. +Generally, a third-party image repository can be accessed only after authentication (using your account and password). CCE uses the secret-based authentication to pull images. Therefore, create a secret for an image repository before pulling images from the repository. Prerequisites ------------- @@ -24,11 +24,11 @@ Using the Console Create a secret for accessing a third-party image repository. - Click the cluster name and access the cluster console. In the navigation pane, choose **ConfigMaps and Secrets**. On the **Secrets** tab page, click **Create Secret** in the upper right corner. Set **Secret Type** to **kubernetes.io/dockerconfigjson**. For details, see :ref:`Creating a Secret `. + Click the cluster name to access the cluster console. In the navigation pane, choose **ConfigMaps and Secrets**. On the **Secrets** tab, click **Create Secret** in the upper right corner. Set **Secret Type** to **kubernetes.io/dockerconfigjson**. For details, see :ref:`Creating a Secret `. Enter the user name and password used to access the third-party image repository. -#. When creating a workload, you can enter a private image path in the format of **domainname/namespace/imagename:tag** in **Image Name** and select the key created in :ref:`1 `. +#. When creating a workload, you can enter a private image path in the format of *domainname/namespace/imagename:tag* for **Image Name** and select the key created in :ref:`1 ` for **Image Access Credential**. #. Set other parameters and click **Create Workload**. @@ -37,13 +37,13 @@ Using kubectl #. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. -#. Create a secret of the dockercfg type using kubectl. +#. Use kubectl to create a secret of the kubernetes.io/dockerconfigjson. .. 
code-block:: - kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL + kubectl create secret docker-registry myregistrykey -n default --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL - In the preceding commands, **myregistrykey** indicates the secret name, and other parameters are described as follows: + In the preceding command, *myregistrykey* indicates the key name, *default* indicates the namespace where the key is located, and other parameters are as follows: - **DOCKER_REGISTRY_SERVER**: address of a third-party image repository, for example, **www.3rdregistry.com** or **10.10.10.10:443** - **DOCKER_USER**: account used for logging in to a third-party image repository @@ -52,7 +52,7 @@ Using kubectl #. Use a third-party image to create a workload. - A dockecfg secret is used for authentication when you obtain a private image. The following is an example of using the myregistrykey for authentication. + A kubernetes.io/dockerconfigjson secret is used for authentication when you obtain a private image. The following is an example of using the myregistrykey for authentication. .. code-block:: diff --git a/umn/source/workloads/configuring_qos_rate_limiting_for_inter-pod_access.rst b/umn/source/workloads/configuring_qos_rate_limiting_for_inter-pod_access.rst deleted file mode 100644 index 05e7b74..0000000 --- a/umn/source/workloads/configuring_qos_rate_limiting_for_inter-pod_access.rst +++ /dev/null @@ -1,54 +0,0 @@ -:original_name: cce_10_0382.html - -.. _cce_10_0382: - -Configuring QoS Rate Limiting for Inter-Pod Access -================================================== - -Scenario --------- - -Bandwidth preemption occurs between different containers deployed on the same node, which may cause service jitter. You can configure QoS rate limiting for inter-pod access to prevent this problem. - -Using kubectl -------------- - -You can add annotations to a workload to specify its egress and ingress bandwidth. - -.. code-block:: - - apiVersion: apps/v1 - kind: Deployment - metadata: - name: test - namespace: default - labels: - app: test - spec: - replicas: 2 - selector: - matchLabels: - app: test - template: - metadata: - labels: - app: test - annotations: - kubernetes.io/ingress-bandwidth: 100M - kubernetes.io/egress-bandwidth: 100M - spec: - containers: - - name: container-1 - image: nginx:alpine - imagePullPolicy: IfNotPresent - imagePullSecrets: - - name: default-secret - -- **kubernetes.io/ingress-bandwidth**: ingress bandwidth of the pod -- **kubernetes.io/egress-bandwidth**: egress bandwidth of the pod - -If these two parameters are not specified, the bandwidth is not limited. - -.. note:: - - After modifying the ingress or egress bandwidth limit of a pod, you need to restart the container for the modification to take effect. After annotations are modified in a pod not managed by workloads, the container will not be restarted, so the bandwidth limits do not take effect. You can create a pod again or manually restart the container. diff --git a/umn/source/workloads/creating_a_cron_job.rst b/umn/source/workloads/creating_a_cron_job.rst deleted file mode 100644 index 2f69db9..0000000 --- a/umn/source/workloads/creating_a_cron_job.rst +++ /dev/null @@ -1,204 +0,0 @@ -:original_name: cce_10_0151.html - -.. 
_cce_10_0151: - -Creating a Cron Job -=================== - -Scenario --------- - -A cron job runs on a repeating schedule. You can perform time synchronization for all active nodes at a fixed time point. - -A cron job runs periodically at the specified time. It is similar with Linux crontab. A cron job has the following characteristics: - -- Runs only once at the specified time. -- Runs periodically at the specified time. - -The typical usage of a cron job is as follows: - -- Schedules jobs at the specified time. -- Creates jobs to run periodically, for example, database backup and email sending. - -Prerequisites -------------- - -Resources have been created. For details, see :ref:`Creating a Node `. - -Using the CCE Console ---------------------- - -#. Log in to the CCE console. - -#. Click the cluster name to go to the cluster console, choose **Workloads** in the navigation pane, and click the **Create Workload** in the upper right corner. - -#. Set basic information about the workload. - - **Basic Info** - - - **Workload Type**: Select **Cron Job**. For details about workload types, see :ref:`Overview `. - - **Workload Name**: Enter the name of the workload. Enter 1 to 52 characters starting with a lowercase letter and ending with a letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed. - - **Namespace**: Select the namespace of the workload. The default value is **default**. You can also click **Create Namespace** to create one. For details, see :ref:`Creating a Namespace `. - - **Container Runtime**: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences between runC and Kata, see :ref:`Kata Containers and Common Containers `. - - **Container Settings** - - - Container Information - - Multiple containers can be configured in a pod. You can click **Add Container** on the right to configure multiple containers for the pod. - - - **Basic Info**: See :ref:`Setting Basic Container Information `. - - **Lifecycle**: See :ref:`Setting Container Lifecycle Parameters `. - - **Environment Variables**: See :ref:`Setting an Environment Variable `. - - - **Image Access Credential**: Select the credential used for accessing the image repository. The default value is **default-secret**. You can use default-secret to access images in SWR. For details about **default-secret**, see :ref:`default-secret `. - - - **GPU graphics card**: **All** is selected by default. The workload instance will be scheduled to the node with the specified GPU graphics card type. - - **Schedule** - - - **Concurrency Policy**: The following three modes are supported: - - - **Forbid**: A new job cannot be created before the previous job is completed. - - **Allow**: The cron job allows concurrently running jobs, which preempt cluster resources. - - **Replace**: A new job replaces the previous job when it is time to create a job but the previous job is not completed. - - - **Policy Settings**: specifies when a new cron job is executed. Policy settings in YAML are implemented using cron expressions. - - - A cron job is executed at a fixed interval. The unit can be minute, hour, day, or month. For example, if a cron job is executed every 30 minutes, the cron expression is **\*/30 \* \* \* \***, the execution time starts from 0 in the unit range, for example, **00:00:00**, **00:30:00**, **01:00:00**, and **...**. - - The cron job is executed at a fixed time (by month). 
For example, if a cron job is executed at 00:00 on the first day of each month, the cron expression is **0 0 1 \*/1 \***, and the execution time is **\****-01-01 00:00:00**, **\****-02-01 00:00:00**, and **...**. - - The cron job is executed at a fixed time (by week). For example, if a cron job is executed at 00:00 every Monday, the cron expression is **0 0 \* \* 1**, and the execution time is **\****-**-01 00:00:00 on Monday**, **\****-**-08 00:00:00 on Monday**, and **...**. - - For details about how to use cron expressions, see `cron `__. - - .. note:: - - - If a cron job is executed at a fixed time (by month) and the number of days in a month does not exist, the cron job will not be executed in this month. For example, if the number of days is set to 30 but February does not have the 30th day, the cron job skips this month and continues on March 30. - - - Due to the definition of the cron expression, the fixed period is not a strict period. The time unit range is divided from 0 by period. For example, if the unit is minute, the value ranges from 0 to 59. If the value cannot be exactly divided, the last period is reset. Therefore, an accurate period can be represented only when the period can evenly divide its time unit range. - - For example, the unit of the period is hour. Because **/2, /3, /4, /6, /8, and /12** can be divided by 24, the accurate period can be represented. If another period is used, the last period will be reset at the beginning of a new day. For example, if the cron expression is **\* \*/12 \* \* \***, the execution time is **00:00:00** and **12:00:00** every day. If the cron expression is **\* \*/13 \* \* \***, the execution time is **00:00:00** and **13:00:00** every day. At 00:00 on the next day, the execution time is updated even if the period does not reach 13 hours. - - - **Job Records**: You can set the number of jobs that are successfully executed or fail to be executed. Setting a limit to **0** corresponds to keeping none of the jobs after they finish. - - **Advanced Settings** - - - **Labels and Annotations**: See :ref:`Pod Labels and Annotations `. - - Network configuration: - - - Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see :ref:`Configuring QoS Rate Limiting for Inter-Pod Access `. - -#. Click **Create Workload** in the lower right corner. - -Using kubectl -------------- - -A cron job has the following configuration parameters: - -- **.spec.schedule**: takes a `Cron `__ format string, for example, **0 \* \* \* \*** or **@hourly**, as schedule time of jobs to be created and executed. -- **.spec.jobTemplate**: specifies jobs to be run, and has the same schema as when you are :ref:`Creating a Job Using kubectl `. -- **.spec.startingDeadlineSeconds**: specifies the deadline for starting a job. -- **.spec.concurrencyPolicy**: specifies how to treat concurrent executions of a job created by the Cron job. The following options are supported: - - - **Allow** (default value): allows concurrently running jobs. - - **Forbid**: forbids concurrent runs, skipping next run if previous has not finished yet. - - **Replace**: cancels the currently running job and replaces it with a new one. - -The following is an example cron job, which is saved in the **cronjob.yaml** file. - -.. 
code-block:: - - apiVersion: batch/v1beta1 - kind: CronJob - metadata: - name: hello - spec: - schedule: "*/1 * * * *" - jobTemplate: - spec: - template: - spec: - containers: - - name: hello - image: busybox - args: - - /bin/sh - - -c - - date; echo Hello from the Kubernetes cluster - restartPolicy: OnFailure - -**Run the job.** - -#. Create a cron job. - - **kubectl create -f cronjob.yaml** - - Information similar to the following is displayed: - - .. code-block:: - - cronjob.batch/hello created - -#. Query the running status of the cron job: - - **kubectl get cronjob** - - .. code-block:: - - NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE - hello */1 * * * * False 0 9s - - **kubectl get jobs** - - .. code-block:: - - NAME COMPLETIONS DURATION AGE - hello-1597387980 1/1 27s 45s - - **kubectl get pod** - - .. code-block:: - - NAME READY STATUS RESTARTS AGE - hello-1597387980-tjv8f 0/1 Completed 0 114s - hello-1597388040-lckg9 0/1 Completed 0 39s - - **kubectl logs** **hello-1597387980-tjv8f** - - .. code-block:: - - Fri Aug 14 06:56:31 UTC 2020 - Hello from the Kubernetes cluster - - **kubectl delete cronjob hello** - - .. code-block:: - - cronjob.batch "hello" deleted - - .. important:: - - When a cron job is deleted, the related jobs and pods are deleted too. - -Related Operations ------------------- - -After a cron job is created, you can perform operations listed in :ref:`Table 1 `. - -.. _cce_10_0151__t6d520710097a4ee098eae42bcb508608: - -.. table:: **Table 1** Other operations - - +-----------------------------------+----------------------------------------------------------------------------------------------------+ - | Operation | Description | - +===================================+====================================================================================================+ - | Editing a YAML file | Click **More** > **Edit YAML** next to the cron job name to edit the YAML file of the current job. | - +-----------------------------------+----------------------------------------------------------------------------------------------------+ - | Stopping a cron job | #. Select the job to be stopped and click **Stop** in the **Operation** column. | - | | #. Click **Yes**. | - +-----------------------------------+----------------------------------------------------------------------------------------------------+ - | Deleting a cron job | #. Select the cron job to be deleted and click **More** > **Delete** in the **Operation** column. | - | | | - | | #. Click **Yes**. | - | | | - | | Deleted jobs cannot be restored. Therefore, exercise caution when deleting a job. | - +-----------------------------------+----------------------------------------------------------------------------------------------------+ diff --git a/umn/source/workloads/creating_a_daemonset.rst b/umn/source/workloads/creating_a_daemonset.rst deleted file mode 100644 index e246297..0000000 --- a/umn/source/workloads/creating_a_daemonset.rst +++ /dev/null @@ -1,156 +0,0 @@ -:original_name: cce_10_0216.html - -.. _cce_10_0216: - -Creating a DaemonSet -==================== - -Scenario --------- - -CCE provides deployment and management capabilities for multiple types of containers and supports features of container workloads, including creation, configuration, monitoring, scaling, upgrade, uninstall, service discovery, and load balancing. - -DaemonSet ensures that only one pod runs on all or some nodes. When a node is added to a cluster, a new pod is also added for the node. 
When a node is removed from a cluster, the pod is also reclaimed. If a DaemonSet is deleted, all pods created by it will be deleted. - -The typical application scenarios of a DaemonSet are as follows: - -- Run the cluster storage daemon, such as glusterd or Ceph, on each node. -- Run the log collection daemon, such as Fluentd or Logstash, on each node. -- Run the monitoring daemon, such as Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia (gmond), on each node. - -You can deploy a DaemonSet for each type of daemons on all nodes, or deploy multiple DaemonSets for the same type of daemons. In the second case, DaemonSets have different flags and different requirements on memory and CPU for different hardware types. - -Prerequisites -------------- - -You must have one cluster available before creating a DaemonSet. For details about how to create a cluster, see :ref:`Creating a CCE Cluster `. - -Using the CCE Console ---------------------- - -#. Log in to the CCE console. - -#. Click the cluster name to go to the cluster console, choose **Workloads** in the navigation pane, and click the **Create Workload** in the upper right corner. - -#. Set basic information about the workload. - - **Basic Info** - - - **Workload Type**: Select **DaemonSet**. For details about workload types, see :ref:`Overview `. - - **Workload Name**: Enter the name of the workload. Enter 1 to 63 characters starting with a lowercase letter and ending with a letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed. - - **Namespace**: Select the namespace of the workload. The default value is **default**. You can also click **Create Namespace** to create one. For details, see :ref:`Creating a Namespace `. - - **Container Runtime**: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences between runC and Kata, see :ref:`Kata Containers and Common Containers `. - - **Time Zone Synchronization**: Specify whether to enable time zone synchronization. After time zone synchronization is enabled, the container and node use the same time zone. The time zone synchronization function depends on the local disk mounted to the container. Do not modify or delete the time zone. For details, see :ref:`Configuring Time Zone Synchronization `. - - **Container Settings** - - - Container Information - - Multiple containers can be configured in a pod. You can click **Add Container** on the right to configure multiple containers for the pod. - - - **Basic Info**: See :ref:`Setting Basic Container Information `. - - **Lifecycle**: See :ref:`Setting Container Lifecycle Parameters `. - - **Health Check**: See :ref:`Setting Health Check for a Container `. - - **Environment Variables**: See :ref:`Setting an Environment Variable `. - - **Data Storage**: See :ref:`Overview `. - - .. note:: - - If the workload contains more than one pod, EVS volumes cannot be mounted. - - - **Security Context**: Set container permissions to protect the system and other containers from being affected. Enter the user ID to set container permissions and prevent systems and other containers from being affected. - - **Logging**: See :ref:`Using ICAgent to Collect Container Logs `. - - - **Image Access Credential**: Select the credential used for accessing the image repository. The default value is **default-secret**. You can use default-secret to access images in SWR. For details about **default-secret**, see :ref:`default-secret `. 
- - - **GPU graphics card**: **All** is selected by default. The workload instance will be scheduled to the node with the specified GPU graphics card type. - - **Service Settings** - - A Service is used for pod access. With a fixed IP address, a Service forwards access traffic to pods and performs load balancing for these pods. - - You can also create a Service after creating a workload. For details about the Service, see :ref:`Service Overview `. - - **Advanced Settings** - - - **Upgrade**: See :ref:`Configuring the Workload Upgrade Policy `. - - **Scheduling**: See :ref:`Scheduling Policy (Affinity/Anti-affinity) `. - - **Labels and Annotations**: See :ref:`Pod Labels and Annotations `. - - **Toleration**: Using both taints and tolerations allows (not forcibly) the pod to be scheduled to a node with the matching taints, and controls the pod eviction policies after the node where the pod is located is tainted. For details, see :ref:`Tolerations `. - - **DNS**: See :ref:`DNS Configuration `. - - Network configuration: - - - Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see :ref:`Configuring QoS Rate Limiting for Inter-Pod Access `. - -#. Click **Create Workload** in the lower right corner. - -Using kubectl -------------- - -The following procedure uses Nginx as an example to describe how to create a workload using kubectl. - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Create and edit the **nginx-daemonset.yaml** file. **nginx-daemonset.yaml** is an example file name, and you can change it as required. - - **vi nginx-daemonset.yaml** - - The content of the description file is as follows: The following provides an example. For more information on DaemonSets, see `Kubernetes documents `__. - - .. code-block:: - - apiVersion: apps/v1 - kind: DaemonSet - metadata: - name: nginx-daemonset - labels: - app: nginx-daemonset - spec: - selector: - matchLabels: - app: nginx-daemonset - template: - metadata: - labels: - app: nginx-daemonset - spec: - nodeSelector: # Node selection. A pod is created on a node only when the node meets daemon=need. - daemon: need - containers: - - name: nginx-daemonset - image: nginx:alpine - resources: - limits: - cpu: 250m - memory: 512Mi - requests: - cpu: 250m - memory: 512Mi - imagePullSecrets: - - name: default-secret - - The **replicas** parameter used in defining a Deployment or StatefulSet does not exist in the above configuration for a DaemonSet, because each node has only one replica. It is fixed. - - The nodeSelector in the preceding pod template specifies that a pod is created only on the nodes that meet **daemon=need**, as shown in the following figure. If you want to create a pod on each node, delete the label. - -#. Create a DaemonSet. - - **kubectl create -f nginx-daemonset.yaml** - - If the following information is displayed, the DaemonSet is being created. - - .. code-block:: - - daemonset.apps/nginx-daemonset created - -#. Query the DaemonSet status. - - **kubectl get ds** - - .. code-block:: - - $ kubectl get ds - NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE - nginx-daemonset 1 1 0 1 0 daemon=need 116s - -#. If the workload will be accessed through a ClusterIP or NodePort Service, set the corresponding workload access type. For details, see :ref:`Networking `. 
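For the nodeSelector used in the preceding DaemonSet example, pods are only created on nodes that already carry the **daemon=need** label. The following is a rough sketch of how such a label could be added and verified; the node name **192.168.0.100** is only a placeholder:

.. code-block::

   # Label a node so that it matches the nodeSelector in the DaemonSet template (node name is a placeholder).
   kubectl label nodes 192.168.0.100 daemon=need

   # Check that the DaemonSet has created a pod on the labeled node.
   kubectl get ds nginx-daemonset
   kubectl get pods -l app=nginx-daemonset -o wide

   # Remove the label if the DaemonSet pod should no longer run on this node.
   kubectl label nodes 192.168.0.100 daemon-

After the label is removed, the DaemonSet controller deletes the pod from that node because the node no longer matches the nodeSelector.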
diff --git a/umn/source/workloads/creating_a_job.rst b/umn/source/workloads/creating_a_job.rst deleted file mode 100644 index 5526f16..0000000 --- a/umn/source/workloads/creating_a_job.rst +++ /dev/null @@ -1,197 +0,0 @@ -:original_name: cce_10_0150.html - -.. _cce_10_0150: - -Creating a Job -============== - -Scenario --------- - -Jobs are short-lived and run for a certain time to completion. They can be executed immediately after being deployed. It is completed after it exits normally (exit 0). - -A job is a resource object that is used to control batch tasks. It is different from a long-term servo workload (such as Deployment and StatefulSet). - -A job is started and terminated at specific times, while a long-term servo workload runs unceasingly unless being terminated. The pods managed by a job automatically exit after successfully completing the job based on user configurations. The success flag varies according to the spec.completions policy. - -- One-off jobs: A single pod runs once until successful termination. -- Jobs with a fixed success count: N pods run until successful termination. -- A queue job is considered completed based on the global success confirmed by the application. - -Prerequisites -------------- - -Resources have been created. For details, see :ref:`Creating a Node `. If clusters and nodes are available, you need not create them again. - -Using the CCE Console ---------------------- - -#. Log in to the CCE console. - -#. Click the cluster name to go to the cluster console, choose **Workloads** in the navigation pane, and click the **Create Workload** in the upper right corner. - -#. Set basic information about the workload. - - **Basic Info** - - - **Workload Type**: Select **Job**. For details about workload types, see :ref:`Overview `. - - **Workload Name**: Enter the name of the workload. Enter 1 to 63 characters starting with a lowercase letter and ending with a letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed. - - **Namespace**: Select the namespace of the workload. The default value is **default**. You can also click **Create Namespace** to create one. For details, see :ref:`Creating a Namespace `. - - **Pods**: Enter the number of pods. - - **Container Runtime**: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences between runC and Kata, see :ref:`Kata Containers and Common Containers `. - - **Container Settings** - - - Container Information - - Multiple containers can be configured in a pod. You can click **Add Container** on the right to configure multiple containers for the pod. - - - **Basic Info**: See :ref:`Setting Basic Container Information `. - - **Lifecycle**: See :ref:`Setting Container Lifecycle Parameters `. - - **Environment Variables**: See :ref:`Setting an Environment Variable `. - - **Data Storage**: See :ref:`Overview `. - - .. note:: - - If the workload contains more than one pod, EVS volumes cannot be mounted. - - - **Logging**: See :ref:`Using ICAgent to Collect Container Logs `. - - - **Image Access Credential**: Select the credential used for accessing the image repository. The default value is **default-secret**. You can use default-secret to access images in SWR. For details about **default-secret**, see :ref:`default-secret `. - - - **GPU graphics card**: **All** is selected by default. The workload instance will be scheduled to the node with the specified GPU graphics card type. 
- - **Advanced Settings** - - - **Labels and Annotations**: See :ref:`Pod Labels and Annotations `. - - **Job Settings**: - - - **Parallel Pods**: Maximum number of pods that can run in parallel during job execution. The value cannot be greater than the total number of pods in the job. - - **Timeout (s)**: Once a job reaches this time, the job status becomes failed and all pods in this job will be deleted. If you leave this parameter blank, the job will never time out. - - - Network configuration: - - - Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see :ref:`Configuring QoS Rate Limiting for Inter-Pod Access `. - -#. Click **Create Workload** in the lower right corner. - -.. _cce_10_0150__section450152719412: - -Using kubectl -------------- - -A job has the following configuration parameters: - -- **spec.template**: has the same schema as a pod. -- **RestartPolicy**: can only be set to **Never** or **OnFailure**. -- For a single-pod job, the job ends after the pod runs successfully by default. -- **.spec.completions**: indicates the number of pods that need to run successfully to end a job. The default value is **1**. -- **.spec.parallelism**: indicates the number of pods that run concurrently. The default value is **1**. -- **spec.backoffLimit**: indicates the maximum number of retries performed if a pod fails. When the limit is reached, the pod will not try again. -- **.spec.activeDeadlineSeconds**: indicates the running time of pods. Once the time is reached, all pods of the job are terminated. The priority of .spec.activeDeadlineSeconds is higher than that of .spec.backoffLimit. That is, if a job reaches the .spec.activeDeadlineSeconds, the spec.backoffLimit is ignored. - -Based on the **.spec.completions** and **.spec.Parallelism** settings, jobs are classified into the following types. - -.. table:: **Table 1** Job types - - +---------------------------------------------+-----------------------------------------------------------------------+-------------------------------------------------------+ - | Job Type | Description | Example | - +=============================================+=======================================================================+=======================================================+ - | One-off jobs | A single pod runs once until successful termination. | Database migration | - +---------------------------------------------+-----------------------------------------------------------------------+-------------------------------------------------------+ - | Jobs with a fixed completion count | One pod runs until reaching the specified **completions** count. | Work queue processing pod | - +---------------------------------------------+-----------------------------------------------------------------------+-------------------------------------------------------+ - | Parallel jobs with a fixed completion count | Multiple pods run until reaching the specified **completions** count. | Multiple pods for processing work queues concurrently | - +---------------------------------------------+-----------------------------------------------------------------------+-------------------------------------------------------+ - | Parallel jobs | One or more pods run until successful termination. 
| Multiple pods for processing work queues concurrently | - +---------------------------------------------+-----------------------------------------------------------------------+-------------------------------------------------------+ - -The following is an example job, which calculates Pi till the 2000\ :sup:`th` digit and prints the output. - -.. code-block:: - - apiVersion: batch/v1 - kind: Job - metadata: - name: myjob - spec: - completions: 50 # 50 pods need to be run to finish a job. In this example, Pi is printed for 50 times. - parallelism: 5 # 5 pods are run in parallel. - backoffLimit: 5 # The maximum number of retry times is 5. - template: - spec: - containers: - - name: pi - image: perl - command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] - restartPolicy: Never - -**Description** - -- **apiVersion: batch/v1** indicates the version of the current job. -- **kind: Job** indicates that the current resource is a job. -- **restartPolicy: Never** indicates the current restart policy. For jobs, this parameter can only be set to **Never** or **OnFailure**. For other controllers (for example, Deployments), you can set this parameter to **Always**. - -**Run the job.** - -#. Start the job. - - .. code-block:: console - - [root@k8s-master k8s]# kubectl apply -f myjob.yaml - job.batch/myjob created - -#. View the job details. - - **kubectl get job** - - .. code-block:: console - - [root@k8s-master k8s]# kubectl get job - NAME COMPLETIONS DURATION AGE - myjob 50/50 23s 3m45s - - If the value of **COMPLETIONS** is **50/50**, the job is successfully executed. - -#. Query the pod status. - - **kubectl get pod** - - .. code-block:: console - - [root@k8s-master k8s]# kubectl get pod - NAME READY STATUS RESTARTS AGE - myjob-29qlw 0/1 Completed 0 4m5s - ... - - If the status is **Completed**, the job is complete. - -#. View the pod logs. - - **kubectl logs** - - .. 
code-block:: - - # kubectl logs myjob-29qlw - 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901 - -Related Operations ------------------- - -After a one-off job is created, you can perform operations listed in :ref:`Table 2 `. - -.. _cce_10_0150__t84075653e7544394939d13740fad0c20: - -.. table:: **Table 2** Other operations - - +-----------------------------------+-------------------------------------------------------------------------------------------------------------+ - | Operation | Description | - +===================================+=============================================================================================================+ - | Editing a YAML file | Click **More** > **Edit YAML** next to the job name to edit the YAML file corresponding to the current job. | - +-----------------------------------+-------------------------------------------------------------------------------------------------------------+ - | Deleting a job | #. Select the job to be deleted and click **Delete** in the **Operation** column. | - | | | - | | #. Click **Yes**. | - | | | - | | Deleted jobs cannot be restored. Exercise caution when deleting a job. 
| - +-----------------------------------+-------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/workloads/creating_a_statefulset.rst b/umn/source/workloads/creating_a_statefulset.rst deleted file mode 100644 index 8313b56..0000000 --- a/umn/source/workloads/creating_a_statefulset.rst +++ /dev/null @@ -1,228 +0,0 @@ -:original_name: cce_10_0048.html - -.. _cce_10_0048: - -Creating a StatefulSet -====================== - -Scenario --------- - -StatefulSets are a type of workloads whose data or status is stored while they are running. For example, MySQL is a StatefulSet because it needs to store new data. - -A container can be migrated between different hosts, but data is not stored on the hosts. To store StatefulSet data persistently, attach HA storage volumes provided by CCE to the container. - -Constraints ------------ - -- When you delete or scale a StatefulSet, the system does not delete the storage volumes associated with the StatefulSet to ensure data security. -- When you delete a StatefulSet, reduce the number of replicas to **0** before deleting the StatefulSet so that pods in the StatefulSet can be stopped in order. -- When you create a StatefulSet, a headless Service is required for pod access. For details, see :ref:`Headless Service `. -- When a node is unavailable, pods become **Unready**. In this case, you need to manually delete the pods of the StatefulSet so that the pods can be migrated to a normal node. - -Prerequisites -------------- - -- Before creating a workload, you must have an available cluster. For details about how to create a cluster, see :ref:`Creating a CCE Cluster `. -- To enable public access to a workload, ensure that an EIP or load balancer has been bound to at least one node in the cluster. - - .. note:: - - If a pod has multiple containers, ensure that the ports used by the containers do not conflict with each other. Otherwise, creating the StatefulSet will fail. - -Using the CCE Console ---------------------- - -#. Log in to the CCE console. - -#. Click the cluster name and access the cluster details page, choose **Workloads** in the navigation pane, and click the **Create Workload** in the upper right corner. - -#. Set basic information about the workload. - - **Basic Info** - - - **Workload Type**: Select **StatefulSet**. For details about workload types, see :ref:`Overview `. - - **Workload Name**: Enter the name of the workload. Enter 1 to 52 characters starting with a lowercase letter and ending with a letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed. - - **Namespace**: Select the namespace of the workload. The default value is **default**. You can also click **Create Namespace** to create one. For details, see :ref:`Creating a Namespace `. - - **Pods**: Enter the number of pods. - - **Container Runtime**: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences between runC and Kata, see :ref:`Kata Containers and Common Containers `. - - **Time Zone Synchronization**: Specify whether to enable time zone synchronization. After time zone synchronization is enabled, the container and node use the same time zone. The time zone synchronization function depends on the local disk mounted to the container. Do not modify or delete the time zone. For details, see :ref:`Configuring Time Zone Synchronization `. 
- - **Container Settings** - - - Container Information - - Multiple containers can be configured in a pod. You can click **Add Container** on the right to configure multiple containers for the pod. - - - **Basic Info**: See :ref:`Setting Basic Container Information `. - - **Lifecycle**: See :ref:`Setting Container Lifecycle Parameters `. - - **Health Check**: See :ref:`Setting Health Check for a Container `. - - **Environment Variables**: See :ref:`Setting an Environment Variable `. - - **Data Storage**: See :ref:`Overview `. - - .. note:: - - - StatefulSets support dynamically provisioned EVS volumes. - - Dynamic mounting is achieved by using the `volumeClaimTemplates `__ field and depends on the dynamic creation capability of StorageClass. A StatefulSet associates each pod with a unique PVC using the **volumeClaimTemplates** field, and the PVCs are bound to their corresponding PVs. Therefore, after the pod is rescheduled, the original data can still be mounted thanks to the PVC. - - - After a workload is created, the storage that is dynamically mounted cannot be updated. - - - **Security Context**: Set container permissions to protect the system and other containers from being affected. Enter the user ID to set container permissions and prevent systems and other containers from being affected. - - **Logging**: See :ref:`Using ICAgent to Collect Container Logs `. - - - **Image Access Credential**: Select the credential used for accessing the image repository. The default value is **default-secret**. You can use default-secret to access images in SWR. For details about **default-secret**, see :ref:`default-secret `. - - - **GPU graphics card**: **All** is selected by default. The workload instance will be scheduled to the node with the specified GPU graphics card type. - - **Headless Service Parameters** - - A headless Service is used to solve the problem of mutual access between pods in a StatefulSet. The headless Service provides a fixed access domain name for each pod. For details, see :ref:`Headless Service `. - - **Service Settings** - - A Service is used for pod access. With a fixed IP address, a Service forwards access traffic to pods and performs load balancing for these pods. - - You can also create a Service after creating a workload. For details about the Service, see :ref:`Service Overview `. - - **Advanced Settings** - - - **Upgrade**: See :ref:`Configuring the Workload Upgrade Policy `. - - - **Scheduling**: See :ref:`Scheduling Policy (Affinity/Anti-affinity) `. - - - **Instances Management Policies** - - For some distributed systems, the StatefulSet sequence is unnecessary and/or should not occur. These systems require only uniqueness and identifiers. - - - **OrderedReady**: The StatefulSet will deploy, delete, or scale pods in order and one by one. (The StatefulSet continues only after the previous pod is ready or deleted.) This is the default policy. - - **Parallel**: The StatefulSet will create pods in parallel to match the desired scale without waiting, and will delete all pods at once. - - - **Toleration**: Using both taints and tolerations allows (not forcibly) the pod to be scheduled to a node with the matching taints, and controls the pod eviction policies after the node where the pod is located is tainted. For details, see :ref:`Tolerations `. - - - **Labels and Annotations**: See :ref:`Pod Labels and Annotations `. - - - **DNS**: See :ref:`DNS Configuration `. 
- - - Network configuration: - - - Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see :ref:`Configuring QoS Rate Limiting for Inter-Pod Access `. - -#. Click **Create Workload** in the lower right corner. - -Using kubectl -------------- - -In this example, an nginx workload is used and the EVS volume is dynamically mounted to it using the **volumeClaimTemplates** field. - -#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. - -#. Create and edit the **nginx-statefulset.yaml** file. - - **nginx-statefulset.yaml** is an example file name, and you can change it as required. - - **vi nginx-statefulset.yaml** - - The following provides an example of the file contents. For more information on StatefulSet, see the `Kubernetes documentation `__. - - .. code-block:: - - apiVersion: apps/v1 - kind: StatefulSet - metadata: - name: nginx - spec: - selector: - matchLabels: - app: nginx - template: - metadata: - labels: - app: nginx - spec: - containers: - - name: container-1 - image: nginx:latest - imagePullPolicy: IfNotPresent - resources: - requests: - cpu: 250m - memory: 512Mi - limits: - cpu: 250m - memory: 512Mi - volumeMounts: - - name: test - readOnly: false - mountPath: /usr/share/nginx/html - subPath: '' - imagePullSecrets: - - name: default-secret - dnsPolicy: ClusterFirst - volumes: [] - serviceName: nginx-svc - replicas: 2 - volumeClaimTemplates: # Dynamically mounts the EVS volume to the workload. - - apiVersion: v1 - kind: PersistentVolumeClaim - metadata: - name: test - namespace: default - annotations: - everest.io/disk-volume-type: SAS # SAS EVS volume type. - labels: - failure-domain.beta.kubernetes.io/region: eu-de # region where the EVS volume is created. - failure-domain.beta.kubernetes.io/zone: # AZ where the EVS volume is created. It must be the same as the AZ of the node. - spec: - accessModes: - - ReadWriteOnce # The value must be ReadWriteOnce for the EVS volume. - resources: - requests: - storage: 10Gi - storageClassName: csi-disk # Storage class name. The value is csi-disk for the EVS volume. - updateStrategy: - type: RollingUpdate - - **vi nginx-headless.yaml** - - .. code-block:: - - apiVersion: v1 - kind: Service - metadata: - name: nginx-svc - namespace: default - labels: - app: nginx - spec: - selector: - app: nginx - version: v1 - clusterIP: None - ports: - - name: nginx - targetPort: 80 - nodePort: 0 - port: 80 - protocol: TCP - type: ClusterIP - -#. Create a workload and the corresponding headless service. - - **kubectl create -f nginx-statefulset.yaml** - - If the following information is displayed, the StatefulSet has been successfully created. - - .. code-block:: - - statefulset.apps/nginx created - - **kubectl create -f nginx-headless.yaml** - - If the following information is displayed, the headless service has been successfully created. - - .. code-block:: - - service/nginx-svc created - -#. If the workload will be accessed through a ClusterIP or NodePort Service, set the corresponding workload access type. For details, see :ref:`Networking `. diff --git a/umn/source/workloads/creating_a_workload/creating_a_cron_job.rst b/umn/source/workloads/creating_a_workload/creating_a_cron_job.rst new file mode 100644 index 0000000..3d1f611 --- /dev/null +++ b/umn/source/workloads/creating_a_workload/creating_a_cron_job.rst @@ -0,0 +1,254 @@ +:original_name: cce_10_0151.html + +.. 
_cce_10_0151: + +Creating a Cron Job +=================== + +Scenario +-------- + +A cron job runs on a repeating schedule. You can perform time synchronization for all active nodes at a fixed time point. + +A cron job runs periodically at the specified time. It is similar with Linux crontab. A cron job has the following characteristics: + +- Runs only once at the specified time. +- Runs periodically at the specified time. + +The typical usage of a cron job is as follows: + +- Schedules jobs at the specified time. +- Creates jobs to run periodically, for example, database backup and email sending. + +Prerequisites +------------- + +Resources have been created. For details, see :ref:`Creating a Node `. + +Using the CCE Console +--------------------- + +#. Log in to the CCE console. + +#. Click the cluster name to go to the cluster console, choose **Workloads** in the navigation pane, and click the **Create Workload** in the upper right corner. + +#. Set basic information about the workload. + + **Basic Info** + + - **Workload Type**: Select **Cron Job**. For details about workload types, see :ref:`Overview `. + - **Workload Name**: Enter the name of the workload. Enter 1 to 63 characters starting with a lowercase letter and ending with a lowercase letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed. + - **Namespace**: Select the namespace of the workload. The default value is **default**. You can also click **Create Namespace** to create one. For details, see :ref:`Creating a Namespace `. + - **Container Runtime**: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences, see :ref:`Kata Runtime and Common Runtime `. + + **Container Settings** + + - Container Information + + Multiple containers can be configured in a pod. You can click **Add Container** on the right to configure multiple containers for the pod. + + - **Basic Info**: Configure basic information about the container. + + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+====================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | Container Name | Name the container. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Pull Policy | Image update or pull policy. 
If you select **Always**, the image is pulled from the image repository each time. If you do not select **Always**, the existing image of the node is preferentially used. If the image does not exist, the image is pulled from the image repository. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Image Name | Click **Select Image** and select the image used by the container. | + | | | + | | To use a third-party image, see :ref:`Using Third-Party Images `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Image Tag | Select the image tag to be deployed. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | CPU Quota | - **Request**: minimum number of CPU cores required by a container. The default value is 0.25 cores. | + | | - **Limit**: maximum number of CPU cores available for a container. Do not leave **Limit** unspecified. Otherwise, intensive use of container resources will occur and your workload may exhibit unexpected behavior. | + | | | + | | If **Request** and **Limit** are not specified, the quota is not limited. For more information and suggestions about **Request** and **Limit**, see :ref:`Setting Container Specifications `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Memory Quota | - **Request**: minimum amount of memory required by a container. The default value is 512 MiB. | + | | - **Limit**: maximum amount of memory available for a container. When memory usage exceeds the specified memory limit, the container will be terminated. | + | | | + | | If **Request** and **Limit** are not specified, the quota is not limited. For more information and suggestions about **Request** and **Limit**, see :ref:`Setting Container Specifications `. 
| + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | (Optional) GPU Quota | Configurable only when the cluster contains GPU nodes and the :ref:`gpu-beta ` add-on is installed. | + | | | + | | - **All**: The GPU is not used. | + | | - **Dedicated**: GPU resources are exclusively used by the container. | + | | - **Shared**: percentage of GPU resources used by the container. For example, if this parameter is set to **10%**, the container uses 10% of GPU resources. | + | | | + | | For details about how to use GPU in the cluster, see :ref:`Default GPU Scheduling in Kubernetes `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | (Optional) Privileged Container | Programs in a privileged container have certain privileges. | + | | | + | | If **Privileged Container** is enabled, the container is assigned privileges. For example, privileged containers can manipulate network devices on the host machine and modify kernel parameters. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | (Optional) Init Container | Indicates whether to use the container as an init container. The init container does not support health check. | + | | | + | | An init container is a special container that runs before other app containers in a pod are started. Each pod can contain multiple containers. In addition, a pod can contain one or more Init containers. Application containers in a pod are started and run only after the running of all Init containers completes. For details, see `Init Container `__. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + - (Optional) **Lifecycle**: Configure operations to be performed in a specific phase of the container lifecycle, such as Startup Command, Post-Start, and Pre-Stop. For details, see :ref:`Setting Container Lifecycle Parameters `. + - (Optional) **Environment Variables**: Set variables for the container running environment using key-value pairs. 
These variables transfer external information to containers running in pods and can be flexibly modified after application deployment. For details, see :ref:`Setting an Environment Variable `. + + - **Image Access Credential**: Select the credential used for accessing the image repository. The default value is **default-secret**. You can use default-secret to access images in SWR. For details about **default-secret**, see :ref:`default-secret `. + + - (Optional) **GPU**: **All** is selected by default. The workload instance will be scheduled to the node with the specified GPU graphics card type. + + **Schedule** + + - **Concurrency Policy**: The following three modes are supported: + + - **Forbid**: A new job cannot be created before the previous job is completed. + - **Allow**: The cron job allows concurrently running jobs, which preempt cluster resources. + - **Replace**: A new job replaces the previous job when it is time to create a job but the previous job is not completed. + + - **Policy Settings**: specifies when a new cron job is executed. Policy settings in YAML are implemented using cron expressions. + + - A cron job is executed at a fixed interval. The unit can be minute, hour, day, or month. For example, if a cron job is executed every 30 minutes and the corresponding cron expression is **\*/30 \* \* \* \***, the execution time starts from 0 in the unit range, for example, **00:00:00**, **00:30:00**, **01:00:00**, and **...**. + - The cron job is executed at a fixed time (by month). For example, if a cron job is executed at 00:00 on the first day of each month, the cron expression is **0 0 1 \*/1 \***, and the execution time is **\****-01-01 00:00:00**, **\****-02-01 00:00:00**, and **...**. + - The cron job is executed by week. For example, if a cron job is executed at 00:00 every Monday, the cron expression is **0 0 \* \* 1**, and the execution time is **\****-**-01 00:00:00 on Monday**, **\****-**-08 00:00:00 on Monday**, and **...**. + - **Custom Cron Expression**: For details about how to use cron expressions, see `CronJob `__. + + .. note:: + + - If a cron job is executed at a fixed time (by month) and the number of days in a month does not exist, the cron job will not be executed in this month. For example, the execution will skip February if the date is set to 30. + + - Due to the definition of cron, the fixed period is not a strict period. The time unit range is divided from 0 by period. For example, if the unit is minute, the value ranges from 0 to 59. If the value cannot be exactly divided, the last period is reset. Therefore, an accurate period can be represented only when the period can be evenly divided. + + Take a cron job that is executed by hour as an example. As **/2, /3, /4, /6, /8, and /12** can exactly divide 24 hours, an accurate period can be represented. If another period is used, the last period will be reset at the beginning of a new day. For example, if the cron expression is **\* \*/12 \* \* \***, the execution time is **00:00:00** and **12:00:00** every day. If the cron expression is **\* \*/13 \* \* \***, the execution time is **00:00:00** and **13:00:00** every day. At 00:00 on the next day, the execution time is updated even if the period does not reach 13 hours. + + - **Job Records**: You can set the number of jobs that are successfully executed or fail to be executed. Setting a limit to **0** corresponds to keeping none of the jobs after they finish. 
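+
+   The schedule options above map to standard fields in the CronJob spec. The following is a minimal sketch, for illustration only (the workload name, image, and command are placeholders), showing how **Concurrency Policy**, the cron expression, and **Job Records** appear in YAML:
+
+   .. code-block::
+
+      apiVersion: batch/v1              # batch/v1beta1 for clusters earlier than v1.21 (see the note in Using kubectl)
+      kind: CronJob
+      metadata:
+        name: time-sync-example         # placeholder name
+      spec:
+        schedule: "*/30 * * * *"        # Policy Settings: run every 30 minutes
+        concurrencyPolicy: Forbid       # Concurrency Policy: Forbid, Allow, or Replace
+        successfulJobsHistoryLimit: 3   # Job Records: completed jobs to keep (0 keeps none)
+        failedJobsHistoryLimit: 1       # Job Records: failed jobs to keep
+        jobTemplate:
+          spec:
+            template:
+              spec:
+                containers:
+                - name: time-sync
+                  image: busybox        # placeholder image
+                  command: ["/bin/sh", "-c", "date"]
+                restartPolicy: OnFailure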
+ + **(Optional) Advanced Settings** + + - **Labels and Annotations**: Add labels or annotations for pods using key-value pairs. After entering the key and value, click **Confirm**. For details about how to use and configure labels and annotations, see :ref:`Labels and Annotations `. + + - Network configuration: + + - Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see :ref:`Configuring QoS Rate Limiting for Inter-Pod Access `. + +#. Click **Create Workload** in the lower right corner. + +Using kubectl +------------- + +A cron job has the following configuration parameters: + +- **.spec.schedule**: takes a `Cron `__ format string, for example, **0 \* \* \* \*** or **@hourly**, as schedule time of jobs to be created and executed. +- **.spec.jobTemplate**: specifies jobs to be run, and has the same schema as when you are :ref:`Creating a Job Using kubectl `. +- **.spec.startingDeadlineSeconds**: specifies the deadline for starting a job. +- **.spec.concurrencyPolicy**: specifies how to treat concurrent executions of a job created by the Cron job. The following options are supported: + + - **Allow** (default value): allows concurrently running jobs. + - **Forbid**: forbids concurrent runs, skipping next run if previous has not finished yet. + - **Replace**: cancels the currently running job and replaces it with a new one. + +The following is an example cron job, which is saved in the **cronjob.yaml** file. + +.. note:: + + In clusters of v1.21 or later, CronJob apiVersion is **batch/v1**. + + In clusters earlier than v1.21, CronJob apiVersion is **batch/v1beta1**. + +.. code-block:: + + apiVersion: batch/v1 + kind: CronJob + metadata: + name: hello + spec: + schedule: "*/1 * * * *" + jobTemplate: + spec: + template: + spec: + containers: + - name: hello + image: busybox + command: + - /bin/sh + - -c + - date; echo Hello from the Kubernetes cluster + restartPolicy: OnFailure + imagePullSecrets: + - name: default-secret + +**Run the job.** + +#. Create a cron job. + + **kubectl create -f cronjob.yaml** + + Information similar to the following is displayed: + + .. code-block:: + + cronjob.batch/hello created + +#. Query the running status of the cron job: + + **kubectl get cronjob** + + .. code-block:: + + NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE + hello */1 * * * * False 0 9s + + **kubectl get jobs** + + .. code-block:: + + NAME COMPLETIONS DURATION AGE + hello-1597387980 1/1 27s 45s + + **kubectl get pod** + + .. code-block:: + + NAME READY STATUS RESTARTS AGE + hello-1597387980-tjv8f 0/1 Completed 0 114s + hello-1597388040-lckg9 0/1 Completed 0 39s + + **kubectl logs** **hello-1597387980-tjv8f** + + .. code-block:: + + Fri Aug 14 06:56:31 UTC 2020 + Hello from the Kubernetes cluster + + **kubectl delete cronjob hello** + + .. code-block:: + + cronjob.batch "hello" deleted + + .. important:: + + When a CronJob is deleted, the related jobs and pods are deleted accordingly. + +Related Operations +------------------ + +After a CronJob is created, you can perform operations listed in :ref:`Table 1 `. + +.. _cce_10_0151__t6d520710097a4ee098eae42bcb508608: + +.. 
table:: **Table 1** Related operations + + +-----------------------------------+----------------------------------------------------------------------------------------------------+ + | Operation | Description | + +===================================+====================================================================================================+ + | Editing a YAML file | Click **More** > **Edit YAML** next to the cron job name to edit the YAML file of the current job. | + +-----------------------------------+----------------------------------------------------------------------------------------------------+ + | Stopping a CronJob | #. Select the job to be stopped and click **Stop** in the **Operation** column. | + | | #. Click **Yes**. | + +-----------------------------------+----------------------------------------------------------------------------------------------------+ + | Deleting a CronJob | #. Select the CronJob to be deleted and click **More** > **Delete** in the **Operation** column. | + | | | + | | #. Click **Yes**. | + | | | + | | Deleted jobs cannot be restored. Therefore, exercise caution when deleting a job. | + +-----------------------------------+----------------------------------------------------------------------------------------------------+ diff --git a/umn/source/workloads/creating_a_workload/creating_a_daemonset.rst b/umn/source/workloads/creating_a_workload/creating_a_daemonset.rst new file mode 100644 index 0000000..ff71c86 --- /dev/null +++ b/umn/source/workloads/creating_a_workload/creating_a_daemonset.rst @@ -0,0 +1,201 @@ +:original_name: cce_10_0216.html + +.. _cce_10_0216: + +Creating a DaemonSet +==================== + +Scenario +-------- + +CCE provides deployment and management capabilities for multiple types of containers and supports features of container workloads, including creation, configuration, monitoring, scaling, upgrade, uninstall, service discovery, and load balancing. + +DaemonSet ensures that only one pod runs on all or some nodes. When a node is added to a cluster, a new pod is also added for the node. When a node is removed from a cluster, the pod is also reclaimed. If a DaemonSet is deleted, all pods created by it will be deleted. + +The typical application scenarios of a DaemonSet are as follows: + +- Run the cluster storage daemon, such as glusterd or Ceph, on each node. +- Run the log collection daemon, such as Fluentd or Logstash, on each node. +- Run the monitoring daemon, such as Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia (gmond), on each node. + +You can deploy a DaemonSet for each type of daemons on all nodes, or deploy multiple DaemonSets for the same type of daemons. In the second case, DaemonSets have different flags and different requirements on memory and CPU for different hardware types. + +Prerequisites +------------- + +You must have one cluster available before creating a DaemonSet. For details on how to create a cluster, see :ref:`Creating a Cluster `. + +Using the CCE Console +--------------------- + +#. Log in to the CCE console. + +#. Click the cluster name to go to the cluster console, choose **Workloads** in the navigation pane, and click the **Create Workload** in the upper right corner. + +#. Set basic information about the workload. + + **Basic Info** + + - **Workload Type**: Select **DaemonSet**. For details about workload types, see :ref:`Overview `. + - **Workload Name**: Enter the name of the workload. 
Enter 1 to 63 characters starting with a lowercase letter and ending with a lowercase letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed. + - **Namespace**: Select the namespace of the workload. The default value is **default**. You can also click **Create Namespace** to create one. For details, see :ref:`Creating a Namespace `. + - **Container Runtime**: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences, see :ref:`Kata Runtime and Common Runtime `. + - **Time Zone Synchronization**: Specify whether to enable time zone synchronization. After time zone synchronization is enabled, the container and node use the same time zone. The time zone synchronization function depends on the local disk mounted to the container. Do not modify or delete the time zone. For details, see :ref:`Configuring Time Zone Synchronization `. + + **Container Settings** + + - Container Information + + Multiple containers can be configured in a pod. You can click **Add Container** on the right to configure multiple containers for the pod. + + - **Basic Info**: Configure basic information about the container. + + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+====================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | Container Name | Name the container. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Pull Policy | Image update or pull policy. If you select **Always**, the image is pulled from the image repository each time. If you do not select **Always**, the existing image of the node is preferentially used. If the image does not exist, the image is pulled from the image repository. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Image Name | Click **Select Image** and select the image used by the container. 
| + | | | + | | To use a third-party image, see :ref:`Using Third-Party Images `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Image Tag | Select the image tag to be deployed. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | CPU Quota | - **Request**: minimum number of CPU cores required by a container. The default value is 0.25 cores. | + | | - **Limit**: maximum number of CPU cores available for a container. Do not leave **Limit** unspecified. Otherwise, intensive use of container resources will occur and your workload may exhibit unexpected behavior. | + | | | + | | If **Request** and **Limit** are not specified, the quota is not limited. For more information and suggestions about **Request** and **Limit**, see :ref:`Setting Container Specifications `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Memory Quota | - **Request**: minimum amount of memory required by a container. The default value is 512 MiB. | + | | - **Limit**: maximum amount of memory available for a container. When memory usage exceeds the specified memory limit, the container will be terminated. | + | | | + | | If **Request** and **Limit** are not specified, the quota is not limited. For more information and suggestions about **Request** and **Limit**, see :ref:`Setting Container Specifications `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | (Optional) GPU Quota | Configurable only when the cluster contains GPU nodes and the :ref:`gpu-beta ` add-on is installed. | + | | | + | | - **All**: The GPU is not used. | + | | - **Dedicated**: GPU resources are exclusively used by the container. | + | | - **Shared**: percentage of GPU resources used by the container. For example, if this parameter is set to **10%**, the container uses 10% of GPU resources. | + | | | + | | For details about how to use GPU in the cluster, see :ref:`Default GPU Scheduling in Kubernetes `. 
| + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | (Optional) Privileged Container | Programs in a privileged container have certain privileges. | + | | | + | | If **Privileged Container** is enabled, the container is assigned privileges. For example, privileged containers can manipulate network devices on the host machine and modify kernel parameters. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | (Optional) Init Container | Indicates whether to use the container as an init container. The init container does not support health check. | + | | | + | | An init container is a special container that runs before other app containers in a pod are started. Each pod can contain multiple containers. In addition, a pod can contain one or more Init containers. Application containers in a pod are started and run only after the running of all Init containers completes. For details, see `Init Container `__. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + - (Optional) **Lifecycle**: Configure operations to be performed in a specific phase of the container lifecycle, such as Startup Command, Post-Start, and Pre-Stop. For details, see :ref:`Setting Container Lifecycle Parameters `. + + - (Optional) **Health Check**: Set the liveness probe, ready probe, and startup probe as required. For details, see :ref:`Setting Health Check for a Container `. + + - (Optional) **Environment Variables**: Set variables for the container running environment using key-value pairs. These variables transfer external information to containers running in pods and can be flexibly modified after application deployment. For details, see :ref:`Setting an Environment Variable `. + + - (Optional) **Data Storage**: Mount local storage or cloud storage to the container. The application scenarios and mounting modes vary with the storage type. For details, see :ref:`Storage `. + + - (Optional) **Security Context**: Set container permissions to protect the system and other containers from being affected. Enter the user ID to set container permissions and prevent systems and other containers from being affected. + + - (Optional) **Logging**: Report container stdout streams to AOM by default and require no manual settings. You can manually configure the log collection path. 
For details, see :ref:`Using ICAgent to Collect Container Logs `. + + To disable the standard output of the current workload, add the annotation kubernetes.AOM.log.stdout: [] in :ref:`Labels and Annotations `. For details about how to use this annotation, see :ref:`Table 1 `. + + - **Image Access Credential**: Select the credential used for accessing the image repository. The default value is **default-secret**. You can use default-secret to access images in SWR. For details about **default-secret**, see :ref:`default-secret `. + + - (Optional) **GPU**: **All** is selected by default. The workload instance will be scheduled to the node with the specified GPU graphics card type. + + **(Optional) Service Settings** + + A Service provides external access for pods. With a static IP address, a Service forwards access traffic to pods and performs automatic load balancing for these pods. + + You can also create a Service after creating a workload. For details about Services of different types, see :ref:`Overview `. + + **(Optional) Advanced Settings** + + - Upgrade: Specify the upgrade mode and upgrade parameters of the workload. **Rolling upgrade** and **Replace upgrade** are supported. For details, see :ref:`Configuring the Workload Upgrade Policy `. + + - **Scheduling**: Configure affinity and anti-affinity policies for flexible workload scheduling. Node affinity, pod affinity, and pod anti-affinity are supported. For details, see :ref:`Scheduling Policy (Affinity/Anti-affinity) `. + + - **Toleration**: Using both taints and tolerations allows (not forcibly) the pod to be scheduled to a node with the matching taints, and controls the pod eviction policies after the node where the pod is located is tainted. For details, see :ref:`Taints and Tolerations `. + - **Labels and Annotations**: Add labels or annotations for pods using key-value pairs. After entering the key and value, click **Confirm**. For details about how to use and configure labels and annotations, see :ref:`Labels and Annotations `. + - **DNS**: Configure a separate DNS policy for the workload. For details, see :ref:`DNS Configuration `. + - Network configuration: + + - Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see :ref:`Configuring QoS Rate Limiting for Inter-Pod Access `. + +#. Click **Create Workload** in the lower right corner. + +Using kubectl +------------- + +The following procedure uses Nginx as an example to describe how to create a workload using kubectl. + +#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. + +#. Create and edit the **nginx-daemonset.yaml** file. **nginx-daemonset.yaml** is an example file name, and you can change it as required. + + **vi nginx-daemonset.yaml** + + The content of the description file is as follows: The following provides an example. For more information on DaemonSets, see `Kubernetes documents `__. + + .. code-block:: + + apiVersion: apps/v1 + kind: DaemonSet + metadata: + name: nginx-daemonset + labels: + app: nginx-daemonset + spec: + selector: + matchLabels: + app: nginx-daemonset + template: + metadata: + labels: + app: nginx-daemonset + spec: + nodeSelector: # Node selection. A pod is created on a node only when the node meets daemon=need. 
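+             # The label below is an example node label. It must already exist on the target nodes,
+             # for example by running "kubectl label node <node-name> daemon=need" (the node name is illustrative),
+             # so that the DaemonSet schedules a pod onto those nodes.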
+ daemon: need + containers: + - name: nginx-daemonset + image: nginx:alpine + resources: + limits: + cpu: 250m + memory: 512Mi + requests: + cpu: 250m + memory: 512Mi + imagePullSecrets: + - name: default-secret + + The **replicas** parameter used in defining a Deployment or StatefulSet does not exist in the above configuration for a DaemonSet, because each node has only one replica. It is fixed. + + The nodeSelector in the preceding pod template specifies that a pod is created only on the nodes that meet **daemon=need**, as shown in the following figure. If you want to create a pod on each node, delete the label. + +#. Create a DaemonSet. + + **kubectl create -f nginx-daemonset.yaml** + + If the following information is displayed, the DaemonSet is being created. + + .. code-block:: + + daemonset.apps/nginx-daemonset created + +#. Query the DaemonSet status. + + **kubectl get ds** + + .. code-block:: + + $ kubectl get ds + NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE + nginx-daemonset 1 1 0 1 0 daemon=need 116s + +#. If the workload will be accessed through a ClusterIP or NodePort Service, set the corresponding workload access type. For details, see :ref:`Network `. diff --git a/umn/source/workloads/creating_a_deployment.rst b/umn/source/workloads/creating_a_workload/creating_a_deployment.rst similarity index 50% rename from umn/source/workloads/creating_a_deployment.rst rename to umn/source/workloads/creating_a_workload/creating_a_deployment.rst index 2498905..7d83c93 100644 --- a/umn/source/workloads/creating_a_deployment.rst +++ b/umn/source/workloads/creating_a_workload/creating_a_deployment.rst @@ -13,7 +13,7 @@ Deployments are workloads (for example, Nginx) that do not store any data or sta Prerequisites ------------- -- Before creating a containerized workload, you must have an available cluster. For details about how to create a cluster, see :ref:`Creating a CCE Cluster `. +- Before creating a workload, you must have an available cluster. For details on how to create a cluster, see :ref:`Creating a Cluster `. - To enable public access to a workload, ensure that an EIP or load balancer has been bound to at least one node in the cluster. .. note:: @@ -25,17 +25,17 @@ Using the CCE Console #. Log in to the CCE console. -#. Click the cluster name and access the cluster details page, choose **Workloads** in the navigation pane, and click the **Create Workload** in the upper right corner. +#. Click the cluster name to go to the cluster console, choose **Workloads** in the navigation pane, and click the **Create Workload** in the upper right corner. #. Set basic information about the workload. **Basic Info** - **Workload Type**: Select **Deployment**. For details about workload types, see :ref:`Overview `. - - **Workload Name**: Enter the name of the workload. Enter 1 to 63 characters starting with a lowercase letter and ending with a letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed. + - **Workload Name**: Enter the name of the workload. Enter 1 to 63 characters starting with a lowercase letter and ending with a lowercase letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed. - **Namespace**: Select the namespace of the workload. The default value is **default**. You can also click **Create Namespace** to create one. For details, see :ref:`Creating a Namespace `. - - **Pods**: Enter the number of pods. 
- - **Container Runtime**: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences between runC and Kata, see :ref:`Kata Containers and Common Containers `. + - **Pods**: Enter the number of pods of the workload. + - **Container Runtime**: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences, see :ref:`Kata Runtime and Common Runtime `. - **Time Zone Synchronization**: Specify whether to enable time zone synchronization. After time zone synchronization is enabled, the container and node use the same time zone. The time zone synchronization function depends on the local disk mounted to the container. Do not modify or delete the time zone. For details, see :ref:`Configuring Time Zone Synchronization `. **Container Settings** @@ -44,36 +44,90 @@ Using the CCE Console Multiple containers can be configured in a pod. You can click **Add Container** on the right to configure multiple containers for the pod. - - **Basic Info**: See :ref:`Setting Basic Container Information `. - - **Lifecycle**: See :ref:`Setting Container Lifecycle Parameters `. - - **Health Check**: See :ref:`Setting Health Check for a Container `. - - **Environment Variables**: See :ref:`Setting an Environment Variable `. - - **Data Storage**: See :ref:`Overview `. + - **Basic Info**: Configure basic information about the container. + + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+====================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | Container Name | Name the container. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Pull Policy | Image update or pull policy. If you select **Always**, the image is pulled from the image repository each time. If you do not select **Always**, the existing image of the node is preferentially used. If the image does not exist, the image is pulled from the image repository. 
| + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Image Name | Click **Select Image** and select the image used by the container. | + | | | + | | To use a third-party image, see :ref:`Using Third-Party Images `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Image Tag | Select the image tag to be deployed. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | CPU Quota | - **Request**: minimum number of CPU cores required by a container. The default value is 0.25 cores. | + | | - **Limit**: maximum number of CPU cores available for a container. Do not leave **Limit** unspecified. Otherwise, intensive use of container resources will occur and your workload may exhibit unexpected behavior. | + | | | + | | If **Request** and **Limit** are not specified, the quota is not limited. For more information and suggestions about **Request** and **Limit**, see :ref:`Setting Container Specifications `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Memory Quota | - **Request**: minimum amount of memory required by a container. The default value is 512 MiB. | + | | - **Limit**: maximum amount of memory available for a container. When memory usage exceeds the specified memory limit, the container will be terminated. | + | | | + | | If **Request** and **Limit** are not specified, the quota is not limited. For more information and suggestions about **Request** and **Limit**, see :ref:`Setting Container Specifications `. 
| + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | (Optional) GPU Quota | Configurable only when the cluster contains GPU nodes and the :ref:`gpu-beta ` add-on is installed. | + | | | + | | - **All**: The GPU is not used. | + | | - **Dedicated**: GPU resources are exclusively used by the container. | + | | - **Shared**: percentage of GPU resources used by the container. For example, if this parameter is set to **10%**, the container uses 10% of GPU resources. | + | | | + | | For details about how to use GPU in the cluster, see :ref:`Default GPU Scheduling in Kubernetes `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | (Optional) Privileged Container | Programs in a privileged container have certain privileges. | + | | | + | | If **Privileged Container** is enabled, the container is assigned privileges. For example, privileged containers can manipulate network devices on the host machine and modify kernel parameters. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | (Optional) Init Container | Indicates whether to use the container as an init container. The init container does not support health check. | + | | | + | | An init container is a special container that runs before other app containers in a pod are started. Each pod can contain multiple containers. In addition, a pod can contain one or more Init containers. Application containers in a pod are started and run only after the running of all Init containers completes. For details, see `Init Container `__. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + - (Optional) **Lifecycle**: Configure operations to be performed in a specific phase of the container lifecycle, such as Startup Command, Post-Start, and Pre-Stop. For details, see :ref:`Setting Container Lifecycle Parameters `. + + - (Optional) **Health Check**: Set the liveness probe, ready probe, and startup probe as required. 
For details, see :ref:`Setting Health Check for a Container `. + + - (Optional) **Environment Variables**: Set variables for the container running environment using key-value pairs. These variables transfer external information to containers running in pods and can be flexibly modified after application deployment. For details, see :ref:`Setting an Environment Variable `. + + - (Optional) **Data Storage**: Mount local storage or cloud storage to the container. The application scenarios and mounting modes vary with the storage type. For details, see :ref:`Storage `. .. note:: If the workload contains more than one pod, EVS volumes cannot be mounted. - - **Security Context**: Set container permissions to protect the system and other containers from being affected. Enter the user ID to set container permissions and prevent systems and other containers from being affected. - - **Logging**: See :ref:`Using ICAgent to Collect Container Logs `. + - (Optional) **Security Context**: Set container permissions to protect the system and other containers from being affected. Enter the user ID to set container permissions and prevent systems and other containers from being affected. + + - (Optional) **Logging**: Report container stdout streams to AOM by default and require no manual settings. You can manually configure the log collection path. For details, see :ref:`Using ICAgent to Collect Container Logs `. + + To disable the standard output of the current workload, add the annotation kubernetes.AOM.log.stdout: [] in :ref:`Labels and Annotations `. For details about how to use this annotation, see :ref:`Table 1 `. - **Image Access Credential**: Select the credential used for accessing the image repository. The default value is **default-secret**. You can use default-secret to access images in SWR. For details about **default-secret**, see :ref:`default-secret `. - - **GPU graphics card**: **All** is selected by default. The workload instance will be scheduled to the node with the specified GPU graphics card type. + - (Optional) **GPU**: **All** is selected by default. The workload instance will be scheduled to the node with the specified GPU graphics card type. - **Service Settings** + **(Optional) Service Settings** - A Service is used for pod access. With a fixed IP address, a Service forwards access traffic to pods and performs load balancing for these pods. + A Service provides external access for pods. With a static IP address, a Service forwards access traffic to pods and performs automatic load balancing for these pods. - You can also create a Service after creating a workload. For details about the Service, see :ref:`Service Overview `. + You can also create a Service after creating a workload. For details about Services of different types, see :ref:`Overview `. - **Advanced Settings** + **(Optional) Advanced Settings** + + - Upgrade: Specify the upgrade mode and upgrade parameters of the workload. **Rolling upgrade** and **Replace upgrade** are supported. For details, see :ref:`Configuring the Workload Upgrade Policy `. + + - **Scheduling**: Configure affinity and anti-affinity policies for flexible workload scheduling. Node affinity, pod affinity, and pod anti-affinity are supported. For details, see :ref:`Scheduling Policy (Affinity/Anti-affinity) `. + + - **Toleration**: Using both taints and tolerations allows (not forcibly) the pod to be scheduled to a node with the matching taints, and controls the pod eviction policies after the node where the pod is located is tainted. 
For details, see :ref:`Taints and Tolerations `. + + - .. _cce_10_0047__li179714209414: + + **Labels and Annotations**: Add labels or annotations for pods using key-value pairs. After entering the key and value, click **Confirm**. For details about how to use and configure labels and annotations, see :ref:`Labels and Annotations `. + + - **DNS**: Configure a separate DNS policy for the workload. For details, see :ref:`DNS Configuration `. - - **Upgrade**: See :ref:`Configuring the Workload Upgrade Policy `. - - **Scheduling**: See :ref:`Scheduling Policy (Affinity/Anti-affinity) `. - - **Labels and Annotations**: See :ref:`Pod Labels and Annotations `. - - **Toleration**: Using both taints and tolerations allows (not forcibly) the pod to be scheduled to a node with the matching taints, and controls the pod eviction policies after the node where the pod is located is tainted. For details, see :ref:`Tolerations `. - - **DNS**: See :ref:`DNS Configuration `. - Network configuration: - Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see :ref:`Configuring QoS Rate Limiting for Inter-Pod Access `. @@ -144,7 +198,7 @@ The following procedure uses Nginx as an example to describe how to create a wor +-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ | name | Name of the Deployment. | Mandatory | +-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ - | Spec | Detailed description of the Deployment. | Mandatory | + | spec | Detailed description of the Deployment. | Mandatory | +-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ | replicas | Number of pods. | Mandatory | +-----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------+ @@ -202,4 +256,4 @@ The following procedure uses Nginx as an example to describe how to create a wor - **AVAILABLE**: indicates the number of available pods. - **AGE**: period the Deployment keeps running -#. If the Deployment will be accessed through a ClusterIP or NodePort Service, add the corresponding Service. For details, see :ref:`Networking `. +#. If the Deployment will be accessed through a ClusterIP or NodePort Service, add the corresponding Service. For details, see :ref:`Network `. 
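+
+   The following is a minimal sketch of such a Service, assuming the Deployment's pods carry the label **app: nginx** (adjust the Service name, selector, and ports to match your workload):
+
+   .. code-block::
+
+      apiVersion: v1
+      kind: Service
+      metadata:
+        name: nginx-svc          # placeholder name
+      spec:
+        type: NodePort           # use ClusterIP for in-cluster access only
+        selector:
+          app: nginx             # must match the pod labels of the Deployment
+        ports:
+        - port: 80               # Service port
+          targetPort: 80         # container port exposed by the nginx image
+          protocol: TCP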
diff --git a/umn/source/workloads/creating_a_workload/creating_a_job.rst b/umn/source/workloads/creating_a_workload/creating_a_job.rst new file mode 100644 index 0000000..e851118 --- /dev/null +++ b/umn/source/workloads/creating_a_workload/creating_a_job.rst @@ -0,0 +1,245 @@ +:original_name: cce_10_0150.html + +.. _cce_10_0150: + +Creating a Job +============== + +Scenario +-------- + +Jobs are short-lived and run for a certain time to completion. They can be executed immediately after being deployed. It is completed after it exits normally (exit 0). + +A job is a resource object that is used to control batch tasks. It is different from a long-term servo workload (such as Deployment and StatefulSet). + +A job is started and terminated at specific times, while a long-term servo workload runs unceasingly unless being terminated. The pods managed by a job automatically exit after successfully completing the job based on user configurations. The success flag varies according to the spec.completions policy. + +- One-off jobs: A single pod runs once until successful termination. +- Jobs with a fixed success count: N pods run until successful termination. +- A queue job is considered completed based on the global success confirmed by the application. + +Prerequisites +------------- + +Resources have been created. For details, see :ref:`Creating a Node `. If clusters and nodes are available, you need not create them again. + +Using the CCE Console +--------------------- + +#. Log in to the CCE console. + +#. Click the cluster name to go to the cluster console, choose **Workloads** in the navigation pane, and click the **Create Workload** in the upper right corner. + +#. Set basic information about the workload. + + **Basic Info** + + - **Workload Type**: Select **Job**. For details about workload types, see :ref:`Overview `. + - **Workload Name**: Enter the name of the workload. Enter 1 to 63 characters starting with a lowercase letter and ending with a lowercase letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed. + - **Namespace**: Select the namespace of the workload. The default value is **default**. You can also click **Create Namespace** to create one. For details, see :ref:`Creating a Namespace `. + - **Pods**: Enter the number of pods of the workload. + - **Container Runtime**: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences, see :ref:`Kata Runtime and Common Runtime `. + + **Container Settings** + + - Container Information + + Multiple containers can be configured in a pod. You can click **Add Container** on the right to configure multiple containers for the pod. + + - **Basic Info**: Configure basic information about the container. 
+ + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+====================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | Container Name | Name the container. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Pull Policy | Image update or pull policy. If you select **Always**, the image is pulled from the image repository each time. If you do not select **Always**, the existing image of the node is preferentially used. If the image does not exist, the image is pulled from the image repository. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Image Name | Click **Select Image** and select the image used by the container. | + | | | + | | To use a third-party image, see :ref:`Using Third-Party Images `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Image Tag | Select the image tag to be deployed. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | CPU Quota | - **Request**: minimum number of CPU cores required by a container. The default value is 0.25 cores. | + | | - **Limit**: maximum number of CPU cores available for a container. 
Do not leave **Limit** unspecified. Otherwise, intensive use of container resources will occur and your workload may exhibit unexpected behavior. | + | | | + | | If **Request** and **Limit** are not specified, the quota is not limited. For more information and suggestions about **Request** and **Limit**, see :ref:`Setting Container Specifications `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Memory Quota | - **Request**: minimum amount of memory required by a container. The default value is 512 MiB. | + | | - **Limit**: maximum amount of memory available for a container. When memory usage exceeds the specified memory limit, the container will be terminated. | + | | | + | | If **Request** and **Limit** are not specified, the quota is not limited. For more information and suggestions about **Request** and **Limit**, see :ref:`Setting Container Specifications `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | (Optional) GPU Quota | Configurable only when the cluster contains GPU nodes and the :ref:`gpu-beta ` add-on is installed. | + | | | + | | - **All**: The GPU is not used. | + | | - **Dedicated**: GPU resources are exclusively used by the container. | + | | - **Shared**: percentage of GPU resources used by the container. For example, if this parameter is set to **10%**, the container uses 10% of GPU resources. | + | | | + | | For details about how to use GPU in the cluster, see :ref:`Default GPU Scheduling in Kubernetes `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | (Optional) Privileged Container | Programs in a privileged container have certain privileges. | + | | | + | | If **Privileged Container** is enabled, the container is assigned privileges. For example, privileged containers can manipulate network devices on the host machine and modify kernel parameters. 
| + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | (Optional) Init Container | Indicates whether to use the container as an init container. The init container does not support health check. | + | | | + | | An init container is a special container that runs before other app containers in a pod are started. Each pod can contain multiple containers. In addition, a pod can contain one or more Init containers. Application containers in a pod are started and run only after the running of all Init containers completes. For details, see `Init Container `__. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + - (Optional) **Lifecycle**: Configure operations to be performed in a specific phase of the container lifecycle, such as Startup Command, Post-Start, and Pre-Stop. For details, see :ref:`Setting Container Lifecycle Parameters `. + + - (Optional) **Environment Variables**: Set variables for the container running environment using key-value pairs. These variables transfer external information to containers running in pods and can be flexibly modified after application deployment. For details, see :ref:`Setting an Environment Variable `. + + - (Optional) **Data Storage**: Mount local storage or cloud storage to the container. The application scenarios and mounting modes vary with the storage type. For details, see :ref:`Storage `. + + .. note:: + + If the workload contains more than one pod, EVS volumes cannot be mounted. + + - (Optional) **Logging**: Report container stdout streams to AOM by default and require no manual settings. You can manually configure the log collection path. For details, see :ref:`Using ICAgent to Collect Container Logs `. + + To disable the standard output of the current workload, add the annotation kubernetes.AOM.log.stdout: [] in :ref:`Labels and Annotations `. For details about how to use this annotation, see :ref:`Table 1 `. + + - **Image Access Credential**: Select the credential used for accessing the image repository. The default value is **default-secret**. You can use default-secret to access images in SWR. For details about **default-secret**, see :ref:`default-secret `. + + - (Optional) **GPU**: **All** is selected by default. The workload instance will be scheduled to the node with the specified GPU graphics card type. + + **(Optional) Advanced Settings** + + - **Labels and Annotations**: Add labels or annotations for pods using key-value pairs. After entering the key and value, click **Confirm**. For details about how to use and configure labels and annotations, see :ref:`Labels and Annotations `. + + - **Job Settings** + + - **Parallel Pods**: Maximum number of pods that can run in parallel during job execution. 
The value cannot be greater than the total number of pods in the job. + - **Timeout (s)**: Once a job reaches this time, the job status becomes failed and all pods in this job will be deleted. If you leave this parameter blank, the job will never time out. + + - Network configuration: + + - Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see :ref:`Configuring QoS Rate Limiting for Inter-Pod Access `. + +#. Click **Create Workload** in the lower right corner. + +.. _cce_10_0150__section450152719412: + +Using kubectl +------------- + +A job has the following configuration parameters: + +- **spec.template**: has the same schema as a pod. +- **RestartPolicy**: can only be set to **Never** or **OnFailure**. +- For a single-pod job, the job ends after the pod runs successfully by default. +- **.spec.completions**: indicates the number of pods that need to run successfully to end a job. The default value is **1**. +- **.spec.parallelism**: indicates the number of pods that run concurrently. The default value is **1**. +- **spec.backoffLimit**: indicates the maximum number of retries performed if a pod fails. When the limit is reached, the pod will not try again. +- **.spec.activeDeadlineSeconds**: indicates the running time of pods. Once the time is reached, all pods of the job are terminated. The priority of .spec.activeDeadlineSeconds is higher than that of .spec.backoffLimit. That is, if a job reaches the .spec.activeDeadlineSeconds, the spec.backoffLimit is ignored. + +Based on the **.spec.completions** and **.spec.Parallelism** settings, jobs are classified into the following types. + +.. table:: **Table 1** Job types + + +---------------------------------------------+-----------------------------------------------------------------------+-------------------------------------------------------+ + | Job Type | Description | Example | + +=============================================+=======================================================================+=======================================================+ + | One-off jobs | A single pod runs once until successful termination. | Database migration | + +---------------------------------------------+-----------------------------------------------------------------------+-------------------------------------------------------+ + | Jobs with a fixed completion count | One pod runs until reaching the specified **completions** count. | Work queue processing pod | + +---------------------------------------------+-----------------------------------------------------------------------+-------------------------------------------------------+ + | Parallel jobs with a fixed completion count | Multiple pods run until reaching the specified **completions** count. | Multiple pods for processing work queues concurrently | + +---------------------------------------------+-----------------------------------------------------------------------+-------------------------------------------------------+ + | Parallel jobs | One or more pods run until successful termination. | Multiple pods for processing work queues concurrently | + +---------------------------------------------+-----------------------------------------------------------------------+-------------------------------------------------------+ + +The following is an example job, which calculates Pi till the 2000\ :sup:`th` digit and prints the output. + +.. 
code-block:: + + apiVersion: batch/v1 + kind: Job + metadata: + name: myjob + spec: + completions: 50 # 50 pods need to be run to finish a job. In this example, Pi is printed for 50 times. + parallelism: 5 # 5 pods are run in parallel. + backoffLimit: 5 # The maximum number of retry times is 5. + template: + spec: + containers: + - name: pi + image: perl + command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] + restartPolicy: Never + imagePullSecrets: + - name: default-secret + +**Description** + +- **apiVersion: batch/v1** indicates the version of the current job. +- **kind: Job** indicates that the current resource is a job. +- **restartPolicy: Never** indicates the current restart policy. For jobs, this parameter can only be set to **Never** or **OnFailure**. For other controllers (for example, Deployments), you can set this parameter to **Always**. + +**Run the job.** + +#. Start the job. + + .. code-block:: console + + [root@k8s-master k8s]# kubectl apply -f myjob.yaml + job.batch/myjob created + +#. View the job details. + + **kubectl get job** + + .. code-block:: console + + [root@k8s-master k8s]# kubectl get job + NAME COMPLETIONS DURATION AGE + myjob 50/50 23s 3m45s + + If the value of **COMPLETIONS** is **50/50**, the job is successfully executed. + +#. Query the pod status. + + **kubectl get pod** + + .. code-block:: console + + [root@k8s-master k8s]# kubectl get pod + NAME READY STATUS RESTARTS AGE + myjob-29qlw 0/1 Completed 0 4m5s + ... + + If the status is **Completed**, the job is complete. + +#. View the pod logs. + + **kubectl logs** + + .. code-block:: + + # kubectl logs myjob-29qlw + 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391
949450471237137869609563643719172874677646575739624138908658326459958133904780275901 + +Related Operations +------------------ + +After a one-off job is created, you can perform operations listed in :ref:`Table 2 `. + +.. _cce_10_0150__t84075653e7544394939d13740fad0c20: + +.. table:: **Table 2** Related operations + + +-----------------------------------+-------------------------------------------------------------------------------------------------------------+ + | Operation | Description | + +===================================+=============================================================================================================+ + | Editing a YAML file | Click **More** > **Edit YAML** next to the job name to edit the YAML file corresponding to the current job. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------+ + | Deleting a job | #. Select the job to be deleted and choose **More** > **Delete** in the **Operation** column. | + | | | + | | #. Click **Yes**. | + | | | + | | Deleted jobs cannot be restored. Exercise caution when deleting a job. | + +-----------------------------------+-------------------------------------------------------------------------------------------------------------+ diff --git a/umn/source/workloads/creating_a_workload/creating_a_statefulset.rst b/umn/source/workloads/creating_a_workload/creating_a_statefulset.rst new file mode 100644 index 0000000..6055e5e --- /dev/null +++ b/umn/source/workloads/creating_a_workload/creating_a_statefulset.rst @@ -0,0 +1,275 @@ +:original_name: cce_10_0048.html + +.. _cce_10_0048: + +Creating a StatefulSet +====================== + +Scenario +-------- + +StatefulSets are a type of workloads whose data or status is stored while they are running. For example, MySQL is a StatefulSet because it needs to store new data. + +A container can be migrated between different hosts, but data is not stored on the hosts. To store StatefulSet data persistently, attach HA storage volumes provided by CCE to the container. + +Constraints +----------- + +- When you delete or scale a StatefulSet, the system does not delete the storage volumes associated with the StatefulSet to ensure data security. +- When you delete a StatefulSet, reduce the number of replicas to **0** before deleting the StatefulSet so that pods in the StatefulSet can be stopped in order. +- When you create a StatefulSet, a headless Service is required for pod access. For details, see :ref:`Headless Service `. +- When a node is unavailable, pods become **Unready**. In this case, manually delete the pods of the StatefulSet so that the pods can be migrated to a normal node. + +Prerequisites +------------- + +- Before creating a workload, you must have an available cluster. For details on how to create a cluster, see :ref:`Creating a Cluster `. +- To enable public access to a workload, ensure that an EIP or load balancer has been bound to at least one node in the cluster. + + .. note:: + + If a pod has multiple containers, ensure that the ports used by the containers do not conflict with each other. Otherwise, creating the StatefulSet will fail. + +Using the CCE Console +--------------------- + +#. Log in to the CCE console. + +#. Click the cluster name to go to the cluster console, choose **Workloads** in the navigation pane, and click the **Create Workload** in the upper right corner. + +#. Set basic information about the workload. 
+ + **Basic Info** + + - **Workload Type**: Select **StatefulSet**. For details about workload types, see :ref:`Overview `. + - **Workload Name**: Enter the name of the workload. Enter 1 to 63 characters starting with a lowercase letter and ending with a lowercase letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed. + - **Namespace**: Select the namespace of the workload. The default value is **default**. You can also click **Create Namespace** to create one. For details, see :ref:`Creating a Namespace `. + - **Pods**: Enter the number of pods of the workload. + - **Container Runtime**: A CCE cluster uses runC by default, whereas a CCE Turbo cluster supports both runC and Kata. For details about the differences, see :ref:`Kata Runtime and Common Runtime `. + - **Time Zone Synchronization**: Specify whether to enable time zone synchronization. After time zone synchronization is enabled, the container and node use the same time zone. The time zone synchronization function depends on the local disk mounted to the container. Do not modify or delete the time zone. For details, see :ref:`Configuring Time Zone Synchronization `. + + **Container Settings** + + - Container Information + + Multiple containers can be configured in a pod. You can click **Add Container** on the right to configure multiple containers for the pod. + + - **Basic Info**: Configure basic information about the container. + + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Parameter | Description | + +===================================+====================================================================================================================================================================================================================================================================================================================================================================================================================================+ + | Container Name | Name the container. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Pull Policy | Image update or pull policy. If you select **Always**, the image is pulled from the image repository each time. If you do not select **Always**, the existing image of the node is preferentially used. If the image does not exist, the image is pulled from the image repository. 
| + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Image Name | Click **Select Image** and select the image used by the container. | + | | | + | | To use a third-party image, see :ref:`Using Third-Party Images `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Image Tag | Select the image tag to be deployed. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | CPU Quota | - **Request**: minimum number of CPU cores required by a container. The default value is 0.25 cores. | + | | - **Limit**: maximum number of CPU cores available for a container. Do not leave **Limit** unspecified. Otherwise, intensive use of container resources will occur and your workload may exhibit unexpected behavior. | + | | | + | | If **Request** and **Limit** are not specified, the quota is not limited. For more information and suggestions about **Request** and **Limit**, see :ref:`Setting Container Specifications `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Memory Quota | - **Request**: minimum amount of memory required by a container. The default value is 512 MiB. | + | | - **Limit**: maximum amount of memory available for a container. When memory usage exceeds the specified memory limit, the container will be terminated. | + | | | + | | If **Request** and **Limit** are not specified, the quota is not limited. For more information and suggestions about **Request** and **Limit**, see :ref:`Setting Container Specifications `. 
| + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | (Optional) GPU Quota | Configurable only when the cluster contains GPU nodes and the :ref:`gpu-beta ` add-on is installed. | + | | | + | | - **All**: The GPU is not used. | + | | - **Dedicated**: GPU resources are exclusively used by the container. | + | | - **Shared**: percentage of GPU resources used by the container. For example, if this parameter is set to **10%**, the container uses 10% of GPU resources. | + | | | + | | For details about how to use GPU in the cluster, see :ref:`Default GPU Scheduling in Kubernetes `. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | (Optional) Privileged Container | Programs in a privileged container have certain privileges. | + | | | + | | If **Privileged Container** is enabled, the container is assigned privileges. For example, privileged containers can manipulate network devices on the host machine and modify kernel parameters. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | (Optional) Init Container | Indicates whether to use the container as an init container. The init container does not support health check. | + | | | + | | An init container is a special container that runs before other app containers in a pod are started. Each pod can contain multiple containers. In addition, a pod can contain one or more Init containers. Application containers in a pod are started and run only after the running of all Init containers completes. For details, see `Init Container `__. | + +-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + + - (Optional) **Lifecycle**: Configure operations to be performed in a specific phase of the container lifecycle, such as Startup Command, Post-Start, and Pre-Stop. For details, see :ref:`Setting Container Lifecycle Parameters `. + + - (Optional) **Health Check**: Set the liveness probe, ready probe, and startup probe as required. 
For details, see :ref:`Setting Health Check for a Container `. + + - (Optional) **Environment Variables**: Set variables for the container running environment using key-value pairs. These variables transfer external information to containers running in pods and can be flexibly modified after application deployment. For details, see :ref:`Setting an Environment Variable `. + + - (Optional) **Data Storage**: Mount local storage or cloud storage to the container. The application scenarios and mounting modes vary with the storage type. For details, see :ref:`Storage `. + + .. note:: + + - StatefulSets support dynamic attachment of EVS disks. For details, see :ref:`Dynamically Mounting an EVS Disk to a StatefulSet ` and :ref:`Dynamically Mounting a Local PV to a StatefulSet `. + + Dynamic mounting is achieved by using the `volumeClaimTemplates `__ field and depends on the dynamic creation capability of StorageClass. A StatefulSet associates each pod with a PVC using the **volumeClaimTemplates** field, and the PVC is bound to the corresponding PV. Therefore, after the pod is rescheduled, the original data can still be mounted based on the PVC name. + + - After a workload is created, the storage that is dynamically mounted cannot be updated. + + - (Optional) **Security Context**: Set container permissions to protect the system and other containers from being affected. Enter the user ID to set container permissions and prevent systems and other containers from being affected. + + - (Optional) **Logging**: Report container stdout streams to AOM by default and require no manual settings. You can manually configure the log collection path. For details, see :ref:`Using ICAgent to Collect Container Logs `. + + To disable the standard output of the current workload, add the annotation kubernetes.AOM.log.stdout: [] in :ref:`Labels and Annotations `. For details about how to use this annotation, see :ref:`Table 1 `. + + - **Image Access Credential**: Select the credential used for accessing the image repository. The default value is **default-secret**. You can use default-secret to access images in SWR. For details about **default-secret**, see :ref:`default-secret `. + + - (Optional) **GPU**: **All** is selected by default. The workload instance will be scheduled to the node with the specified GPU graphics card type. + + **Headless Service Parameters** + + A headless Service is used to solve the problem of mutual access between pods in a StatefulSet. The headless Service provides a fixed access domain name for each pod. For details, see :ref:`Headless Service `. + + **(Optional) Service Settings** + + A Service provides external access for pods. With a static IP address, a Service forwards access traffic to pods and performs automatic load balancing for these pods. + + You can also create a Service after creating a workload. For details about Services of different types, see :ref:`Overview `. + + **(Optional) Advanced Settings** + + - Upgrade: Specify the upgrade mode and upgrade parameters of the workload. **Rolling upgrade** and **Replace upgrade** are supported. For details, see :ref:`Configuring the Workload Upgrade Policy `. + + - **Pod Management Policies**: + + For some distributed systems, the StatefulSet sequence is unnecessary and/or should not occur. These systems require only uniqueness and identifiers. + + - **OrderedReady**: The StatefulSet will deploy, delete, or scale pods in order and one by one. (The StatefulSet continues only after the previous pod is ready or deleted.) 
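+
+       In the StatefulSet manifest, this choice surfaces as the standard Kubernetes **spec.podManagementPolicy** field; a minimal sketch (whether the console writes exactly this field is an assumption):
+
+       .. code-block::
+
+          spec:
+            podManagementPolicy: OrderedReady   # or "Parallel"
+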
This is the default policy. + - **Parallel**: The StatefulSet will create pods in parallel to match the desired scale without waiting, and will delete all pods at once. + + - **Scheduling**: Configure affinity and anti-affinity policies for flexible workload scheduling. Node affinity, pod affinity, and pod anti-affinity are supported. For details, see :ref:`Scheduling Policy (Affinity/Anti-affinity) `. + + - **Toleration**: Using both taints and tolerations allows (not forcibly) the pod to be scheduled to a node with the matching taints, and controls the pod eviction policies after the node where the pod is located is tainted. For details, see :ref:`Taints and Tolerations `. + + - **Labels and Annotations**: Add labels or annotations for pods using key-value pairs. After entering the key and value, click **Confirm**. For details about how to use and configure labels and annotations, see :ref:`Labels and Annotations `. + + - **DNS**: Configure a separate DNS policy for the workload. For details, see :ref:`DNS Configuration `. + + - Network configuration: + + - Pod ingress/egress bandwidth limitation: You can set ingress/egress bandwidth limitation for pods. For details, see :ref:`Configuring QoS Rate Limiting for Inter-Pod Access `. + +#. Click **Create Workload** in the lower right corner. + +Using kubectl +------------- + +In this example, an nginx workload is used and the EVS volume is dynamically mounted to it using the **volumeClaimTemplates** field. + +#. Use kubectl to connect to the cluster. For details, see :ref:`Connecting to a Cluster Using kubectl `. + +#. Create and edit the **nginx-statefulset.yaml** file. + + **nginx-statefulset.yaml** is an example file name, and you can change it as required. + + **vi nginx-statefulset.yaml** + + The following provides an example of the file contents. For more information on StatefulSet, see the `Kubernetes documentation `__. + + .. code-block:: + + apiVersion: apps/v1 + kind: StatefulSet + metadata: + name: nginx + spec: + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: container-1 + image: nginx:latest + imagePullPolicy: IfNotPresent + resources: + requests: + cpu: 250m + memory: 512Mi + limits: + cpu: 250m + memory: 512Mi + volumeMounts: + - name: test + readOnly: false + mountPath: /usr/share/nginx/html + subPath: '' + imagePullSecrets: + - name: default-secret + dnsPolicy: ClusterFirst + volumes: [] + serviceName: nginx-svc + replicas: 2 + volumeClaimTemplates: # Dynamically mounts the EVS volume to the workload. + - apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: test + namespace: default + annotations: + everest.io/disk-volume-type: SAS # SAS EVS volume type. + labels: + failure-domain.beta.kubernetes.io/region: eu-de # region where the EVS volume is created. + failure-domain.beta.kubernetes.io/zone: # AZ where the EVS volume is created. It must be the same as the AZ of the node. + spec: + accessModes: + - ReadWriteOnce # The value must be ReadWriteOnce for the EVS volume. + resources: + requests: + storage: 10Gi + storageClassName: csi-disk # Storage class name. The value is csi-disk for the EVS volume. + updateStrategy: + type: RollingUpdate + + **vi nginx-headless.yaml** + + .. 
code-block:: + + apiVersion: v1 + kind: Service + metadata: + name: nginx-svc + namespace: default + labels: + app: nginx + spec: + selector: + app: nginx + version: v1 + clusterIP: None + ports: + - name: nginx + targetPort: 80 + nodePort: 0 + port: 80 + protocol: TCP + type: ClusterIP + +#. Create a workload and the corresponding headless service. + + **kubectl create -f nginx-statefulset.yaml** + + If the following information is displayed, the StatefulSet has been successfully created. + + .. code-block:: + + statefulset.apps/nginx created + + **kubectl create -f nginx-headless.yaml** + + If the following information is displayed, the headless service has been successfully created. + + .. code-block:: + + service/nginx-svc created + +#. If the workload will be accessed through a ClusterIP or NodePort Service, set the corresponding workload access type. For details, see :ref:`Network `. diff --git a/umn/source/workloads/creating_a_workload/index.rst b/umn/source/workloads/creating_a_workload/index.rst new file mode 100644 index 0000000..520d302 --- /dev/null +++ b/umn/source/workloads/creating_a_workload/index.rst @@ -0,0 +1,22 @@ +:original_name: cce_10_0673.html + +.. _cce_10_0673: + +Creating a Workload +=================== + +- :ref:`Creating a Deployment ` +- :ref:`Creating a StatefulSet ` +- :ref:`Creating a DaemonSet ` +- :ref:`Creating a Job ` +- :ref:`Creating a Cron Job ` + +.. toctree:: + :maxdepth: 1 + :hidden: + + creating_a_deployment + creating_a_statefulset + creating_a_daemonset + creating_a_job + creating_a_cron_job diff --git a/umn/source/workloads/index.rst b/umn/source/workloads/index.rst index b6cb8e1..0317c6a 100644 --- a/umn/source/workloads/index.rst +++ b/umn/source/workloads/index.rst @@ -6,37 +6,19 @@ Workloads ========= - :ref:`Overview ` -- :ref:`Creating a Deployment ` -- :ref:`Creating a StatefulSet ` -- :ref:`Creating a DaemonSet ` -- :ref:`Creating a Job ` -- :ref:`Creating a Cron Job ` -- :ref:`Managing Workloads and Jobs ` +- :ref:`Creating a Workload ` - :ref:`Configuring a Container ` -- :ref:`GPU Scheduling ` -- :ref:`CPU Core Binding ` - :ref:`Accessing a Container ` -- :ref:`Configuring QoS Rate Limiting for Inter-Pod Access ` -- :ref:`Pod Labels and Annotations ` -- :ref:`Volcano Scheduling ` -- :ref:`Security Group Policies ` +- :ref:`Managing Workloads and Jobs ` +- :ref:`Kata Runtime and Common Runtime ` .. toctree:: :maxdepth: 1 :hidden: overview - creating_a_deployment - creating_a_statefulset - creating_a_daemonset - creating_a_job - creating_a_cron_job - managing_workloads_and_jobs + creating_a_workload/index configuring_a_container/index - gpu_scheduling - cpu_core_binding/index accessing_a_container - configuring_qos_rate_limiting_for_inter-pod_access - pod_labels_and_annotations - volcano_scheduling/index - security_group_policies + managing_workloads_and_jobs + kata_runtime_and_common_runtime diff --git a/umn/source/nodes/node_overview/kata_containers_and_common_containers.rst b/umn/source/workloads/kata_runtime_and_common_runtime.rst similarity index 79% rename from umn/source/nodes/node_overview/kata_containers_and_common_containers.rst rename to umn/source/workloads/kata_runtime_and_common_runtime.rst index 2101440..d864eb0 100644 --- a/umn/source/nodes/node_overview/kata_containers_and_common_containers.rst +++ b/umn/source/workloads/kata_runtime_and_common_runtime.rst @@ -2,43 +2,45 @@ .. 
_cce_10_0463: -Kata Containers and Common Containers -===================================== +Kata Runtime and Common Runtime +=============================== -The most significant difference is that each Kata container (pod) runs on an independent micro-VM, has an independent OS kernel, and is securely isolated at the virtualization layer. CCE provides container isolation that is more secure than independent private Kubernetes clusters. With isolated OS kernels, computing resources, and networks, pod resources and data will not be preempted and stolen by other pods. +The most significant difference is that each Kata container (pod) runs on an independent micro-VM, has an independent OS kernel, and is securely isolated at the virtualization layer. With Kata runtime, kernels, compute resources, and networks are isolated between containers to protect pod resources and data from being preempted and stolen by other pods. -You can run common or Kata containers on a single node in a CCE Turbo cluster. The differences between them are as follows: +CCE Turbo clusters allow you to create workloads using common runtime or Kata runtime as required. The differences between them are as follows. -+------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+------------------------------------------------------------------------+ -| Category | Kata Container | Common Container (Docker) | Common Container (containerd) | -+==========================================================================================+=====================================================================================================================================================================================================================================================================================================+========================================================================+========================================================================+ -| Node type used to run containers | Bare-metal server (BMS) | VM | VM | -+------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+------------------------------------------------------------------------+ -| Container Engine | containerd | Docker | containerd | 
-+------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+------------------------------------------------------------------------+ -| Container Runtime | Kata | runC | runC | -+------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+------------------------------------------------------------------------+ -| Container kernel | Exclusive kernel | Sharing the kernel with the host | Sharing the kernel with the host | -+------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+------------------------------------------------------------------------+ -| Container isolation | Lightweight VMs | cgroups and namespaces | cgroups and namespaces | -+------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+------------------------------------------------------------------------+ -| Container engine storage driver | Device Mapper | OverlayFS2 | OverlayFS | -+------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+------------------------------------------------------------------------+ -| `Pod overhead `__ | Memory: 100 MiB | None | None | -| | | | | -| | CPU: 0.1 cores | | | -| | | | | -| | Pod overhead is a feature for accounting for the resources consumed by the pod infrastructure on top of the container requests and limits. For example, if **limits.cpu** is set to 0.5 cores and **limits.memory** to 256 MiB for a pod, the pod will request 0.6-core CPUs and 356 MiB of memory. 
| | | -+------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+------------------------------------------------------------------------+ -| Minimal specifications | Memory: 256 MiB | None | None | -| | | | | -| | CPU: 0.25 cores | | | -| | | | | -| | It is recommended that the ratio of CPU (unit: core) to memory (unit: GiB) be in the range of 1:1 to 1:8. For example, if CPU is 0.5 cores, the memory should range form 512 MiB to 4 GiB. | | | -+------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+------------------------------------------------------------------------+ -| Container engine CLI | crictl | Docker | crictl | -+------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+------------------------------------------------------------------------+ -| Pod computing resources | The request and limit values must be the same for both CPU and memory. | The request and limit values can be different for both CPU and memory. | The request and limit values can be different for both CPU and memory. 
| -+------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+------------------------------------------------------------------------+ -| Host network | Not supported | Supported | Supported | -+------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+------------------------------------------------------------------------+ ++------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+ +| Category | Kata Runtime | Common Runtime | ++==========================================================================================+=====================================================================================================================================================================================================================================================================================================+========================================================================+ +| Node type used to run containers | Bare-metal server (BMS) | VM | ++------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+ +| Container engine | containerd | Docker and containerd | ++------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+ +| Container runtime | Kata | runC | 
++------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+ +| Container kernel | Exclusive kernel | Sharing the kernel with the host | ++------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+ +| Container isolation | Lightweight VMs | cgroups and namespaces | ++------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+ +| Container engine storage driver | Device Mapper | - Docker container: OverlayFS2 | +| | | - containerd container: OverlayFS | ++------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+ +| `Pod overhead `__ | Memory: 100 MiB | None | +| | | | +| | CPU: 0.1 cores | | +| | | | +| | Pod overhead is a feature for accounting for the resources consumed by the pod infrastructure on top of the container requests and limits. For example, if **limits.cpu** is set to 0.5 cores and **limits.memory** to 256 MiB for a pod, the pod will request 0.6-core CPUs and 356 MiB of memory. | | ++------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+ +| Minimal specifications | Memory: 256 MiB | None | +| | | | +| | CPU: 0.25 cores | | +| | | | +| | It is recommended that the ratio of CPU (unit: core) to memory (unit: GiB) be in the range of 1:1 to 1:8. For example, if CPU is 0.5 cores, the memory should range form 512 MiB to 4 GiB. 
| | ++------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+ +| Container engine CLI | crictl | - Docker container: docker | +| | | - containerd container: crictl | ++------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+ +| Pod computing resources | The request and limit values must be the same for both CPU and memory. | The request and limit values can be different for both CPU and memory. | ++------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+ +| :ref:`Host Network ` | Not supported | Supported | ++------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+ diff --git a/umn/source/workloads/managing_workloads_and_jobs.rst b/umn/source/workloads/managing_workloads_and_jobs.rst index a47252c..6ef5e8e 100644 --- a/umn/source/workloads/managing_workloads_and_jobs.rst +++ b/umn/source/workloads/managing_workloads_and_jobs.rst @@ -29,7 +29,7 @@ After a workload is created, you can upgrade, monitor, roll back, or delete the +------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | :ref:`Enabling/Disabling the Upgrade ` | Only Deployments support this operation. | +------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | :ref:`Manage Label ` | Labels are key-value pairs and can be attached to workloads for affinity and anti-affinity scheduling. Jobs and Cron Jobs do not support this operation. | + | :ref:`Manage Label ` | Labels are attached to workloads as key-value pairs to manage and select workloads. 
Jobs and Cron Jobs do not support this operation. | +------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | :ref:`Delete ` | You can delete a workload or job that is no longer needed. Deleted workloads or jobs cannot be recovered. | +------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -56,11 +56,19 @@ Viewing Logs You can view logs of Deployments, StatefulSets, DaemonSets, and jobs. This section uses a Deployment as an example to describe how to view logs. +.. important:: + + Before viewing logs, ensure that the time of the browser is the same as that on the backend server. + #. Log in to the CCE console, go to an existing cluster, and choose **Workloads** in the navigation pane. #. Click the **Deployments** tab and click the **View Log** of the target workload. - On the displayed **View Log** window, you can view logs by time. + On the displayed **View Log** window, you can view logs. + + .. note:: + + The displayed logs are standard output logs of containers and do not have persistence and advanced O&M capabilities. To use more comprehensive log capabilities, see :ref:`Logs `. If the function of collecting standard output is enabled for the workload (enabled by default), you can go to AOM to view more workload logs. For details, see :ref:`Using ICAgent to Collect Container Logs `. .. _cce_10_0007__en-us_topic_0107283638_section17604174417381: @@ -79,7 +87,7 @@ Before replacing an image or image version, upload the new image to the SWR serv .. note:: - Workloads cannot be upgraded in batches. - - Before performing an in-place StatefulSet upgrade, you must manually delete old pods. Otherwise, the upgrade status is always displayed as **Upgrading**. + - Before performing an in-place StatefulSet upgrade, you must manually delete old pods. Otherwise, the upgrade status is always displayed as **Processing**. #. Upgrade the workload based on service requirements. The method for setting parameter is the same as that for creating a workload. #. After the update is complete, click **Upgrade Workload**, manually confirm the YAML file, and submit the upgrade. @@ -92,7 +100,7 @@ Editing a YAML file You can modify and download the YAML files of Deployments, StatefulSets, DaemonSets, and pods on the CCE console. YAML files of jobs and cron jobs can only be viewed, copied, and downloaded. This section uses a Deployment as an example to describe how to edit the YAML file. #. Log in to the CCE console, go to an existing cluster, and choose **Workloads** in the navigation pane. -#. Click the **Deployments** tab and choose **More** > **Edit YAML** in the **Operation** column of the target workload. In the dialog box displayed, modify the YAML file. +#. Click the **Deployments** tab and choose **More** > **Edit YAML** in the **Operation** column of the target workload. In the dialog box that is displayed, modify the YAML file. #. Click **OK**. #. (Optional) In the **Edit YAML** window, click **Download** to download the YAML file. @@ -144,23 +152,7 @@ Only Deployments support this operation. 
Managing Labels --------------- -Labels are key-value pairs and can be attached to workloads. Workload labels are often used for affinity and anti-affinity scheduling. You can add labels to multiple workloads or a specified workload. - -You can manage the labels of Deployments, StatefulSets, and DaemonSets based on service requirements. This section uses Deployments as an example to describe how to manage labels. - -In the following figure, three labels (release, env, and role) are defined for workload APP 1, APP 2, and APP 3. The values of these labels vary with workload. - -- Label of APP 1: [release:alpha;env:development;role:frontend] -- Label of APP 2: [release:beta;env:testing;role:frontend] -- Label of APP 3: [release:alpha;env:production;role:backend] - -If you set **key** to **role** and **value** to **frontend** when using workload scheduling or another function, APP 1 and APP 2 will be selected. - - -.. figure:: /_static/images/en-us_image_0000001517903028.png - :alt: **Figure 1** Label example - - **Figure 1** Label example +Labels are key-value pairs and can be attached to workloads. You can manage and select workloads by labels. You can add labels to multiple workloads or a specified workload. #. Log in to the CCE console, go to an existing cluster, and choose **Workloads** in the navigation pane. #. Click the **Deployments** tab and choose **More** > **Manage Label** in the **Operation** column of the target workload. @@ -192,8 +184,8 @@ You can delete a workload or job that is no longer needed. Deleted workloads or .. _cce_10_0007__en-us_topic_0107283638_section1947616516301: -Viewing Events --------------- +Events +------ This section uses Deployments as an example to illustrate how to view events of a workload. To view the event of a job or cron jon, click **View Event** in the **Operation** column of the target workload. diff --git a/umn/source/workloads/overview.rst b/umn/source/workloads/overview.rst index 84ab7aa..d448940 100644 --- a/umn/source/workloads/overview.rst +++ b/umn/source/workloads/overview.rst @@ -20,12 +20,12 @@ Pods can be used in either of the following ways: .. _cce_10_0006__en-us_topic_0254767870_fig347141918551: - .. figure:: /_static/images/en-us_image_0000001518222716.png + .. figure:: /_static/images/en-us_image_0000001695896725.png :alt: **Figure 1** Pod **Figure 1** Pod -In Kubernetes, pods are rarely created directly. Instead, controllers such as Deployments and jobs, are used to manage pods. Controllers can create and manage multiple pods, and provide replica management, rolling upgrade, and self-healing capabilities. A controller generally uses a pod template to create corresponding pods. +In Kubernetes, pods are rarely created directly. Instead, controllers such as Deployments and Jobs, are used to manage pods. Controllers can create and manage multiple pods, and provide replica management, rolling upgrade, and self-healing capabilities. A controller typically uses a pod template to create corresponding pods. Deployment ---------- @@ -33,10 +33,10 @@ Deployment A pod is the smallest and simplest unit that you create or deploy in Kubernetes. It is designed to be an ephemeral, one-off entity. A pod can be evicted when node resources are insufficient and disappears along with a cluster node failure. Kubernetes provides controllers to manage pods. Controllers can create and manage pods, and provide replica management, rolling upgrade, and self-healing capabilities. The most commonly used controller is Deployment. -.. 
figure:: /_static/images/en-us_image_0000001569023033.png - :alt: **Figure 2** Relationship between a Deployment and pods +.. figure:: /_static/images/en-us_image_0000001695896721.png + :alt: **Figure 2** Deployment - **Figure 2** Relationship between a Deployment and pods + **Figure 2** Deployment A Deployment can contain one or more pods. These pods have the same role. Therefore, the system automatically distributes requests to multiple pods of a Deployment. @@ -49,7 +49,7 @@ All pods under a Deployment have the same characteristics except for the name an However, Deployments cannot meet the requirements in some distributed scenarios when each pod requires its own status or in a distributed database where each pod requires independent storage. -With detailed analysis, it is found that each part of distributed stateful applications plays a different role. For example, the database nodes are deployed in active/standby mode, and pods are dependent on each other. In this case, you need to meet the following requirements for the pods: +With detailed analysis, it is found that each part of distributed stateful applications plays a different role. For example, the database nodes are deployed in active/standby mode, and pods are dependent on each other. In this case, the pods need to meet the following requirements: - A pod can be recognized by other pods. Therefore, a pod must have a fixed identifier. - Each pod has an independent storage device. After a pod is deleted and then restored, the data read from the pod must be the same as the previous one. Otherwise, the pod status is inconsistent. @@ -58,7 +58,7 @@ To address the preceding requirements, Kubernetes provides StatefulSets. #. A StatefulSet provides a fixed name for each pod following a fixed number ranging from 0 to N. After a pod is rescheduled, the pod name and the host name remain unchanged. -#. A StatefulSet provides a fixed access domain name for each pod through the headless Service (described in following sections). +#. A StatefulSet provides a fixed access domain name for each pod through the headless Service (described in the following sections). #. The StatefulSet creates PersistentVolumeClaims (PVCs) with fixed identifiers to ensure that pods can access the same persistent data after being rescheduled. @@ -72,7 +72,7 @@ A DaemonSet runs a pod on each node in a cluster and ensures that there is only DaemonSets are closely related to nodes. If a node becomes faulty, the DaemonSet will not create the same pods on other nodes. -.. figure:: /_static/images/en-us_image_0000001518062772.png +.. figure:: /_static/images/en-us_image_0000001647577048.png :alt: **Figure 3** DaemonSet **Figure 3** DaemonSet @@ -82,7 +82,7 @@ Job and Cron Job Jobs and cron jobs allow you to run short lived, one-off tasks in batch. They ensure the task pods run to completion. -- A job is a resource object used by Kubernetes to control batch tasks. Jobs are different from long-term servo tasks (such as Deployments and StatefulSets). The former is started and terminated at specific times, while the latter runs unceasingly unless being terminated. The pods managed by a job will be automatically removed after successfully completing tasks based on user configurations. +- A job is a resource object used by Kubernetes to control batch tasks. Jobs are different from long-term servo tasks (such as Deployments and StatefulSets). The former is started and terminated at specific times, while the latter runs unceasingly unless being terminated. 
The pods managed by a job will be automatically removed after completing tasks based on user configurations. - A cron job runs a job periodically on a specified schedule. A cron job object is similar to a line of a crontab file in Linux. This run-to-completion feature of jobs is especially suitable for one-off tasks, such as continuous integration (CI). @@ -92,24 +92,22 @@ Workload Lifecycle .. table:: **Table 1** Status description - +------------------------+-------------------------------------------------------------------------------------------------------------------------+ - | Status | Description | - +========================+=========================================================================================================================+ - | Running | All pods are running. | - +------------------------+-------------------------------------------------------------------------------------------------------------------------+ - | Unready | A container is abnormal, the number of pods is 0, or the workload is in pending state. | - +------------------------+-------------------------------------------------------------------------------------------------------------------------+ - | Upgrading/Rolling back | The workload is being upgraded or rolled back. | - +------------------------+-------------------------------------------------------------------------------------------------------------------------+ - | Available | For a multi-pod Deployment, some pods are abnormal but at least one pod is available. | - +------------------------+-------------------------------------------------------------------------------------------------------------------------+ - | Completed | The task is successfully executed. This status is available only for common tasks. | - +------------------------+-------------------------------------------------------------------------------------------------------------------------+ - | Stopped | The workload is stopped and the number of pods changes to 0. This status is available for workloads earlier than v1.13. | - +------------------------+-------------------------------------------------------------------------------------------------------------------------+ - | Deleting | The workload is being deleted. | - +------------------------+-------------------------------------------------------------------------------------------------------------------------+ - | Pausing | The workload is being paused. | - +------------------------+-------------------------------------------------------------------------------------------------------------------------+ + +------------+-------------------------------------------------------------------------------------------------------------------------+ + | Status | Description | + +============+=========================================================================================================================+ + | Running | All pods are running or the number of pods is 0. | + +------------+-------------------------------------------------------------------------------------------------------------------------+ + | Unready | The container malfunctions and the pod under the workload is not working. | + +------------+-------------------------------------------------------------------------------------------------------------------------+ + | Processing | The workload is not running but no error is reported. 
| + +------------+-------------------------------------------------------------------------------------------------------------------------+ + | Available | For a multi-pod Deployment, some pods are abnormal but at least one pod is available. | + +------------+-------------------------------------------------------------------------------------------------------------------------+ + | Completed | The task is successfully executed. This status is available only for common tasks. | + +------------+-------------------------------------------------------------------------------------------------------------------------+ + | Stopped | The workload is stopped and the number of pods changes to 0. This status is available for workloads earlier than v1.13. | + +------------+-------------------------------------------------------------------------------------------------------------------------+ + | Deleting | The workload is being deleted. | + +------------+-------------------------------------------------------------------------------------------------------------------------+ -.. |image1| image:: /_static/images/en-us_image_0000001517743628.png +.. |image1| image:: /_static/images/en-us_image_0000001647417792.png