Update content

OpenTelekomCloud Proposal Bot 2022-10-26 14:42:26 +00:00
parent 73c2c6eb54
commit e8c037285b
165 changed files with 72 additions and 308 deletions

View File

@ -35,6 +35,9 @@ sys.path.insert(0, os.path.abspath('../'))
sys.path.insert(0, os.path.abspath('./'))
# -- General configuration ----------------------------------------------------
# https://docutils.sourceforge.io/docs/user/smartquotes.html - it does not
# do what is expected
smartquotes = False
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.

View File

@ -82,7 +82,6 @@ If the security group rules authorized in the cluster are insufficient for you t
.. figure:: /_static/images/en-us_image_0000001349257145.png
:alt: **Figure 1** Update
**Figure 1** Update
#. Click **OK**.
@ -91,7 +90,6 @@ If the security group rules authorized in the cluster are insufficient for you t
.. figure:: /_static/images/en-us_image_0000001295738056.png
:alt: **Figure 2** Updating access control rules
**Figure 2** Updating access control rules
.. |image1| image:: /_static/images/en-us_image_0000001349137565.png

View File

@ -348,7 +348,6 @@ If a cluster fails to be created, the failed task will be managed on the **Manag
.. figure:: /_static/images/en-us_image_0000001296217700.png
:alt: **Figure 1** Failed task management
**Figure 1** Failed task management
:ref:`Table 5 <mrs_01_0513__ta32348b05460406dbdc7db739e0fbb38>` lists the error codes of MRS cluster creation failures.

View File

@ -87,15 +87,15 @@ Custom Cluster Template Description
+-----------------------------------------------------------------------------------------------------------+--------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Node Deployment Principle | | Applicable Scenario | Networking Rule |
+===========================================================================================================+==========================+==============================================================================================================================================================================================================================================================================================================================================+=========================================================================================================================================================================================================================================================================================================+
| Management nodes, control nodes, and data nodes are deployed separately. | MN × 2 + CN × 9 + DN × n | (Recommended) This scheme is used when the number of data nodes is 5002000. | - If the number of nodes in a cluster exceeds 200, the nodes are distributed to different subnets and the subnets are interconnected with each other in Layer 3 using core switches. Each subnet can contain a maximum of 200 nodes and the allocation of nodes to different subnets must be balanced. |
| Management nodes, control nodes, and data nodes are deployed separately. | MN x 2 + CN x 9 + DN x n | (Recommended) This scheme is used when the number of data nodes is 500-2000. | - If the number of nodes in a cluster exceeds 200, the nodes are distributed to different subnets and the subnets are interconnected with each other in Layer 3 using core switches. Each subnet can contain a maximum of 200 nodes and the allocation of nodes to different subnets must be balanced. |
| | | | - If the number of nodes is less than 200, the nodes in the cluster are deployed in the same subnet and the nodes are interconnected with each other in Layer 2 using aggregation switches. |
| (This scheme requires at least eight nodes.) | | | |
+-----------------------------------------------------------------------------------------------------------+--------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | MN × 2 + CN × 5 + DN × n | (Recommended) This scheme is used when the number of data nodes is 100500. | |
| | MN x 2 + CN x 5 + DN x n | (Recommended) This scheme is used when the number of data nodes is 100-500. | |
+-----------------------------------------------------------------------------------------------------------+--------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | MN × 2 + CN × 3 + DN × n | (Recommended) This scheme is used when the number of data nodes is 30100. | |
| | MN x 2 + CN x 3 + DN x n | (Recommended) This scheme is used when the number of data nodes is 30-100. | |
+-----------------------------------------------------------------------------------------------------------+--------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| The management nodes and control nodes are deployed together, and the data nodes are deployed separately. | (MN+CN) × 3 + DN × n | (Recommended) This scheme is used when the number of data nodes is 330. | Nodes in the cluster are deployed in the same subnet and are interconnected with each other at Layer 2 through aggregation switches. |
| The management nodes and control nodes are deployed together, and the data nodes are deployed separately. | (MN+CN) x 3 + DN x n | (Recommended) This scheme is used when the number of data nodes is 3-30. | Nodes in the cluster are deployed in the same subnet and are interconnected with each other at Layer 2 through aggregation switches. |
+-----------------------------------------------------------------------------------------------------------+--------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| The management nodes, control nodes, and data nodes are deployed together. | | - This scheme is applicable to a cluster having fewer than 6 nodes. | Nodes in the cluster are deployed in the same subnet and are interconnected with each other at Layer 2 through aggregation switches. |
| | | - This scheme requires at least three nodes. | |

View File

@ -33,7 +33,6 @@ This operation is required only for **MRS 3.1.0 or later**.
.. figure:: /_static/images/en-us_image_0000001296217820.png
:alt: **Figure 1** Disabling Ranger authentication
**Figure 1** Disabling Ranger authentication
#. (Optional) To use an existing authentication policy, perform this step to export the authentication policy on the Ranger web page. After the Ranger metadata is switched, you can import the existing authentication policy again. The following uses Hive as an example. After the export, a policy file in JSON format is generated in a local directory.
@ -54,7 +53,6 @@ This operation is required only for **MRS 3.1.0 or later**.
.. figure:: /_static/images/en-us_image_0000001349058021.png
:alt: **Figure 2** Exporting authentication policies
**Figure 2** Exporting authentication policies
f. .. _mrs_01_24051__li1947954718720:
@ -65,7 +63,6 @@ This operation is required only for **MRS 3.1.0 or later**.
.. figure:: /_static/images/en-us_image_0000001348738221.png
:alt: **Figure 3** Exporting Hive authentication policies
**Figure 3** Exporting Hive authentication policies
Configuring a Data Connection for an MRS Cluster
@ -102,7 +99,6 @@ Configuring a Data Connection for an MRS Cluster
.. figure:: /_static/images/en-us_image_0000001296058188.png
:alt: **Figure 4** Restarting a service
**Figure 4** Restarting a service
#. Enable Ranger authentication for the component to be authenticated. The Hive component is used as an example.
@ -117,7 +113,6 @@ Configuring a Data Connection for an MRS Cluster
.. figure:: /_static/images/en-us_image_0000001295738404.png
:alt: **Figure 5** Enabling Ranger authentication
**Figure 5** Enabling Ranger authentication
#. Log in to the Ranger web UI and click the import button |image2| in the row of the Hive component.
@ -133,7 +128,6 @@ Configuring a Data Connection for an MRS Cluster
.. figure:: /_static/images/en-us_image_0000001296217824.png
:alt: **Figure 6** Importing authentication policies
**Figure 6** Importing authentication policies
#. Restart the component for which Ranger authentication is enabled.

View File

@ -137,7 +137,6 @@ Step 4: Accessing the OBS File System
.. figure:: /_static/images/en-us_image_0000001296217708.png
:alt: **Figure 1** Returned file list
**Figure 1** Returned file list
#. Verify that Hive can access OBS.
@ -164,7 +163,6 @@ Step 4: Accessing the OBS File System
.. figure:: /_static/images/en-us_image_0000001348738105.png
:alt: **Figure 2** Returned table name
**Figure 2** Returned table name
e. Press **Ctrl+C** to exit the Hive beeline.
@ -187,7 +185,6 @@ Step 4: Accessing the OBS File System
.. figure:: /_static/images/en-us_image_0000001349057897.png
:alt: **Figure 3** Returned table name
**Figure 3** Returned table name
d. Press **Ctrl+C** to exit the Spark beeline.
@ -212,7 +209,6 @@ Step 4: Accessing the OBS File System
.. figure:: /_static/images/en-us_image_0000001349257377.png
:alt: **Figure 4** Return result
**Figure 4** Return result
d. Run **exit** to exit the client.
@ -239,7 +235,6 @@ Step 4: Accessing the OBS File System
.. figure:: /_static/images/en-us_image_0000001349057901.png
:alt: **Figure 5** Downloading the Presto user authentication credential
**Figure 5** Downloading the Presto user authentication credential
#. On FusionInsight Manager for MRS 3.x or later, choose **System > Permission > User**. In the row that contains the newly added user, click **More > Download Authentication Credential**.
@ -248,7 +243,6 @@ Step 4: Accessing the OBS File System
.. figure:: /_static/images/en-us_image_0000001296058088.png
:alt: **Figure 6** Downloading the Presto user authentication credential
**Figure 6** Downloading the Presto user authentication credential
e. .. _mrs_01_0768__li65281811161910:
@ -284,7 +278,6 @@ Step 4: Accessing the OBS File System
.. figure:: /_static/images/en-us_image_0000001296058084.png
:alt: **Figure 7** Return result
**Figure 7** Return result
j. Run **exit** to exit the client.
@ -300,7 +293,6 @@ Step 4: Accessing the OBS File System
.. figure:: /_static/images/en-us_image_0000001349137793.png
:alt: **Figure 8** Creating a Flink job
**Figure 8** Creating a Flink job
c. On OBS Console, go to the output path specified during job creation. If the output directory is automatically created and contains the job execution results, OBS access is successful.
@ -309,7 +301,6 @@ Step 4: Accessing the OBS File System
.. figure:: /_static/images/en-us_image_0000001349137789.png
:alt: **Figure 9** Flink job execution result
**Figure 9** Flink job execution result
Reference

View File

@ -119,7 +119,6 @@ Using Spark to Access OBS
.. figure:: /_static/images/en-us_image_0000001295738100.png
:alt: **Figure 1** Parameters for adding an OBS
**Figure 1** Parameters for adding an OBS
#. Click **Save Configuration** and select **Restart the affected services or instances**. Restart the Spark service.

View File

@ -51,7 +51,6 @@ Setting the Default Location of the Created Hive Table to the OBS Path
.. figure:: /_static/images/en-us_image_0000001349057721.png
:alt: **Figure 1** Configurations of **hive.metastore.warehouse.dir**
**Figure 1** Configurations of **hive.metastore.warehouse.dir**
#. In the left navigation tree, choose **HiveServer** > **Customization**. Add **hive.metastore.warehouse.dir** to the **hive.metastore.customized.configs** and **hive.metastore.customized.configs** parameters, and set it to the OBS path.
@ -60,7 +59,6 @@ Setting the Default Location of the Created Hive Table to the OBS Path
.. figure:: /_static/images/en-us_image_0000001349057725.png
:alt: **Figure 2** hive.metastore.warehouse.dir configuration
**Figure 2** hive.metastore.warehouse.dir configuration
#. Save the configurations and restart Hive.

View File

@ -76,7 +76,7 @@ Interconnecting Hudi with OBS
**format("org.apache.hudi").**
**load(basePath + ``"/*/*/*/*")``**
**load(basePath + "/*/*/*/*")**
**roViewDF.createOrReplaceTempView("hudi_ro_table")**

View File

@ -46,7 +46,6 @@ Using Spark Beeline After Cluster Installation
.. figure:: /_static/images/en-us_image_0000001349057877.png
:alt: **Figure 1** Verifying the created table name returned using Spark2x
**Figure 1** Verifying the created table name returned using Spark2x
#. Press **Ctrl+C** to exit the Spark Beeline.

View File

@ -102,11 +102,11 @@ Importing Data from MySQL to Hive Using the sqoop import Command
+-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| -z,-compress | Compresses sequence, text, and Avro data files using the GZIP compression algorithm. Data is not compressed by default. |
+-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| compression-codec | Specifies the Hadoop compression codec. GZIP is used by default. |
| -compression-codec | Specifies the Hadoop compression codec. GZIP is used by default. |
+-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| null-string <null-string> | Specifies the string to be interpreted as **NULL** for string columns. |
| -null-string <null-string> | Specifies the string to be interpreted as **NULL** for string columns. |
+-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| null-non-string<null-string> | Specifies the string to be interpreted as null for non-string columns. If this parameter is not specified, **NULL** will be used. |
| -null-non-string<null-string> | Specifies the string to be interpreted as null for non-string columns. If this parameter is not specified, **NULL** will be used. |
+-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| -check-column (col) | Specifies the column for checking incremental data import, for example, **id**. |
+-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
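
For orientation, a minimal **sqoop import** invocation combining several of the parameters listed above might look like the following. This is a sketch only: the MySQL address, credentials, and table names are hypothetical, and the options are written with the standard double-dash Sqoop spelling.

.. code-block::

   sqoop import \
     --connect jdbc:mysql://mysql-host.example.com:3306/demo_db \
     --username demo_user --password demo_pass \
     --table member \
     --hive-import --hive-table member \
     -z --compression-codec org.apache.hadoop.io.compress.GzipCodec \
     --null-string '\\N' --null-non-string '\\N' \
     -m 1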

View File

@ -124,7 +124,6 @@ In the command, **member** indicates the name of the table to be exported.
.. figure:: /_static/images/en-us_image_0000001349137569.png
:alt: **Figure 1** Directory data
**Figure 1** Directory data
#. Run the **create** command to create a table in the standby cluster with the same structure as that of the active cluster, for example, **member_import**.
@ -215,7 +214,7 @@ Offline backup of HDFS data means stopping the HBase service and allowing users
**hadoop distcp -i /hbase/data hdfs://IP address of the active NameNode of the HDFS service in the standby cluster:Port number/hbase**
**hadoop distcp update append delete /hbase/ hdfs://IP address of the active NameNode of the HDFS service in the standby cluster:Port number/hbase/**
**hadoop distcp -update -append -delete /hbase/ hdfs://IP address of the active NameNode of the HDFS service in the standby cluster:Port number/hbase/**
The second command is used to incrementally copy files except the data directory. For example, data in **archive** may be referenced by the data directory.
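
To make the placeholders concrete, the two commands above could be issued as follows against a hypothetical standby NameNode at **192.168.1.10:8020** (the address and port are illustrative only):

.. code-block::

   # Full copy of the HBase data directory to the standby cluster
   hadoop distcp -i /hbase/data hdfs://192.168.1.10:8020/hbase

   # Incremental copy of the remaining files under /hbase/, deleting files that no longer exist on the source
   hadoop distcp -update -append -delete /hbase/ hdfs://192.168.1.10:8020/hbase/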

View File

@ -58,7 +58,7 @@ HDFS metadata information to be exported includes file and folder permissions an
.. code-block::
$HADOOP_HOME/bin/hdfs dfs ls R <migrating_path> > /tmp/hdfs_meta.txt
$HADOOP_HOME/bin/hdfs dfs -ls -R <migrating_path> > /tmp/hdfs_meta.txt
The following describes the parameters in the preceding command.
@ -77,5 +77,5 @@ Based on the exported permission information, run the HDFS commands in the backg
.. code-block::
$HADOOP_HOME/bin/hdfs dfs chmod <MODE> <path>
$HADOOP_HOME/bin/hdfs dfs chown <OWNER> <path>
$HADOOP_HOME/bin/hdfs dfs -chmod <MODE> <path>
$HADOOP_HOME/bin/hdfs dfs -chown <OWNER> <path>
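
Put together, the export and re-apply steps look roughly like this; the path, mode, and owner below are placeholders rather than values taken from this document:

.. code-block::

   # Recursively list the migrated path; each line records the mode, owner, and group of a file or directory
   $HADOOP_HOME/bin/hdfs dfs -ls -R /user/migrated_data > /tmp/hdfs_meta.txt

   # Re-apply a recorded mode and owner to one of the listed paths
   $HADOOP_HOME/bin/hdfs dfs -chmod 750 /user/migrated_data/table1
   $HADOOP_HOME/bin/hdfs dfs -chown hive:hadoop /user/migrated_data/table1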

View File

@ -21,14 +21,13 @@ How Do I Check Whether the ResourceManager Configuration of Yarn Is Correct?
.. figure:: /_static/images/en-us_image_0293129996.png
:alt: **Figure 1** Synchronization configurations
**Figure 1** Synchronization configurations
#. Log in to the Master nodes as user **root**.
#. Run the **cd /opt/Bigdata/MRS_Current/*_*_ResourceManager/etc_UPDATED/** command to go to the **etc_UPDATED** directory.
#. Run the **grep '\.queues' capacity-scheduler.xml -A2** command to display all configured queues and check whether the queues are consistent with those displayed on Manager.
#. Run the **grep '\\.queues' capacity-scheduler.xml -A2** command to display all configured queues and check whether the queues are consistent with those displayed on Manager.
**root-default** is hidden on the Manager page.
@ -36,7 +35,7 @@ How Do I Check Whether the ResourceManager Configuration of Yarn Is Correct?
#. .. _mrs_03_1163__li941013146411:
Run the **grep '\.capacity</name>' capacity-scheduler.xml -A2** command to display the value of each queue and check whether the value of each queue is the same as that displayed on Manager. Check whether the sum of the values configured for all queues is **100**.
Run the **grep '\\.capacity</name>' capacity-scheduler.xml -A2** command to display the value of each queue and check whether the value of each queue is the same as that displayed on Manager. Check whether the sum of the values configured for all queues is **100**.
- If the sum is **100**, the configuration is correct.
- If the sum is not **100**, the configuration is incorrect. Perform the following steps to rectify the fault.
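
If you prefer not to add the values up by hand, a one-line check such as the following can total them from the same file. This is a sketch that assumes GNU grep with **-P** support and plain numeric **<value>** entries:

.. code-block::

   grep '\.capacity</name>' capacity-scheduler.xml -A2 \
     | grep -oP '(?<=<value>)[0-9.]+(?=</value>)' \
     | awk '{sum += $1} END {print "total capacity:", sum}'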

View File

@ -38,7 +38,6 @@ How Do I Connect to Spark Beeline from MRS?
.. figure:: /_static/images/en-us_image_0264281176.png
:alt: **Figure 1** Returned table name
**Figure 1** Returned table name
#. Press **Ctrl+C** to exit the Spark Beeline.

View File

@ -18,7 +18,6 @@ Solutions:
.. figure:: /_static/images/en-us_image_0000001205479339.png
:alt: **Figure 1** Changing the **component_env** file
**Figure 1** Changing the **component_env** file
#. Run the following command to verify the configuration:

View File

@ -29,5 +29,4 @@ How Do I Configure the knox Memory?
.. figure:: /_static/images/en-us_image_0293101307.png
:alt: **Figure 1** knox memory
**Figure 1** knox memory

View File

@ -21,7 +21,6 @@ How Do I Do If the Time on MRS Nodes Is Incorrect?
.. figure:: /_static/images/en-us_image_0000001229685789.png
:alt: **Figure 1** Adding the master node IP addresses
**Figure 1** Adding the master node IP addresses
#. .. _mrs_03_1211__li1924115541119:

View File

@ -22,5 +22,4 @@ You can view operation logs of clusters and jobs on the **Operation Logs** page.
.. figure:: /_static/images/en-us_image_0264281200.png
:alt: **Figure 1** Log information
**Figure 1** Log information

View File

@ -35,7 +35,6 @@ How Do I Access Presto in a Cluster with Kerberos Authentication Enabled?
.. figure:: /_static/images/en-us_image_0264281104.png
:alt: **Figure 1** Downloading the Presto user authentication credential
**Figure 1** Downloading the Presto user authentication credential
- Operations on FusionInsight Manager:
@ -75,7 +74,6 @@ How Do I Access Presto in a Cluster with Kerberos Authentication Enabled?
.. figure:: /_static/images/en-us_image_0264281044.png
:alt: **Figure 2** Return result
**Figure 2** Return result
j. Run **exit** to exit the client.

View File

@ -37,7 +37,6 @@ How Do I Access Spark in a Cluster with Kerberos Authentication Enabled?
.. figure:: /_static/images/en-us_image_0000001153149209.png
:alt: **Figure 1** Returned table name
**Figure 1** Returned table name
#. Press **Ctrl+C** to exit Spark Beeline.

View File

@ -37,4 +37,4 @@ How Do I Prevent Kerberos Authentication Expiration?
Example:
**spark-shell --principal spark2x/hadoop.**\ <*System domain name*>@<*System domain name*>\ **--keytab ${BIGDATA_HOME}/FusionInsight_Spark2x_8.1.0.1/install/FusionInsight-Spark2x-2.4.5/keytab/spark2x/SparkResource/spark2x.keytab --master yarn**
**spark-shell --principal spark2x/hadoop.**\ <*System domain name*>@<*System domain name*>\ ** --keytab ${BIGDATA_HOME}/FusionInsight_Spark2x_8.1.0.1/install/FusionInsight-Spark2x-2.4.5/keytab/spark2x/SparkResource/spark2x.keytab --master yarn**
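
Separately from the **spark-shell** options shown above, a common way to keep a long-lived session authenticated is to renew the ticket from the same keytab. The following is a generic sketch, reusing the principal and keytab path from the example purely for illustration:

.. code-block::

   # Obtain or renew a TGT from the keytab (substitute the actual system domain name)
   kinit -kt ${BIGDATA_HOME}/FusionInsight_Spark2x_8.1.0.1/install/FusionInsight-Spark2x-2.4.5/keytab/spark2x/SparkResource/spark2x.keytab spark2x/hadoop.<System domain name>@<System domain name>

   # Check the ticket's expiry time
   klist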

View File

@ -67,7 +67,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0269383824.png
:alt: **Figure 1** Setting alarm smoothing times
**Figure 1** Setting alarm smoothing times
On the **Host CPU Usage** page, click **Modify** in the **Operation** column to change the alarm threshold, as shown in :ref:`Figure 2 <alm-12016__fig30961038173938>`.
@ -77,7 +76,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0269383825.png
:alt: **Figure 2** Setting an alarm threshold
**Figure 2** Setting an alarm threshold
#. After 2 minutes, check whether the alarm is cleared.

View File

@ -70,7 +70,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0269383827.png
:alt: **Figure 1** Setting an alarm threshold
**Figure 1** Setting an alarm threshold
#. After 2 minutes, check whether the alarm is cleared.

View File

@ -84,7 +84,7 @@ Procedure
6. If the memory usage exceeds the threshold, perform memory capacity expansion.
7. Run the command **free -m \| grep Mem\: \| awk '{printf("%s,", $3 \* 100 / $2)}'** to check the system memory usage.
7. Run the command **free -m \| grep Mem\\: \| awk '{printf("%s,", $3 \* 100 / $2)}'** to check the system memory usage.
8. Wait for 5 minutes, check whether the alarm is cleared.

View File

@ -59,7 +59,7 @@ Procedure
#. Run the following command as user **omm** to view the PID of the process that is in the D state:
**ps -elf \| grep -v "\[thread_checkio\]" \| awk 'NR!=1 {print $2, $3, $4}' \| grep omm \| awk -F' ' '{print $1, $3}' \| grep -E "Z|D" \| awk '{print $2}'**
**ps -elf \| grep -v "\\[thread_checkio\\]" \| awk 'NR!=1 {print $2, $3, $4}' \| grep omm \| awk -F' ' '{print $1, $3}' \| grep -E "Z|D" \| awk '{print $2}'**
#. Check whether the command output is empty.

View File

@ -82,7 +82,7 @@ Procedure
#. .. _alm-12040__li34867421105655:
Run the **ps -ef \| grep -v "grep" \| grep rngd \| tr -d " " \| grep "\-r/dev/urandom"** command and check the command output.
Run the **ps -ef \| grep -v "grep" \| grep rngd \| tr -d " " \| grep "\\-r/dev/urandom"** command and check the command output.
- If the command is executed successfully, rngd has been installed and configured correctly and is running properly. Go to :ref:`8 <alm-12040__li22912175218>`.
@ -112,7 +112,7 @@ Procedure
.. code-block::
Type=simple
ExecStar=/usr/sbin/haveged -w 1024 -v 1 Foreground
ExecStar=/usr/sbin/haveged -w 1024 -v 1 -Foreground
SuccessExitStatus=137 143
Restart=always
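
After editing the unit file as above (note that the standard systemd directive is spelled **ExecStart**), the usual steps to apply the change are the following generic systemd commands, which are not taken from this document:

.. code-block::

   # Reload unit definitions and restart the haveged service
   systemctl daemon-reload
   systemctl restart haveged
   # Verify that the service is active
   systemctl status haveged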

View File

@ -241,7 +241,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001194891604.png
:alt: **Figure 1** Configuring the alarm threshold
**Figure 1** Configuring the alarm threshold
16. After 5 minutes, check whether the alarm is cleared.

View File

@ -74,7 +74,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001194571658.png
:alt: **Figure 1** Configuring the alarm threshold
**Figure 1** Configuring the alarm threshold
#. After 5 minutes, check whether the alarm is cleared.

View File

@ -74,7 +74,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001239731545.png
:alt: **Figure 1** Configuring the alarm threshold
**Figure 1** Configuring the alarm threshold
#. After 5 minutes, check whether the alarm is cleared.

View File

@ -74,7 +74,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001239611529.png
:alt: **Figure 1** Configuring the alarm threshold
**Figure 1** Configuring the alarm threshold
#. After 5 minutes, check whether the alarm is cleared.

View File

@ -74,7 +74,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0269383871.png
:alt: **Figure 1** Setting alarm thresholds
**Figure 1** Setting alarm thresholds
#. Wait for 5 minutes, and check whether the alarm is cleared.

View File

@ -74,7 +74,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0269383874.png
:alt: **Figure 1** Setting alarm thresholds
**Figure 1** Setting alarm thresholds
#. Wait for 5 minutes, and check whether the alarm is cleared.

View File

@ -69,7 +69,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0269383904.jpg
:alt: **Figure 1** Setting Trigger Count
**Figure 1** Setting Trigger Count
Set the alarm threshold based on the actual process usage. To check the process usage, choose **O&M** > **Alarm** > **Thresholds** > *Name of the desired cluster* > **Host** > **Process** > **omm Process Usage**, as shown in :ref:`Figure 2 <alm-12061__fig437414238216>`.
@ -79,7 +78,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0269383905.png
:alt: **Figure 2** Setting an alarm threshold
**Figure 2** Setting an alarm threshold
#. 2 minutes later, check whether the alarm is cleared.

View File

@ -72,7 +72,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0269383933.png
:alt: **Figure 1** Cluster information
**Figure 1** Cluster information
5. In the early morning of the next day, check whether this alarm is cleared.

View File

@ -159,7 +159,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0269417415.png
:alt: **Figure 1** HBase system table
**Figure 1** HBase system table
18. .. _alm-19000__li52774331192610:
@ -187,7 +186,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0269417416.png
:alt: **Figure 2** HMaster is being started
**Figure 2** HMaster is being started
.. _alm-19000__fig41660353192610:
@ -195,7 +193,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0269417417.png
:alt: **Figure 3** HMaster is started
**Figure 3** HMaster is started
- If yes, go to :ref:`20 <alm-19000__li34107122192610>`.
@ -209,7 +206,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0269417418.png
:alt: **Figure 4** Region in Transition
**Figure 4** Region in Transition
- If yes, go to :ref:`21 <alm-19000__li23797537192610>`.

View File

@ -66,7 +66,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0276801805.png
:alt: **Figure 1** WebUI of HBase instance
**Figure 1** WebUI of HBase instance
**Enable load balancing.**

View File

@ -67,7 +67,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0269417469.png
:alt: **Figure 1** Number of connections used by database users
**Figure 1** Number of connections used by database users
#. Wait for 2 minutes and check whether the alarm is automatically cleared.
@ -86,7 +85,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001086795516.png
:alt: **Figure 2** Setting the maximum number of database connections
**Figure 2** Setting the maximum number of database connections
5. After the maximum number of database connections is changed, restart DBService (do not restart the upper-layer services).
@ -115,7 +113,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001133372255.png
:alt: **Figure 3** Setting alarm trigger count
**Figure 3** Setting alarm trigger count
Based on the actual database connection usage, choose **O&M** > **Alarm** > **Thresholds** > *Name of the desired cluster* > **DBService > Database > Database Connections Usage (DBServer)**. In the **Database Connections Usage (DBServer)** area, click **Modify** in the **Operation** column. In the **Modify Rule** dialog box, modify the required parameters and click **OK**, as shown in :ref:`Figure 4 <alm-27005__fig19690175212407>`.
@ -125,7 +122,6 @@ Procedure
.. figure:: /_static/images/en-us_image_0000001193639590.png
:alt: **Figure 4** Set alarm threshold
**Figure 4** Set alarm threshold
8. Wait for 2 minutes and check whether the alarm is automatically cleared.

View File

@ -91,7 +91,7 @@ Procedure
Run the following command to check whether the ClickHouse cluster topology information can be obtained.
**get /clickhouse/config/**\ *value of*\ **macros.id**\ *in* :ref:`3 <alm-45425__li156597363713>`/metrika.xml
**get /clickhouse/config/**\ *value of* **macros.id** *in* :ref:`3 <alm-45425__li156597363713>`/metrika.xml
- If yes, go to :ref:`6 <alm-45425__li1462431320505>`.
- If no, go to :ref:`9 <alm-45425__li62779304563>`.
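
For illustration only, if the **macros.id** value obtained earlier were **1**, the command issued in the ZooKeeper client shell would be:

.. code-block::

   get /clickhouse/config/1/metrika.xml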

View File

@ -20,7 +20,6 @@ Log in to FusionInsight Manager and choose **Audit**. The **Audit** page display
.. figure:: /_static/images/en-us_image_0000001318467884.png
:alt: **Figure 1** Audit information list
**Figure 1** Audit information list
- You can select audit logs at the **Critical**, **Major**, **Minor**, or **Notice** level from the **All risk levels** drop-down list.

View File

@ -94,7 +94,7 @@ Procedure
a. Click **Query Regular Expression**.
b. Enter the database where the ClickHouse tables are located in the first text box as prompted. The database must be the same as the existing database, for example, **default**.
c. Enter a regular expression in the second text box. Standard regular expressions are supported. For example, to get all tables in the database, enter **([\s\S]*?)**. To get tables named in the format of letters and digits, for example, **tb1**, enter **tb\d\***.
c. Enter a regular expression in the second text box. Standard regular expressions are supported. For example, to get all tables in the database, enter **([\\s\\S]*?)**. To get tables named in the format of letters and digits, for example, **tb1**, enter **tb\\d\***.
d. Click **Refresh** to view the displayed tables in **Directory Name**.
e. Click **Synchronize** to save the result.

View File

@ -145,7 +145,7 @@ Procedure
a. Click **Query Regular Expression**.
b. Enter the namespace where the HBase tables are located in the first text box as prompted. The namespace must be the same as the existing namespace, for example, **default**.
c. Enter a regular expression in the second text box. Standard regular expressions are supported. For example, to get all tables in the namespace, enter **([\s\S]*?)**. To get tables whose names consist of letters and digits, for example, **tb1**, enter **tb\d\***.
c. Enter a regular expression in the second text box. Standard regular expressions are supported. For example, to get all tables in the namespace, enter **([\\s\\S]*?)**. To get tables whose names consist of letters and digits, for example, **tb1**, enter **tb\\d\***.
d. Click **Refresh** to view the displayed tables in **Directory Name**.
e. Click **Synchronize** to save the result.

View File

@ -144,7 +144,7 @@ Procedure
a. Click **Query Regular Expression**.
b. Enter the parent directory full path of the directory in the first text box as prompted. The directory must be the same as the existing directory, for example, **/tmp**.
c. Enter a regular expression in the second text box. Standard regular expressions are supported. For example, to get all files or subdirectories in the parent directory, enter **([\s\S]*?)**. To get files whose names consist of letters and digits, for example, **file\ 1**, enter **file\d\***.
c. Enter a regular expression in the second text box. Standard regular expressions are supported. For example, to get all files or subdirectories in the parent directory, enter **([\\s\\S]*?)**. To get files whose names consist of letters and digits, for example, **file\ 1**, enter **file\\d\***.
d. Click **Refresh** to view the displayed directories in **Directory Name**.
e. Click **Synchronize** to save the result.

View File

@ -138,7 +138,7 @@ Procedure
a. Click **Query Regular Expression**.
b. Enter the database where the Hive tables are located in the first text box as prompted. The database must be the same as the existing database, for example, **default**.
c. Enter a regular expression in the second text box. Standard regular expressions are supported. For example, to get all tables in the database, enter **([\s\S]*?)**. To get tables whose names consist of letters and digits, for example, **tb1**, enter **tb\d\***.
c. Enter a regular expression in the second text box. Standard regular expressions are supported. For example, to get all tables in the database, enter **([\\s\\S]*?)**. To get tables whose names consist of letters and digits, for example, **tb1**, enter **tb\\d\***.
d. Click **Refresh** to view the displayed tables in **Directory Name**.
e. Click **Synchronize** to save the result.

View File

@ -30,7 +30,7 @@ Procedure
#. .. _admin_guide_000200__en-us_topic_0046736761_li45131484:
Choose **Cluster** > ****Name of the desired cluster** > \ Services** > **Yarn** > **Configurations**, and click **All Configurations**.
Choose **Cluster** > ****Name of the desired cluster** > Services** > **Yarn** > **Configurations**, and click **All Configurations**.
#. In the navigation pane, choose **Yarn** > **Distcp**.

View File

@ -23,7 +23,7 @@ Answer
#. After the client file package is generated, download the client to the local PC as prompted and decompress the package.
For example, if the client file package is **FusionInsight_Cluster_1_HDFS_Client.tar**, decompress it to obtain **FusionInsight_Cluster_1_HDFS_ClientConfig_ConfigFiles.tar**, and then decompress **FusionInsight_Cluster_1_HDFS_ClientConfig_ConfigFiles.tar** to the **D:\FusionInsight_Cluster_1_HDFS_ClientConfig_ConfigFiles** directory on the local PC. The directory name cannot contain spaces.
For example, if the client file package is **FusionInsight_Cluster_1_HDFS_Client.tar**, decompress it to obtain **FusionInsight_Cluster_1_HDFS_ClientConfig_ConfigFiles.tar**, and then decompress **FusionInsight_Cluster_1_HDFS_ClientConfig_ConfigFiles.tar** to the **D:\\FusionInsight_Cluster_1_HDFS_ClientConfig_ConfigFiles** directory on the local PC. The directory name cannot contain spaces.
#. .. _admin_guide_000357__li73411714152720:

View File

@ -81,7 +81,7 @@ Procedure
If you select **RemoteHDFS**, set the following parameters:
- **Source NameService Name**: indicates the NameService name of the backup data cluster.You can enter the built-in NameService name of the remote cluster, for example, **haclusterX**, **haclusterX1**, **haclusterX2**, **haclusterX3**, or **haclusterX4**. You can also enter a configured NameService name of the remote cluster.
- **Source NameService Name**: indicates the NameService name of the backup data cluster. You can enter the built-in NameService name of the remote cluster, for example, **haclusterX**, **haclusterX1**, **haclusterX2**, **haclusterX3**, or **haclusterX4**. You can also enter a configured NameService name of the remote cluster.
- **IP Mode**: indicates the mode of the target IP address. The system automatically selects the IP address mode based on the cluster network type, for example, **IPv4** or **IPv6**.
- **Source NameNode IP Address**: indicates the NameNode service plane IP address of the standby cluster, supporting the active node or standby node.
- **Source Path**: indicates the full path of HDFS directory for storing backup data of the standby cluster, for example, *Backup path/Backup task name_Data source_Task creation time/Version_Data source_Task execution time*\ **.tar.gz**.

View File

@ -80,7 +80,7 @@ Procedure
If you select **RemoteHDFS**, set the following parameters:
- **Source NameService Name**: indicates the NameService name of the backup data cluster.You can enter the built-in NameService name of the remote cluster, for example, **haclusterX**, **haclusterX1**, **haclusterX2**, **haclusterX3**, or **haclusterX4**. You can also enter a configured NameService name of the remote cluster.
- **Source NameService Name**: indicates the NameService name of the backup data cluster. You can enter the built-in NameService name of the remote cluster, for example, **haclusterX**, **haclusterX1**, **haclusterX2**, **haclusterX3**, or **haclusterX4**. You can also enter a configured NameService name of the remote cluster.
- **IP Mode**: indicates the mode of the target IP address. The system automatically selects the IP address mode based on the cluster network type, for example, **IPv4** or **IPv6**.
- **Source NameNode IP Address**: indicates the NameNode service plane IP address of the standby cluster, supporting the active node or standby node.
- **Source Path**: indicates the full path of HDFS directory for storing backup data of the standby cluster, for example, *Backup path/Backup task name_Data source_Task creation time/Version_Data source_Task execution time*\ **.tar.gz**.

View File

@ -67,7 +67,7 @@ Procedure
- **RemoteHDFS**: indicates that the backup files are stored in the HDFS directory of the standby cluster. If you select **RemoteHDFS**, set the following parameters:
- **Source NameService Name**: indicates the NameService name of the backup data cluster.You can enter the built-in NameService name of the remote cluster, for example, **haclusterX**, **haclusterX1**, **haclusterX2**, **haclusterX3**, or **haclusterX4**. You can also enter a configured NameService name of the remote cluster.
- **Source NameService Name**: indicates the NameService name of the backup data cluster. You can enter the built-in NameService name of the remote cluster, for example, **haclusterX**, **haclusterX1**, **haclusterX2**, **haclusterX3**, or **haclusterX4**. You can also enter a configured NameService name of the remote cluster.
- **IP Mode**: indicates the mode of the target IP address. The system automatically selects the IP address mode based on the cluster network type, for example, **IPv4** or **IPv6**.
- **Source NameNode IP Address**: indicates the NameNode service plane IP address of the standby cluster, supporting the active node or standby node.
- **Source Path**: indicates the full path of the backup file in the HDFS, for example, *Backup path/Backup task name_Data source_Task creation time*.

View File

@ -69,7 +69,7 @@ Procedure
If you select **RemoteHDFS**, set the following parameters:
- **Source NameService Name**: indicates the NameService name of the backup data cluster.You can enter the built-in NameService name of the remote cluster, for example, **haclusterX**, **haclusterX1**, **haclusterX2**, **haclusterX3**, or **haclusterX4**. You can also enter a configured NameService name of the remote cluster.
- **Source NameService Name**: indicates the NameService name of the backup data cluster. You can enter the built-in NameService name of the remote cluster, for example, **haclusterX**, **haclusterX1**, **haclusterX2**, **haclusterX3**, or **haclusterX4**. You can also enter a configured NameService name of the remote cluster.
- **IP Mode**: indicates the mode of the target IP address. The system automatically selects the IP address mode based on the cluster network type, for example, **IPv4** or **IPv6**.
- **Source NameNode IP Address**: indicates the NameNode service plane IP address of the standby cluster, supporting the active node or standby node.
- **Source Path**: indicates the full path of HDFS directory for storing backup data of the standby cluster, for example, *Backup path/Backup task name_Data source_Task creation time*.

Some files were not shown because too many files have changed in this diff.