forked from docs/doc-exports
Reviewed-by: Hasko, Vladimir <vladimir.hasko@t-systems.com> Co-authored-by: Yang, Tong <yangtong2@huawei.com> Co-committed-by: Yang, Tong <yangtong2@huawei.com>
8174 lines
331 KiB
JSON
8174 lines
331 KiB
JSON
[
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using CarbonData",
|
||
"uri":"mrs_01_1400.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"",
|
||
"code":"1"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Overview",
|
||
"uri":"mrs_01_1401.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"1",
|
||
"code":"2"
|
||
},
|
||
{
|
||
"desc":"CarbonData is a new Apache Hadoop native data-store format. CarbonData allows faster interactive queries over PetaBytes of data using advanced columnar storage, index, co",
|
||
"product_code":"mrs",
|
||
"title":"CarbonData Overview",
|
||
"uri":"mrs_01_1402.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"2",
|
||
"code":"3"
|
||
},
|
||
{
|
||
"desc":"The memory required for data loading depends on the following factors:Number of columnsColumn valuesConcurrency (configured using carbon.number.of.cores.while.loading)Sor",
|
||
"product_code":"mrs",
|
||
"title":"Main Specifications of CarbonData",
|
||
"uri":"mrs_01_1403.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"2",
|
||
"code":"4"
|
||
},
|
||
{
|
||
"desc":"This section provides the details of all the configurations required for the CarbonData System.Configure the following parameters in the spark-defaults.conf file on the S",
|
||
"product_code":"mrs",
|
||
"title":"Configuration Reference",
|
||
"uri":"mrs_01_1404.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"1",
|
||
"code":"5"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"CarbonData Operation Guide",
|
||
"uri":"mrs_01_1405.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"1",
|
||
"code":"6"
|
||
},
|
||
{
|
||
"desc":"This section describes how to create CarbonData tables, load data, and query data. This quick start provides operations based on the Spark Beeline client. If you want to ",
|
||
"product_code":"mrs",
|
||
"title":"CarbonData Quick Start",
|
||
"uri":"mrs_01_1406.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"6",
|
||
"code":"7"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"CarbonData Table Management",
|
||
"uri":"mrs_01_1407.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"6",
|
||
"code":"8"
|
||
},
|
||
{
|
||
"desc":"In CarbonData, data is stored in entities called tables. CarbonData tables are similar to RDBMS tables. RDBMS data is stored in a table consisting of rows and columns. Ca",
|
||
"product_code":"mrs",
|
||
"title":"About CarbonData Table",
|
||
"uri":"mrs_01_1408.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"8",
|
||
"code":"9"
|
||
},
|
||
{
|
||
"desc":"A CarbonData table must be created to load and query data. You can run the Create Table command to create a table. This command is used to create a table using custom col",
|
||
"product_code":"mrs",
|
||
"title":"Creating a CarbonData Table",
|
||
"uri":"mrs_01_1409.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"8",
|
||
"code":"10"
|
||
},
|
||
{
|
||
"desc":"You can run the DROP TABLE command to delete a table. After a CarbonData table is deleted, its metadata and loaded data are deleted together.Run the following command to ",
|
||
"product_code":"mrs",
|
||
"title":"Deleting a CarbonData Table",
|
||
"uri":"mrs_01_1410.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"8",
|
||
"code":"11"
|
||
},
|
||
{
|
||
"desc":"When the SET command is executed, the new properties overwrite the existing ones.SORT SCOPEThe following is an example of the SET SORT SCOPE command:ALTER TABLE tablename",
|
||
"product_code":"mrs",
|
||
"title":"Modify the CarbonData Table",
|
||
"uri":"mrs_01_1411.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"8",
|
||
"code":"12"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"CarbonData Table Data Management",
|
||
"uri":"mrs_01_1412.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"6",
|
||
"code":"13"
|
||
},
|
||
{
|
||
"desc":"After a CarbonData table is created, you can run the LOAD DATA command to load data to the table for query. Once data loading is triggered, data is encoded in CarbonData ",
|
||
"product_code":"mrs",
|
||
"title":"Loading Data",
|
||
"uri":"mrs_01_1413.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"13",
|
||
"code":"14"
|
||
},
|
||
{
|
||
"desc":"If you want to modify and reload the data because you have loaded wrong data into a table, or there are too many bad records, you can delete specific segments by segment ",
|
||
"product_code":"mrs",
|
||
"title":"Deleting Segments",
|
||
"uri":"mrs_01_1414.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"13",
|
||
"code":"15"
|
||
},
|
||
{
|
||
"desc":"Frequent data access results in a large number of fragmented CarbonData files in the storage directory. In each data loading, data is sorted and indexing is performed. Th",
|
||
"product_code":"mrs",
|
||
"title":"Combining Segments",
|
||
"uri":"mrs_01_1415.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"13",
|
||
"code":"16"
|
||
},
|
||
{
|
||
"desc":"If you want to rapidly migrate CarbonData data from a cluster to another one, you can use the CarbonData backup and restoration commands. This method does not require dat",
|
||
"product_code":"mrs",
|
||
"title":"CarbonData Data Migration",
|
||
"uri":"mrs_01_1416.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"6",
|
||
"code":"17"
|
||
},
|
||
{
|
||
"desc":"This migration guides you to migrate the CarbonData table data of Spark 1.5 to that of Spark2x.Before performing this operation, you need to stop the data import service ",
|
||
"product_code":"mrs",
|
||
"title":"Migrating Data on CarbonData from Spark1.5 to Spark2x",
|
||
"uri":"mrs_01_2301.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"6",
|
||
"code":"18"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"CarbonData Performance Tuning",
|
||
"uri":"mrs_01_1417.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"1",
|
||
"code":"19"
|
||
},
|
||
{
|
||
"desc":"There are various parameters that can be tuned to improve the query performance in CarbonData. Most of the parameters focus on increasing the parallelism in processing an",
|
||
"product_code":"mrs",
|
||
"title":"Tuning Guidelines",
|
||
"uri":"mrs_01_1418.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"19",
|
||
"code":"20"
|
||
},
|
||
{
|
||
"desc":"This section provides suggestions based on more than 50 test cases to help you create CarbonData tables with higher query performance.If the to-be-created table contains ",
|
||
"product_code":"mrs",
|
||
"title":"Suggestions for Creating CarbonData Tables",
|
||
"uri":"mrs_01_1419.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"19",
|
||
"code":"21"
|
||
},
|
||
{
|
||
"desc":"This section describes the configurations that can improve CarbonData performance.Table 1 and Table 2 describe the configurations about query of CarbonData.Table 3, Table",
|
||
"product_code":"mrs",
|
||
"title":"Configurations for Performance Tuning",
|
||
"uri":"mrs_01_1421.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"19",
|
||
"code":"22"
|
||
},
|
||
{
|
||
"desc":"The following table provides details about Hive ACL permissions required for performing operations on CarbonData tables.Parameters listed in Table 5 or Table 6 have been ",
|
||
"product_code":"mrs",
|
||
"title":"CarbonData Access Control",
|
||
"uri":"mrs_01_1422.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"1",
|
||
"code":"23"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"CarbonData Syntax Reference",
|
||
"uri":"mrs_01_1423.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"1",
|
||
"code":"24"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"DDL",
|
||
"uri":"mrs_01_1424.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"24",
|
||
"code":"25"
|
||
},
|
||
{
|
||
"desc":"This command is used to create a CarbonData table by specifying the list of fields along with the table properties.CREATE TABLE [IF NOT EXISTS] [db_name.]table_name[(col_",
|
||
"product_code":"mrs",
|
||
"title":"CREATE TABLE",
|
||
"uri":"mrs_01_1425.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"25",
|
||
"code":"26"
|
||
},
|
||
{
|
||
"desc":"This command is used to create a CarbonData table by specifying the list of fields along with the table properties.CREATE TABLE[IF NOT EXISTS] [db_name.]table_name STORED",
|
||
"product_code":"mrs",
|
||
"title":"CREATE TABLE As SELECT",
|
||
"uri":"mrs_01_1426.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"25",
|
||
"code":"27"
|
||
},
|
||
{
|
||
"desc":"This command is used to delete an existing table.DROP TABLE [IF EXISTS] [db_name.]table_name;In this command, IF EXISTS and db_name are optional.DROP TABLE IF EXISTS prod",
|
||
"product_code":"mrs",
|
||
"title":"DROP TABLE",
|
||
"uri":"mrs_01_1427.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"25",
|
||
"code":"28"
|
||
},
|
||
{
|
||
"desc":"SHOW TABLES command is used to list all tables in the current or a specific database.SHOW TABLES [IN db_name];IN db_Name is optional.SHOW TABLES IN ProductDatabase;All ta",
|
||
"product_code":"mrs",
|
||
"title":"SHOW TABLES",
|
||
"uri":"mrs_01_1428.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"25",
|
||
"code":"29"
|
||
},
|
||
{
|
||
"desc":"The ALTER TABLE COMPACTION command is used to merge a specified number of segments into a single segment. This improves the query performance of a table.ALTER TABLE[db_na",
|
||
"product_code":"mrs",
|
||
"title":"ALTER TABLE COMPACTION",
|
||
"uri":"mrs_01_1429.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"25",
|
||
"code":"30"
|
||
},
|
||
{
|
||
"desc":"This command is used to rename an existing table.ALTER TABLE [db_name.]table_name RENAME TO new_table_name;Parallel queries (using table names to obtain paths for reading",
|
||
"product_code":"mrs",
|
||
"title":"TABLE RENAME",
|
||
"uri":"mrs_01_1430.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"25",
|
||
"code":"31"
|
||
},
|
||
{
|
||
"desc":"This command is used to add a column to an existing table.ALTER TABLE [db_name.]table_name ADD COLUMNS (col_name data_type,...) TBLPROPERTIES(''COLUMNPROPERTIES.columnNam",
|
||
"product_code":"mrs",
|
||
"title":"ADD COLUMNS",
|
||
"uri":"mrs_01_1431.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"25",
|
||
"code":"32"
|
||
},
|
||
{
|
||
"desc":"This command is used to delete one or more columns from a table.ALTER TABLE [db_name.]table_name DROP COLUMNS (col_name, ...);After a column is deleted, at least one key ",
|
||
"product_code":"mrs",
|
||
"title":"DROP COLUMNS",
|
||
"uri":"mrs_01_1432.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"25",
|
||
"code":"33"
|
||
},
|
||
{
|
||
"desc":"This command is used to change the data type from INT to BIGINT or decimal precision from lower to higher.ALTER TABLE [db_name.]table_name CHANGE col_name col_name change",
|
||
"product_code":"mrs",
|
||
"title":"CHANGE DATA TYPE",
|
||
"uri":"mrs_01_1433.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"25",
|
||
"code":"34"
|
||
},
|
||
{
|
||
"desc":"This command is used to register Carbon table to Hive meta store catalogue from exisiting Carbon table data.REFRESH TABLE db_name.table_name;The new database name and the",
|
||
"product_code":"mrs",
|
||
"title":"REFRESH TABLE",
|
||
"uri":"mrs_01_1434.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"25",
|
||
"code":"35"
|
||
},
|
||
{
|
||
"desc":"This command is used to register an index table with the primary table.REGISTER INDEX TABLE indextable_name ON db_name.maintable_name;Before running this command, run REF",
|
||
"product_code":"mrs",
|
||
"title":"REGISTER INDEX TABLE",
|
||
"uri":"mrs_01_1435.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"25",
|
||
"code":"36"
|
||
},
|
||
{
|
||
"desc":"This command is used to merge all segments for data files in the secondary index table.REFRESH INDEX indextable_name ON TABLE maintable_nameThis command is used to merge ",
|
||
"product_code":"mrs",
|
||
"title":"REFRESH INDEX",
|
||
"uri":"mrs_01_1436.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"25",
|
||
"code":"37"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"DML",
|
||
"uri":"mrs_01_1437.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"24",
|
||
"code":"38"
|
||
},
|
||
{
|
||
"desc":"This command is used to load user data of a particular type, so that CarbonData can provide good query performance.Only the raw data on HDFS can be loaded.LOAD DATA INPAT",
|
||
"product_code":"mrs",
|
||
"title":"LOAD DATA",
|
||
"uri":"mrs_01_1438.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"38",
|
||
"code":"39"
|
||
},
|
||
{
|
||
"desc":"This command is used to update the CarbonData table based on the column expression and optional filtering conditions.Syntax 1:UPDATE <CARBON TABLE> SET (column_name1, col",
|
||
"product_code":"mrs",
|
||
"title":"UPDATE CARBON TABLE",
|
||
"uri":"mrs_01_1439.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"38",
|
||
"code":"40"
|
||
},
|
||
{
|
||
"desc":"This command is used to delete records from a CarbonData table.DELETE FROM CARBON_TABLE [WHERE expression];If a segment is deleted, all secondary indexes associated with ",
|
||
"product_code":"mrs",
|
||
"title":"DELETE RECORDS from CARBON TABLE",
|
||
"uri":"mrs_01_1440.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"38",
|
||
"code":"41"
|
||
},
|
||
{
|
||
"desc":"This command is used to add the output of the SELECT command to a Carbon table.INSERT INTO [CARBON TABLE] [select query];A table has been created.You must belong to the d",
|
||
"product_code":"mrs",
|
||
"title":"INSERT INTO CARBON TABLE",
|
||
"uri":"mrs_01_1441.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"38",
|
||
"code":"42"
|
||
},
|
||
{
|
||
"desc":"This command is used to delete segments by the ID.DELETE FROM TABLE db_name.table_name WHERE SEGMENT.ID IN (segment_id1,segment_id2);Segments cannot be deleted from the s",
|
||
"product_code":"mrs",
|
||
"title":"DELETE SEGMENT by ID",
|
||
"uri":"mrs_01_1442.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"38",
|
||
"code":"43"
|
||
},
|
||
{
|
||
"desc":"This command is used to delete segments by loading date. Segments created before a specific date will be deleted.DELETE FROM TABLE db_name.table_name WHERE SEGMENT.STARTT",
|
||
"product_code":"mrs",
|
||
"title":"DELETE SEGMENT by DATE",
|
||
"uri":"mrs_01_1443.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"38",
|
||
"code":"44"
|
||
},
|
||
{
|
||
"desc":"This command is used to list the segments of a CarbonData table.SHOW SEGMENTS FOR TABLE [db_name.]table_name LIMIT number_of_loads;NoneSHOW SEGMENTS FOR TABLE CarbonDatab",
|
||
"product_code":"mrs",
|
||
"title":"SHOW SEGMENTS",
|
||
"uri":"mrs_01_1444.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"38",
|
||
"code":"45"
|
||
},
|
||
{
|
||
"desc":"This command is used to create secondary indexes in the CarbonData tables.CREATE INDEX index_nameON TABLE [db_name.]table_name (col_name1, col_name2)AS 'carbondata'PROPER",
|
||
"product_code":"mrs",
|
||
"title":"CREATE SECONDARY INDEX",
|
||
"uri":"mrs_01_1445.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"38",
|
||
"code":"46"
|
||
},
|
||
{
|
||
"desc":"This command is used to list all secondary index tables in the CarbonData table.SHOW INDEXES ON db_name.table_name;db_name is optional.SHOW INDEXES ON productsales.produc",
|
||
"product_code":"mrs",
|
||
"title":"SHOW SECONDARY INDEXES",
|
||
"uri":"mrs_01_1446.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"38",
|
||
"code":"47"
|
||
},
|
||
{
|
||
"desc":"This command is used to delete the existing secondary index table in a specific table.DROP INDEX [IF EXISTS] index_nameON [db_name.]table_name;In this command, IF EXISTS ",
|
||
"product_code":"mrs",
|
||
"title":"DROP SECONDARY INDEX",
|
||
"uri":"mrs_01_1447.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"38",
|
||
"code":"48"
|
||
},
|
||
{
|
||
"desc":"After the DELETE SEGMENT command is executed, the deleted segments are marked as the delete state. After the segments are merged, the status of the original segments chan",
|
||
"product_code":"mrs",
|
||
"title":"CLEAN FILES",
|
||
"uri":"mrs_01_1448.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"38",
|
||
"code":"49"
|
||
},
|
||
{
|
||
"desc":"This command is used to dynamically add, update, display, or reset the CarbonData properties without restarting the driver.Add or Update parameter value:SET parameter_nam",
|
||
"product_code":"mrs",
|
||
"title":"SET/RESET",
|
||
"uri":"mrs_01_1449.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"38",
|
||
"code":"50"
|
||
},
|
||
{
|
||
"desc":"Before performing DDL and DML operations, you need to obtain the corresponding locks. See Table 1 for details about the locks that need to be obtained for each operation.",
|
||
"product_code":"mrs",
|
||
"title":"Operation Concurrent Execution",
|
||
"uri":"mrs_01_24046.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"24",
|
||
"code":"51"
|
||
},
|
||
{
|
||
"desc":"This section describes the APIs and usage methods of Segment. All methods are in the org.apache.spark.util.CarbonSegmentUtil class.The following methods have been abandon",
|
||
"product_code":"mrs",
|
||
"title":"API",
|
||
"uri":"mrs_01_1450.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"24",
|
||
"code":"52"
|
||
},
|
||
{
|
||
"desc":"Spatial data includes multidimensional points, lines, rectangles, cubes, polygons, and other geometric objects. A spatial data object occupies a certain region of space, ",
|
||
"product_code":"mrs",
|
||
"title":"Spatial Indexes",
|
||
"uri":"mrs_01_1451.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"24",
|
||
"code":"53"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"CarbonData Troubleshooting",
|
||
"uri":"mrs_01_1454.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"1",
|
||
"code":"54"
|
||
},
|
||
{
|
||
"desc":"When double data type values with higher precision are used in filters, incorrect values are returned by filtering results.When double data type values with higher precis",
|
||
"product_code":"mrs",
|
||
"title":"Filter Result Is not Consistent with Hive when a Big Double Type Value Is Used in Filter",
|
||
"uri":"mrs_01_1455.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"54",
|
||
"code":"55"
|
||
},
|
||
{
|
||
"desc":"The query performance fluctuates when the query is executed in different query periods.During data loading, the memory configured for each executor program instance may b",
|
||
"product_code":"mrs",
|
||
"title":"Query Performance Deterioration",
|
||
"uri":"mrs_01_1456.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"54",
|
||
"code":"56"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"CarbonData FAQ",
|
||
"uri":"mrs_01_1457.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"1",
|
||
"code":"57"
|
||
},
|
||
{
|
||
"desc":"Why is incorrect output displayed when I perform query with filter on decimal data type values?For example:select * from carbon_table where num = 1234567890123456.22;Outp",
|
||
"product_code":"mrs",
|
||
"title":"Why Is Incorrect Output Displayed When I Perform Query with Filter on Decimal Data Type Values?",
|
||
"uri":"mrs_01_1458.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"58"
|
||
},
|
||
{
|
||
"desc":"How to avoid minor compaction for historical data?If you want to load historical data first and then the incremental data, perform following steps to avoid minor compacti",
|
||
"product_code":"mrs",
|
||
"title":"How to Avoid Minor Compaction for Historical Data?",
|
||
"uri":"mrs_01_1459.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"59"
|
||
},
|
||
{
|
||
"desc":"How to change the default group name for CarbonData data loading?By default, the group name for CarbonData data loading is ficommon. You can perform the following operati",
|
||
"product_code":"mrs",
|
||
"title":"How to Change the Default Group Name for CarbonData Data Loading?",
|
||
"uri":"mrs_01_1460.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"60"
|
||
},
|
||
{
|
||
"desc":"Why does the INSERT INTO CARBON TABLE command fail and the following error message is displayed?The INSERT INTO CARBON TABLE command fails in the following scenarios:If t",
|
||
"product_code":"mrs",
|
||
"title":"Why Does INSERT INTO CARBON TABLE Command Fail?",
|
||
"uri":"mrs_01_1461.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"61"
|
||
},
|
||
{
|
||
"desc":"Why is the data logged in bad records different from the original input data with escaped characters?An escape character is a backslash (\\) followed by one or more charac",
|
||
"product_code":"mrs",
|
||
"title":"Why Is the Data Logged in Bad Records Different from the Original Input Data with Escape Characters?",
|
||
"uri":"mrs_01_1462.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"62"
|
||
},
|
||
{
|
||
"desc":"Why data load performance decreases due to bad records?If bad records are present in the data and BAD_RECORDS_LOGGER_ENABLE is true or BAD_RECORDS_ACTION is redirect then",
|
||
"product_code":"mrs",
|
||
"title":"Why Data Load Performance Decreases due to Bad Records?",
|
||
"uri":"mrs_01_1463.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"63"
|
||
},
|
||
{
|
||
"desc":"Why INSERT INTO or LOAD DATA task distribution is incorrect, and the openedtasks are less than the available executors when the number of initial executors is zero?In ca",
|
||
"product_code":"mrs",
|
||
"title":"Why INSERT INTO/LOAD DATA Task Distribution Is Incorrect and the Opened Tasks Are Less Than the Available Executors when the Number of Initial Executors Is Zero?",
|
||
"uri":"mrs_01_1464.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"64"
|
||
},
|
||
{
|
||
"desc":"Why does CarbonData require additional executors even though the parallelism is greater than the number of blocks to be processed?CarbonData block distribution optimizes ",
|
||
"product_code":"mrs",
|
||
"title":"Why Does CarbonData Require Additional Executors Even Though the Parallelism Is Greater Than the Number of Blocks to Be Processed?",
|
||
"uri":"mrs_01_1465.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"65"
|
||
},
|
||
{
|
||
"desc":"Why Data Loading fails during off heap?YARN Resource Manager will consider (Java heap memory + spark.yarn.am.memoryOverhead) as memory limit, so during the off heap, the ",
|
||
"product_code":"mrs",
|
||
"title":"Why Data loading Fails During off heap?",
|
||
"uri":"mrs_01_1466.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"66"
|
||
},
|
||
{
|
||
"desc":"Why do I fail to create a hive table?Creating a Hive table fails, when source table or sub query has more number of partitions. The implementation of the query requires a",
|
||
"product_code":"mrs",
|
||
"title":"Why Do I Fail to Create a Hive Table?",
|
||
"uri":"mrs_01_1467.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"67"
|
||
},
|
||
{
|
||
"desc":"Why CarbonData tables created in V100R002C50RC1 not reflecting the privileges provided in Hive Privileges for non-owner?The Hive ACL is implemented after the version V100",
|
||
"product_code":"mrs",
|
||
"title":"Why CarbonData tables created in V100R002C50RC1 not reflecting the privileges provided in Hive Privileges for non-owner?",
|
||
"uri":"mrs_01_1468.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"68"
|
||
},
|
||
{
|
||
"desc":"How do I logically split data across different namespaces?Configuration:To logically split data across different namespaces, you must update the following configuration i",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Logically Split Data Across Different Namespaces?",
|
||
"uri":"mrs_01_1469.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"69"
|
||
},
|
||
{
|
||
"desc":"Why drop database cascade is throwing the following exception?This error is thrown when the owner of the database performs drop database <database_name> cascade which con",
|
||
"product_code":"mrs",
|
||
"title":"Why Missing Privileges Exception is Reported When I Perform Drop Operation on Databases?",
|
||
"uri":"mrs_01_1470.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"70"
|
||
},
|
||
{
|
||
"desc":"Why the UPDATE command cannot be executed in Spark Shell?The syntax and examples provided in this document are about Beeline commands instead of Spark Shell commands.To r",
|
||
"product_code":"mrs",
|
||
"title":"Why the UPDATE Command Cannot Be Executed in Spark Shell?",
|
||
"uri":"mrs_01_1471.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"71"
|
||
},
|
||
{
|
||
"desc":"How do I configure unsafe memory in CarbonData?In the Spark configuration, the value of spark.yarn.executor.memoryOverhead must be greater than the sum of (sort.inmemory.",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Configure Unsafe Memory in CarbonData?",
|
||
"uri":"mrs_01_1472.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"72"
|
||
},
|
||
{
|
||
"desc":"Why exception occurs in CarbonData when Disk Space Quota is set for the storage directory in HDFS?The data will be written to HDFS when you during create table, load tabl",
|
||
"product_code":"mrs",
|
||
"title":"Why Exception Occurs in CarbonData When Disk Space Quota is Set for Storage Directory in HDFS?",
|
||
"uri":"mrs_01_1473.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"73"
|
||
},
|
||
{
|
||
"desc":"Why does data query or loading fail and \"org.apache.carbondata.core.memory.MemoryException: Not enough memory\" is displayed?This exception is thrown when the out-of-heap ",
|
||
"product_code":"mrs",
|
||
"title":"Why Does Data Query or Loading Fail and \"org.apache.carbondata.core.memory.MemoryException: Not enough memory\" Is Displayed?",
|
||
"uri":"mrs_01_1474.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"57",
|
||
"code":"74"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using ClickHouse",
|
||
"uri":"mrs_01_2344.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"",
|
||
"code":"75"
|
||
},
|
||
{
|
||
"desc":"ClickHouse is a column-based database oriented to online analysis and processing. It supports SQL query and provides good query performance. The aggregation analysis and ",
|
||
"product_code":"mrs",
|
||
"title":"Using ClickHouse from Scratch",
|
||
"uri":"mrs_01_2345.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"75",
|
||
"code":"76"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Common ClickHouse SQL Syntax",
|
||
"uri":"mrs_01_24199.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"75",
|
||
"code":"77"
|
||
},
|
||
{
|
||
"desc":"This section describes the basic syntax and usage of the SQL statement for creating a ClickHouse database.CREATE DATABASE [IF NOT EXISTS] Database_name [ON CLUSTERClickHo",
|
||
"product_code":"mrs",
|
||
"title":"CREATE DATABASE: Creating a Database",
|
||
"uri":"mrs_01_24200.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"77",
|
||
"code":"78"
|
||
},
|
||
{
|
||
"desc":"This section describes the basic syntax and usage of the SQL statement for creating a ClickHouse table.Method 1: Creating a table named table_name in the specified databa",
|
||
"product_code":"mrs",
|
||
"title":"CREATE TABLE: Creating a Table",
|
||
"uri":"mrs_01_24201.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"77",
|
||
"code":"79"
|
||
},
|
||
{
|
||
"desc":"This section describes the basic syntax and usage of the SQL statement for inserting data to a table in ClickHouse.Method 1: Inserting data in standard formatINSERT INTO ",
|
||
"product_code":"mrs",
|
||
"title":"INSERT INTO: Inserting Data into a Table",
|
||
"uri":"mrs_01_24202.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"77",
|
||
"code":"80"
|
||
},
|
||
{
|
||
"desc":"This section describes the basic syntax and usage of the SQL statement for querying table data in ClickHouse.SELECT [DISTINCT] expr_list[FROM[database_name.]table| (subqu",
|
||
"product_code":"mrs",
|
||
"title":"SELECT: Querying Table Data",
|
||
"uri":"mrs_01_24203.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"77",
|
||
"code":"81"
|
||
},
|
||
{
|
||
"desc":"This section describes the basic syntax and usage of the SQL statement for modifying a table structure in ClickHouse.ALTER TABLE [database_name].name[ON CLUSTER cluster] ",
|
||
"product_code":"mrs",
|
||
"title":"ALTER TABLE: Modifying a Table Structure",
|
||
"uri":"mrs_01_24204.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"77",
|
||
"code":"82"
|
||
},
|
||
{
|
||
"desc":"This section describes the basic syntax and usage of the SQL statement for querying a table structure in ClickHouse.DESC|DESCRIBETABLE[database_name.]table[INTOOUTFILE fi",
|
||
"product_code":"mrs",
|
||
"title":"DESC: Querying a Table Structure",
|
||
"uri":"mrs_01_24205.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"77",
|
||
"code":"83"
|
||
},
|
||
{
|
||
"desc":"This section describes the basic syntax and usage of the SQL statement for deleting a ClickHouse table.DROP[TEMPORARY] TABLE[IF EXISTS] [database_name.]name[ON CLUSTER cl",
|
||
"product_code":"mrs",
|
||
"title":"DROP: Deleting a Table",
|
||
"uri":"mrs_01_24208.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"77",
|
||
"code":"84"
|
||
},
|
||
{
|
||
"desc":"This section describes the basic syntax and usage of the SQL statement for displaying information about databases and tables in ClickHouse.show databasesshow tables",
|
||
"product_code":"mrs",
|
||
"title":"SHOW: Displaying Information About Databases and Tables",
|
||
"uri":"mrs_01_24207.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"77",
|
||
"code":"85"
|
||
},
|
||
{
|
||
"desc":"This section describes the basic syntax and usage of the SQL statement for importing and exporting file data in ClickHouse.Importing data in CSV formatclickhouse client -",
|
||
"product_code":"mrs",
|
||
"title":"Importing and Exporting File Data",
|
||
"uri":"mrs_01_24206.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"77",
|
||
"code":"86"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"User Management and Authentication",
|
||
"uri":"mrs_01_24251.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"75",
|
||
"code":"87"
|
||
},
|
||
{
|
||
"desc":"ClickHouse user permission management enables unified management of users, roles, and permissions on each ClickHouse instance in the cluster. You can use the permission m",
|
||
"product_code":"mrs",
|
||
"title":"ClickHouse User and Permission Management",
|
||
"uri":"mrs_01_24057.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"87",
|
||
"code":"88"
|
||
},
|
||
{
|
||
"desc":"After a ClickHouse cluster is created, you can use the ClickHouse client to connect to the ClickHouse server. The default username is default.This section describes how t",
|
||
"product_code":"mrs",
|
||
"title":"Setting the ClickHouse Username and Password",
|
||
"uri":"mrs_01_2395.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"87",
|
||
"code":"89"
|
||
},
|
||
{
|
||
"desc":"Table engines play a key role in ClickHouse to determine:Where to write and read dataSupported query modesWhether concurrent data access is supportedWhether indexes can b",
|
||
"product_code":"mrs",
|
||
"title":"ClickHouse Table Engine Overview",
|
||
"uri":"mrs_01_24105.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"75",
|
||
"code":"90"
|
||
},
|
||
{
|
||
"desc":"ClickHouse implements the replicated table mechanism based on the ReplicatedMergeTree engine and ZooKeeper. When creating a table, you can specify an engine to determine ",
|
||
"product_code":"mrs",
|
||
"title":"Creating a ClickHouse Table",
|
||
"uri":"mrs_01_2398.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"75",
|
||
"code":"91"
|
||
},
|
||
{
|
||
"desc":"The ClickHouse data migration tool can migrate some partitions of one or more partitioned MergeTree tables on several ClickHouseServer nodes to the same tables on other C",
|
||
"product_code":"mrs",
|
||
"title":"Using the ClickHouse Data Migration Tool",
|
||
"uri":"mrs_01_24053.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"75",
|
||
"code":"92"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Monitoring of Slow ClickHouse Query Statements and Replication Table Data Synchronization",
|
||
"uri":"mrs_01_24229.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"75",
|
||
"code":"93"
|
||
},
|
||
{
|
||
"desc":"The SQL statement query in ClickHouse is slow because the conditions such as partitions, where conditions, and indexes of SQL statements are set improperly. As a result, ",
|
||
"product_code":"mrs",
|
||
"title":"Slow Query Statement Monitoring",
|
||
"uri":"mrs_01_24230.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"93",
|
||
"code":"94"
|
||
},
|
||
{
|
||
"desc":"MRS monitors the synchronization between multiple copies of data in the same shard of a Replicated*MergeTree table.Currently, you can monitor and query only Replicated*Me",
|
||
"product_code":"mrs",
|
||
"title":"Replication Table Data Synchronization Monitoring",
|
||
"uri":"mrs_01_24231.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"93",
|
||
"code":"95"
|
||
},
|
||
{
|
||
"desc":"Materialized views (MVs) are used in ClickHouse to save the precomputed result of time-consuming operations. When querying data, you can query the materialized views rath",
|
||
"product_code":"mrs",
|
||
"title":"Adaptive MV Usage in ClickHouse",
|
||
"uri":"mrs_01_24287.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"75",
|
||
"code":"96"
|
||
},
|
||
{
|
||
"desc":"Log path: The default storage path of ClickHouse log files is as follows: ${BIGDATA_LOG_HOME}/clickhouseLog archive rule: The automatic ClickHouse log compression functio",
|
||
"product_code":"mrs",
|
||
"title":"ClickHouse Log Overview",
|
||
"uri":"mrs_01_2399.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"75",
|
||
"code":"97"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using DBService",
|
||
"uri":"mrs_01_2356.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"",
|
||
"code":"98"
|
||
},
|
||
{
|
||
"desc":"This section describes how to manually configure SSL for the HA module of DBService in the cluster where DBService is installed.After this operation is performed, if you ",
|
||
"product_code":"mrs",
|
||
"title":"Configuring SSL for the HA Module",
|
||
"uri":"mrs_01_2346.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"98",
|
||
"code":"99"
|
||
},
|
||
{
|
||
"desc":"This section describes how to restore SSL for the HA module of DBService in the cluster where DBService is installed.SSL has been enabled for the HA module of DBService.C",
|
||
"product_code":"mrs",
|
||
"title":"Restoring SSL for the HA Module",
|
||
"uri":"mrs_01_2347.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"98",
|
||
"code":"100"
|
||
},
|
||
{
|
||
"desc":"The default timeout interval of DBService backup tasks is 2 hours. When the data volume in DBService is too large, the backup task may fail to be executed because the tim",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Timeout Interval of DBService Backup Tasks",
|
||
"uri":"mrs_01_24283.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"98",
|
||
"code":"101"
|
||
},
|
||
{
|
||
"desc":"Log path: The default storage path of DBService log files is /var/log/Bigdata/dbservice.GaussDB: /var/log/Bigdata/dbservice/DB (GaussDB run log directory), /var/log/Bigda",
|
||
"product_code":"mrs",
|
||
"title":"DBService Log Overview",
|
||
"uri":"mrs_01_0789.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"98",
|
||
"code":"102"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using Flink",
|
||
"uri":"mrs_01_0591.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"",
|
||
"code":"103"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use Flink to run wordcount jobs.Flink has been installed in the MRS cluster and all components in the cluster are running properly.The clust",
|
||
"product_code":"mrs",
|
||
"title":"Using Flink from Scratch",
|
||
"uri":"mrs_01_0473.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"103",
|
||
"code":"104"
|
||
},
|
||
{
|
||
"desc":"You can view Flink job information on the Yarn web UI.The Flink service has been installed in a cluster.Log in to FusionInsight Manager. For details, see Accessing Fusion",
|
||
"product_code":"mrs",
|
||
"title":"Viewing Flink Job Information",
|
||
"uri":"mrs_01_0784.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"103",
|
||
"code":"105"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Flink Configuration Management",
|
||
"uri":"mrs_01_0592.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"103",
|
||
"code":"106"
|
||
},
|
||
{
|
||
"desc":"All parameters of Flink must be set on a client. The path of a configuration file is as follows: Client installation path/Flink/flink/conf/flink-conf.yaml.You are advised",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Parameter Paths",
|
||
"uri":"mrs_01_1565.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"106",
|
||
"code":"107"
|
||
},
|
||
{
|
||
"desc":"JobManager and TaskManager are main components of Flink. You can configure the parameters for different security and performance scenarios on the client.Main configuratio",
|
||
"product_code":"mrs",
|
||
"title":"JobManager & TaskManager",
|
||
"uri":"mrs_01_1566.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"106",
|
||
"code":"108"
|
||
},
|
||
{
|
||
"desc":"The Blob server on the JobManager node is used to receive JAR files uploaded by users on the client, send JAR files to TaskManager, and transfer log files. Flink provides",
|
||
"product_code":"mrs",
|
||
"title":"Blob",
|
||
"uri":"mrs_01_1567.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"106",
|
||
"code":"109"
|
||
},
|
||
{
|
||
"desc":"The Akka actor model is the basis of communications between the Flink client and JobManager, JobManager and TaskManager, as well as TaskManager and TaskManager. Flink ena",
|
||
"product_code":"mrs",
|
||
"title":"Distributed Coordination (via Akka)",
|
||
"uri":"mrs_01_1568.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"106",
|
||
"code":"110"
|
||
},
|
||
{
|
||
"desc":"When the secure Flink cluster is required, SSL-related configuration items must be set.Configuration items include the SSL switch, certificate, password, and encryption a",
|
||
"product_code":"mrs",
|
||
"title":"SSL",
|
||
"uri":"mrs_01_1569.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"106",
|
||
"code":"111"
|
||
},
|
||
{
|
||
"desc":"When Flink runs a job, data transmission and reverse pressure detection between tasks depend on Netty. In certain environments, Netty parameters should be configured.For ",
|
||
"product_code":"mrs",
|
||
"title":"Network communication (via Netty)",
|
||
"uri":"mrs_01_1570.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"106",
|
||
"code":"112"
|
||
},
|
||
{
|
||
"desc":"When JobManager is started, the web server in the same process is also started.You can access the web server to obtain information about the current Flink cluster, includ",
|
||
"product_code":"mrs",
|
||
"title":"JobManager Web Frontend",
|
||
"uri":"mrs_01_1571.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"106",
|
||
"code":"113"
|
||
},
|
||
{
|
||
"desc":"Result files are created when tasks are running. Flink enables you to configure parameters for file creation.Configuration items include overwriting policy and directory ",
|
||
"product_code":"mrs",
|
||
"title":"File Systems",
|
||
"uri":"mrs_01_1572.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"106",
|
||
"code":"114"
|
||
},
|
||
{
|
||
"desc":"Flink enables HA and job exception, as well as job pause and recovery during version upgrade. Flink depends on state backend to store job states and on the restart strate",
|
||
"product_code":"mrs",
|
||
"title":"State Backend",
|
||
"uri":"mrs_01_1573.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"106",
|
||
"code":"115"
|
||
},
|
||
{
|
||
"desc":"Flink Kerberos configuration items must be configured in security mode.The configuration items include keytab, principal, and cookie of Kerberos.",
|
||
"product_code":"mrs",
|
||
"title":"Kerberos-based Security",
|
||
"uri":"mrs_01_1574.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"106",
|
||
"code":"116"
|
||
},
|
||
{
|
||
"desc":"The Flink HA mode depends on ZooKeeper. Therefore, ZooKeeper-related configuration items must be set.Configuration items include the ZooKeeper address, path, and security",
|
||
"product_code":"mrs",
|
||
"title":"HA",
|
||
"uri":"mrs_01_1575.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"106",
|
||
"code":"117"
|
||
},
|
||
{
|
||
"desc":"In scenarios raising special requirements on JVM configuration, users can use configuration items to transfer JVM parameters to the client, JobManager, and TaskManager.Co",
|
||
"product_code":"mrs",
|
||
"title":"Environment",
|
||
"uri":"mrs_01_1576.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"106",
|
||
"code":"118"
|
||
},
|
||
{
|
||
"desc":"Flink runs on a Yarn cluster and JobManager runs on ApplicationMaster. Certain configuration parameters of JobManager depend on Yarn. By setting Yarn-related configuratio",
|
||
"product_code":"mrs",
|
||
"title":"Yarn",
|
||
"uri":"mrs_01_1577.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"106",
|
||
"code":"119"
|
||
},
|
||
{
|
||
"desc":"The Netty connection is used among multiple jobs to reduce latency. In this case, NettySink is used on the server and NettySource is used on the client for data transmiss",
|
||
"product_code":"mrs",
|
||
"title":"Pipeline",
|
||
"uri":"mrs_01_1578.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"106",
|
||
"code":"120"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Security Configuration",
|
||
"uri":"mrs_01_0593.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"103",
|
||
"code":"121"
|
||
},
|
||
{
|
||
"desc":"All Flink cluster components support authentication.The Kerberos authentication is supported between Flink cluster components and external components, such as Yarn, HDFS,",
|
||
"product_code":"mrs",
|
||
"title":"Security Features",
|
||
"uri":"mrs_01_1579.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"121",
|
||
"code":"122"
|
||
},
|
||
{
|
||
"desc":"Sample project data of Flink is stored in Kafka. A user with Kafka permission can send data to Kafka and receive data from it.Run Linux command line to create a topic. Be",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Kafka",
|
||
"uri":"mrs_01_1580.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"121",
|
||
"code":"123"
|
||
},
|
||
{
|
||
"desc":"File configurationnettyconnector.registerserver.topic.storage: (Mandatory) Configures the path (on a third-party server) to information about IP address, port numbers, an",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Pipeline",
|
||
"uri":"mrs_01_1581.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"121",
|
||
"code":"124"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Security Hardening",
|
||
"uri":"mrs_01_0594.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"103",
|
||
"code":"125"
|
||
},
|
||
{
|
||
"desc":"Flink uses the following three authentication modes:Kerberos authentication: It is used between the Flink Yarn client and Yarn ResourceManager, JobManager and ZooKeeper, ",
|
||
"product_code":"mrs",
|
||
"title":"Authentication and Encryption",
|
||
"uri":"mrs_01_1583.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"125",
|
||
"code":"126"
|
||
},
|
||
{
|
||
"desc":"In HA mode of Flink, ZooKeeper can be used to manage clusters and discover services. Zookeeper supports SASL ACL control. Only users who have passed the SASL (Kerberos) a",
|
||
"product_code":"mrs",
|
||
"title":"ACL Control",
|
||
"uri":"mrs_01_1584.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"125",
|
||
"code":"127"
|
||
},
|
||
{
|
||
"desc":"Note: The same coding mode is used on the web service client and server to prevent garbled characters and to enable input verification.Security hardening: apply UTF-8 to ",
|
||
"product_code":"mrs",
|
||
"title":"Web Security",
|
||
"uri":"mrs_01_1585.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"125",
|
||
"code":"128"
|
||
},
|
||
{
|
||
"desc":"All security functions of Flink are provided by the open source community or self-developed. Security features that need to be configured by users, such as authentication",
|
||
"product_code":"mrs",
|
||
"title":"Security Statement",
|
||
"uri":"mrs_01_1586.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"103",
|
||
"code":"129"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using the Flink Web UI",
|
||
"uri":"mrs_01_24014.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"103",
|
||
"code":"130"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Overview",
|
||
"uri":"mrs_01_24015.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"130",
|
||
"code":"131"
|
||
},
|
||
{
|
||
"desc":"Flink web UI provides a web-based visual development platform. You only need to compile SQL statements to develop jobs, slashing the job development threshold. In additio",
|
||
"product_code":"mrs",
|
||
"title":"Introduction to Flink Web UI",
|
||
"uri":"mrs_01_24016.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"131",
|
||
"code":"132"
|
||
},
|
||
{
|
||
"desc":"The Flink web UI application process is shown as follows:",
|
||
"product_code":"mrs",
|
||
"title":"Flink Web UI Application Process",
|
||
"uri":"mrs_01_24017.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"131",
|
||
"code":"133"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"FlinkServer Permissions Management",
|
||
"uri":"mrs_01_24047.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"130",
|
||
"code":"134"
|
||
},
|
||
{
|
||
"desc":"User admin of Manager does not have the FlinkServer service operation permission. To perform FlinkServer service operations, you need to grant related permission to the u",
|
||
"product_code":"mrs",
|
||
"title":"Overview",
|
||
"uri":"mrs_01_24048.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"134",
|
||
"code":"135"
|
||
},
|
||
{
|
||
"desc":"This section describes how to create and configure a FlinkServer role on Manager as the system administrator. A FlinkServer role can be configured with FlinkServer admini",
|
||
"product_code":"mrs",
|
||
"title":"Authentication Based on Users and Roles",
|
||
"uri":"mrs_01_24049.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"134",
|
||
"code":"136"
|
||
},
|
||
{
|
||
"desc":"After Flink is installed in an MRS cluster, you can connect to clusters and data as well as manage stream tables and jobs using the Flink web UI.This section describes ho",
|
||
"product_code":"mrs",
|
||
"title":"Accessing the Flink Web UI",
|
||
"uri":"mrs_01_24019.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"130",
|
||
"code":"137"
|
||
},
|
||
{
|
||
"desc":"Applications can be used to isolate different upper-layer services.After the application is created, you can switch to the application to be operated in the upper left co",
|
||
"product_code":"mrs",
|
||
"title":"Creating an Application on the Flink Web UI",
|
||
"uri":"mrs_01_24020.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"130",
|
||
"code":"138"
|
||
},
|
||
{
|
||
"desc":"Different clusters can be accessed by configuring the cluster connection.To obtain the cluster client configuration files, perform the following steps:Log in to FusionIns",
|
||
"product_code":"mrs",
|
||
"title":"Creating a Cluster Connection on the Flink Web UI",
|
||
"uri":"mrs_01_24021.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"130",
|
||
"code":"139"
|
||
},
|
||
{
|
||
"desc":"Different data services can be accessed through data connections. Currently, FlinkServer supports HDFS, Kafka data connections.",
|
||
"product_code":"mrs",
|
||
"title":"Creating a Data Connection on the Flink Web UI",
|
||
"uri":"mrs_01_24022.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"130",
|
||
"code":"140"
|
||
},
|
||
{
|
||
"desc":"Data tables can be used to define basic attributes and parameters of source tables, dimension tables, and output tables.",
|
||
"product_code":"mrs",
|
||
"title":"Managing Tables on the Flink Web UI",
|
||
"uri":"mrs_01_24023.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"130",
|
||
"code":"141"
|
||
},
|
||
{
|
||
"desc":"Define Flink jobs, including Flink SQL and Flink JAR jobs.Creating a Flink SQL jobDevelop the job on the job development page.Click Check Semantic to check the input cont",
|
||
"product_code":"mrs",
|
||
"title":"Managing Jobs on the Flink Web UI",
|
||
"uri":"mrs_01_24024.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"130",
|
||
"code":"142"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Managing UDFs on the Flink Web UI",
|
||
"uri":"mrs_01_24223.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"130",
|
||
"code":"143"
|
||
},
|
||
{
|
||
"desc":"You can customize functions to extend SQL statements to meet personalized requirements. These functions are called user-defined functions (UDFs). You can upload and manag",
|
||
"product_code":"mrs",
|
||
"title":"Managing UDFs on the Flink Web UI",
|
||
"uri":"mrs_01_24211.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"143",
|
||
"code":"144"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"UDF Java and SQL Examples",
|
||
"uri":"mrs_01_24224.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"143",
|
||
"code":"145"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"UDAF Java and SQL Examples",
|
||
"uri":"mrs_01_24225.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"143",
|
||
"code":"146"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"UDTF Java and SQL Examples",
|
||
"uri":"mrs_01_24227.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"143",
|
||
"code":"147"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Interconnecting FlinkServer with External Components",
|
||
"uri":"mrs_01_24226.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"130",
|
||
"code":"148"
|
||
},
|
||
{
|
||
"desc":"Flink interconnects with the ClickHouseBalancer instance of ClickHouse to read and write data, preventing ClickHouse traffic distribution problems.Services such as ClickH",
|
||
"product_code":"mrs",
|
||
"title":"Interconnecting FlinkServer with ClickHouse",
|
||
"uri":"mrs_01_24148.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"148",
|
||
"code":"149"
|
||
},
|
||
{
|
||
"desc":"FlinkServer can be interconnected with HBase. The details are as follows:It can be interconnected with dimension tables and sink tables.When HBase and Flink are in the sa",
|
||
"product_code":"mrs",
|
||
"title":"Interconnecting FlinkServer with HBase",
|
||
"uri":"mrs_01_24120.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"148",
|
||
"code":"150"
|
||
},
|
||
{
|
||
"desc":"This section describes the data definition language (DDL) of HDFS as a sink table, as well as the WITH parameters and example code for creating a sink table, and provides",
|
||
"product_code":"mrs",
|
||
"title":"Interconnecting FlinkServer with HDFS",
|
||
"uri":"mrs_01_24247.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"148",
|
||
"code":"151"
|
||
},
|
||
{
|
||
"desc":"Currently, FlinkServer interconnects with Hive MetaStore. Therefore, the MetaStore function must be enabled for Hive. Hive can be used as source, sink, and dimension tabl",
|
||
"product_code":"mrs",
|
||
"title":"Interconnecting FlinkServer with Hive",
|
||
"uri":"mrs_01_24179.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"148",
|
||
"code":"152"
|
||
},
|
||
{
|
||
"desc":"This section describes how to interconnect FlinkServer with Hudi through Flink SQL jobs.The HDFS, Yarn, Flink, and Hudi services have been installed in a cluster.The clie",
|
||
"product_code":"mrs",
|
||
"title":"Interconnecting FlinkServer with Hudi",
|
||
"uri":"mrs_01_24180.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"148",
|
||
"code":"153"
|
||
},
|
||
{
|
||
"desc":"This section describes the data definition language (DDL) of Kafka as a source or sink table, as well as the WITH parameters and example code for creating a table, and pr",
|
||
"product_code":"mrs",
|
||
"title":"Interconnecting FlinkServer with Kafka",
|
||
"uri":"mrs_01_24248.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"148",
|
||
"code":"154"
|
||
},
|
||
{
|
||
"desc":"If a Flink task stops unexpectedly, some directories may reside in the ZooKeeper and HDFS services. To delete the residual directories, set ClearUpEnabled to true.A Flink",
|
||
"product_code":"mrs",
|
||
"title":"Deleting Residual Information About Flink Tasks",
|
||
"uri":"mrs_01_24256.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"103",
|
||
"code":"155"
|
||
},
|
||
{
|
||
"desc":"Log path:Run logs of a Flink job: ${BIGDATA_DATA_HOME}/hadoop/data${i}/nm/containerlogs/application_${appid}/container_{$contid}The logs of executing tasks are stored in ",
|
||
"product_code":"mrs",
|
||
"title":"Flink Log Overview",
|
||
"uri":"mrs_01_0596.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"103",
|
||
"code":"156"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Flink Performance Tuning",
|
||
"uri":"mrs_01_0597.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"103",
|
||
"code":"157"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Optimization DataStream",
|
||
"uri":"mrs_01_1587.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"157",
|
||
"code":"158"
|
||
},
|
||
{
|
||
"desc":"The computing of Flink depends on memory. If the memory is insufficient, the performance of Flink will be greatly deteriorated. One solution is to monitor garbage collect",
|
||
"product_code":"mrs",
|
||
"title":"Memory Configuration Optimization",
|
||
"uri":"mrs_01_1588.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"158",
|
||
"code":"159"
|
||
},
|
||
{
|
||
"desc":"The degree of parallelism (DOP) indicates the number of tasks to be executed concurrently. It determines the number of data blocks after the operation. Configuring the DO",
|
||
"product_code":"mrs",
|
||
"title":"Configuring DOP",
|
||
"uri":"mrs_01_1589.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"158",
|
||
"code":"160"
|
||
},
|
||
{
|
||
"desc":"In Flink on Yarn mode, there are JobManagers and TaskManagers. JobManagers and TaskManagers schedule and run tasks.Therefore, configuring parameters of JobManagers and Ta",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Process Parameters",
|
||
"uri":"mrs_01_1590.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"158",
|
||
"code":"161"
|
||
},
|
||
{
|
||
"desc":"The divide of tasks can be optimized by optimizing the partitioning method. If data skew occurs in a certain task, the whole execution process is delayed. Therefore, when",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing the Design of Partitioning Method",
|
||
"uri":"mrs_01_1591.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"158",
|
||
"code":"162"
|
||
},
|
||
{
|
||
"desc":"The communication of Flink is based on Netty network. The network performance determines the data switching speed and task execution efficiency. Therefore, the performanc",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Netty Network Communication",
|
||
"uri":"mrs_01_1592.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"158",
|
||
"code":"163"
|
||
},
|
||
{
|
||
"desc":"If data skew occurs (certain data volume is large), the execution time of tasks is inconsistent even if no garbage collection is performed.Redefine keys. Use keys of smal",
|
||
"product_code":"mrs",
|
||
"title":"Summarization",
|
||
"uri":"mrs_01_1593.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"158",
|
||
"code":"164"
|
||
},
|
||
{
|
||
"desc":"Before running the Flink shell commands, perform the following steps:source /opt/client/bigdata_envkinit Service user",
|
||
"product_code":"mrs",
|
||
"title":"Common Flink Shell Commands",
|
||
"uri":"mrs_01_0598.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"103",
|
||
"code":"165"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Reference",
|
||
"uri":"mrs_01_0620.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"103",
|
||
"code":"166"
|
||
},
|
||
{
|
||
"desc":"Generate the generate_keystore.sh script based on the sample code and save the script to the bin directory on the Flink client.Run the sh generate_keystore.sh<password> c",
|
||
"product_code":"mrs",
|
||
"title":"Example of Issuing a Certificate",
|
||
"uri":"mrs_01_0621.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"166",
|
||
"code":"167"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using Flume",
|
||
"uri":"mrs_01_0390.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"",
|
||
"code":"168"
|
||
},
|
||
{
|
||
"desc":"You can use Flume to import collected log information to Kafka.A streaming cluster with Kerberos authentication enabled has been created.The Flume client has been install",
|
||
"product_code":"mrs",
|
||
"title":"Using Flume from Scratch",
|
||
"uri":"mrs_01_0397.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"169"
|
||
},
|
||
{
|
||
"desc":"Flume is a distributed, reliable, and highly available system for aggregating massive logs, which can efficiently collect, aggregate, and move massive log data from diffe",
|
||
"product_code":"mrs",
|
||
"title":"Overview",
|
||
"uri":"mrs_01_0391.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"170"
|
||
},
|
||
{
|
||
"desc":"To use Flume to collect logs, you must install the Flume client on a log host.A cluster with the Flume component has been created.The log host is in the same VPC and subn",
|
||
"product_code":"mrs",
|
||
"title":"Installing the Flume Client on Clusters",
|
||
"uri":"mrs_01_1595.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"171"
|
||
},
|
||
{
|
||
"desc":"You can view logs to locate faults.The Flume client has been installed.ls -lR flume-client-*A log file is shown as follows:In the log file, FlumeClient.log is the run log",
|
||
"product_code":"mrs",
|
||
"title":"Viewing Flume Client Logs",
|
||
"uri":"mrs_01_0393.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"172"
|
||
},
|
||
{
|
||
"desc":"You can stop and start the Flume client or uninstall the Flume client when the Flume data ingestion channel is not required.Stop the Flume client of the Flume role.Assume",
|
||
"product_code":"mrs",
|
||
"title":"Stopping or Uninstalling the Flume Client",
|
||
"uri":"mrs_01_0394.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"173"
|
||
},
|
||
{
|
||
"desc":"You can use the encryption tool provided by the Flume client to encrypt some parameter values in the configuration file.The Flume client has been installed.cd fusioninsig",
|
||
"product_code":"mrs",
|
||
"title":"Using the Encryption Tool of the Flume Client",
|
||
"uri":"mrs_01_0395.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"174"
|
||
},
|
||
{
|
||
"desc":"This configuration guide describes how to configure common Flume services. For non-common Source, Channel, and Sink configuration, see the user manual provided by the Flu",
|
||
"product_code":"mrs",
|
||
"title":"Flume Service Configuration Guide",
|
||
"uri":"mrs_01_1057.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"175"
|
||
},
|
||
{
|
||
"desc":"Some parameters can be configured on Manager.This section describes how to configure the sources, channels, and sinks of Flume, and modify the configuration items of each",
|
||
"product_code":"mrs",
|
||
"title":"Flume Configuration Parameter Description",
|
||
"uri":"mrs_01_0396.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"176"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use environment variables in the properties.properties configuration file.The Flume service is running properly and the Flume client has bee",
|
||
"product_code":"mrs",
|
||
"title":"Using Environment Variables in the properties.properties File",
|
||
"uri":"mrs_01_1058.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"177"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Non-Encrypted Transmission",
|
||
"uri":"mrs_01_1059.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"178"
|
||
},
|
||
{
|
||
"desc":"This section describes how to configure Flume server and client parameters after the cluster and the Flume service are installed to ensure proper running of the service.B",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Non-encrypted Transmission",
|
||
"uri":"mrs_01_1060.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"178",
|
||
"code":"179"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use Flume client to collect static logs from a local host and save them to the topic list (test1) of Kafka.By default, the cluster network e",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Collecting Local Static Logs and Uploading Them to Kafka",
|
||
"uri":"mrs_01_1061.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"178",
|
||
"code":"180"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use Flume client to collect static logs from a local PC and save them to the /flume/test directory on HDFS.By default, the cluster network e",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Collecting Local Static Logs and Uploading Them to HDFS",
|
||
"uri":"mrs_01_1063.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"178",
|
||
"code":"181"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use Flume client to collect dynamic logs from a local PC and save them to the /flume/test directory on HDFS.By default, the cluster network ",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Collecting Local Dynamic Logs and Uploading Them to HDFS",
|
||
"uri":"mrs_01_1064.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"178",
|
||
"code":"182"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use Flume client to collect logs from the Topic list (test1) of Kafka and save them to the /flume/test directory on HDFS.By default, the clu",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Collecting Logs from Kafka and Uploading Them to HDFS",
|
||
"uri":"mrs_01_1065.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"178",
|
||
"code":"183"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use Flume client to collect logs from the Topic list (test1) of Kafka client and save them to the /flume/test directory on HDFS.By default, ",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Collecting Logs from Kafka and Uploading Them to HDFS Through the Flume Client",
|
||
"uri":"mrs_01_1066.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"178",
|
||
"code":"184"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use Flume client to collect static logs from a local computer and upload them to the flume_test table of HBase.By default, the cluster netwo",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Collecting Local Static Logs and Uploading Them to HBase",
|
||
"uri":"mrs_01_1067.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"178",
|
||
"code":"185"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Encrypted Transmission",
|
||
"uri":"mrs_01_1068.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"186"
|
||
},
|
||
{
|
||
"desc":"This section describes how to configure the server and client parameters of the Flume service (including the Flume and MonitorServer roles) after the cluster is installed",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Encrypted Transmission",
|
||
"uri":"mrs_01_1069.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"186",
|
||
"code":"187"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use Flume client to collect static logs from a local PC and save them to the /flume/test directory on HDFS.The cluster, HDFS and Flume servi",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Collecting Local Static Logs and Uploading Them to HDFS",
|
||
"uri":"mrs_01_1070.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"186",
|
||
"code":"188"
|
||
},
|
||
{
|
||
"desc":"The Flume client outside the FusionInsight cluster is a part of the end-to-end data collection. Both the Flume client outside the cluster and the Flume server in the clus",
|
||
"product_code":"mrs",
|
||
"title":"Viewing Flume Client Monitoring Information",
|
||
"uri":"mrs_01_1596.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"189"
|
||
},
|
||
{
|
||
"desc":"This section describes how to connect to Kafka using the Flume client in security mode.Set keyTab and principal based on site requirements. The configured principal must ",
|
||
"product_code":"mrs",
|
||
"title":"Connecting Flume to Kafka in Security Mode",
|
||
"uri":"mrs_01_1071.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"190"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use Flume to connect to Hive (version 3.1.0) in the cluster.Flume and Hive have been correctly installed in the cluster. The services are ru",
|
||
"product_code":"mrs",
|
||
"title":"Connecting Flume with Hive in Security Mode",
|
||
"uri":"mrs_01_1072.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"191"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Flume Service Model",
|
||
"uri":"mrs_01_1073.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"192"
|
||
},
|
||
{
|
||
"desc":"Guide a reasonable Flume service configuration by providing performance differences between Flume common modules, to avoid a nonstandard overall service performance cause",
|
||
"product_code":"mrs",
|
||
"title":"Overview",
|
||
"uri":"mrs_01_1074.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"192",
|
||
"code":"193"
|
||
},
|
||
{
|
||
"desc":"During Flume service configuration and module selection, the ultimate throughput of a sink must be greater than the maximum throughput of a source. Otherwise, in extreme ",
|
||
"product_code":"mrs",
|
||
"title":"Service Model Configuration Guide",
|
||
"uri":"mrs_01_1075.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"192",
|
||
"code":"194"
|
||
},
|
||
{
|
||
"desc":"Log path: The default path of Flume log files is /var/log/Bigdata/Role name.FlumeServer: /var/log/Bigdata/flume/flumeFlumeClient: /var/log/Bigdata/flume-client-n/flumeMon",
|
||
"product_code":"mrs",
|
||
"title":"Introduction to Flume Logs",
|
||
"uri":"mrs_01_1081.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"195"
|
||
},
|
||
{
|
||
"desc":"This section describes how to join and log out of a cgroup, query the cgroup status, and change the cgroup CPU threshold.Join CgroupAssume that the Flume client installat",
|
||
"product_code":"mrs",
|
||
"title":"Flume Client Cgroup Usage Guide",
|
||
"uri":"mrs_01_1082.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"196"
|
||
},
|
||
{
|
||
"desc":"This section describes how to perform secondary development for third-party plug-ins.You have obtained the third-party JAR package.You have installed Flume server or clie",
|
||
"product_code":"mrs",
|
||
"title":"Secondary Development Guide for Flume Third-Party Plug-ins",
|
||
"uri":"mrs_01_1083.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"197"
|
||
},
|
||
{
|
||
"desc":"Flume logs are stored in /var/log/Bigdata/flume/flume/flumeServer.log. Most data transmission exceptions and data transmission failures are recorded in logs. You can run ",
|
||
"product_code":"mrs",
|
||
"title":"Common Issues About Flume",
|
||
"uri":"mrs_01_1598.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"168",
|
||
"code":"198"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using HBase",
|
||
"uri":"mrs_01_0500.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"",
|
||
"code":"199"
|
||
},
|
||
{
|
||
"desc":"HBase is a column-based distributed storage system that features high reliability, performance, and scalability. This section describes how to use HBase from scratch, inc",
|
||
"product_code":"mrs",
|
||
"title":"Using HBase from Scratch",
|
||
"uri":"mrs_01_0368.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"200"
|
||
},
|
||
{
|
||
"desc":"This section guides the system administrator to create and configure an HBase role on Manager. The HBase role can set HBase administrator permissions and read (R), write ",
|
||
"product_code":"mrs",
|
||
"title":"Creating HBase Roles",
|
||
"uri":"mrs_01_1608.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"201"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use the HBase client in an O&M scenario or a service scenario.The client has been installed. For example, the installation directory is /opt",
|
||
"product_code":"mrs",
|
||
"title":"Using an HBase Client",
|
||
"uri":"mrs_01_24041.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"202"
|
||
},
|
||
{
|
||
"desc":"As a key feature to ensure high availability of the HBase cluster system, HBase cluster replication provides HBase with remote data replication in real time. It provides ",
|
||
"product_code":"mrs",
|
||
"title":"Configuring HBase Replication",
|
||
"uri":"mrs_01_0501.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"203"
|
||
},
|
||
{
|
||
"desc":"DistCp is used to copy the data stored on HDFS from a cluster to another cluster. DistCp depends on the cross-cluster copy function, which is disabled by default. This fu",
|
||
"product_code":"mrs",
|
||
"title":"Enabling Cross-Cluster Copy",
|
||
"uri":"mrs_01_0502.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"204"
|
||
},
|
||
{
|
||
"desc":"You can create tables and indexes using createTable of org.apache.luna.client.LunaAdmin and specify table names, column family names, requests for creating indexes, as we",
|
||
"product_code":"mrs",
|
||
"title":"Supporting Full-Text Index",
|
||
"uri":"mrs_01_0493.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"205"
|
||
},
|
||
{
|
||
"desc":"Active and standby clusters have been installed and started.Time is consistent between the active and standby clusters and the NTP service on the active and standby clust",
|
||
"product_code":"mrs",
|
||
"title":"Using the ReplicationSyncUp Tool",
|
||
"uri":"mrs_01_0510.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"206"
|
||
},
|
||
{
|
||
"desc":"HBase disaster recovery (DR), a key feature that is used to ensure high availability (HA) of the HBase cluster system, provides the real-time remote DR function for HBase",
|
||
"product_code":"mrs",
|
||
"title":"Configuring HBase DR",
|
||
"uri":"mrs_01_1609.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"207"
|
||
},
|
||
{
|
||
"desc":"The system administrator can configure HBase cluster DR to improve system availability. If the active cluster in the DR environment is faulty and the connection to the HB",
|
||
"product_code":"mrs",
|
||
"title":"Performing an HBase DR Service Switchover",
|
||
"uri":"mrs_01_1610.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"208"
|
||
},
|
||
{
|
||
"desc":"HBase encodes data blocks in HFiles to reduce duplicate keys in KeyValues, reducing used space. Currently, the following data block encoding modes are supported: NONE, PR",
|
||
"product_code":"mrs",
|
||
"title":"Configuring HBase Data Compression and Encoding",
|
||
"uri":"en-us_topic_0000001295898904.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"209"
|
||
},
|
||
{
|
||
"desc":"The HBase cluster in the current environment is a DR cluster. Due to some reasons, the active and standby clusters need to be switched over. That is, the standby cluster ",
|
||
"product_code":"mrs",
|
||
"title":"Performing an HBase DR Active/Standby Cluster Switchover",
|
||
"uri":"mrs_01_1611.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"210"
|
||
},
|
||
{
|
||
"desc":"The Apache HBase official website provides the function of importing data in batches. For details, see the description of the Import and ImportTsv tools at http://hbase.a",
|
||
"product_code":"mrs",
|
||
"title":"Community BulkLoad Tool",
|
||
"uri":"mrs_01_1612.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"211"
|
||
},
|
||
{
|
||
"desc":"In the actual application scenario, data in various sizes needs to be stored, for example, image data and documents. Data whose size is smaller than 10 MB can be stored i",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the MOB",
|
||
"uri":"mrs_01_1631.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"212"
|
||
},
|
||
{
|
||
"desc":"This topic provides the procedure to configure the secure HBase replication during cross-realm Kerberos setup in security mode.Mapping for all the FQDNs to their realms s",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Secure HBase Replication",
|
||
"uri":"mrs_01_1009.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"213"
|
||
},
|
||
{
|
||
"desc":"In a faulty environment, there are possibilities that a region may be stuck in transition for longer duration due to various reasons like slow region server response, uns",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Region In Transition Recovery Chore Service",
|
||
"uri":"mrs_01_1010.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"214"
|
||
},
|
||
{
|
||
"desc":"HIndex enables HBase indexing based on specific column values, making the retrieval of data highly efficient and fast.Column families are separated by semicolons (;).Colu",
|
||
"product_code":"mrs",
|
||
"title":"Using a Secondary Index",
|
||
"uri":"mrs_01_1635.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"215"
|
||
},
|
||
{
|
||
"desc":"Log path: The default storage path of HBase logs is /var/log/Bigdata/hbase/Role name.HMaster: /var/log/Bigdata/hbase/hm (run logs) and /var/log/Bigdata/audit/hbase/hm (au",
|
||
"product_code":"mrs",
|
||
"title":"HBase Log Overview",
|
||
"uri":"mrs_01_1056.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"216"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"HBase Performance Tuning",
|
||
"uri":"mrs_01_1013.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"217"
|
||
},
|
||
{
|
||
"desc":"BulkLoad uses MapReduce jobs to directly generate files that comply with the internal data format of HBase, and then loads the generated StoreFiles to a running cluster. ",
|
||
"product_code":"mrs",
|
||
"title":"Improving the BulkLoad Efficiency",
|
||
"uri":"mrs_01_1636.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"217",
|
||
"code":"218"
|
||
},
|
||
{
|
||
"desc":"In the scenario where a large number of requests are continuously put, setting the following two parameters to false can greatly improve the Put performance.hbase.regions",
|
||
"product_code":"mrs",
|
||
"title":"Improving Put Performance",
|
||
"uri":"mrs_01_1637.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"217",
|
||
"code":"219"
|
||
},
|
||
{
|
||
"desc":"HBase has many configuration parameters related to read and write performance. The configuration parameters need to be adjusted based on the read/write request loads. Thi",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing Put and Scan Performance",
|
||
"uri":"mrs_01_1016.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"217",
|
||
"code":"220"
|
||
},
|
||
{
|
||
"desc":"Scenarios where data needs to be written to HBase in real time, or large-scale and consecutive put scenariosThe HBase put or delete interface can be used to save data to ",
|
||
"product_code":"mrs",
|
||
"title":"Improving Real-time Data Write Performance",
|
||
"uri":"mrs_01_1017.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"217",
|
||
"code":"221"
|
||
},
|
||
{
|
||
"desc":"HBase data needs to be read.The get or scan interface of HBase has been invoked and data is read in real time from HBase.Data reading server tuningParameter portal:Go to ",
|
||
"product_code":"mrs",
|
||
"title":"Improving Real-time Data Read Performance",
|
||
"uri":"mrs_01_1018.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"217",
|
||
"code":"222"
|
||
},
|
||
{
|
||
"desc":"When the number of clusters reaches a certain scale, the default settings of the Java virtual machine (JVM) cannot meet the cluster requirements. In this case, the cluste",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing JVM Parameters",
|
||
"uri":"mrs_01_1019.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"217",
|
||
"code":"223"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Common Issues About HBase",
|
||
"uri":"mrs_01_1638.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"199",
|
||
"code":"224"
|
||
},
|
||
{
|
||
"desc":"A HBase server is faulty and cannot provide services. In this case, when a table operation is performed on the HBase client, why is the operation suspended and no respons",
|
||
"product_code":"mrs",
|
||
"title":"Why Does a Client Keep Failing to Connect to a Server for a Long Time?",
|
||
"uri":"mrs_01_1639.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"225"
|
||
},
|
||
{
|
||
"desc":"Why submitted operations fail by stopping BulkLoad on the client during BulkLoad data importing?When BulkLoad is enabled on the client, a partitioner file is generated an",
|
||
"product_code":"mrs",
|
||
"title":"Operation Failures Occur in Stopping BulkLoad On the Client",
|
||
"uri":"mrs_01_1640.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"226"
|
||
},
|
||
{
|
||
"desc":"When HBase consecutively deletes and creates the same table, why may a table creation exception occur?Execution process: Disable Table > Drop Table > Create Table > Disab",
|
||
"product_code":"mrs",
|
||
"title":"Why May a Table Creation Exception Occur When HBase Deletes or Creates the Same Table Consecutively?",
|
||
"uri":"mrs_01_1641.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"227"
|
||
},
|
||
{
|
||
"desc":"Why other services become unstable if HBase sets up a large number of connections over the network port?When the OS command lsof or netstat is run, it is found that many ",
|
||
"product_code":"mrs",
|
||
"title":"Why Other Services Become Unstable If HBase Sets up A Large Number of Connections over the Network Port?",
|
||
"uri":"mrs_01_1642.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"228"
|
||
},
|
||
{
|
||
"desc":"The HBase bulkLoad task (a single table contains 26 TB data) has 210,000 maps and 10,000 reduce tasks, and the task fails.ZooKeeper I/O bottleneck observation methods:On ",
|
||
"product_code":"mrs",
|
||
"title":"Why Does the HBase BulkLoad Task (One Table Has 26 TB Data) Consisting of 210,000 Map Tasks and 10,000 Reduce Tasks Fail?",
|
||
"uri":"mrs_01_1643.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"229"
|
||
},
|
||
{
|
||
"desc":"How do I restore a region in the RIT state for a long time?Log in to the HMaster WebUI, choose Procedure & Locks in the navigation tree, and check whether any process ID ",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Restore a Region in the RIT State for a Long Time?",
|
||
"uri":"mrs_01_1644.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"230"
|
||
},
|
||
{
|
||
"desc":"Why does HMaster exit due to timeout when waiting for the namespace table to go online?During the HMaster active/standby switchover or startup, HMaster performs WAL split",
|
||
"product_code":"mrs",
|
||
"title":"Why Does HMaster Exits Due to Timeout When Waiting for the Namespace Table to Go Online?",
|
||
"uri":"mrs_01_1645.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"231"
|
||
},
|
||
{
|
||
"desc":"Why does the following exception occur on the client when I use the HBase client to operate table data?At the same time, the following log is displayed on RegionServer:Th",
|
||
"product_code":"mrs",
|
||
"title":"Why Does SocketTimeoutException Occur When a Client Queries HBase?",
|
||
"uri":"mrs_01_1646.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"232"
|
||
},
|
||
{
|
||
"desc":"Why modified and deleted data can still be queried by using the scan command?Because of the scalability of HBase, all values specific to the versions in the queried colum",
|
||
"product_code":"mrs",
|
||
"title":"Why Modified and Deleted Data Can Still Be Queried by Using the Scan Command?",
|
||
"uri":"mrs_01_1647.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"233"
|
||
},
|
||
{
|
||
"desc":"Why \"java.lang.UnsatisfiedLinkError: Permission denied\" exception thrown while starting HBase shell?During HBase shell execution JRuby create temporary files under java.i",
|
||
"product_code":"mrs",
|
||
"title":"Why \"java.lang.UnsatisfiedLinkError: Permission denied\" exception thrown while starting HBase shell?",
|
||
"uri":"mrs_01_1648.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"234"
|
||
},
|
||
{
|
||
"desc":"When does the RegionServers listed under \"Dead Region Servers\" on HMaster WebUI gets cleared?When an online RegionServer goes down abruptly, it is displayed under \"Dead R",
|
||
"product_code":"mrs",
|
||
"title":"When does the RegionServers listed under \"Dead Region Servers\" on HMaster WebUI gets cleared?",
|
||
"uri":"mrs_01_1649.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"235"
|
||
},
|
||
{
|
||
"desc":"If the data to be imported by HBase bulkload has identical rowkeys, the data import is successful but identical query criteria produce different query results.Data with a",
|
||
"product_code":"mrs",
|
||
"title":"Why Are Different Query Results Returned After I Use Same Query Criteria to Query Data Successfully Imported by HBase bulkload?",
|
||
"uri":"mrs_01_1650.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"236"
|
||
},
|
||
{
|
||
"desc":"What should I do if I fail to create tables due to the FAILED_OPEN state of Regions?If a network, HDFS, or Active HMaster fault occurs during the creation of tables, some",
|
||
"product_code":"mrs",
|
||
"title":"What Should I Do If I Fail to Create Tables Due to the FAILED_OPEN State of Regions?",
|
||
"uri":"mrs_01_1651.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"237"
|
||
},
|
||
{
|
||
"desc":"In security mode, names of tables that failed to be created are unnecessarily retained in the table-lock node (default directory is /hbase/table-lock) of ZooKeeper. How d",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Delete Residual Table Names in the /hbase/table-lock Directory of ZooKeeper?",
|
||
"uri":"mrs_01_1652.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"238"
|
||
},
|
||
{
|
||
"desc":"Why does HBase become faulty when I set quota for the directory used by HBase in HDFS?The flush operation of a table is to write memstore data to HDFS.If the HDFS directo",
|
||
"product_code":"mrs",
|
||
"title":"Why Does HBase Become Faulty When I Set a Quota for the Directory Used by HBase in HDFS?",
|
||
"uri":"mrs_01_1653.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"239"
|
||
},
|
||
{
|
||
"desc":"Why HMaster times out while waiting for namespace table to be assigned after rebuilding meta using OfflineMetaRepair tool and startups failed?HMaster abort with following",
|
||
"product_code":"mrs",
|
||
"title":"Why HMaster Times Out While Waiting for Namespace Table to be Assigned After Rebuilding Meta Using OfflineMetaRepair Tool and Startups Failed",
|
||
"uri":"mrs_01_1654.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"240"
|
||
},
|
||
{
|
||
"desc":"Why messages containing FileNotFoundException and no lease are frequently displayed in the HMaster logs during the WAL splitting process?During the WAL splitting process,",
|
||
"product_code":"mrs",
|
||
"title":"Why Messages Containing FileNotFoundException and no lease Are Frequently Displayed in the HMaster Logs During the WAL Splitting Process?",
|
||
"uri":"mrs_01_1655.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"241"
|
||
},
|
||
{
|
||
"desc":"When a tenant accesses Phoenix, a message is displayed indicating that the tenant has insufficient rights.You need to associate the HBase service and Yarn queues when cre",
|
||
"product_code":"mrs",
|
||
"title":"Insufficient Rights When a Tenant Accesses Phoenix",
|
||
"uri":"mrs_01_1657.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"242"
|
||
},
|
||
{
|
||
"desc":"The system automatically rolls back data after an HBase recovery task fails. If \"Rollback recovery failed\" is displayed, the rollback fails. After the rollback fails, dat",
|
||
"product_code":"mrs",
|
||
"title":"What Can I Do When HBase Fails to Recover a Task and a Message Is Displayed Stating \"Rollback recovery failed\"?",
|
||
"uri":"mrs_01_1659.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"243"
|
||
},
|
||
{
|
||
"desc":"When the HBaseFsck tool is used to check the region status, if the log contains ERROR: (regions region1 and region2) There is an overlap in the region chain or ERROR: (re",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Fix Region Overlapping?",
|
||
"uri":"mrs_01_1660.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"244"
|
||
},
|
||
{
|
||
"desc":"Check the hbase-omm-*.out log of the node where RegionServer fails to be started. It is found that the log contains An error report file with more information is saved as",
|
||
"product_code":"mrs",
|
||
"title":"Why Does RegionServer Fail to Be Started When GC Parameters Xms and Xmx of HBase RegionServer Are Set to 31 GB?",
|
||
"uri":"mrs_01_1661.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"245"
|
||
},
|
||
{
|
||
"desc":"Why does the LoadIncrementalHFiles tool fail to be executed and \"Permission denied\" is displayed when a Linux user is manually created in a normal cluster and DataNode in",
|
||
"product_code":"mrs",
|
||
"title":"Why Does the LoadIncrementalHFiles Tool Fail to Be Executed and \"Permission denied\" Is Displayed When Nodes in a Cluster Are Used to Import Data in Batches?",
|
||
"uri":"mrs_01_0625.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"246"
|
||
},
|
||
{
|
||
"desc":"When the sqlline script is used on the client, the error message \"import argparse\" is displayed.",
|
||
"product_code":"mrs",
|
||
"title":"Why Is the Error Message \"import argparse\" Displayed When the Phoenix sqlline Script Is Used?",
|
||
"uri":"mrs_01_2210.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"247"
|
||
},
|
||
{
|
||
"desc":"When the indexed field data is updated, if a batch of data exists in the user table, the BulkLoad tool cannot update the global and partial mutable indexes.Problem Analys",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Deal with the Restrictions of the Phoenix BulkLoad Tool?",
|
||
"uri":"mrs_01_2211.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"248"
|
||
},
|
||
{
|
||
"desc":"When CTBase accesses the HBase service with the Ranger plug-ins enabled and you are creating a cluster table, a message is displayed indicating that the permission is ins",
|
||
"product_code":"mrs",
|
||
"title":"Why a Message Is Displayed Indicating that the Permission is Insufficient When CTBase Connects to the Ranger Plug-ins?",
|
||
"uri":"mrs_01_2212.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"224",
|
||
"code":"249"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using HDFS",
|
||
"uri":"mrs_01_0790.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"",
|
||
"code":"250"
|
||
},
|
||
{
|
||
"desc":"In HDFS, each file object needs to register corresponding information in the NameNode and occupies certain storage space. As the number of files increases, if the origina",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Memory Management",
|
||
"uri":"mrs_01_0791.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"251"
|
||
},
|
||
{
|
||
"desc":"This section describes how to create and configure an HDFS role on FusionInsight Manager. The HDFS role is granted the rights to read, write, and execute HDFS directories",
|
||
"product_code":"mrs",
|
||
"title":"Creating an HDFS Role",
|
||
"uri":"mrs_01_1662.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"252"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use the HDFS client in an O&M scenario or service scenario.The client has been installed.For example, the installation directory is /opt/had",
|
||
"product_code":"mrs",
|
||
"title":"Using the HDFS Client",
|
||
"uri":"mrs_01_1663.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"253"
|
||
},
|
||
{
|
||
"desc":"DistCp is a tool used to perform large-amount data replication between clusters or in a cluster. It uses MapReduce tasks to implement distributed copy of a large amount o",
|
||
"product_code":"mrs",
|
||
"title":"Running the DistCp Command",
|
||
"uri":"mrs_01_0794.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"254"
|
||
},
|
||
{
|
||
"desc":"This section describes the directory structure in HDFS, as shown in the following table.",
|
||
"product_code":"mrs",
|
||
"title":"Overview of HDFS File System Directories",
|
||
"uri":"mrs_01_0795.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"255"
|
||
},
|
||
{
|
||
"desc":"If the storage directory defined by the HDFS DataNode is incorrect or the HDFS storage plan changes, the system administrator needs to modify the DataNode storage directo",
|
||
"product_code":"mrs",
|
||
"title":"Changing the DataNode Storage Directory",
|
||
"uri":"mrs_01_1664.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"256"
|
||
},
|
||
{
|
||
"desc":"The permission for some HDFS directories is 777 or 750 by default, which brings potential security risks. You are advised to modify the permission for the HDFS directorie",
|
||
"product_code":"mrs",
|
||
"title":"Configuring HDFS Directory Permission",
|
||
"uri":"mrs_01_0797.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"257"
|
||
},
|
||
{
|
||
"desc":"Before deploying a cluster, you can deploy a Network File System (NFS) server based on requirements to store NameNode metadata to enhance data reliability.If the NFS serv",
|
||
"product_code":"mrs",
|
||
"title":"Configuring NFS",
|
||
"uri":"mrs_01_1665.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"258"
|
||
},
|
||
{
|
||
"desc":"In HDFS, DataNode stores user files and directories as blocks, and file objects are generated on the NameNode to map each file, directory, and block on the DataNode.The f",
|
||
"product_code":"mrs",
|
||
"title":"Planning HDFS Capacity",
|
||
"uri":"mrs_01_0799.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"259"
|
||
},
|
||
{
|
||
"desc":"When you open an HDFS file, an error occurs due to the limit on the number of file handles. Information similar to the following is displayed.You can contact the system a",
|
||
"product_code":"mrs",
|
||
"title":"Configuring ulimit for HBase and HDFS",
|
||
"uri":"mrs_01_0801.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"260"
|
||
},
|
||
{
|
||
"desc":"In the HDFS cluster, unbalanced disk usage among DataNodes may occur, for example, when new DataNodes are added to the cluster. Unbalanced disk usage may result in multip",
|
||
"product_code":"mrs",
|
||
"title":"Balancing DataNode Capacity",
|
||
"uri":"mrs_01_1667.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"261"
|
||
},
|
||
{
|
||
"desc":"By default, NameNode randomly selects a DataNode to write files. If the disk capacity of some DataNodes in a cluster is inconsistent (the total disk capacity of some node",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Replica Replacement Policy for Heterogeneous Capacity Among DataNodes",
|
||
"uri":"mrs_01_0804.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"262"
|
||
},
|
||
{
|
||
"desc":"Generally, multiple services are deployed in a cluster, and the storage of most services depends on the HDFS file system. Different components such as Spark and Yarn or c",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Number of Files in a Single HDFS Directory",
|
||
"uri":"mrs_01_0805.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"263"
|
||
},
|
||
{
|
||
"desc":"On HDFS, deleted files are moved to the recycle bin (trash can) so that the data deleted by mistake can be restored.You can set the time threshold for storing files in th",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Recycle Bin Mechanism",
|
||
"uri":"mrs_01_0806.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"264"
|
||
},
|
||
{
|
||
"desc":"HDFS allows users to modify the default permissions of files and directories. The default mask provided by the HDFS for creating file and directory permissions is 022. If",
|
||
"product_code":"mrs",
|
||
"title":"Setting Permissions on Files and Directories",
|
||
"uri":"mrs_01_0807.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"265"
|
||
},
|
||
{
|
||
"desc":"In security mode, users can flexibly set the maximum token lifetime and token renewal interval in HDFS based on cluster requirements.Navigation path for setting parameter",
|
||
"product_code":"mrs",
|
||
"title":"Setting the Maximum Lifetime and Renewal Interval of a Token",
|
||
"uri":"mrs_01_0808.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"266"
|
||
},
|
||
{
|
||
"desc":"In the open source version, if multiple data storage volumes are configured for a DataNode, the DataNode stops providing services by default if one of the volumes is dama",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Damaged Disk Volume",
|
||
"uri":"mrs_01_1669.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"267"
|
||
},
|
||
{
|
||
"desc":"Encrypted channel is an encryption protocol of remote procedure call (RPC) in HDFS. When a user invokes RPC, the user's login name will be transmitted to RPC through RPC ",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Encrypted Channels",
|
||
"uri":"mrs_01_0810.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"268"
|
||
},
|
||
{
|
||
"desc":"Clients probably encounter running errors when the network is not stable. Users can adjust the following parameter values to improve the running efficiency.Go to the All ",
|
||
"product_code":"mrs",
|
||
"title":"Reducing the Probability of Abnormal Client Application Operation When the Network Is Not Stable",
|
||
"uri":"mrs_01_0811.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"269"
|
||
},
|
||
{
|
||
"desc":"In the existing default DFSclient failover proxy provider, if a NameNode in a process is faulty, all HDFS client instances in the same process attempt to connect to the N",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the NameNode Blacklist",
|
||
"uri":"mrs_01_1670.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"270"
|
||
},
|
||
{
|
||
"desc":"Several finished Hadoop clusters are faulty because the NameNode is overloaded and unresponsive.Such problem is caused by the initial design of Hadoop: In Hadoop, the Nam",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing HDFS NameNode RPC QoS",
|
||
"uri":"mrs_01_1672.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"271"
|
||
},
|
||
{
|
||
"desc":"When the speed at which the client writes data to the HDFS is greater than the disk bandwidth of the DataNode, the disk bandwidth is fully occupied. As a result, the Data",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing HDFS DataNode RPC QoS",
|
||
"uri":"mrs_01_1673.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"272"
|
||
},
|
||
{
|
||
"desc":"When the Yarn local directory and DataNode directory are on the same disk, the disk with larger capacity can run more tasks. Therefore, more intermediate data is stored i",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Reserved Percentage of Disk Usage on DataNodes",
|
||
"uri":"mrs_01_1675.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"273"
|
||
},
|
||
{
|
||
"desc":"You need to configure the nodes for storing HDFS file data blocks based on data features. You can configure a label expression to an HDFS directory or file and assign one",
|
||
"product_code":"mrs",
|
||
"title":"Configuring HDFS NodeLabel",
|
||
"uri":"mrs_01_1676.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"274"
|
||
},
|
||
{
|
||
"desc":"DiskBalancer is an online disk balancer that balances disk data on running DataNodes based on various indicators. It works in the similar way of the HDFS Balancer. The di",
|
||
"product_code":"mrs",
|
||
"title":"Configuring HDFS DiskBalancer",
|
||
"uri":"mrs_01_1678.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"275"
|
||
},
|
||
{
|
||
"desc":"Performing this operation can concurrently modify file and directory permissions and access control tools in a cluster.Performing concurrent file modification operations ",
|
||
"product_code":"mrs",
|
||
"title":"Performing Concurrent Operations on HDFS Files",
|
||
"uri":"mrs_01_1684.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"276"
|
||
},
|
||
{
|
||
"desc":"Log path: The default path of HDFS logs is /var/log/Bigdata/hdfs/Role name.NameNode: /var/log/Bigdata/hdfs/nn (run logs) and /var/log/Bigdata/audit/hdfs/nn (audit logs)Da",
|
||
"product_code":"mrs",
|
||
"title":"Introduction to HDFS Logs",
|
||
"uri":"mrs_01_0828.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"277"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"HDFS Performance Tuning",
|
||
"uri":"mrs_01_0829.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"278"
|
||
},
|
||
{
|
||
"desc":"Improve the HDFS write performance by modifying the HDFS attributes.Navigation path for setting parameters:On FusionInsight Manager, choose Cluster >Name of the desired c",
|
||
"product_code":"mrs",
|
||
"title":"Improving Write Performance",
|
||
"uri":"mrs_01_1687.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"278",
|
||
"code":"279"
|
||
},
|
||
{
|
||
"desc":"Improve the HDFS read performance by using the client to cache the metadata for block locations.This function is recommended only for reading files that are not modified ",
|
||
"product_code":"mrs",
|
||
"title":"Improving Read Performance Using Client Metadata Cache",
|
||
"uri":"mrs_01_1688.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"278",
|
||
"code":"280"
|
||
},
|
||
{
|
||
"desc":"When HDFS is deployed in high availability (HA) mode with multiple NameNode instances, the HDFS client needs to connect to each NameNode in sequence to determine which is",
|
||
"product_code":"mrs",
|
||
"title":"Improving the Connection Between the Client and NameNode Using Current Active Cache",
|
||
"uri":"mrs_01_1689.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"278",
|
||
"code":"281"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"FAQ",
|
||
"uri":"mrs_01_1690.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"250",
|
||
"code":"282"
|
||
},
|
||
{
|
||
"desc":"The NameNode startup is slow when it is restarted immediately after a large number of files (for example, 1 million files) are deleted.It takes time for the DataNode to d",
|
||
"product_code":"mrs",
|
||
"title":"NameNode Startup Is Slow",
|
||
"uri":"mrs_01_1691.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"282",
|
||
"code":"283"
|
||
},
|
||
{
|
||
"desc":"Why MapReduce or Yarn tasks using the viewFS function fail to be executed in the environment with multiple NameServices?When viewFS is used, only directories mounted to v",
|
||
"product_code":"mrs",
|
||
"title":"Why MapReduce Tasks Fails in the Environment with Multiple NameServices?",
|
||
"uri":"mrs_01_1692.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"282",
|
||
"code":"284"
|
||
},
|
||
{
|
||
"desc":"The DataNode is normal, but cannot report data blocks. As a result, the existing data blocks cannot be used.This error may occur when the number of data blocks in a data ",
|
||
"product_code":"mrs",
|
||
"title":"DataNode Is Normal but Cannot Report Data Blocks",
|
||
"uri":"mrs_01_1693.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"282",
|
||
"code":"285"
|
||
},
|
||
{
|
||
"desc":"When errors occur in the dfs.datanode.data.dir directory of DataNode due to the permission or disk damage, HDFS WebUI does not display information about damaged data.Afte",
"product_code":"mrs",
"title":"HDFS WebUI Cannot Properly Update Information About Damaged Data",
"uri":"mrs_01_1694.html",
"doc_type":"cmpntguide-lts",
"p_code":"282",
"code":"286"
},
{
"desc":"Why distcp command fails in the secure cluster with the following error displayed?Client side exceptionServer side exceptionThe preceding error may occur if webhdfs:// is",
"product_code":"mrs",
"title":"Why Does the Distcp Command Fail in the Secure Cluster, Causing an Exception?",
"uri":"mrs_01_1695.html",
"doc_type":"cmpntguide-lts",
"p_code":"282",
"code":"287"
},
{
"desc":"If the number of disks specified by dfs.datanode.data.dir is equal to the value of dfs.datanode.failed.volumes.tolerated, DataNode startup will fail.By default, the failu",
"product_code":"mrs",
"title":"Why Does DataNode Fail to Start When the Number of Disks Specified by dfs.datanode.data.dir Equals dfs.datanode.failed.volumes.tolerated?",
"uri":"mrs_01_1696.html",
"doc_type":"cmpntguide-lts",
"p_code":"282",
|
||
"code":"288"
|
||
},
|
||
{
|
||
"desc":"DataNode capacity count incorrect if several data.dir configured in one disk partition.Currently calculation will be done based on the disk like df command in linux. Idea",
|
||
"product_code":"mrs",
|
||
"title":"Why Does an Error Occur During DataNode Capacity Calculation When Multiple data.dir Are Configured in a Partition?",
|
||
"uri":"mrs_01_1697.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"282",
|
||
"code":"289"
|
||
},
|
||
{
|
||
"desc":"When the standby NameNode is powered off during metadata (namespace) storage, it fails to be started and the following error information is displayed.When the standby Nam",
|
||
"product_code":"mrs",
|
||
"title":"Standby NameNode Fails to Be Restarted When the System Is Powered off During Metadata (Namespace) Storage",
|
||
"uri":"mrs_01_1698.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"282",
|
||
"code":"290"
|
||
},
|
||
{
|
||
"desc":"Why data in the buffer is lost if a power outage occurs during storage of small files?Because of a power outage, the blocks in the buffer are not written to the disk imme",
|
||
"product_code":"mrs",
|
||
"title":"Why Data in the Buffer Is Lost If a Power Outage Occurs During Storage of Small Files",
|
||
"uri":"mrs_01_1699.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"282",
|
||
"code":"291"
|
||
},
|
||
{
|
||
"desc":"When HDFS calls the FileInputFormat getSplit method, the ArrayIndexOutOfBoundsException: 0 appears in the following log:The elements of each block correspondent frame are",
|
||
"product_code":"mrs",
|
||
"title":"Why Does Array Border-crossing Occur During FileInputFormat Split?",
|
||
"uri":"mrs_01_1700.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"282",
|
||
"code":"292"
|
||
},
|
||
{
|
||
"desc":"When the storage policy of the file is set to LAZY_PERSIST, the storage type of the first replica should be RAM_DISK, and the storage type of other replicas should be DIS",
|
||
"product_code":"mrs",
|
||
"title":"Why Is the Storage Type of File Copies DISK When the Tiered Storage Policy Is LAZY_PERSIST?",
|
||
"uri":"mrs_01_1701.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"282",
|
||
"code":"293"
|
||
},
|
||
{
|
||
"desc":"When the NameNode node is overloaded (100% of the CPU is occupied), the NameNode is unresponsive. The HDFS clients that are connected to the overloaded NameNode fail to r",
|
||
"product_code":"mrs",
|
||
"title":"The HDFS Client Is Unresponsive When the NameNode Is Overloaded for a Long Time",
|
||
"uri":"mrs_01_1702.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"282",
|
||
"code":"294"
|
||
},
|
||
{
|
||
"desc":"In DataNode, the storage directory of data blocks is specified by dfs.datanode.data.dir.Can I modify dfs.datanode.data.dir tomodify the data storage directory?Can I modif",
|
||
"product_code":"mrs",
|
||
"title":"Can I Delete or Modify the Data Storage Directory in DataNode?",
|
||
"uri":"mrs_01_1703.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"282",
|
||
"code":"295"
|
||
},
|
||
{
|
||
"desc":"Why are some blocks missing on the NameNode UI after the rollback is successful?This problem occurs because blocks with new IDs or genstamps may exist on the DataNode. Th",
|
||
"product_code":"mrs",
|
||
"title":"Blocks Miss on the NameNode UI After the Successful Rollback",
|
||
"uri":"mrs_01_1704.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"282",
|
||
"code":"296"
|
||
},
|
||
{
|
||
"desc":"Why is an \"java.net.SocketException: No buffer space available\" exception reported when data is written to HDFS?This problem occurs when files are written to the HDFS. Ch",
|
||
"product_code":"mrs",
|
||
"title":"Why Is \"java.net.SocketException: No buffer space available\" Reported When Data Is Written to HDFS",
|
||
"uri":"mrs_01_1705.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"282",
|
||
"code":"297"
|
||
},
|
||
{
|
||
"desc":"Why are there two standby NameNodes after the active NameNode is restarted?When this problem occurs, check the ZooKeeper and ZooKeeper FC logs. You can find that the sess",
|
||
"product_code":"mrs",
|
||
"title":"Why are There Two Standby NameNodes After the active NameNode Is Restarted?",
|
||
"uri":"mrs_01_1706.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"282",
|
||
"code":"298"
|
||
},
|
||
{
|
||
"desc":"After I start a Balance process in HDFS, the process is shut down abnormally. If I attempt to execute the Balance process again, it fails again.After a Balance process is",
|
||
"product_code":"mrs",
|
||
"title":"When Does a Balance Process in HDFS, Shut Down and Fail to be Executed Again?",
|
||
"uri":"mrs_01_1707.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"282",
|
||
"code":"299"
|
||
},
|
||
{
|
||
"desc":"Occasionally, nternet Explorer 9, Explorer 10, or Explorer 11 fails to access the native HDFS UI.Internet Explorer 9, Explorer 10, or Explorer 11 fails to access the nati",
|
||
"product_code":"mrs",
|
||
"title":"\"This page can't be displayed\" Is Displayed When Internet Explorer Fails to Access the Native HDFS UI",
|
||
"uri":"mrs_01_1708.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"282",
|
||
"code":"300"
|
||
},
|
||
{
|
||
"desc":"If a JournalNode server is powered off, the data directory disk is fully occupied, and the network is abnormal, the EditLog sequence number on the JournalNode is inconsec",
|
||
"product_code":"mrs",
|
||
"title":"NameNode Fails to Be Restarted Due to EditLog Discontinuity",
|
||
"uri":"mrs_01_1709.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"282",
|
||
"code":"301"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using HetuEngine",
|
||
"uri":"mrs_01_1710.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"",
|
||
"code":"302"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use HetuEngine to connect to the Hive data source and query database tables of the Hive data source of the cluster through HetuEngine.The He",
|
||
"product_code":"mrs",
|
||
"title":"Using HetuEngine from Scratch",
|
||
"uri":"mrs_01_1711.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"302",
|
||
"code":"303"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"HetuEngine Permission Management",
|
||
"uri":"mrs_01_1721.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"302",
|
||
"code":"304"
|
||
},
|
||
{
|
||
"desc":"HetuEngine supports permission control for clusters in security mode. For clusters in non-security mode, permission control is not performed.In security mode, HetuEngine ",
|
||
"product_code":"mrs",
|
||
"title":"HetuEngine Permission Management Overview",
|
||
"uri":"mrs_01_1722.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"304",
|
||
"code":"305"
|
||
},
|
||
{
|
||
"desc":"Before using the HetuEngine service in a security cluster, a cluster administrator needs to create a user and grant operation permissions to the user to meet service requ",
|
||
"product_code":"mrs",
|
||
"title":"Creating a HetuEngine User",
|
||
"uri":"mrs_01_1714.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"304",
|
||
"code":"306"
|
||
},
|
||
{
|
||
"desc":"Newly installed clusters use Ranger for authentication by default. System administrators can use Ranger to configure the permissions to manage databases, tables, and colu",
|
||
"product_code":"mrs",
|
||
"title":"HetuEngine Ranger-based Permission Control",
|
||
"uri":"mrs_01_1723.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"304",
|
||
"code":"307"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"HetuEngine MetaStore-based Permission Control",
|
||
"uri":"mrs_01_1724.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"304",
|
||
"code":"308"
|
||
},
|
||
{
|
||
"desc":"Constraints: This parameter applies only to the Hive data source.When multiple HetuEngine clusters are deployed for collaborative computing, the metadata is centrally man",
|
||
"product_code":"mrs",
|
||
"title":"Overview",
|
||
"uri":"mrs_01_1725.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"308",
|
||
"code":"309"
|
||
},
|
||
{
|
||
"desc":"The system administrator can create and set a HetuEngine role on FusionInsight Manager. The HetuEngine role can be configured with the HetuEngine administrator permission",
|
||
"product_code":"mrs",
|
||
"title":"Creating a HetuEngine Role",
|
||
"uri":"mrs_01_2350.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"308",
|
||
"code":"310"
|
||
},
|
||
{
|
||
"desc":"If a user needs to access HetuEngine tables or databases created by other users, the user needs to be granted with related permissions. HetuEngine supports permission con",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Permissions for Tables, Columns, and Databases",
|
||
"uri":"mrs_01_2352.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"308",
|
||
"code":"311"
|
||
},
|
||
{
|
||
"desc":"Access data sources in the same cluster using HetuEngineIf Ranger authentication is enabled for HetuEngine, the PBAC permission policy of Ranger is used for authenticatio",
|
||
"product_code":"mrs",
|
||
"title":"Permission Principles and Constraints",
|
||
"uri":"mrs_01_1728.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"304",
|
||
"code":"312"
|
||
},
|
||
{
|
||
"desc":"This section describes how to create a HetuEngine compute instance. If you want to stop the cluster where compute instances are successfully created, you need to manually",
|
||
"product_code":"mrs",
|
||
"title":"Creating HetuEngine Compute Instances",
|
||
"uri":"mrs_01_1731.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"302",
|
||
"code":"313"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Data Sources",
|
||
"uri":"mrs_01_2314.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"302",
|
||
"code":"314"
|
||
},
|
||
{
|
||
"desc":"HetuEngine supports quick joint query of multiple data sources and GUI-based data source configuration and management. You can quickly add a data source on the HSConsole ",
|
||
"product_code":"mrs",
|
||
"title":"Before You Start",
|
||
"uri":"mrs_01_2315.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"314",
|
||
"code":"315"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Configuring a Hive Data Source",
|
||
"uri":"mrs_01_24174.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"314",
|
||
"code":"316"
|
||
},
|
||
{
|
||
"desc":"This section describes how to add a Hive data source of the same Hadoop cluster as HetuEngine on HSConsole.Currently, HetuEngine supports data sources of the following tr",
|
||
"product_code":"mrs",
|
||
"title":"Configuring a Co-deployed Hive Data Source",
|
||
"uri":"mrs_01_24253.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"316",
|
||
"code":"317"
|
||
},
|
||
{
|
||
"desc":"This section describes how to add a Hive data source on HSConsole.Currently, HetuEngine supports data sources of the following traditional data formats: AVRO, TEXT, RCTEX",
|
||
"product_code":"mrs",
|
||
"title":"Configuring a Traditional Data Source",
|
||
"uri":"mrs_01_2348.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"316",
|
||
"code":"318"
|
||
},
|
||
{
|
||
"desc":"HetuEngine can be connected to the Hudi data source of the cluster of MRS 3.1.1 or later.HetuEngine does not support the reading of Hudi bootstrap tables.You have created",
|
||
"product_code":"mrs",
|
||
"title":"Configuring a Hudi Data Source",
|
||
"uri":"mrs_01_2363.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"316",
|
||
"code":"319"
|
||
},
|
||
{
|
||
"desc":"This section describes how to add an HBase data source on HSConsole.The domain name of the cluster where the data source is located must be different from the HetuEngine ",
|
||
"product_code":"mrs",
|
||
"title":"Configuring an HBase Data Source",
|
||
"uri":"mrs_01_2349.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"314",
|
||
"code":"320"
|
||
},
|
||
{
|
||
"desc":"This section describes how to add a GaussDB JDBC data source on the HSConsole page.The domain name of the cluster where the data source is located must be different from ",
|
||
"product_code":"mrs",
|
||
"title":"Configuring a GaussDB Data Source",
|
||
"uri":"mrs_01_2351.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"314",
|
||
"code":"321"
|
||
},
|
||
{
|
||
"desc":"This section describes how to add another HetuEngine data source on the HSConsole page for a cluster in security mode.Currently, the following data sources are supported:",
|
||
"product_code":"mrs",
|
||
"title":"Configuring a HetuEngine Data Source",
|
||
"uri":"mrs_01_1719.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"314",
|
||
"code":"322"
|
||
},
|
||
{
|
||
"desc":"Currently, HetuEngine supports the interconnection with the ClickHouse data source in the cluster of MRS 3.1.1 or later.The HetuEngine cluster in security mode supports t",
|
||
"product_code":"mrs",
|
||
"title":"Configuring a ClickHouse Data Source",
|
||
"uri":"mrs_01_24146.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"314",
|
||
"code":"323"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Managing Data Sources",
|
||
"uri":"mrs_01_1720.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"302",
|
||
"code":"324"
|
||
},
|
||
{
|
||
"desc":"On the HetuEngine web UI, you can view, edit, and delete an added data source.You have created a HetuEngine administrator for accessing the HetuEngine web UI. For details",
|
||
"product_code":"mrs",
|
||
"title":"Managing an External Data Source",
|
||
"uri":"mrs_01_24061.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"324",
|
||
"code":"325"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Managing Compute Instances",
|
||
"uri":"mrs_01_1729.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"302",
|
||
"code":"326"
|
||
},
|
||
{
|
||
"desc":"The resource group mechanism controls the overall query load of the instance from the perspective of resource allocation and implements queuing policies for queries. Mult",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Resource Groups",
|
||
"uri":"mrs_01_1732.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"326",
|
||
"code":"327"
|
||
},
|
||
{
|
||
"desc":"On the HetuEngine web UI, you can adjust the number of worker nodes for a compute instance. In this way, resources can be expanded for the compute instance when resources",
|
||
"product_code":"mrs",
|
||
"title":"Adjusting the Number of Worker Nodes",
|
||
"uri":"mrs_01_2320.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"326",
|
||
"code":"328"
|
||
},
|
||
{
|
||
"desc":"On the HetuEngine web UI, you can start, stop, delete, and roll-restart a single compute instance or compute instances in batches.Restarting HetuEngineDuring the restart ",
|
||
"product_code":"mrs",
|
||
"title":"Managing a HetuEngine Compute Instance",
|
||
"uri":"mrs_01_1736.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"326",
|
||
"code":"329"
|
||
},
|
||
{
|
||
"desc":"On the HetuEngine web UI, you can import or export the instance configuration file and download the instance configuration template.You have created a user for accessing ",
|
||
"product_code":"mrs",
|
||
"title":"Importing and Exporting Compute Instance Configurations",
|
||
"uri":"mrs_01_1733.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"326",
|
||
"code":"330"
|
||
},
|
||
{
|
||
"desc":"On the HetuEngine web UI, you can view the detailed information about a specified service, including the execution status of each SQL statement. If the current cluster us",
|
||
"product_code":"mrs",
|
||
"title":"Viewing the Instance Monitoring Page",
|
||
"uri":"mrs_01_1734.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"326",
|
||
"code":"331"
|
||
},
|
||
{
|
||
"desc":"On the HetuEngine web UI, you can view Coordinator and Worker logs on the Yarn web UI.You have created a user for accessing the HetuEngine web UI. For details, see Creati",
|
||
"product_code":"mrs",
|
||
"title":"Viewing Coordinator and Worker Logs",
|
||
"uri":"mrs_01_1735.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"326",
|
||
"code":"332"
|
||
},
|
||
{
|
||
"desc":"By default, coordinator and worker nodes randomly start on Yarn NodeManager nodes, and you have to open all ports on all NodeManager nodes. Using resource labels of Yarn,",
|
||
"product_code":"mrs",
|
||
"title":"Using Resource Labels to Specify on Which Node Coordinators Should Run",
|
||
"uri":"mrs_01_24260.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"326",
|
||
"code":"333"
|
||
},
|
||
{
|
||
"desc":"If a compute instance is not created or started, you can log in to the HetuEngine client to create or start the compute instance. This section describes how to manage a c",
|
||
"product_code":"mrs",
|
||
"title":"Using the HetuEngine Client",
|
||
"uri":"mrs_01_1737.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"302",
|
||
"code":"334"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using the HetuEngine Cross-Source Function",
|
||
"uri":"mrs_01_1738.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"302",
|
||
"code":"335"
|
||
},
|
||
{
|
||
"desc":"Enterprises usually store massive data, such as from various databases and warehouses, for management and information collection. However, diversified data sources, hybri",
|
||
"product_code":"mrs",
|
||
"title":"Introduction to HetuEngine Cross-Source Function",
|
||
"uri":"mrs_01_1739.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"335",
|
||
"code":"336"
|
||
},
|
||
{
|
||
"desc":"The format of the statement for creating a mapping table is as follows:CREATE TABLE schemaName.tableName (\n rowId VARCHAR,\n qualifier1 TINYINT,\n qualifier2 SMALLINT,\n ",
|
||
"product_code":"mrs",
|
||
"title":"Usage Guide of HetuEngine Cross-Source Function",
|
||
"uri":"mrs_01_2341.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"335",
|
||
"code":"337"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using HetuEngine Cross-Domain Function",
|
||
"uri":"mrs_01_2342.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"302",
|
||
"code":"338"
|
||
},
|
||
{
|
||
"desc":"HetuEngine provide unified standard SQL to implement efficient access to multiple data sources distributed in multiple regions (or data centers), shields data differences",
|
||
"product_code":"mrs",
|
||
"title":"Introduction to HetuEngine Cross-Source Function",
|
||
"uri":"mrs_01_2334.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"338",
|
||
"code":"339"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"HetuEngine Cross-Domain Function Usage",
|
||
"uri":"mrs_01_2335.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"338",
|
||
"code":"340"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"HetuEngine Cross-Domain Rate Limit Function",
|
||
"uri":"mrs_01_24284.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"338",
|
||
"code":"341"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using a Third-Party Visualization Tool to Access HetuEngine",
|
||
"uri":"mrs_01_2336.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"302",
|
||
"code":"342"
|
||
},
|
||
{
|
||
"desc":"To access the dual-plane environment, the cluster service plane must be able to communicate with the local Windows environment.",
|
||
"product_code":"mrs",
|
||
"title":"Usage Instruction",
|
||
"uri":"mrs_01_24178.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"342",
|
||
"code":"343"
|
||
},
|
||
{
|
||
"desc":"This section uses DBeaver 6.3.5 as an example to describe how to perform operations on HetuEngine.The DBeaver has been installed properly. Download the DBeaver software f",
|
||
"product_code":"mrs",
|
||
"title":"Using DBeaver to Access HetuEngine",
|
||
"uri":"mrs_01_2337.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"342",
|
||
"code":"344"
|
||
},
|
||
{
|
||
"desc":"Tableau has been installed.The JDBC JAR file has been obtained. For details, see 1.A human-machine user has been created in the cluster. For details about how to create a",
|
||
"product_code":"mrs",
|
||
"title":"Using Tableau to Access HetuEngine",
|
||
"uri":"mrs_01_24010.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"342",
|
||
"code":"345"
|
||
},
|
||
{
|
||
"desc":"PowerBI has been installed.The JDBC JAR file has been obtained. For details, see 1.A human-machine user has been created in the cluster. For details about how to create a",
|
||
"product_code":"mrs",
|
||
"title":"Using PowerBI to Access HetuEngine",
|
||
"uri":"mrs_01_24012.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"342",
|
||
"code":"346"
|
||
},
|
||
{
|
||
"desc":"Yonghong BI has been installed.The JDBC JAR file has been obtained. For details, see 1.A human-machine user has been created in the cluster. For details about how to crea",
|
||
"product_code":"mrs",
|
||
"title":"Using Yonghong BI to Access HetuEngine",
|
||
"uri":"mrs_01_24013.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"342",
|
||
"code":"347"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Function & UDF Development and Application",
|
||
"uri":"mrs_01_2338.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"302",
|
||
"code":"348"
|
||
},
|
||
{
|
||
"desc":"You can customize functions to extend SQL statements to meet personalized requirements. These functions are called UDFs.This section describes how to develop and apply He",
|
||
"product_code":"mrs",
|
||
"title":"HetuEngine Function Plugin Development and Application",
|
||
"uri":"mrs_01_2339.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"348",
|
||
"code":"349"
|
||
},
|
||
{
|
||
"desc":"You can customize functions to extend SQL statements to meet personalized requirements. These functions are called UDFs.This section describes how to develop and apply Hi",
|
||
"product_code":"mrs",
|
||
"title":"Hive UDF Development and Application",
|
||
"uri":"mrs_01_1743.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"348",
|
||
"code":"350"
|
||
},
|
||
{
|
||
"desc":"Log paths:The HetuEngine logs are stored in /var/log/Bigdata/hetuengine/ and /var/log/Bigdata/audit/hetuengine/.Log archiving rules:Log archiving rules use the FixedWindo",
|
||
"product_code":"mrs",
|
||
"title":"Introduction to HetuEngine Logs",
|
||
"uri":"mrs_01_1744.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"302",
|
||
"code":"351"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"HetuEngine Performance Tuning",
|
||
"uri":"mrs_01_1745.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"302",
|
||
"code":"352"
|
||
},
|
||
{
|
||
"desc":"HetuEngine depends on the resource allocation and control capabilities provided by Yarn. You need to adjust the Yarn service configuration based on the actual service and",
|
||
"product_code":"mrs",
|
||
"title":"Adjusting the Yarn Service Configuration",
|
||
"uri":"mrs_01_1740.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"352",
|
||
"code":"353"
|
||
},
|
||
{
|
||
"desc":"The default memory size and disk overflow path of HetuEngine are not the best. You need to adjust node resources in the cluster based on the actual service and server con",
|
||
"product_code":"mrs",
|
||
"title":"Adjusting Cluster Node Resource Configurations",
|
||
"uri":"mrs_01_1741.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"352",
|
||
"code":"354"
|
||
},
|
||
{
|
||
"desc":"HetuEngine provides the execution plan cache function. For the same query that needs to be executed for multiple times, this function reduces the time required for genera",
|
||
"product_code":"mrs",
|
||
"title":"Adjusting Execution Plan Cache",
|
||
"uri":"mrs_01_1742.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"352",
|
||
"code":"355"
|
||
},
|
||
{
|
||
"desc":"When HetuEngine accesses the Hive data source, it needs to access the Hive metastore to obtain the metadata information. HetuEngine provides the metadata cache function. ",
|
||
"product_code":"mrs",
|
||
"title":"Adjusting Metadata Cache",
|
||
"uri":"mrs_01_1746.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"352",
|
||
"code":"356"
|
||
},
|
||
{
|
||
"desc":"If a table or common table expression (CTE) contained in a query appears multiple times and has the same projection and filter, you can enable the CTE reuse function to c",
|
||
"product_code":"mrs",
|
||
"title":"Modifying the CTE Configuration",
|
||
"uri":"mrs_01_24181.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"352",
|
||
"code":"357"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Common Issues About HetuEngine",
|
||
"uri":"mrs_01_1747.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"302",
|
||
"code":"358"
|
||
},
|
||
{
|
||
"desc":"After the domain name is changed, the installed client configuration and data source configuration become invalid, and the created cluster is unavailable. When data sourc",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Perform Operations After the Domain Name Is Changed?",
|
||
"uri":"mrs_01_2321.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"358",
|
||
"code":"359"
|
||
},
|
||
{
|
||
"desc":"If the cluster startup on the client takes a long time, the waiting times out and the waiting page exits.If the cluster startup times out, the waiting page automatically ",
|
||
"product_code":"mrs",
|
||
"title":"What Do I Do If Starting a Cluster on the Client Times Out?",
|
||
"uri":"mrs_01_2322.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"358",
|
||
"code":"360"
|
||
},
|
||
{
|
||
"desc":"Why is the data source lost when I log in to the client to check the data source connected to the HSConsole page?The possible cause of data source loss is that the DBServ",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Handle Data Source Loss?",
|
||
"uri":"mrs_01_2323.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"358",
|
||
"code":"361"
|
||
},
|
||
{
|
||
"desc":"Log in to FusionInsight Manager and HetuEngine alarms are generated for the cluster.Log in to FusionInsight Manager, go to the O&M page, and view alarm details. You can c",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Handle HetuEngine Alarms?",
|
||
"uri":"mrs_01_2329.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"358",
|
||
"code":"362"
|
||
},
|
||
{
|
||
"desc":"A new host is added to the cluster in security mode, the NodeManager instance is added, and the parameters of the HetuEngine compute instance are adjusted. After the Hetu",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Do If Coordinators and Workers Cannot Be Started on the New Node?",
|
||
"uri":"mrs_01_24050.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"358",
|
||
"code":"363"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using Hive",
|
||
"uri":"mrs_01_0581.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"",
|
||
"code":"364"
|
||
},
|
||
{
|
||
"desc":"Hive is a data warehouse framework built on Hadoop. It maps structured data files to a database table and provides SQL-like functions to analyze and process data. It also",
|
||
"product_code":"mrs",
|
||
"title":"Using Hive from Scratch",
|
||
"uri":"mrs_01_0442.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"365"
|
||
},
|
||
{
|
||
"desc":"Go to the Hive configurations page by referring to Modifying Cluster Service Configuration Parameters.",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Hive Parameters",
|
||
"uri":"mrs_01_0582.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"366"
|
||
},
|
||
{
|
||
"desc":"Hive SQL supports all features of Hive-3.1.0. For details, see https://cwiki.apache.org/confluence/display/hive/languagemanual.Table 1 describes the extended Hive stateme",
|
||
"product_code":"mrs",
|
||
"title":"Hive SQL",
|
||
"uri":"mrs_01_2330.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"367"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Permission Management",
|
||
"uri":"mrs_01_0947.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"368"
|
||
},
|
||
{
|
||
"desc":"Hive is a data warehouse framework built on Hadoop. It provides basic data analysis services using the Hive query language (HQL), a language like the structured query lan",
|
||
"product_code":"mrs",
|
||
"title":"Hive Permission",
|
||
"uri":"mrs_01_0948.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"368",
|
||
"code":"369"
|
||
},
|
||
{
|
||
"desc":"This section describes how to create and configure a Hive role on Manager as the system administrator. The Hive role can be granted the permissions of the Hive administra",
|
||
"product_code":"mrs",
|
||
"title":"Creating a Hive Role",
|
||
"uri":"mrs_01_0949.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"368",
|
||
"code":"370"
|
||
},
|
||
{
|
||
"desc":"You can configure related permissions if you need to access tables or databases created by other users. Hive supports column-based permission control. If a user needs to ",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Permissions for Hive Tables, Columns, or Databases",
|
||
"uri":"mrs_01_0950.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"368",
|
||
"code":"371"
|
||
},
|
||
{
|
||
"desc":"Hive may need to be associated with other components. For example, Yarn permissions are required in the scenario of using HQL statements to trigger MapReduce jobs, and HB",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Permissions to Use Other Components for Hive",
|
||
"uri":"mrs_01_0951.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"368",
|
||
"code":"372"
|
||
},
|
||
{
|
||
"desc":"This section guides users to use a Hive client in an O&M or service scenario.The client has been installed. For example, the client is installed in the /opt/hadoopclient ",
|
||
"product_code":"mrs",
|
||
"title":"Using a Hive Client",
|
||
"uri":"mrs_01_0952.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"373"
|
||
},
|
||
{
|
||
"desc":"HDFS Colocation is the data location control function provided by HDFS. The HDFS Colocation API stores associated data or data on which associated operations are performe",
|
||
"product_code":"mrs",
|
||
"title":"Using HDFS Colocation to Store Hive Tables",
|
||
"uri":"mrs_01_0953.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"374"
|
||
},
|
||
{
|
||
"desc":"Hive supports encryption of one or more columns in a table. When creating a Hive table, you can specify the columns to be encrypted and encryption algorithm. When data is",
|
||
"product_code":"mrs",
|
||
"title":"Using the Hive Column Encryption Function",
|
||
"uri":"mrs_01_0954.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"375"
|
||
},
|
||
{
|
||
"desc":"In most cases, a carriage return character is used as the row delimiter in Hive tables stored in text files, that is, the carriage return character is used as the termina",
|
||
"product_code":"mrs",
|
||
"title":"Customizing Row Separators",
|
||
"uri":"mrs_01_0955.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"376"
|
||
},
|
||
{
|
||
"desc":"Due to the limitations of underlying storage systems, Hive does not support the ability to delete a single piece of table data. In Hive on HBase, MRS Hive supports the ab",
|
||
"product_code":"mrs",
|
||
"title":"Deleting Single-Row Records from Hive on HBase",
|
||
"uri":"mrs_01_0956.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"377"
|
||
},
|
||
{
|
||
"desc":"WebHCat provides external REST APIs for Hive. By default, the open-source community version uses the HTTP protocol.MRS Hive supports the HTTPS protocol that is more secur",
|
||
"product_code":"mrs",
|
||
"title":"Configuring HTTPS/HTTP-based REST APIs",
|
||
"uri":"mrs_01_0957.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"378"
|
||
},
|
||
{
|
||
"desc":"The Transform function is not allowed by Hive of the open source version.MRS Hive supports the configuration of the Transform function. The function is disabled by defaul",
|
||
"product_code":"mrs",
|
||
"title":"Enabling or Disabling the Transform Function",
|
||
"uri":"mrs_01_0958.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"379"
|
||
},
|
||
{
|
||
"desc":"This section describes how to create a view on Hive when MRS is configured in security mode, authorize access permissions to different users, and specify that different u",
|
||
"product_code":"mrs",
|
||
"title":"Access Control of a Dynamic Table View on Hive",
|
||
"uri":"mrs_01_0959.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"380"
|
||
},
|
||
{
|
||
"desc":"You must have ADMIN permission when creating temporary functions on Hive of the open source community version.MRS Hive supports the configuration of the function for crea",
|
||
"product_code":"mrs",
|
||
"title":"Specifying Whether the ADMIN Permissions Is Required for Creating Temporary Functions",
|
||
"uri":"mrs_01_0960.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"381"
|
||
},
|
||
{
|
||
"desc":"Hive allows users to create external tables to associate with other relational databases. External tables read data from associated relational databases and support Join ",
|
||
"product_code":"mrs",
|
||
"title":"Using Hive to Read Data in a Relational Database",
|
||
"uri":"mrs_01_0961.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"382"
|
||
},
|
||
{
|
||
"desc":"Hive supports the following types of traditional relational database syntax:GroupingEXCEPT and INTERSECTSyntax description:Grouping takes effect only when the Group by st",
|
||
"product_code":"mrs",
|
||
"title":"Supporting Traditional Relational Database Syntax in Hive",
|
||
"uri":"mrs_01_0962.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"383"
|
||
},
|
||
{
|
||
"desc":"When built-in functions of Hive cannot meet requirements, you can compile user-defined functions (UDFs) and use them for query.According to implementation methods, UDFs a",
|
||
"product_code":"mrs",
|
||
"title":"Creating User-Defined Hive Functions",
|
||
"uri":"mrs_01_0963.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"384"
|
||
},
|
||
{
|
||
"desc":"When the beeline client is disconnected due to network exceptions during the execution of a batch processing task, tasks submitted before beeline is disconnected can be p",
|
||
"product_code":"mrs",
|
||
"title":"Enhancing beeline Reliability",
|
||
"uri":"mrs_01_0965.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"385"
|
||
},
|
||
{
|
||
"desc":"This function is applicable to Hive and Spark2x in.With this function enabled, if the select permission is granted to a user during Hive table creation, the user can run ",
|
||
"product_code":"mrs",
|
||
"title":"Viewing Table Structures Using the show create Statement as Users with the select Permission",
|
||
"uri":"mrs_01_0966.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"386"
|
||
},
|
||
{
|
||
"desc":"This function applies to Hive.After this function is enabled, run the following command to write a directory into Hive: insert overwrite directory \"/path1\".... After the ",
|
||
"product_code":"mrs",
|
||
"title":"Writing a Directory into Hive with the Old Data Removed to the Recycle Bin",
|
||
"uri":"mrs_01_0967.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"387"
|
||
},
|
||
{
|
||
"desc":"This function applies to Hive.With this function enabled, run the insert overwrite directory/path1/path2/path3... command to write a subdirectory. The permission of the /",
|
||
"product_code":"mrs",
|
||
"title":"Inserting Data to a Directory That Does Not Exist",
|
||
"uri":"mrs_01_0968.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"388"
|
||
},
|
||
{
|
||
"desc":"This function is applicable to Hive and Spark2x.After this function is enabled, only the Hive administrator can create databases and tables in the default database. Other",
|
||
"product_code":"mrs",
|
||
"title":"Creating Databases and Creating Tables in the Default Database Only as the Hive Administrator",
|
||
"uri":"mrs_01_0969.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"389"
|
||
},
|
||
{
|
||
"desc":"This function is applicable to Hive and Spark2x.After this function is enabled, the location keyword cannot be specified when a Hive internal table is created. Specifical",
|
||
"product_code":"mrs",
|
||
"title":"Disabling of Specifying the location Keyword When Creating an Internal Hive Table",
|
||
"uri":"mrs_01_0970.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"390"
|
||
},
|
||
{
|
||
"desc":"This function is applicable to Hive and Spark2x.After this function is enabled, the user or user group that has the read and execute permissions on a directory can create",
|
||
"product_code":"mrs",
|
||
"title":"Enabling the Function of Creating a Foreign Table in a Directory That Can Only Be Read",
|
||
"uri":"mrs_01_0971.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"391"
|
||
},
|
||
{
|
||
"desc":"This function applies to Hive.The number of OS user groups is limited, and the number of roles that can be created in Hive cannot exceed 32. After this function is enable",
|
||
"product_code":"mrs",
|
||
"title":"Authorizing Over 32 Roles in Hive",
|
||
"uri":"mrs_01_0972.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"392"
|
||
},
|
||
{
|
||
"desc":"This function applies to Hive.This function is used to limit the maximum number of maps for Hive tasks on the server to avoid performance deterioration caused by overload",
|
||
"product_code":"mrs",
|
||
"title":"Restricting the Maximum Number of Maps for Hive Tasks",
|
||
"uri":"mrs_01_0973.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"393"
|
||
},
|
||
{
|
||
"desc":"This function applies to Hive.This function can be enabled to specify specific users to access HiveServer services on specific nodes, achieving HiveServer resource isolat",
|
||
"product_code":"mrs",
|
||
"title":"HiveServer Lease Isolation",
|
||
"uri":"mrs_01_0974.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"394"
|
||
},
|
||
{
|
||
"desc":"Hive supports transactions at the table and partition levels. When the transaction mode is enabled, transaction tables can be incrementally updated, deleted, and read, im",
|
||
"product_code":"mrs",
|
||
"title":"Hive Supporting Transactions",
|
||
"uri":"mrs_01_0975.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"395"
|
||
},
|
||
{
|
||
"desc":"Hive can use the Tez engine to process data computing tasks. Before executing a task, you can manually switch the execution engine to Tez.The TimelineServer role of the Y",
|
||
"product_code":"mrs",
|
||
"title":"Switching the Hive Execution Engine to Tez",
|
||
"uri":"mrs_01_1750.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"396"
|
||
},
|
||
{
|
||
"desc":"RDS indicates the relational database in this section. This section describes how to connect Hive with the open-source MySQL and Postgres databases.After an external meta",
|
||
"product_code":"mrs",
|
||
"title":"Connecting Hive with External RDS",
|
||
"uri":"mrs_01_1751.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"397"
|
||
},
|
||
{
|
||
"desc":"The MetaStore service of Hive can cache the metadata of some tables in Redis.The Redis service has been installed in a cluster.If the cluster is installed in non-security",
|
||
"product_code":"mrs",
|
||
"title":"Redis-based CacheStore of HiveMetaStore",
|
||
"uri":"mrs_01_2302.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"398"
|
||
},
|
||
{
|
||
"desc":"A Hive materialized view is a special table obtained based on the query results of Hive internal tables. A materialized view can be considered as an intermediate table th",
|
||
"product_code":"mrs",
|
||
"title":"Hive Materialized View",
|
||
"uri":"mrs_01_2311.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"399"
|
||
},
|
||
{
|
||
"desc":"A Hudi source table corresponds to a copy of HDFS data. The Hudi table data can be mapped to a Hive external table through the Spark component, Flink component, or Hudi c",
|
||
"product_code":"mrs",
|
||
"title":"Hive Supporting Reading Hudi Tables",
|
||
"uri":"mrs_01_24040.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"400"
|
||
},
|
||
{
|
||
"desc":"The metadata that have not been used for a long time is moved to a backup table to reduce the pressure on metadata databases. This process is called partitioned data free",
|
||
"product_code":"mrs",
|
||
"title":"Hive Supporting Cold and Hot Storage of Partitioned Metadata",
|
||
"uri":"mrs_01_24118.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"401"
|
||
},
|
||
{
|
||
"desc":"Zstandard (ZSTD) is an open-source lossless data compression algorithm. Its compression performance and compression ratio are better than those of other compression algor",
|
||
"product_code":"mrs",
|
||
"title":"Hive Supporting ZSTD Compression Formats",
|
||
"uri":"mrs_01_24121.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"402"
|
||
},
|
||
{
|
||
"desc":"Log path: The default save path of Hive logs is /var/log/Bigdata/hive/role name, the default save path of Hive1 logs is /var/log/Bigdata/hive1/role name, and the others f",
|
||
"product_code":"mrs",
|
||
"title":"Hive Log Overview",
|
||
"uri":"mrs_01_0976.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"403"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Hive Performance Tuning",
|
||
"uri":"mrs_01_0977.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"404"
|
||
},
|
||
{
|
||
"desc":"During the Select query, Hive generally scans the entire table, which is time-consuming. To improve query efficiency, create table partitions based on service requirement",
|
||
"product_code":"mrs",
|
||
"title":"Creating Table Partitions",
|
||
"uri":"mrs_01_0978.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"404",
|
||
"code":"405"
|
||
},
|
||
{
|
||
"desc":"When the Join statement is used, the command execution speed and query speed may be slow in case of large data volume. To resolve this problem, you can optimize Join.Join",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing Join",
|
||
"uri":"mrs_01_0979.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"404",
|
||
"code":"406"
|
||
},
|
||
{
|
||
"desc":"Optimize the Group by statement to accelerate the command execution and query speed.During the Group by operation, Map performs grouping and distributes the groups to Red",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing Group By",
|
||
"uri":"mrs_01_0980.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"404",
|
||
"code":"407"
|
||
},
|
||
{
|
||
"desc":"ORC is an efficient column storage format and has higher compression ratio and reading efficiency than other file formats.You are advised to use ORC as the default Hive t",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing Data Storage",
|
||
"uri":"mrs_01_0981.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"404",
|
||
"code":"408"
|
||
},
|
||
{
|
||
"desc":"When SQL statements are executed on Hive, if the (a&b) or (a&c) logic exists in the statements, you are advised to change the logic to a & (b or c).If condition a is p_pa",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing SQL Statements",
|
||
"uri":"mrs_01_0982.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"404",
|
||
"code":"409"
|
||
},
|
||
{
|
||
"desc":"When joining multiple tables in Hive, Hive supports Cost-Based Optimization (CBO). The system automatically selects the optimal plan based on the table statistics, such a",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing the Query Function Using Hive CBO",
|
||
"uri":"mrs_01_0983.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"404",
|
||
"code":"410"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Common Issues About Hive",
|
||
"uri":"mrs_01_1752.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"364",
|
||
"code":"411"
|
||
},
|
||
{
|
||
"desc":"How can I delete permanent user-defined functions (UDFs) on multiple HiveServers at the same time?Multiple HiveServers share one MetaStore database. Therefore, there is a",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Delete UDFs on Multiple HiveServers at the Same Time?",
|
||
"uri":"mrs_01_1753.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"411",
|
||
"code":"412"
|
||
},
|
||
{
|
||
"desc":"Why cannot the DROP operation be performed for a backed up Hive table?Snapshots have been created for an HDFS directory mapping to the backed up Hive table, so the HDFS d",
|
||
"product_code":"mrs",
|
||
"title":"Why Cannot the DROP operation Be Performed on a Backed-up Hive Table?",
|
||
"uri":"mrs_01_1754.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"411",
|
||
"code":"413"
|
||
},
|
||
{
|
||
"desc":"How to perform operations on local files (such as reading the content of a file) with Hive user-defined functions?By default, you can perform operations on local files wi",
|
||
"product_code":"mrs",
|
||
"title":"How to Perform Operations on Local Files with Hive User-Defined Functions",
|
||
"uri":"mrs_01_1755.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"411",
|
||
"code":"414"
|
||
},
|
||
{
|
||
"desc":"How do I stop a MapReduce task manually if the task is suspended for a long time?",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Forcibly Stop MapReduce Jobs Executed by Hive?",
|
||
"uri":"mrs_01_1756.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"411",
|
||
"code":"415"
|
||
},
|
||
{
|
||
"desc":"How do I monitor the Hive table size?The HDFS refined monitoring function allows you to monitor the size of a specified table directory.The Hive and HDFS components are r",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Monitor the Hive Table Size?",
|
||
"uri":"mrs_01_1758.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"411",
|
||
"code":"416"
|
||
},
|
||
{
|
||
"desc":"How do I prevent key directories from data loss caused by misoperations of the insert overwrite statement?During monitoring of key Hive databases, tables, or directories,",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Prevent Key Directories from Data Loss Caused by Misoperations of the insert overwrite Statement?",
|
||
"uri":"mrs_01_1759.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"411",
|
||
"code":"417"
|
||
},
|
||
{
|
||
"desc":"This function applies to Hive.Perform the following operations to configure parameters. When Hive on Spark tasks are executed in the environment where the HBase is not in",
|
||
"product_code":"mrs",
|
||
"title":"Why Is Hive on Spark Task Freezing When HBase Is Not Installed?",
|
||
"uri":"mrs_01_1760.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"411",
|
||
"code":"418"
|
||
},
|
||
{
|
||
"desc":"When a table with more than 32,000 partitions is created in Hive, an exception occurs during the query with the WHERE partition. In addition, the exception information pr",
|
||
"product_code":"mrs",
|
||
"title":"Error Reported When the WHERE Condition Is Used to Query Tables with Excessive Partitions in FusionInsight Hive",
|
||
"uri":"mrs_01_1761.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"411",
|
||
"code":"419"
|
||
},
|
||
{
|
||
"desc":"When users check the JDK version used by the client, if the JDK version is IBM JDK, the Beeline client needs to be reconstructed. Otherwise, the client will fail to conne",
|
||
"product_code":"mrs",
|
||
"title":"Why Cannot I Connect to HiveServer When I Use IBM JDK to Access the Beeline Client?",
|
||
"uri":"mrs_01_1762.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"411",
|
||
"code":"420"
|
||
},
|
||
{
|
||
"desc":"Does a Hive Table Can Be Stored Either in OBS or HDFS?The location of a common Hive table stored on OBS can be set to an HDFS path.In the same Hive service, you can creat",
|
||
"product_code":"mrs",
|
||
"title":"Description of Hive Table Location (Either Be an OBS or HDFS Path)",
|
||
"uri":"mrs_01_1763.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"411",
|
||
"code":"421"
|
||
},
|
||
{
|
||
"desc":"Hive uses the Tez engine to execute union-related statements to write data. After Hive is switched to the MapReduce engine for query, no data is found.When Hive uses the ",
|
||
"product_code":"mrs",
|
||
"title":"Why Cannot Data Be Queried After the MapReduce Engine Is Switched After the Tez Engine Is Used to Execute Union-related Statements?",
|
||
"uri":"mrs_01_2309.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"411",
|
||
"code":"422"
|
||
},
|
||
{
|
||
"desc":"Why Does Data Inconsistency Occur When Data Is Concurrently Written to a Hive Table Through an API?Hive does not support concurrent data insertion for the same table or p",
|
||
"product_code":"mrs",
|
||
"title":"Why Does Hive Not Support Concurrent Data Writing to the Same Table or Partition?",
|
||
"uri":"mrs_01_2310.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"411",
|
||
"code":"423"
|
||
},
|
||
{
|
||
"desc":"When the vectorized parameterhive.vectorized.execution.enabled is set to true, why do some null pointers or type conversion exceptions occur occasionally when Hive on Tez",
|
||
"product_code":"mrs",
|
||
"title":"Why Does Hive Not Support Vectorized Query?",
|
||
"uri":"mrs_01_2325.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"411",
|
||
"code":"424"
|
||
},
|
||
{
|
||
"desc":"The error message \"java.lang.OutOfMemoryError: Java heap space.\" is displayed during Hive SQL execution.Solution:For MapReduce tasks, increase the values of the following",
|
||
"product_code":"mrs",
|
||
"title":"Hive Configuration Problems",
|
||
"uri":"mrs_01_24117.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"411",
|
||
"code":"425"
|
||
},
|
||
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Hudi",
"uri":"mrs_01_24025.html",
"doc_type":"cmpntguide-lts",
"p_code":"",
"code":"426"
},
{
"desc":"This section describes capabilities of Hudi using spark-shell. Using the Spark data source, this section describes how to insert and update a Hudi dataset of the default ",
"product_code":"mrs",
"title":"Quick Start",
"uri":"mrs_01_24033.html",
"doc_type":"cmpntguide-lts",
"p_code":"426",
"code":"427"
},
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Basic Operations",
|
||
"uri":"mrs_01_24062.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"426",
|
||
"code":"428"
|
||
},
|
||
{
|
||
"desc":"When writing data, Hudi generates a Hudi table based on attributes such as the storage path, table name, and partition structure.Hudi table data files can be stored in th",
|
||
"product_code":"mrs",
|
||
"title":"Hudi Table Schema",
|
||
"uri":"mrs_01_24103.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"428",
|
||
"code":"429"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Write",
|
||
"uri":"mrs_01_24034.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"428",
|
||
"code":"430"
|
||
},
|
||
{
|
||
"desc":"Hudi provides multiple write modes. For details, see the configuration item hoodie.datasource.write.operation. This section describes upsert, insert, and bulk_insert.inse",
|
||
"product_code":"mrs",
|
||
"title":"Batch Write",
|
||
"uri":"mrs_01_24035.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"430",
|
||
"code":"431"
|
||
},
|
||
{
|
||
"desc":"The HoodieDeltaStreamer tool provided by Hudi supports stream write. You can also use SparkStreaming to write data in microbatch mode. HoodieDeltaStreamer provides the fo",
|
||
"product_code":"mrs",
|
||
"title":"Stream Write",
|
||
"uri":"mrs_01_24036.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"430",
|
||
"code":"432"
|
||
},
|
||
{
|
||
"desc":"The bootstrapping function provided by Hudi converts historical tables into Hudi tables without any change by generating Hoodie management files based on historical Parqu",
|
||
"product_code":"mrs",
|
||
"title":"Bootstrapping",
|
||
"uri":"mrs_01_24069.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"430",
|
||
"code":"433"
|
||
},
|
||
{
|
||
"desc":"You can run run_hive_sync_tool.sh to synchronize data in the Hudi table to Hive.For example, run the following command to synchronize the Hudi table in the hdfs://haclust",
|
||
"product_code":"mrs",
|
||
"title":"Synchronizing Hudi Table Data to Hive",
|
||
"uri":"mrs_01_24064.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"430",
|
||
"code":"434"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Read",
|
||
"uri":"mrs_01_24037.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"428",
|
||
"code":"435"
|
||
},
|
||
{
|
||
"desc":"Reading the real-time view (using Hive and SparkSQL as an example): Directly read the Hudi table stored in Hive.select count(*) from test;Reading the real-time view (usin",
|
||
"product_code":"mrs",
|
||
"title":"Reading COW Table Views",
|
||
"uri":"mrs_01_24098.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"435",
|
||
"code":"436"
|
||
},
|
||
{
|
||
"desc":"After the MOR table is synchronized to Hive, the following two tables are synchronized to Hive: Table name_rt and Table name_ro. The table suffixed with rt indicates the ",
|
||
"product_code":"mrs",
|
||
"title":"Reading MOR Table Views",
|
||
"uri":"mrs_01_24099.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"435",
|
||
"code":"437"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Data Management and Maintenance",
|
||
"uri":"mrs_01_24038.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"428",
|
||
"code":"438"
|
||
},
|
||
{
|
||
"desc":"IntroductionA metadata table is a special Hudi metadata table, which is hidden from users. The table stores metadata of a common Hudi table.The metadata table is included",
|
||
"product_code":"mrs",
|
||
"title":"Metadata Table",
|
||
"uri":"mrs_01_24164.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"438",
|
||
"code":"439"
|
||
},
|
||
{
|
||
"desc":"Clustering reorganizes data layout to improve query performance without affecting the ingestion speed.Hudi provides different operations, such as insert, upsert, and bulk",
|
||
"product_code":"mrs",
|
||
"title":"Clustering",
|
||
"uri":"mrs_01_24088.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"438",
|
||
"code":"440"
|
||
},
|
||
{
|
||
"desc":"Cleaning is used to delete data of versions that are no longer required.Hudi uses the cleaner working in the background to continuously delete unnecessary data of old ver",
|
||
"product_code":"mrs",
|
||
"title":"Cleaning",
|
||
"uri":"mrs_01_24089.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"438",
|
||
"code":"441"
|
||
},
|
||
{
"desc":"A compaction merges base and log files of MOR tables.For MOR tables, data is stored in columnar Parquet files and row-based Avro files, and updates are recorded in incrementa",
"product_code":"mrs",
"title":"Compaction",
"uri":"mrs_01_24090.html",
"doc_type":"cmpntguide-lts",
"p_code":"438",
"code":"442"
},
{
|
||
"desc":"Savepoints are used to save and restore data of the customized version.Savepoints provided by Hudi can save different commits so that the cleaner program does not delete ",
|
||
"product_code":"mrs",
|
||
"title":"Savepoint",
|
||
"uri":"mrs_01_24091.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"438",
|
||
"code":"443"
|
||
},
|
||
{
|
||
"desc":"Uses an external service (ZooKeeper or Hive MetaStore) as the distributed mutex lock service.Files can be concurrently written, but commits cannot be concurrent. The comm",
|
||
"product_code":"mrs",
|
||
"title":"Single-Table Concurrent Write",
|
||
"uri":"mrs_01_24165.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"438",
|
||
"code":"444"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using the Hudi Client",
|
||
"uri":"mrs_01_24100.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"428",
|
||
"code":"445"
|
||
},
|
||
{
|
||
"desc":"You have created a user and added the user to user groups hadoop and hive on Manager.The Hudi client has been downloaded and installed.Log in to the client node as user r",
|
||
"product_code":"mrs",
|
||
"title":"Operating a Hudi Table Using hudi-cli.sh",
|
||
"uri":"mrs_01_24063.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"445",
|
||
"code":"446"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Configuration Reference",
|
||
"uri":"mrs_01_24032.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"428",
|
||
"code":"447"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Write Configuration",
|
||
"uri":"mrs_01_24093.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"447",
|
||
"code":"448"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Configuration of Hive Table Synchronization",
|
||
"uri":"mrs_01_24094.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"447",
|
||
"code":"449"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Index Configuration",
|
||
"uri":"mrs_01_24095.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"447",
|
||
"code":"450"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Storage Configuration",
|
||
"uri":"mrs_01_24096.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"447",
|
||
"code":"451"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Compaction and Cleaning Configurations",
|
||
"uri":"mrs_01_24097.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"447",
|
||
"code":"452"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Metadata Table Configuration",
|
||
"uri":"mrs_01_24166.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"447",
|
||
"code":"453"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Single-Table Concurrent Write Configuration",
|
||
"uri":"mrs_01_24167.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"447",
|
||
"code":"454"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Hudi Performance Tuning",
|
||
"uri":"mrs_01_24039.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"426",
|
||
"code":"455"
|
||
},
|
||
{
|
||
"desc":"In the current version, Spark is recommended for Hudi write operations. Therefore, the tuning methods of Hudi are similar to those of Spark. For details, see Spark2x Perf",
|
||
"product_code":"mrs",
|
||
"title":"Performance Tuning Methods",
|
||
"uri":"mrs_01_24101.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"455",
|
||
"code":"456"
|
||
},
|
||
{
|
||
"desc":"For MOR tables:The essence of MOR tables is to write incremental files, so the tuning is based on the data size (dataSize) of Hudi.If dataSize is only several GBs, you ar",
|
||
"product_code":"mrs",
|
||
"title":"Recommended Resource Configuration",
|
||
"uri":"mrs_01_24102.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"455",
|
||
"code":"457"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Hudi SQL Syntax Reference",
|
||
"uri":"mrs_01_24261.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"426",
|
||
"code":"458"
|
||
},
|
||
{
|
||
"desc":"Hudi 0.9.0 adds Spark SQL DDL and DML statements for using Hudi, making it easier for all users (including non-engineers or analysts) to access and operate Hudi.You can u",
|
||
"product_code":"mrs",
|
||
"title":"Constraints",
|
||
"uri":"mrs_01_24262.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"458",
|
||
"code":"459"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"DDL",
|
||
"uri":"mrs_01_24263.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"458",
|
||
"code":"460"
|
||
},
|
||
{
|
||
"desc":"This command is used to create a Hudi table by specifying the list of fields along with the table options.CREATE TABLE [ IF NOT EXISTS] [database_name.]table_name[ (colu",
|
||
"product_code":"mrs",
|
||
"title":"CREATE TABLE",
|
||
"uri":"mrs_01_24264.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"460",
|
||
"code":"461"
|
||
},
|
||
{
|
||
"desc":"This command is used to create a Hudi table by specifying the list of fields along with the table options.CREATE TABLE [ IF NOT EXISTS] [database_name.]table_nameUSING h",
|
||
"product_code":"mrs",
|
||
"title":"CREATE TABLE AS SELECT",
|
||
"uri":"mrs_01_24265.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"460",
|
||
"code":"462"
|
||
},
|
||
{
"desc":"This command is used to delete an existing table.DROP TABLE [IF EXISTS] [db_name.]table_name;In this command, IF EXISTS and db_name are optional.DROP TABLE IF EXISTS hudi",
"product_code":"mrs",
"title":"DROP TABLE",
"uri":"mrs_01_24266.html",
"doc_type":"cmpntguide-lts",
"p_code":"460",
"code":"463"
},
{
"desc":"This command is used to display all tables in the current database or all tables in a specific database.SHOW TABLES [IN db_name];IN db_Name is optional. It is required only w",
"product_code":"mrs",
"title":"SHOW TABLE",
"uri":"mrs_01_24267.html",
"doc_type":"cmpntguide-lts",
"p_code":"460",
"code":"464"
},
{
"desc":"This command is used to rename an existing table.ALTER TABLE oldTableName RENAME TO newTableName. The table name is changed. You can run the SHOW TABLES command to display the",
"product_code":"mrs",
"title":"ALTER RENAME TABLE",
"uri":"mrs_01_24268.html",
"doc_type":"cmpntguide-lts",
"p_code":"460",
"code":"465"
},
{
"desc":"This command is used to add columns to an existing table.ALTER TABLE tableIdentifier ADD COLUMNS(colAndType (,colAndType)*). The columns are added to the table. You can run t",
"product_code":"mrs",
"title":"ALTER ADD COLUMNS",
"uri":"mrs_01_24269.html",
"doc_type":"cmpntguide-lts",
"p_code":"460",
"code":"466"
},
{
"desc":"This command is used to clear all data in a specific table.TRUNCATE TABLE tableIdentifier. Data in the table is cleared. You can run the QUERY statement to check whether dat",
"product_code":"mrs",
"title":"TRUNCATE TABLE",
"uri":"mrs_01_24271.html",
"doc_type":"cmpntguide-lts",
"p_code":"460",
"code":"467"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"DML",
"uri":"mrs_01_24272.html",
"doc_type":"cmpntguide-lts",
"p_code":"458",
"code":"468"
},
{
"desc":"This command is used to insert the output of the SELECT statement to a Hudi table.INSERT INTO tableIdentifier select query;Insert mode: Hudi supports three insert modes f",
"product_code":"mrs",
"title":"INSERT INTO",
"uri":"mrs_01_24273.html",
"doc_type":"cmpntguide-lts",
"p_code":"468",
"code":"469"
},
{
"desc":"This command is used to query another table based on the join condition of a table or subquery. If UPDATE or DELETE is executed for the table matching the join condition,",
"product_code":"mrs",
"title":"MERGE INTO",
"uri":"mrs_01_24274.html",
"doc_type":"cmpntguide-lts",
"p_code":"468",
"code":"470"
},
{
|
||
"desc":"This command is used to update the Hudi table based on the column expression and optional filtering conditions.UPDATE tableIdentifier SET column = EXPRESSION(,column = EX",
|
||
"product_code":"mrs",
|
||
"title":"UPDATE",
|
||
"uri":"mrs_01_24275.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"468",
|
||
"code":"471"
|
||
},
|
||
{
|
||
"desc":"This command is used to delete records from a Hudi table.DELETE from tableIdentifier [ WHERE boolExpression]Example 1:delete from h0 where column1 = 'country';Example 2:d",
|
||
"product_code":"mrs",
|
||
"title":"DELETE",
|
||
"uri":"mrs_01_24276.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"468",
|
||
"code":"472"
|
||
},
|
||
{
|
||
"desc":"This command is used to convert row-based log files in MOR tables into column-based data files in parquet tables to accelerate record search.SCHEDULE COMPACTION on tableI",
|
||
"product_code":"mrs",
|
||
"title":"COMPACTION",
|
||
"uri":"mrs_01_24277.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"468",
|
||
"code":"473"
|
||
},
|
||
{
|
||
"desc":"This command is used to dynamically add, update, display, or reset Hudi parameters without restarting the driver.Add or update a parameter value:SET parameter_name=parame",
|
||
"product_code":"mrs",
|
||
"title":"SET/RESET",
|
||
"uri":"mrs_01_24278.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"468",
|
||
"code":"474"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Common Issues About Hudi",
|
||
"uri":"mrs_01_24065.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"426",
|
||
"code":"475"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Data Write",
|
||
"uri":"mrs_01_24070.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"475",
|
||
"code":"476"
|
||
},
|
||
{
"desc":"The following error is reported when data is written:You are advised to evolve schemas in backward compatible mode while using Hudi. This error usually occurs when you de",
"product_code":"mrs",
"title":"Parquet/Avro Schema Error Is Reported When Updated Data Is Written",
"uri":"mrs_01_24071.html",
"doc_type":"cmpntguide-lts",
"p_code":"476",
"code":"477"
},
{
|
||
"desc":"The following error is reported when data is written:This error will occur again because schema evolutions are in non-backwards compatible mode. Basically, there is some ",
|
||
"product_code":"mrs",
|
||
"title":"UnsupportedOperationException Is Reported When Updated Data Is Written",
|
||
"uri":"mrs_01_24072.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"476",
|
||
"code":"478"
|
||
},
|
||
{
|
||
"desc":"The following error is reported when data is written:This error may occur if a schema contains some non-nullable field whose value is not present or is null.You are advis",
|
||
"product_code":"mrs",
|
||
"title":"SchemaCompatabilityException Is Reported When Updated Data Is Written",
|
||
"uri":"mrs_01_24073.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"476",
|
||
"code":"479"
|
||
},
|
||
{
|
||
"desc":"Hudi consumes much space in a temporary folder during upsert.Hudi will spill part of input data to disk if the maximum memory for merge is reached when much input data is",
|
||
"product_code":"mrs",
|
||
"title":"What Should I Do If Hudi Consumes Much Space in a Temporary Folder During Upsert?",
|
||
"uri":"mrs_01_24074.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"476",
|
||
"code":"480"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Data Collection",
|
||
"uri":"mrs_01_24075.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"475",
|
||
"code":"481"
|
||
},
|
||
{
|
||
"desc":"The error \"org.apache.kafka.common.KafkaException: Failed to construct kafka consumer\" is reported in the main thread, and the following error is reported.This error may ",
|
||
"product_code":"mrs",
|
||
"title":"IllegalArgumentException Is Reported When Kafka Is Used to Collect Data",
|
||
"uri":"mrs_01_24077.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"481",
|
||
"code":"482"
|
||
},
|
||
{
|
||
"desc":"The following error is reported when data is collected:This error usually occurs when a field marked as recordKey or partitionKey is not present in the input record. Cros",
|
||
"product_code":"mrs",
|
||
"title":"HoodieException Is Reported When Data Is Collected",
|
||
"uri":"mrs_01_24078.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"481",
|
||
"code":"483"
|
||
},
|
||
{
|
||
"desc":"Is it possible to use a nullable field that contains null records as a primary key when creating a Hudi table?No. HoodieKeyException will be thrown.",
|
||
"product_code":"mrs",
|
||
"title":"HoodieKeyException Is Reported When Data Is Collected",
|
||
"uri":"mrs_01_24079.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"481",
|
||
"code":"484"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Hive Synchronization",
|
||
"uri":"mrs_01_24080.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"475",
|
||
"code":"485"
|
||
},
|
||
{
|
||
"desc":"The following error is reported during Hive data synchronization:This error usually occurs when you try to add a new column to an existing Hive table using the HiveSyncTo",
|
||
"product_code":"mrs",
|
||
"title":"SQLException Is Reported During Hive Data Synchronization",
|
||
"uri":"mrs_01_24081.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"485",
|
||
"code":"486"
|
||
},
|
||
{
|
||
"desc":"The following error is reported during Hive data synchronization:This error occurs because HiveSyncTool currently supports only few compatible data type conversions. The ",
|
||
"product_code":"mrs",
|
||
"title":"HoodieHiveSyncException Is Reported During Hive Data Synchronization",
|
||
"uri":"mrs_01_24082.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"485",
|
||
"code":"487"
|
||
},
|
||
{
|
||
"desc":"The following error is reported during Hive data synchronization:This error usually occurs when Hive synchronization is performed on the Hudi dataset but the configured h",
|
||
"product_code":"mrs",
|
||
"title":"SemanticException Is Reported During Hive Data Synchronization",
|
||
"uri":"mrs_01_24083.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"485",
|
||
"code":"488"
|
||
},
|
||
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Hue",
"uri":"mrs_01_0130.html",
"doc_type":"cmpntguide-lts",
"p_code":"",
"code":"489"
},
{
|
||
"desc":"Hue aggregates interfaces which interact with most Apache Hadoop components and enables you to use Hadoop components with ease on a web UI. You can operate components suc",
|
||
"product_code":"mrs",
|
||
"title":"Using Hue from Scratch",
|
||
"uri":"mrs_01_0131.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"489",
|
||
"code":"490"
|
||
},
|
||
{
|
||
"desc":"After Hue is installed in an MRS cluster, users can use Hadoop-related components on the Hue web UI.This section describes how to open the Hue web UI on the MRS cluster.T",
|
||
"product_code":"mrs",
|
||
"title":"Accessing the Hue Web UI",
|
||
"uri":"mrs_01_0132.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"489",
|
||
"code":"491"
|
||
},
|
||
{
|
||
"desc":"Go to the All Configurations page of the Hue service by referring to Modifying Cluster Service Configuration Parameters.For details about Hue common parameters, see Table",
|
||
"product_code":"mrs",
|
||
"title":"Hue Common Parameters",
|
||
"uri":"mrs_01_0133.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"489",
|
||
"code":"492"
|
||
},
|
||
{
|
||
"desc":"Users can use the Hue web UI to execute HiveQL statements in an MRS cluster.Hive supports the following functions:Executes and manages HiveQL statements.Views the HiveQL ",
|
||
"product_code":"mrs",
|
||
"title":"Using HiveQL Editor on the Hue Web UI",
|
||
"uri":"mrs_01_0134.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"489",
|
||
"code":"493"
|
||
},
|
||
{
|
||
"desc":"Users can use the Hue web UI to manage Hive metadata in an MRS cluster.Access the Hue web UI. For details, see Accessing the Hue Web UI.Viewing metadata of Hive tablesCli",
|
||
"product_code":"mrs",
|
||
"title":"Using the Metadata Browser on the Hue Web UI",
|
||
"uri":"mrs_01_0135.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"489",
|
||
"code":"494"
|
||
},
|
||
{
|
||
"desc":"Users can use the Hue web UI to manage files in HDFS.The Hue page is used to view and analyze data such as files and tables. Do not perform high-risk management operation",
|
||
"product_code":"mrs",
|
||
"title":"Using File Browser on the Hue Web UI",
|
||
"uri":"mrs_01_0136.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"489",
|
||
"code":"495"
|
||
},
|
||
{
|
||
"desc":"Users can use the Hue web UI to query all jobs in an MRS cluster.View the jobs in the current cluster.The number on Job Browser indicates the total number of jobs in the ",
|
||
"product_code":"mrs",
|
||
"title":"Using Job Browser on the Hue Web UI",
|
||
"uri":"mrs_01_0137.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"489",
|
||
"code":"496"
|
||
},
|
||
{
|
||
"desc":"You can use Hue to create or query HBase tables in a cluster and run tasks on the Hue web UI.",
|
||
"product_code":"mrs",
|
||
"title":"Using HBase on the Hue Web UI",
|
||
"uri":"mrs_01_2371.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"489",
|
||
"code":"497"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenarios",
|
||
"uri":"mrs_01_0138.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"489",
|
||
"code":"498"
|
||
},
|
||
{
|
||
"desc":"Hue provides the file browser function for users to use HDFS in GUI mode.The Hue page is used to view and analyze data such as files and tables. Do not perform high-risk ",
|
||
"product_code":"mrs",
|
||
"title":"HDFS on Hue",
|
||
"uri":"mrs_01_0139.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"498",
|
||
"code":"499"
|
||
},
|
||
{
"desc":"Hue provides the Hive GUI management function so that users can query Hive data in GUI mode.Access the Hue web UI. For details, see Accessing the Hue Web UI.In the naviga",
"product_code":"mrs",
"title":"Hive on Hue",
"uri":"mrs_01_0141.html",
"doc_type":"cmpntguide-lts",
"p_code":"498",
"code":"500"
},
{
"desc":"Hue provides the Oozie job manager function so that you can use Oozie in GUI mode.The Hue page is used to view and analyze data such as files and tables. Do not pe",
"product_code":"mrs",
"title":"Oozie on Hue",
"uri":"mrs_01_0144.html",
"doc_type":"cmpntguide-lts",
"p_code":"498",
"code":"501"
},
{
|
||
"desc":"Log paths: The default paths of Hue logs are /var/log/Bigdata/hue (for storing run logs) and /var/log/Bigdata/audit/hue (for storing audit logs).Log archive rules: The au",
|
||
"product_code":"mrs",
|
||
"title":"Hue Log Overview",
|
||
"uri":"mrs_01_0147.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"489",
|
||
"code":"502"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Common Issues About Hue",
|
||
"uri":"mrs_01_1764.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"489",
|
||
"code":"503"
|
||
},
|
||
{
"desc":"What do I do if all HQL statements fail to be executed when I use Internet Explorer to access Hive Editor in Hue and the message \"There was an error with your query\" is d",
"product_code":"mrs",
"title":"How Do I Solve the Problem that HQL Fails to Be Executed in Hue Using Internet Explorer?",
"uri":"mrs_01_1765.html",
"doc_type":"cmpntguide-lts",
"p_code":"503",
"code":"504"
},
{
"desc":"When Hive is used, the use database statement is entered in the text box to switch the database, and other statements are also entered. Why does the database fail to be s",
"product_code":"mrs",
"title":"Why Does the use database Statement Become Invalid When Hive Is Used?",
"uri":"mrs_01_1766.html",
"doc_type":"cmpntguide-lts",
"p_code":"503",
"code":"505"
},
{
|
||
"desc":"What can I do if an error message shown in the following figure is displayed, indicating that the HDFS file cannot be accessed when I use Hue web UI to access the HDFS fi",
|
||
"product_code":"mrs",
|
||
"title":"What Can I Do If HDFS Files Fail to Be Accessed Using Hue WebUI?",
|
||
"uri":"mrs_01_0156.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"503",
|
||
"code":"506"
|
||
},
|
||
{
|
||
"desc":"If the Hive service is not installed in the cluster, the native Hue service page is blank.In the current version, Hue depends on the Hive component. If this occurs, check",
|
||
"product_code":"mrs",
|
||
"title":"Hue Page Cannot Be Displayed When the Hive Service Is Not Installed in a Cluster",
|
||
"uri":"mrs_01_2368.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"503",
|
||
"code":"507"
|
||
},
|
||
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Kafka",
"uri":"mrs_01_0375.html",
"doc_type":"cmpntguide-lts",
"p_code":"",
"code":"508"
},
{
|
||
"desc":"You can create, query, and delete topics on a cluster client.The client has been installed. For example, the client is installed in the /opt/hadoopclient directory. The c",
|
||
"product_code":"mrs",
|
||
"title":"Using Kafka from Scratch",
|
||
"uri":"mrs_01_1031.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"509"
|
||
},
|
||
{
|
||
"desc":"You can manage Kafka topics on a cluster client based on service requirements. Management permission is required for clusters with Kerberos authentication enabled.You hav",
|
||
"product_code":"mrs",
|
||
"title":"Managing Kafka Topics",
|
||
"uri":"mrs_01_0376.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"510"
|
||
},
|
||
{
|
||
"desc":"You can query existing Kafka topics on MRS.Log in to FusionInsight Manager. For details, see Accessing FusionInsight Manager. Choose Cluster > Name of the desired cluster",
|
||
"product_code":"mrs",
|
||
"title":"Querying Kafka Topics",
|
||
"uri":"mrs_01_0377.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"511"
|
||
},
|
||
{
|
||
"desc":"For clusters with Kerberos authentication enabled, using Kafka requires relevant permissions. MRS clusters can grant the use permission of Kafka to different users.Table ",
|
||
"product_code":"mrs",
|
||
"title":"Managing Kafka User Permissions",
|
||
"uri":"mrs_01_0378.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"512"
|
||
},
|
||
{
|
||
"desc":"You can produce or consume messages in Kafka topics using the MRS cluster client. For clusters with Kerberos authentication enabled, you must have the permission to perfo",
|
||
"product_code":"mrs",
|
||
"title":"Managing Messages in Kafka Topics",
|
||
"uri":"mrs_01_0379.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"513"
|
||
},
|
||
{
|
||
"desc":"This section describes how to create and configure a Kafka role.Users can create Kafka roles only in security mode.If the current component uses Ranger for permission con",
|
||
"product_code":"mrs",
|
||
"title":"Creating a Kafka Role",
|
||
"uri":"mrs_01_1032.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"514"
|
||
},
|
||
{
|
||
"desc":"For details about how to set parameters, see Modifying Cluster Service Configuration Parameters.",
|
||
"product_code":"mrs",
|
||
"title":"Kafka Common Parameters",
|
||
"uri":"mrs_01_1033.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"515"
|
||
},
|
||
{
"desc":"Producer API: Indicates the API defined in org.apache.kafka.clients.producer.KafkaProducer. When kafka-console-producer.sh is used, the API is used by default.Indicates the",
"product_code":"mrs",
"title":"Safety Instructions on Using Kafka",
"uri":"mrs_01_1035.html",
"doc_type":"cmpntguide-lts",
"p_code":"508",
"code":"516"
},
{
|
||
"desc":"The maximum number of topics depends on the number of file handles (mainly used by data and index files on site) opened in the process.Run the ulimit -n command to view t",
|
||
"product_code":"mrs",
|
||
"title":"Kafka Specifications",
|
||
"uri":"mrs_01_1036.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"517"
|
||
},
|
||
{
|
||
"desc":"This section guides users to use a Kafka client in an O&M or service scenario.The client has been installed. For example, the installation directory is /opt/client.Servic",
|
||
"product_code":"mrs",
|
||
"title":"Using the Kafka Client",
|
||
"uri":"mrs_01_1767.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"518"
|
||
},
|
||
{
|
||
"desc":"For the Kafka message transmission assurance mechanism, different parameters are available for meeting different performance and reliability requirements. This section de",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Kafka HA and High Reliability Parameters",
|
||
"uri":"mrs_01_1037.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"519"
|
||
},
|
||
{
"desc":"When a broker storage directory is added, the system administrator needs to change the broker storage directory on FusionInsight Manager, to ensure that the Kafka can wor",
"product_code":"mrs",
"title":"Changing the Broker Storage Directory",
"uri":"mrs_01_1038.html",
"doc_type":"cmpntguide-lts",
"p_code":"508",
"code":"520"
},
{
"desc":"This section describes how to view the current consumption status of a consumer group on the client based on service requirements.The system administrator has understood service requirements and pr",
"product_code":"mrs",
"title":"Checking the Consumption Status of Consumer Group",
"uri":"mrs_01_1039.html",
"doc_type":"cmpntguide-lts",
"p_code":"508",
"code":"521"
},
{
|
||
"desc":"This section describes how to use the Kafka balancing tool on a client to balance the load of the Kafka cluster based on service requirements in scenarios such as node de",
|
||
"product_code":"mrs",
|
||
"title":"Kafka Balancing Tool Instructions",
|
||
"uri":"mrs_01_1040.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"522"
|
||
},
|
||
{
|
||
"desc":"Operations need to be performed on tokens when the token authentication mechanism is used.The system administrator has understood service requirements and prepared a syst",
|
||
"product_code":"mrs",
|
||
"title":"Kafka Token Authentication Mechanism Tool Usage",
|
||
"uri":"mrs_01_1041.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"523"
|
||
},
|
||
{
|
||
"desc":"Feature description: The function of creating idempotent producers is introduced in Kafka 0.11.0.0. After this function is enabled, producers are automatically upgraded t",
|
||
"product_code":"mrs",
|
||
"title":"Kafka Feature Description",
|
||
"uri":"mrs_01_2312.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"524"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using Kafka UI",
|
||
"uri":"mrs_01_24130.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"525"
|
||
},
|
||
{
|
||
"desc":"After the Kafka component is installed in an MRS cluster, you can use Kafka UI to query cluster information, node status, topic partitions, and data production and consum",
|
||
"product_code":"mrs",
|
||
"title":"Accessing Kafka UI",
|
||
"uri":"mrs_01_24134.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"525",
|
||
"code":"526"
|
||
},
|
||
{
|
||
"desc":"After logging in to Kafka UI, you can view the basic information about the existing topics, brokers, and consumer groups in the current cluster on the home page. You can ",
|
||
"product_code":"mrs",
|
||
"title":"Kafka UI Overview",
|
||
"uri":"mrs_01_24135.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"525",
|
||
"code":"527"
|
||
},
|
||
{
|
||
"desc":"Create a topic on Kafka UI.You can click Advanced Options to set advanced topic parameters based on service requirements. Generally, retain the default values.In a cluste",
|
||
"product_code":"mrs",
|
||
"title":"Creating a Topic on Kafka UI",
|
||
"uri":"mrs_01_24136.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"525",
|
||
"code":"528"
|
||
},
|
||
{
|
||
"desc":"Migrate a partition on Kafka UI.In security mode, the user who migrates a partition must belong to the kafkaadmin user group. Otherwise, the operation fails due to authen",
|
||
"product_code":"mrs",
|
||
"title":"Migrating a Partition on Kafka UI",
|
||
"uri":"mrs_01_24137.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"525",
|
||
"code":"529"
|
||
},
|
||
{
|
||
"desc":"On Kafka UI, you can view topic details, modify topic configurations, add topic partitions, delete topics, and view the number of data records produced in different time ",
|
||
"product_code":"mrs",
|
||
"title":"Managing Topics on Kafka UI",
|
||
"uri":"mrs_01_24138.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"525",
|
||
"code":"530"
|
||
},
|
||
{
|
||
"desc":"On Kafka UI, you can view broker details and JMX metrics of the broker node data traffic.",
|
||
"product_code":"mrs",
|
||
"title":"Viewing Brokers on Kafka UI",
|
||
"uri":"mrs_01_24139.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"525",
|
||
"code":"531"
|
||
},
|
||
{
|
||
"desc":"On Kafka UI, you can view the basic information about a consumer group and the consumption status of topics in the group.MRS clusters do not support redirection by clicki",
|
||
"product_code":"mrs",
|
||
"title":"Viewing a Consumer Group on Kafka UI",
|
||
"uri":"mrs_01_24133.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"525",
|
||
"code":"532"
|
||
},
|
||
{
|
||
"desc":"Log paths: The default storage path of Kafka logs is /var/log/Bigdata/kafka. The default storage path of audit logs is /var/log/Bigdata/audit/kafka.Broker: /var/log/Bigda",
|
||
"product_code":"mrs",
|
||
"title":"Introduction to Kafka Logs",
|
||
"uri":"mrs_01_1042.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"533"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Performance Tuning",
|
||
"uri":"mrs_01_1043.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"534"
|
||
},
|
||
{
|
||
"desc":"You can modify Kafka server parameters to improve Kafka processing capabilities in specific service scenarios.Modify the service configuration parameters. For details, se",
|
||
"product_code":"mrs",
|
||
"title":"Kafka Performance Tuning",
|
||
"uri":"mrs_01_1044.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"534",
|
||
"code":"535"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Common Issues About Kafka",
|
||
"uri":"mrs_01_1768.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"508",
|
||
"code":"536"
|
||
},
|
||
{
|
||
"desc":"How do I delete a Kafka topic if it fails to be deleted?Possible cause 1: The delete.topic.enable configuration item is not set to true. The deletion can be performed onl",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Solve the Problem that Kafka Topics Cannot Be Deleted?",
|
||
"uri":"mrs_01_1769.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"536",
|
||
"code":"537"
|
||
},
|
||
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using Loader",
"uri":"mrs_01_0400.html",
"doc_type":"cmpntguide-lts",
"p_code":"",
"code":"538"
},
{
"desc":"For details about how to set parameters, see Modifying Cluster Service Configuration Parameters.Because it needs time to calculate the fault tolerance rate, you are r",
"product_code":"mrs",
"title":"Common Loader Parameters",
"uri":"mrs_01_1784.html",
"doc_type":"cmpntguide-lts",
"p_code":"538",
"code":"539"
},
{
|
||
"desc":"This section describes how to create and configure a Loader role on FusionInsight Manager. The Loader role can set Loader administrator permissions, job connections, job ",
|
||
"product_code":"mrs",
|
||
"title":"Creating a Loader Role",
|
||
"uri":"mrs_01_1085.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"538",
|
||
"code":"540"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Importing Data",
|
||
"uri":"mrs_01_1086.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"538",
|
||
"code":"541"
|
||
},
|
||
{
|
||
"desc":"Loader is an ETL tool that enables MRS to exchange data and files with external data sources, such as relational databases, SFTP servers, and FTP servers. It allows data ",
|
||
"product_code":"mrs",
|
||
"title":"Overview",
|
||
"uri":"mrs_01_1087.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"541",
|
||
"code":"542"
|
||
},
|
||
{
|
||
"desc":"This section describes how to import data from external data sources to MRS.Generally, you can manually manage data import and export jobs on the Loader UI. To use shell ",
|
||
"product_code":"mrs",
|
||
"title":"Importing Data Using Loader",
|
||
"uri":"mrs_01_1088.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"541",
|
||
"code":"543"
|
||
},
|
||
{
|
||
"desc":"Use Loader to import data from an SFTP server to HDFS or OBS.You have obtained the service username and password for creating a Loader job.You have had the permission to ",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Importing Data from an SFTP Server to HDFS or OBS",
|
||
"uri":"mrs_01_1089.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"541",
|
||
"code":"544"
|
||
},
|
||
{
|
||
"desc":"Use Loader to import data from an SFTP server to HBase.You have obtained the service username and password for creating a Loader job.You have had the permission to access",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Importing Data from an SFTP Server to HBase",
|
||
"uri":"mrs_01_1090.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"541",
|
||
"code":"545"
|
||
},
|
||
{
|
||
"desc":"Use Loader to import data from an SFTP server to Hive.You have obtained the service username and password for creating a Loader job.You have had the permission to access ",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Importing Data from an SFTP Server to Hive",
|
||
"uri":"mrs_01_1091.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"541",
|
||
"code":"546"
|
||
},
|
||
{
|
||
"desc":"Use Loader to import data from an SFTP server to Spark.You have obtained the service username and password for creating a Loader job.You have had the permission to access",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Importing Data from an SFTP Server to Spark",
|
||
"uri":"mrs_01_1092.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"541",
|
||
"code":"547"
|
||
},
|
||
{
|
||
"desc":"Use Loader to import data from an FTP server to HBase.You have obtained the service username and password for creating a Loader job.You have obtained the username and pas",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Importing Data from an FTP Server to HBase",
|
||
"uri":"mrs_01_1093.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"541",
|
||
"code":"548"
|
||
},
|
||
{
|
||
"desc":"Use Loader to import data from a relational database to HDFS or OBS.You have obtained the service username and password for creating a Loader job.You have had the permiss",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Importing Data from a Relational Database to HDFS or OBS",
|
||
"uri":"mrs_01_1094.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"541",
|
||
"code":"549"
|
||
},
|
||
{
|
||
"desc":"Use Loader to import data from a relational database to HBase.You have obtained the service username and password for creating a Loader job.You have had the permission to",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Importing Data from a Relational Database to HBase",
|
||
"uri":"mrs_01_1095.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"541",
|
||
"code":"550"
|
||
},
|
||
{
|
||
"desc":"Use Loader to import data from a relational database to Hive.You have obtained the service username and password for creating a Loader job.You have had the permission to ",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Importing Data from a Relational Database to Hive",
|
||
"uri":"mrs_01_1096.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"541",
|
||
"code":"551"
|
||
},
|
||
{
|
||
"desc":"Use Loader to import data from a relational database to Spark.You have obtained the service username and password for creating a Loader job.You have had the permission to",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Importing Data from a Relational Database to Spark",
|
||
"uri":"mrs_01_1097.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"541",
|
||
"code":"552"
|
||
},
|
||
{
|
||
"desc":"Use Loader to import data from HDFS or OBS to HBase.You have obtained the service username and password for creating a Loader job.You have had the permission to access th",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Importing Data from HDFS or OBS to HBase",
|
||
"uri":"mrs_01_1098.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"541",
|
||
"code":"553"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use Loader to import data from a relational database to ClickHouse using MySQL as an example.You have obtained the service username and pass",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Importing Data from a Relational Database to ClickHouse",
|
||
"uri":"mrs_01_24172.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"541",
|
||
"code":"554"
|
||
},
|
||
{
|
||
"desc":"Use Loader to import data from HDFS to ClickHouse.You have obtained the service username and password for creating a Loader job.You have had the permission to access the ",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Importing Data from HDFS to ClickHouse",
|
||
"uri":"mrs_01_24173.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"541",
|
||
"code":"555"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Exporting Data",
|
||
"uri":"mrs_01_1100.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"538",
|
||
"code":"556"
|
||
},
|
||
{
|
||
"desc":"Loader is an extract, transform, and load (ETL) tool for exchanging data and files between MRS and relational databases and file systems. You can use the Loader to export",
|
||
"product_code":"mrs",
|
||
"title":"Overview",
|
||
"uri":"mrs_01_1101.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"556",
|
||
"code":"557"
|
||
},
|
||
{
|
||
"desc":"This task enables you to export data from MRS to external data sources.Generally, users can manually manage data import and export jobs on the Loader UI. To use shell scr",
|
||
"product_code":"mrs",
|
||
"title":"Using Loader to Export Data",
|
||
"uri":"mrs_01_1102.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"556",
|
||
"code":"558"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use Loader to export data from HDFS/OBS to an SFTP server.You have obtained the service username and password for creating a Loader job.You ",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Exporting Data from HDFS/OBS to an SFTP Server",
|
||
"uri":"mrs_01_1103.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"556",
|
||
"code":"559"
|
||
},
|
||
{
|
||
"desc":"Use Loader to export data from HBase to an SFTP server.You have obtained the service username and password for creating a Loader job.You have had the permission to access",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Exporting Data from HBase to an SFTP Server",
|
||
"uri":"mrs_01_1104.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"556",
|
||
"code":"560"
|
||
},
|
||
{
|
||
"desc":"Use Loader to export data from Hive to an SFTP server.You have obtained the service username and password for creating a Loader job.You have had the permission to access ",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Exporting Data from Hive to an SFTP Server",
|
||
"uri":"mrs_01_1105.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"556",
|
||
"code":"561"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use Loader to export data from Spark to an SFTP server.You have obtained the service username and password for creating a Loader job.You hav",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Exporting Data from Spark to an SFTP Server",
|
||
"uri":"mrs_01_1106.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"556",
|
||
"code":"562"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use Loader to export data from HDFS/OBS to a relational database.You have obtained the service username and password for creating a Loader j",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Exporting Data from HDFS/OBS to a Relational Database",
|
||
"uri":"mrs_01_1107.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"556",
|
||
"code":"563"
|
||
},
|
||
{
|
||
"desc":"Use Loader to export data from HBase to a relational database.You have obtained the service username and password for creating a Loader job.You have had the permission to",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Exporting Data from HBase to a Relational Database",
|
||
"uri":"mrs_01_1108.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"556",
|
||
"code":"564"
|
||
},
|
||
{
|
||
"desc":"Use Loader to export data from Hive to a relational database.You have obtained the service username and password for creating a Loader job.You have had the permission to ",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Exporting Data from Hive to a Relational Database",
|
||
"uri":"mrs_01_1109.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"556",
|
||
"code":"565"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use Loader to export data from Spark to a relational database.You have obtained the service username and password for creating a Loader job.",
|
||
"product_code":"mrs",
|
||
"title":"Typical Scenario: Exporting Data from Spark to a Relational Database",
|
||
"uri":"mrs_01_1110.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"556",
|
||
"code":"566"
|
||
},
|
||
{
"desc":"This section describes how to use Loader to export data from HBase to HDFS/OBS.You have obtained the service username and password for creating a Loader job.You have had",
"product_code":"mrs",
"title":"Typical Scenario: Exporting Data from HBase to HDFS/OBS",
"uri":"mrs_01_1111.html",
"doc_type":"cmpntguide-lts",
"p_code":"556",
"code":"567"
},
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Job Management",
|
||
"uri":"mrs_01_1113.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"538",
|
||
"code":"568"
|
||
},
|
||
{
|
||
"desc":"Loader allows jobs to be migrated in batches from a group (source group) to another group (target group).The source group and target group exist.The current user has the ",
"product_code":"mrs",
"title":"Migrating Loader Jobs in Batches",
"uri":"mrs_01_1114.html",
"doc_type":"cmpntguide-lts",
"p_code":"568",
"code":"569"
},
{
"desc":"Loader allows existing jobs to be deleted in batches.The current user has the Edit permission for the jobs to be deleted or the Jobs Edit permission for the group to whic",
"product_code":"mrs",
"title":"Deleting Loader Jobs in Batches",
"uri":"mrs_01_1115.html",
"doc_type":"cmpntguide-lts",
"p_code":"568",
"code":"570"
},
{
"desc":"Loader allows all jobs of a configuration file to be imported in batches.The current user has the Jobs Edit permission of the group to which the jobs to be imported belon",
"product_code":"mrs",
"title":"Importing Loader Jobs in Batches",
"uri":"mrs_01_1116.html",
"doc_type":"cmpntguide-lts",
"p_code":"568",
"code":"571"
},
{
"desc":"Loader allows existing jobs to be exported in batches.The current user has the Edit permission for the jobs to be exported or the Jobs Edit permission of the group to whi",
"product_code":"mrs",
"title":"Exporting Loader Jobs in Batches",
"uri":"mrs_01_1117.html",
"doc_type":"cmpntguide-lts",
"p_code":"568",
"code":"572"
},
{
"desc":"Query the execution status and execution duration of a Loader job during routine maintenance. You can perform the following operations on the job:Dirty Data: Query data t",
"product_code":"mrs",
"title":"Viewing Historical Job Information",
"uri":"mrs_01_1118.html",
"doc_type":"cmpntguide-lts",
"p_code":"568",
"code":"573"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Operator Help",
"uri":"mrs_01_1119.html",
"doc_type":"cmpntguide-lts",
"p_code":"538",
"code":"574"
},
{
"desc":"Loader reads data at the source end, uses an input operator to convert data into fields by certain rules, use a conversion operator to clean or convert the fields, and fi",
|
||
"product_code":"mrs",
|
||
"title":"Overview",
|
||
"uri":"mrs_01_1120.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"574",
|
||
"code":"575"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Input Operators",
"uri":"mrs_01_1121.html",
"doc_type":"cmpntguide-lts",
"p_code":"574",
"code":"576"
},
{
"desc":"The CSV File Input operator imports all files that can be opened by using a text editor.Input: test filesOutput: fieldsEach data line is separated into multiple fields by",
|
||
"product_code":"mrs",
|
||
"title":"CSV File Input",
|
||
"uri":"mrs_01_1122.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"576",
|
||
"code":"577"
|
||
},
|
||
{
|
||
"desc":"The Fixed File Input operator converts each line in a file into multiple fields by character or byte of a configurable length.Input: text fileOutput: fieldsThe source fil",
"product_code":"mrs",
"title":"Fixed File Input",
"uri":"mrs_01_1123.html",
"doc_type":"cmpntguide-lts",
"p_code":"576",
"code":"578"
},
{
"desc":"Table Input operator converts specified columns in a relational database table into input fields of the same quantity.Input: table columnsOutput: fieldsFields are generat",
"product_code":"mrs",
"title":"Table Input",
"uri":"mrs_01_1124.html",
"doc_type":"cmpntguide-lts",
"p_code":"576",
"code":"579"
},
{
"desc":"The HBase Input operator converts specified columns in an HBase table into input fields of the same quantity.Input: HBase table columnsOutput: fieldsIf the HBase table na",
"product_code":"mrs",
"title":"HBase Input",
"uri":"mrs_01_1125.html",
"doc_type":"cmpntguide-lts",
"p_code":"576",
"code":"580"
},
{
"desc":"HTML Input operator imports a regular HTML file and converts elements in the HTML file into input fields.Input: HTML fileOutput: multiple fieldsparent tag is configured f",
"product_code":"mrs",
"title":"HTML Input",
"uri":"mrs_01_1126.html",
"doc_type":"cmpntguide-lts",
"p_code":"576",
"code":"581"
},
{
"desc":"The Hive Input operator converts specified columns in an HBase table into input fields of the same quantity.Input: Hive table columnsOutput: fieldsIf the Hive table name ",
|
||
"product_code":"mrs",
|
||
"title":"Hive input",
|
||
"uri":"mrs_01_1128.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"576",
|
||
"code":"582"
|
||
},
|
||
{
|
||
"desc":"The Spark Input operator converts specified columns in an SparkSQL table into input fields of the same quantity.Input: SparkSQL table columnOutput: fieldsIf the SparkSQL ",
|
||
"product_code":"mrs",
|
||
"title":"Spark Input",
|
||
"uri":"mrs_01_1129.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"576",
|
||
"code":"583"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Conversion Operators",
"uri":"mrs_01_1130.html",
"doc_type":"cmpntguide-lts",
"p_code":"574",
"code":"584"
},
{
"desc":"The Long Date Conversion operator performs long integer and date conversion.Input: fields to be convertedOutput: new fieldsIf the original data includes null values, no c",
"product_code":"mrs",
"title":"Long Date Conversion",
"uri":"mrs_01_1131.html",
"doc_type":"cmpntguide-lts",
"p_code":"584",
"code":"585"
},
{
"desc":"The null value conversion operator replaces null values with specified values.Input: fields with null valuesOutput: original fields with new valuesWhen field values are e",
"product_code":"mrs",
"title":"Null Value Conversion",
"uri":"mrs_01_1132.html",
"doc_type":"cmpntguide-lts",
"p_code":"584",
"code":"586"
},
{
"desc":"The Add Constants operator generates constant fields.Input: noneOutput: constant fieldsThis operator generates constant fields of the specified type.Use the CSV File Inpu",
"product_code":"mrs",
"title":"Constant Field Addition",
"uri":"mrs_01_1133.html",
"doc_type":"cmpntguide-lts",
"p_code":"584",
"code":"587"
},
{
"desc":"Generate Random operator configures new values as random value fields.Input: noneOutput: random value fieldsThe operator generates random value fields of specified type.U",
"product_code":"mrs",
"title":"Random Value Conversion",
"uri":"mrs_01_1134.html",
"doc_type":"cmpntguide-lts",
"p_code":"584",
"code":"588"
},
{
"desc":"The Concat Fields operator concatenates existing fields by using delimiters to generate new fields.Input: fields to be concatenatedOutput: new fieldsUse delimiters to con",
"product_code":"mrs",
"title":"Concat Fields",
"uri":"mrs_01_1135.html",
"doc_type":"cmpntguide-lts",
"p_code":"584",
"code":"589"
},
{
"desc":"The Extract Fields separates an existing field by using delimiters to generate new fields.Input: field to be separatedOutput: new fieldsThe value of the input field is se",
"product_code":"mrs",
"title":"Extract Fields",
"uri":"mrs_01_1136.html",
"doc_type":"cmpntguide-lts",
"p_code":"584",
"code":"590"
},
{
"desc":"The Modulo Integer operator performs modulo operations on integer fields to generate new fields.Input: integer fieldsOutput: new fieldsThe operator generates new fields a",
"product_code":"mrs",
"title":"Modulo Integer",
"uri":"mrs_01_1137.html",
"doc_type":"cmpntguide-lts",
"p_code":"584",
"code":"591"
},
{
"desc":"The String Cut operator cuts existing fields to generate new fields.Input: fields to be cutOutput: new fieldsstart position and end position are used to cut the original ",
"product_code":"mrs",
"title":"String Cut",
"uri":"mrs_01_1138.html",
"doc_type":"cmpntguide-lts",
"p_code":"584",
"code":"592"
},
{
"desc":"The EL Operation operator calculates field values and generates new fields. The algorithms that are currently supported include md5sum, sha1sum, sha256sum, and sha512sum.",
"product_code":"mrs",
"title":"EL Operation",
"uri":"mrs_01_1139.html",
"doc_type":"cmpntguide-lts",
"p_code":"584",
"code":"593"
},
{
"desc":"The String Operations operator converts the upper and lower cases of existing fields to generate new fields.Input: fields whose case is to be convertedOutput: new fields ",
"product_code":"mrs",
"title":"String Operations",
"uri":"mrs_01_1140.html",
"doc_type":"cmpntguide-lts",
"p_code":"584",
"code":"594"
},
{
"desc":"The String Reverse operator reverses existing fields to generate new fields.Input: fields to be reversedOutput: new fieldsValue reversal conversion is performed for field",
"product_code":"mrs",
"title":"String Reverse",
"uri":"mrs_01_1141.html",
"doc_type":"cmpntguide-lts",
"p_code":"584",
"code":"595"
},
{
"desc":"The String Trim operator clears spaces contained in existing fields to generate new fields.Input: fields whose spaces are to be clearedOutput: new fieldsClearing spaces a",
"product_code":"mrs",
"title":"String Trim",
"uri":"mrs_01_1142.html",
"doc_type":"cmpntguide-lts",
"p_code":"584",
"code":"596"
},
{
"desc":"This Filter Rows operator filters rows that contain triggering conditions by configuring logic conditions.Input: fields used to create filter conditionsOutput: noneWhen t",
"product_code":"mrs",
"title":"Filter Rows",
"uri":"mrs_01_1143.html",
"doc_type":"cmpntguide-lts",
"p_code":"584",
"code":"597"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Output Operators",
"uri":"mrs_01_1145.html",
"doc_type":"cmpntguide-lts",
"p_code":"574",
"code":"598"
},
{
"desc":"The Hive Output operator exports existing fields to specified columns of a Hive table.Input: fields to be exportedOutput: Hive tableThe field values are exported to the H",
"product_code":"mrs",
"title":"Hive Output",
"uri":"mrs_01_1146.html",
"doc_type":"cmpntguide-lts",
"p_code":"598",
"code":"599"
},
{
"desc":"The Spark Output operator exports existing fields to specified columns of a Spark SQL table.Input: fields to be exportedOutput: SparkSQL tableThe field values are exporte",
"product_code":"mrs",
"title":"Spark Output",
"uri":"mrs_01_1147.html",
"doc_type":"cmpntguide-lts",
"p_code":"598",
"code":"600"
},
{
"desc":"The Table Output operator exports output fields to specified columns in a relational database table.Input: fields to be exportedOutput: relational database tableThe field",
"product_code":"mrs",
"title":"Table Output",
"uri":"mrs_01_1148.html",
"doc_type":"cmpntguide-lts",
"p_code":"598",
"code":"601"
},
{
"desc":"The File Output operator uses delimiters to concatenate existing fields and exports new fields to a file.Input: fields to be exportedOutput: filesThe field is exported to",
"product_code":"mrs",
"title":"File Output",
"uri":"mrs_01_1149.html",
"doc_type":"cmpntguide-lts",
"p_code":"598",
"code":"602"
},
{
"desc":"The HBase Output operator exports existing fields to specified columns of an HBase Outputtable.Input: fields to be exportedOutput: HBase tableThe field values are exporte",
|
||
"product_code":"mrs",
|
||
"title":"HBase Output",
|
||
"uri":"mrs_01_1150.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"598",
|
||
"code":"603"
|
||
},
|
||
{
|
||
"desc":"The ClickHouse Output operator exports existing fields to specified columns of a ClickHouse table.Input: fields to be exportedOutput: ClickHouse tableThe field values are",
"product_code":"mrs",
"title":"ClickHouse Output",
"uri":"mrs_01_24177.html",
"doc_type":"cmpntguide-lts",
"p_code":"598",
"code":"604"
},
{
"desc":"This section describes how to associate, import, or export the field configuration information of an operator when creating or editing a Loader job.Associating the field ",
|
||
"product_code":"mrs",
|
||
"title":"Associating, Editing, Importing, or Exporting the Field Configuration of an Operator",
|
||
"uri":"mrs_01_1152.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"574",
|
||
"code":"605"
|
||
},
|
||
{
|
||
"desc":"When creating or editing Loader jobs, users can use macro definitions during parameter configuration. Then the parameters can be automatically changed to corresponding ma",
|
||
"product_code":"mrs",
|
||
"title":"Using Macro Definitions in Configuration Items",
|
||
"uri":"mrs_01_1153.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"574",
|
||
"code":"606"
|
||
},
|
||
{
|
||
"desc":"In Loader data import and export tasks, each operator defines different processing rules for null values and empty strings in raw data. Dirty data cannot be imported or e",
|
||
"product_code":"mrs",
|
||
"title":"Operator Data Processing Rules",
|
||
"uri":"mrs_01_1154.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"574",
|
||
"code":"607"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Client Tool Description",
|
||
"uri":"mrs_01_1155.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"538",
|
||
"code":"608"
|
||
},
|
||
{
|
||
"desc":"loader-tool is a Loader client tool. It consists of three tools: lt-ucc, lt-ucj, lt-ctl.Loader supports two modes, parameter mode and job template mode. Either mode can b",
|
||
"product_code":"mrs",
|
||
"title":"loader-tool Usage Guide",
|
||
"uri":"mrs_01_1157.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"608",
|
||
"code":"609"
|
||
},
|
||
{
|
||
"desc":"loader-tool can be used to create, update, query, and delete a connector or job by using a job template or setting parameters.This section describes how to use loader-too",
|
||
"product_code":"mrs",
|
||
"title":"loader-tool Usage Example",
|
||
"uri":"mrs_01_1158.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"608",
|
||
"code":"610"
|
||
},
|
||
{
|
||
"desc":"schedule-tool is used to submit jobs of SFTP data sources. You can modify the input path and file filtering criteria before submitting a job. You can modify the output pa",
|
||
"product_code":"mrs",
|
||
"title":"schedule-tool Usage Guide",
|
||
"uri":"mrs_01_1159.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"608",
|
||
"code":"611"
|
||
},
|
||
{
|
||
"desc":"After a job is created using the Loader WebUI or Loader-tool, use schedule-tool to execute the job.The Loader client has been installed and configured.cd /opt/hadoopclien",
|
||
"product_code":"mrs",
|
||
"title":"schedule-tool Usage Example",
|
||
"uri":"mrs_01_1160.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"608",
|
||
"code":"612"
|
||
},
|
||
{
|
||
"desc":"After a job is created using the Loader WebUI or loader-tool, use loader-backup to back up data.Only Loader jobs of data export support data backup.This tool is an intern",
|
||
"product_code":"mrs",
|
||
"title":"Using loader-backup to Back Up Job Data",
|
||
"uri":"mrs_01_1161.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"608",
|
||
"code":"613"
|
||
},
|
||
{
|
||
"desc":"Sqoop-shell is a shell tool of Loader. All its functions are implemented by executing the sqoop2-shell script.The sqoop-shell tool provides the following functions:Creati",
|
||
"product_code":"mrs",
|
||
"title":"Open Source sqoop-shell Tool Usage Guide",
|
||
"uri":"mrs_01_1162.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"608",
|
||
"code":"614"
|
||
},
|
||
{
|
||
"desc":"Taking importing data from SFTP to HDFS as an example, this section introduces how to use the sqoop-shell tool to create and start Loader jobs in the interaction mode and",
|
||
"product_code":"mrs",
|
||
"title":"Example for Using the Open-Source sqoop-shell Tool (SFTP-HDFS)",
|
||
"uri":"mrs_01_1163.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"608",
|
||
"code":"615"
|
||
},
|
||
{
|
||
"desc":"Taking Importing Data from Oracle to HBase as an example, this section introduces how to use the sqoop-shell tool to create and start Loader jobs in the interaction mode ",
|
||
"product_code":"mrs",
|
||
"title":"Example for Using the Open-Source sqoop-shell Tool (Oracle-HBase)",
|
||
"uri":"mrs_01_1164.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"608",
|
||
"code":"616"
|
||
},
|
||
{
|
||
"desc":"Log path: The default storage path of Loader log files is /var/log/Bigdata/loader/Log category.runlog: /var/log/Bigdata/loader/runlog (run logs)scriptlog: /var/log/Bigdat",
|
||
"product_code":"mrs",
|
||
"title":"Loader Log Overview",
|
||
"uri":"mrs_01_1165.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"538",
|
||
"code":"617"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Common Issues About Loader",
|
||
"uri":"mrs_01_1785.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"538",
|
||
"code":"618"
|
||
},
|
||
{
|
||
"desc":"Internet Explorer 11 or Internet Explorer 10 is used to access the web UI of Loader. After data is submitted, an error occurs.SymptomWhen the submitted data is saved, a s",
"product_code":"mrs",
"title":"How Do I Resolve the Problem that Data Fails to Be Saved When Using Internet Explorer 10 or Internet Explorer 11?",
"uri":"mrs_01_1786.html",
"doc_type":"cmpntguide-lts",
"p_code":"618",
"code":"619"
},
{
"desc":"Three types of connectors are available for importing data from the Oracle database to HDFS using Loader. That is, generic-jdbc-connector, oracle-connector, and oracle-pa",
|
||
"product_code":"mrs",
|
||
"title":"Differences Among Connectors Used During the Process of Importing Data from the Oracle Database to HDFS",
|
||
"uri":"mrs_01_1787.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"618",
|
||
"code":"620"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using MapReduce",
|
||
"uri":"mrs_01_0834.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"",
|
||
"code":"621"
|
||
},
|
||
{
|
||
"desc":"The JobHistoryServer service of MapReduce is a single instance, or the single instance is used to install the MapReduce service during cluster installation. To avoid the ",
|
||
"product_code":"mrs",
|
||
"title":"Converting MapReduce from the Single Instance Mode to the HA Mode",
|
||
"uri":"mrs_01_0835.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"621",
|
||
"code":"622"
|
||
},
|
||
{
|
||
"desc":"Job and task logs are generated during execution of a MapReduce application.Job logs are generated by the MRApplicationMaster, which record details about the start and ru",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Log Archiving and Clearing Mechanism",
|
||
"uri":"mrs_01_0836.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"621",
|
||
"code":"623"
|
||
},
|
||
{
|
||
"desc":"When the network is unstable or the cluster I/O and CPU are overloaded, client applications might encounter running failures.Adjust the following parameters in the mapred",
|
||
"product_code":"mrs",
|
||
"title":"Reducing Client Application Failure Rate",
|
||
"uri":"mrs_01_0837.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"621",
|
||
"code":"624"
|
||
},
|
||
{
|
||
"desc":"To submit MapReduce tasks from Windows to Linux, set mapreduce.app-submission.cross-platform to true. If this parameter does not exist in the cluster or the value of this",
|
||
"product_code":"mrs",
|
||
"title":"Transmitting MapReduce Tasks from Windows to Linux",
|
||
"uri":"mrs_01_0838.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"621",
|
||
"code":"625"
|
||
},
|
||
{
|
||
"desc":"Distributed caching is useful in the following scenarios:Rolling UpgradeDuring the upgrade, applications must keep the text content (JAR file or configuration file) uncha",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Distributed Cache",
|
||
"uri":"mrs_01_0839.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"621",
|
||
"code":"626"
|
||
},
|
||
{
|
||
"desc":"When the MapReduce shuffle service is started, it attempts to bind an IP address based on local host. If the MapReduce shuffle service is required to connect to a specifi",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the MapReduce Shuffle Address",
|
||
"uri":"mrs_01_0840.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"621",
|
||
"code":"627"
|
||
},
|
||
{
|
||
"desc":"This function is used to specify the MapReduce cluster administrator.The system administrator list is specified by mapreduce.cluster.administrators. The cluster administr",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Cluster Administrator List",
|
||
"uri":"mrs_01_0841.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"621",
|
||
"code":"628"
|
||
},
|
||
{
|
||
"desc":"Log paths:JobhistoryServer: /var/log/Bigdata/mapreduce/jobhistory (run log) and /var/log/Bigdata/audit/mapreduce/jobhistory (audit log)Container: /srv/BigData/hadoop/data",
|
||
"product_code":"mrs",
|
||
"title":"Introduction to MapReduce Logs",
|
||
"uri":"mrs_01_0842.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"621",
|
||
"code":"629"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"MapReduce Performance Tuning",
|
||
"uri":"mrs_01_0843.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"621",
|
||
"code":"630"
|
||
},
|
||
{
|
||
"desc":"Optimization can be performed when the number of CPU cores is large, for example, the number of CPU cores is three times the number of disks.You can set the following par",
|
||
"product_code":"mrs",
|
||
"title":"Optimization Configuration for Multiple CPU Cores",
|
||
"uri":"mrs_01_0844.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"630",
|
||
"code":"631"
|
||
},
|
||
{
|
||
"desc":"The performance optimization effect is verified by comparing actual values with the baseline data. Therefore, determining optimal job baseline is critical to performance ",
|
||
"product_code":"mrs",
|
||
"title":"Determining the Job Baseline",
|
||
"uri":"mrs_01_0845.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"630",
|
||
"code":"632"
|
||
},
|
||
{
|
||
"desc":"During the shuffle procedure of MapReduce, the Map task writes intermediate data into disks, and the Reduce task copies and adds the data to the reduce function. Hadoop p",
|
||
"product_code":"mrs",
|
||
"title":"Streamlining Shuffle",
|
||
"uri":"mrs_01_0846.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"630",
|
||
"code":"633"
|
||
},
|
||
{
|
||
"desc":"A big job containing 100,000 Map tasks fails. It is found that the failure is triggered by the slow response of ApplicationMaster (AM).When the number of tasks increases,",
|
||
"product_code":"mrs",
|
||
"title":"AM Optimization for Big Tasks",
|
||
"uri":"mrs_01_0847.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"630",
|
||
"code":"634"
|
||
},
|
||
{
|
||
"desc":"If a cluster has hundreds or thousands of nodes, the hardware or software fault of a node may prolong the execution time of the entire task (as most tasks are already com",
|
||
"product_code":"mrs",
|
||
"title":"Speculative Execution",
|
||
"uri":"mrs_01_0848.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"630",
|
||
"code":"635"
|
||
},
|
||
{
|
||
"desc":"The Slow Start feature specifies the proportion of Map tasks to be completed before Reduce tasks are started. If the Reduce tasks are started too early, resources will be",
|
||
"product_code":"mrs",
|
||
"title":"Using Slow Start",
|
||
"uri":"mrs_01_0849.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"630",
|
||
"code":"636"
|
||
},
|
||
{
|
||
"desc":"By default, if an MR job generates a large number of output files, it takes a long time for the job to commit the temporary outputs of a task to the final output director",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing Performance for Committing MR Jobs",
|
||
"uri":"mrs_01_0850.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"630",
|
||
"code":"637"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Common Issues About MapReduce",
|
||
"uri":"mrs_01_1788.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"621",
|
||
"code":"638"
|
||
},
|
||
{
|
||
"desc":"MapReduce job takes a very long time (more than 10minutes) when the ResourceManager switch while the job is running.This is because, ResorceManager HA is enabled but the ",
|
||
"product_code":"mrs",
|
||
"title":"Why Does It Take a Long Time to Run a Task Upon ResourceManager Active/Standby Switchover?",
|
||
"uri":"mrs_01_1789.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"638",
|
||
"code":"639"
|
||
},
|
||
{
|
||
"desc":"MapReduce job is not progressing for long timeThis is because of less memory. When the memory is less, the time taken by the job to copy the map output increases signific",
|
||
"product_code":"mrs",
|
||
"title":"Why Does a MapReduce Task Stay Unchanged for a Long Time?",
|
||
"uri":"mrs_01_1790.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"638",
|
||
"code":"640"
|
||
},
|
||
{
|
||
"desc":"Why is the client unavailable when the MR ApplicationMaster or ResourceManager is moved to the D state during job running?When a task is running, the MR ApplicationMaster",
"product_code":"mrs",
"title":"Why Does the Client Hang During Job Running?",
"uri":"mrs_01_1791.html",
"doc_type":"cmpntguide-lts",
"p_code":"638",
"code":"641"
},
{
"desc":"In security mode, why delegation token HDFS_DELEGATION_TOKEN is not found in the cache?In MapReduce, by default HDFS_DELEGATION_TOKEN will be canceled after the job compl",
|
||
"product_code":"mrs",
|
||
"title":"Why Cannot HDFS_DELEGATION_TOKEN Be Found in the Cache?",
|
||
"uri":"mrs_01_1792.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"638",
|
||
"code":"642"
|
||
},
|
||
{
|
||
"desc":"How do I set the job priority when submitting a MapReduce task?You can add the parameter -Dmapreduce.job.priority=<priority> in the command to set task priority when subm",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Set the Task Priority When Submitting a MapReduce Task?",
|
||
"uri":"mrs_01_1793.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"638",
|
||
"code":"643"
|
||
},
|
||
{
|
||
"desc":"After the address of MapReduce JobHistoryServer is changed, why the wrong page is displayed when I click the tracking URL on the ResourceManager WebUI?JobHistoryServer ad",
"product_code":"mrs",
"title":"After the Address of MapReduce JobHistoryServer Is Changed, Why Is the Wrong Page Displayed When I Click the Tracking URL on the ResourceManager WebUI?",
"uri":"mrs_01_1797.html",
"doc_type":"cmpntguide-lts",
"p_code":"638",
"code":"644"
},
{
"desc":"MapReduce or Yarn job fails in multiple nameService environment using viewFS.When using viewFS only the mount directories are accessible, so the most possible cause is th",
|
||
"product_code":"mrs",
|
||
"title":"MapReduce Job Failed in Multiple NameService Environment",
|
||
"uri":"mrs_01_1799.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"638",
|
||
"code":"645"
|
||
},
|
||
{
|
||
"desc":"MapReduce task fails and the ratio of fault nodes to all nodes is smaller than the blacklist threshold configured by yarn.resourcemanager.am-scheduling.node-blacklisting-",
"product_code":"mrs",
"title":"Why Is a Faulty MapReduce Node Not Blacklisted?",
"uri":"mrs_01_1800.html",
"doc_type":"cmpntguide-lts",
"p_code":"638",
"code":"646"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using Oozie",
|
||
"uri":"mrs_01_1807.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"",
|
||
"code":"647"
|
||
},
|
||
{
|
||
"desc":"Oozie is an open-source workflow engine that is used to schedule and coordinate Hadoop jobs.Oozie can be used to submit a wide array of jobs, such as Hive, Spark2x, Loade",
|
||
"product_code":"mrs",
|
||
"title":"Using Oozie from Scratch",
|
||
"uri":"mrs_01_1808.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"647",
|
||
"code":"648"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use the Oozie client in an O&M scenario or service scenario.The client has been installed. For example, the installation directory is /opt/h",
|
||
"product_code":"mrs",
|
||
"title":"Using the Oozie Client",
|
||
"uri":"mrs_01_1810.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"647",
|
||
"code":"649"
|
||
},
|
||
{
|
||
"desc":"When multiple Oozie nodes provide services at the same time, you can use ZooKeeper to provide high availability (HA), which helps avoid single points of failure (SPOFs) a",
|
||
"product_code":"mrs",
|
||
"title":"Enabling Oozie High Availability (HA)",
|
||
"uri":"mrs_01_24233.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"647",
|
||
"code":"650"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using Oozie Client to Submit an Oozie Job",
|
||
"uri":"mrs_01_1812.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"647",
|
||
"code":"651"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use the Oozie client to submit a Hive job.Hive jobs are divided into the following types:Hive jobHive job that is connected in JDBC modeHive",
|
||
"product_code":"mrs",
|
||
"title":"Submitting a Hive Job",
|
||
"uri":"mrs_01_1813.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"651",
|
||
"code":"652"
|
||
},
|
||
{
|
||
"desc":"This section describes how to submit a Spark2x job using the Oozie client.You are advised to download the latest client.The Spark2x and Oozie components and clients have ",
|
||
"product_code":"mrs",
|
||
"title":"Submitting a Spark2x Job",
|
||
"uri":"mrs_01_1814.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"651",
|
||
"code":"653"
|
||
},
|
||
{
|
||
"desc":"This section describes how to submit a Loader job using the Oozie client.You are advised to download the latest client.The Hive and Oozie components and clients have been",
|
||
"product_code":"mrs",
|
||
"title":"Submitting a Loader Job",
|
||
"uri":"mrs_01_1815.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"651",
|
||
"code":"654"
|
||
},
|
||
{
|
||
"desc":"This section describes how to submit a DistCp job using the Oozie client.You are advised to download the latest client.The HDFS and Oozie components and clients have been",
|
||
"product_code":"mrs",
|
||
"title":"Submitting a DistCp Job",
|
||
"uri":"mrs_01_2392.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"651",
|
||
"code":"655"
|
||
},
|
||
{
|
||
"desc":"In addition to Hive, Spark2x, and Loader jobs, MapReduce, Java, Shell, HDFS, SSH, SubWorkflow, Streaming, and scheduled jobs can be submitted using the Oozie client.You a",
|
||
"product_code":"mrs",
|
||
"title":"Submitting Other Jobs",
|
||
"uri":"mrs_01_1816.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"651",
|
||
"code":"656"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using Hue to Submit an Oozie Job",
|
||
"uri":"mrs_01_1817.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"647",
|
||
"code":"657"
|
||
},
|
||
{
|
||
"desc":"You can submit an Oozie job on the Hue management page, but a workflow must be created before the job is submitted.Before using Hue to submit an Oozie job, configure the ",
|
||
"product_code":"mrs",
|
||
"title":"Creating a Workflow",
|
||
"uri":"mrs_01_1818.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"657",
|
||
"code":"658"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Submitting a Workflow Job",
|
||
"uri":"mrs_01_1819.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"657",
|
||
"code":"659"
|
||
},
|
||
{
|
||
"desc":"This section describes how to submit an Oozie job of the Hive2 type on the Hue web UI.For example, if the input parameter is INPUT=/user/admin/examples/input-data/table, ",
|
||
"product_code":"mrs",
|
||
"title":"Submitting a Hive2 Job",
|
||
"uri":"mrs_01_1820.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"659",
|
||
"code":"660"
|
||
},
|
||
{
|
||
"desc":"This section describes how to submit an Oozie job of the Spark2x type on Hue.For example, add the following parameters:hdfs://hacluster/user/admin/examples/input-data/tex",
|
||
"product_code":"mrs",
|
||
"title":"Submitting a Spark2x Job",
|
||
"uri":"mrs_01_1821.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"659",
|
||
"code":"661"
|
||
},
|
||
{
|
||
"desc":"This section describes how to submit an Oozie job of the Java type on the Hue web UI.If you need to modify the job name before saving the job (default value: My Workflow)",
|
||
"product_code":"mrs",
|
||
"title":"Submitting a Java Job",
|
||
"uri":"mrs_01_1822.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"659",
|
||
"code":"662"
|
||
},
|
||
{
|
||
"desc":"This section describes how to submit an Oozie job of the Loader type on the Hue web UI.Job id is the ID of the Loader job to be orchestrated and can be obtained from the ",
|
||
"product_code":"mrs",
|
||
"title":"Submitting a Loader Job",
|
||
"uri":"mrs_01_1823.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"659",
|
||
"code":"663"
|
||
},
|
||
{
|
||
"desc":"This section describes how to submit an Oozie job of the MapReduce type on the Hue web UI.For example, set the value of mapred.input.dir to /user/admin/examples/input-dat",
|
||
"product_code":"mrs",
|
||
"title":"Submitting a MapReduce Job",
|
||
"uri":"mrs_01_1824.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"659",
|
||
"code":"664"
|
||
},
|
||
{
|
||
"desc":"This section describes how to submit an Oozie job of the Sub-workflow type on the Hue web UI.If you need to modify the job name before saving the job (default value: My W",
|
||
"product_code":"mrs",
|
||
"title":"Submitting a Sub-workflow Job",
|
||
"uri":"mrs_01_1825.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"659",
|
||
"code":"665"
|
||
},
|
||
{
|
||
"desc":"This section describes how to submit an Oozie job of the Shell type on the Hue web UI.If you need to modify the job name before saving the job (default value: My Workflow",
|
||
"product_code":"mrs",
|
||
"title":"Submitting a Shell Job",
|
||
"uri":"mrs_01_1826.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"659",
|
||
"code":"666"
|
||
},
|
||
{
|
||
"desc":"This section describes how to submit an Oozie job of the HDFS type on the Hue web UI.If you need to modify the job name before saving the job (default value: My Workflow)",
|
||
"product_code":"mrs",
|
||
"title":"Submitting an HDFS Job",
|
||
"uri":"mrs_01_1827.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"659",
|
||
"code":"667"
|
||
},
|
||
{
|
||
"desc":"This section describes how to submit an Oozie job of the DistCp type on the Hue web UI.If yes, go to 4.If no, go to 7.source_ip: service address of the HDFS NameNode in t",
|
||
"product_code":"mrs",
|
||
"title":"Submitting a DistCp Job",
|
||
"uri":"mrs_01_1829.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"659",
|
||
"code":"668"
|
||
},
|
||
{
|
||
"desc":"This section guides you to enable unidirectional password-free mutual trust when Oozie nodes are used to execute shell scripts of external nodes through SSH jobs.You have",
|
||
"product_code":"mrs",
|
||
"title":"Example of Mutual Trust Operations",
|
||
"uri":"mrs_01_1830.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"659",
|
||
"code":"669"
|
||
},
|
||
{
|
||
"desc":"This section guides you to submit an Oozie job of the SSH type on the Hue web UI.If you need to modify the job name before saving the job (default value: My Workflow), cl",
|
||
"product_code":"mrs",
|
||
"title":"Submitting an SSH Job",
|
||
"uri":"mrs_01_1831.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"659",
|
||
"code":"670"
|
||
},
|
||
{
|
||
"desc":"This section describes how to submit a Hive job on the Hue web UI.After the job is submitted, you can view the related contents of the job, such as the detailed informati",
|
||
"product_code":"mrs",
|
||
"title":"Submitting a Hive Script",
|
||
"uri":"mrs_01_2372.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"659",
|
||
"code":"671"
|
||
},
|
||
{
|
||
"desc":"This section describes how to add an email job on the Hue web UI.To addresses: specifies the recipient email address. Separate multiple email addresses with commas (,).Su",
|
||
"product_code":"mrs",
|
||
"title":"Submitting an Email Job",
|
||
"uri":"mrs_01_24114.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"659",
|
||
"code":"672"
|
||
},
|
||
{
|
||
"desc":"This section describes how to submit a job of the periodic scheduling type on the Hue web UI.Required workflow jobs have been configured before the coordinator task is su",
|
||
"product_code":"mrs",
|
||
"title":"Submitting a Coordinator Periodic Scheduling Job",
|
||
"uri":"mrs_01_1840.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"657",
|
||
"code":"673"
|
||
},
|
||
{
|
||
"desc":"In the case that multiple scheduled jobs exist at the same time, you can manage the jobs in batches over the Bundle task. This section describes how to submit a job of th",
|
||
"product_code":"mrs",
|
||
"title":"Submitting a Bundle Batch Processing Job",
|
||
"uri":"mrs_01_1841.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"657",
|
||
"code":"674"
|
||
},
|
||
{
|
||
"desc":"After the jobs are submitted, you can view the execution status of a specific job on Hue.",
|
||
"product_code":"mrs",
|
||
"title":"Querying the Operation Results",
|
||
"uri":"mrs_01_1842.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"657",
|
||
"code":"675"
|
||
},
|
||
{
|
||
"desc":"Log path: The default storage paths of Oozie log files are as follows:Run log: /var/log/Bigdata/oozieAudit log: /var/log/Bigdata/audit/oozieLog archiving rule: Oozie logs",
|
||
"product_code":"mrs",
|
||
"title":"Oozie Log Overview",
|
||
"uri":"mrs_01_1843.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"647",
|
||
"code":"676"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Common Issues About Oozie",
|
||
"uri":"mrs_01_1844.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"647",
|
||
"code":"677"
|
||
},
|
||
{
|
||
"desc":"The Oozie client fails to submit a MapReduce job and a message \"Error: AUTHENTICATION: Could not authenticate, Authentication failed, status: 403, message: Forbidden\" is ",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Resolve the Problem that the Oozie Client Fails to Submit a MapReduce Job?",
|
||
"uri":"mrs_01_1845.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"677",
|
||
"code":"678"
|
||
},
|
||
{
|
||
"desc":"Why are not Coordinator scheduled jobs executed on time on the Hue or Oozie client?Use UTC time. For example, set start=2016-12-20T09:00Z in job.properties file.",
|
||
"product_code":"mrs",
|
||
"title":"Oozie Scheduled Tasks Are Not Executed on Time",
|
||
"uri":"mrs_01_1846.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"677",
|
||
"code":"679"
|
||
},
|
||
{
|
||
"desc":"Why cannot a class error be found during task execution after a new JAR file is uploaded to the /user/oozie/share/lib directory on HDFS?Restart Oozie to make the director",
|
||
"product_code":"mrs",
|
||
"title":"The Update of the share lib Directory of Oozie Does Not Take Effect",
|
||
"uri":"mrs_01_1847.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"677",
|
||
"code":"680"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using Ranger",
|
||
"uri":"mrs_01_1849.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"",
|
||
"code":"681"
|
||
},
|
||
{
|
||
"desc":"Ranger provides a centralized permission management framework to implement fine-grained permission control on components such as HDFS, HBase, Hive, and Yarn. In addition,",
|
||
"product_code":"mrs",
|
||
"title":"Logging In to the Ranger Web UI",
|
||
"uri":"mrs_01_1850.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"681",
|
||
"code":"682"
|
||
},
|
||
{
|
||
"desc":"This section guides you how to enable Ranger authentication. Ranger authentication is enabled by default in security mode and disabled by default in normal mode.If Enable",
|
||
"product_code":"mrs",
|
||
"title":"Enabling Ranger Authentication",
|
||
"uri":"mrs_01_2393.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"681",
|
||
"code":"683"
|
||
},
|
||
{
|
||
"desc":"In the newly installed MRS cluster, Ranger is installed by default, with the Ranger authentication model enabled. The system administrator can set fine-grained security p",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Component Permission Policies",
|
||
"uri":"mrs_01_1851.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"681",
|
||
"code":"684"
|
||
},
|
||
{
|
||
"desc":"The system administrator can view audit logs of the Ranger running and the permission control after Ranger authentication is enabled on the Ranger web UI.",
|
||
"product_code":"mrs",
|
||
"title":"Viewing Ranger Audit Information",
|
||
"uri":"mrs_01_1852.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"681",
|
||
"code":"685"
|
||
},
|
||
{
|
||
"desc":"Security zone can be configured using Ranger. Ranger administrators can divide resources of each component into multiple security zones where administrators set security ",
|
||
"product_code":"mrs",
|
||
"title":"Configuring a Security Zone",
|
||
"uri":"mrs_01_1853.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"681",
|
||
"code":"686"
|
||
},
|
||
{
|
||
"desc":"By default, the Ranger data source of the security cluster can be accessed by FusionInsight Manager LDAP users. By default, the Ranger data source of a common cluster can",
|
||
"product_code":"mrs",
|
||
"title":"Changing the Ranger Data Source to LDAP for a Normal Cluster",
|
||
"uri":"mrs_01_2394.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"681",
|
||
"code":"687"
|
||
},
|
||
{
|
||
"desc":"You can view Ranger permission settings, such as users, user groups, and roles.Users: displays all user information synchronized from LDAP or OS to Ranger.Groups: display",
|
||
"product_code":"mrs",
|
||
"title":"Viewing Ranger Permission Information",
|
||
"uri":"mrs_01_1854.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"681",
|
||
"code":"688"
|
||
},
|
||
{
|
||
"desc":"The Ranger administrator can use Ranger to configure the read, write, and execution permissions on HDFS directories or files for HDFS users.The Ranger service has been in",
|
||
"product_code":"mrs",
|
||
"title":"Adding a Ranger Access Permission Policy for HDFS",
|
||
"uri":"mrs_01_1856.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"681",
|
||
"code":"689"
|
||
},
|
||
{
|
||
"desc":"Ranger administrators can use Ranger to configure permissions on HBase tables, column families, and columns for HBase users.The Ranger service has been installed and is r",
|
||
"product_code":"mrs",
|
||
"title":"Adding a Ranger Access Permission Policy for HBase",
|
||
"uri":"mrs_01_1857.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"681",
|
||
"code":"690"
|
||
},
|
||
{
|
||
"desc":"The Ranger administrator can use Ranger to set permissions for Hive users. The default administrator account of Hive is hive and the initial password is Hive@123.The Rang",
|
||
"product_code":"mrs",
|
||
"title":"Adding a Ranger Access Permission Policy for Hive",
|
||
"uri":"mrs_01_1858.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"681",
|
||
"code":"691"
|
||
},
|
||
{
|
||
"desc":"The Ranger administrator can use Ranger to configure Yarn administrator permissions for Yarn users, allowing them to manage Yarn queue resources.The Ranger service has be",
|
||
"product_code":"mrs",
|
||
"title":"Adding a Ranger Access Permission Policy for Yarn",
|
||
"uri":"mrs_01_1859.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"681",
|
||
"code":"692"
|
||
},
|
||
{
|
||
"desc":"The Ranger administrator can use Ranger to set permissions for Spark2x users.After Ranger authentication is enabled or disabled on Spark2x, you need to restart Spark2x.Do",
|
||
"product_code":"mrs",
|
||
"title":"Adding a Ranger Access Permission Policy for Spark2x",
|
||
"uri":"mrs_01_1860.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"681",
|
||
"code":"693"
|
||
},
|
||
{
|
||
"desc":"The Ranger administrator can use Ranger to configure the read, write, and management permissions of the Kafka topic and the management permission of the cluster for the K",
|
||
"product_code":"mrs",
|
||
"title":"Adding a Ranger Access Permission Policy for Kafka",
|
||
"uri":"mrs_01_1861.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"681",
|
||
"code":"694"
|
||
},
|
||
{
|
||
"desc":"Ranger administrators can use Ranger to configure the permission to manage databases, tables, and columns of data sources for HetuEngine users.The Ranger service has been",
|
||
"product_code":"mrs",
|
||
"title":"Adding a Ranger Access Permission Policy for HetuEngine",
|
||
"uri":"mrs_01_1862.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"681",
|
||
"code":"695"
|
||
},
|
||
{
|
||
"desc":"Log path: The default storage path of Ranger logs is /var/log/Bigdata/ranger/Role name.RangerAdmin: /var/log/Bigdata/ranger/rangeradmin (run logs)TagSync: /var/log/Bigdat",
|
||
"product_code":"mrs",
|
||
"title":"Ranger Log Overview",
|
||
"uri":"mrs_01_1865.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"681",
|
||
"code":"696"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Common Issues About Ranger",
|
||
"uri":"mrs_01_1866.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"681",
|
||
"code":"697"
|
||
},
|
||
{
|
||
"desc":"During cluster installation, Ranger fails to be started, and the error message \"ERROR: cannot drop sequence X_POLICY_REF_ACCESS_TYPE_SEQ \" is displayed in the task list o",
|
||
"product_code":"mrs",
|
||
"title":"Why Ranger Startup Fails During the Cluster Installation?",
|
||
"uri":"mrs_01_1867.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"697",
|
||
"code":"698"
|
||
},
|
||
{
|
||
"desc":"How do I determine whether the Ranger authentication is enabled for a service that supports the authentication?Log in to FusionInsight Manager and choose Cluster > Servic",
|
||
"product_code":"mrs",
|
||
"title":"How Do I Determine Whether the Ranger Authentication Is Used for a Service?",
|
||
"uri":"mrs_01_1868.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"697",
|
||
"code":"699"
|
||
},
|
||
{
|
||
"desc":"When a new user logs in to Ranger, why is the 401 error reported after the password is changed?The UserSync synchronizes user data at an interval of 5 minutes by default.",
|
||
"product_code":"mrs",
|
||
"title":"Why Cannot a New User Log In to Ranger After Changing the Password?",
|
||
"uri":"mrs_01_2300.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"697",
|
||
"code":"700"
|
||
},
|
||
{
|
||
"desc":"When a Ranger access permission policy is added for HBase and wildcard characters are used to search for an existing HBase table in the policy, the table cannot be found.",
|
||
"product_code":"mrs",
|
||
"title":"When an HBase Policy Is Added or Modified on Ranger, Wildcard Characters Cannot Be Used to Search for Existing HBase Tables",
|
||
"uri":"mrs_01_2355.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"697",
|
||
"code":"701"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using Spark2x",
|
||
"uri":"mrs_01_1926.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"",
|
||
"code":"702"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Basic Operation",
|
||
"uri":"mrs_01_1928.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"702",
|
||
"code":"703"
|
||
},
|
||
{
|
||
"desc":"This section describes how to use Spark2x to submit Spark applications, including Spark Core and Spark SQL. Spark Core is the kernel module of Spark. It executes tasks an",
|
||
"product_code":"mrs",
|
||
"title":"Getting Started",
|
||
"uri":"mrs_01_1929.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"703",
|
||
"code":"704"
|
||
},
|
||
{
|
||
"desc":"This section describes how to quickly configure common parameters and lists parameters that are not recommended to be modified when Spark2x is used.Some parameters have b",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Parameters Rapidly",
|
||
"uri":"mrs_01_1930.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"703",
|
||
"code":"705"
|
||
},
|
||
{
|
||
"desc":"This section describes common configuration items used in Spark. This section is divided into sub-sections based on features to help you quickly find required configurati",
|
||
"product_code":"mrs",
|
||
"title":"Common Parameters",
|
||
"uri":"mrs_01_1931.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"703",
|
||
"code":"706"
|
||
},
|
||
{
|
||
"desc":"Spark on HBase allows users to query HBase tables in Spark SQL and to store data for HBase tables by using the Beeline tool. You can use HBase APIs to create, read data f",
|
||
"product_code":"mrs",
|
||
"title":"Spark on HBase Overview and Basic Applications",
|
||
"uri":"mrs_01_1933.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"703",
|
||
"code":"707"
|
||
},
|
||
{
|
||
"desc":"Spark on HBase V2 allows users to query HBase tables in Spark SQL and to store data for HBase tables by using the Beeline tool. You can use HBase APIs to create, read dat",
|
||
"product_code":"mrs",
|
||
"title":"Spark on HBase V2 Overview and Basic Applications",
|
||
"uri":"mrs_01_1934.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"703",
|
||
"code":"708"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"SparkSQL Permission Management (Security Mode)",
"uri":"mrs_01_1935.html",
"doc_type":"cmpntguide-lts",
"p_code":"703",
"code":"709"
},
{
"desc":"Similar to Hive, Spark SQL is a data warehouse framework built on Hadoop, providing storage of structured data like structured query language (SQL).MRS supports users, us",
|
||
"product_code":"mrs",
|
||
"title":"Spark SQL Permissions",
|
||
"uri":"mrs_01_1936.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"709",
|
||
"code":"710"
|
||
},
|
||
{
|
||
"desc":"This section describes how to create and configure a SparkSQL role on Manager as the system administrator. The Spark SQL role can be configured with the spark dministrato",
|
||
"product_code":"mrs",
|
||
"title":"Creating a Spark SQL Role",
|
||
"uri":"mrs_01_1937.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"709",
|
||
"code":"711"
|
||
},
|
||
{
|
||
"desc":"You can configure related permissions if you need to access tables or databases created by other users. SparkSQL supports column-based permission control. If a user needs",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Permissions for SparkSQL Tables, Columns, and Databases",
|
||
"uri":"mrs_01_1938.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"709",
|
||
"code":"712"
|
||
},
|
||
{
|
||
"desc":"SparkSQL may need to be associated with other components. For example, Spark on HBase requires HBase permissions. The following describes how to associate SparkSQL with H",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Permissions for SparkSQL to Use Other Components",
|
||
"uri":"mrs_01_1939.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"709",
|
||
"code":"713"
|
||
},
|
||
{
|
||
"desc":"This section describes how to configure SparkSQL permission management functions (client configuration is similar to server configuration). To enable table permission, ad",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Client and Server",
|
||
"uri":"mrs_01_1940.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"709",
|
||
"code":"714"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Scenario-Specific Configuration",
|
||
"uri":"mrs_01_1941.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"703",
|
||
"code":"715"
|
||
},
|
||
{
|
||
"desc":"In this mode, multiple ThriftServers coexist in the cluster and the client can randomly connect any ThriftServer to perform service operations. When one or multiple Thrif",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Multi-active Instance Mode",
|
||
"uri":"mrs_01_1942.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"716"
|
||
},
|
||
{
|
||
"desc":"In multi-tenant mode, JDBCServers are bound with tenants. Each tenant corresponds to one or more JDBCServers, and a JDBCServer provides services for only one tenant. Diff",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Multi-tenant Mode",
|
||
"uri":"mrs_01_1943.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"717"
|
||
},
|
||
{
|
||
"desc":"When using a cluster, if you want to switch between multi-active instance mode and multi-tenant mode, the following configurations are required.Switch from multi-tenant m",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Switchover Between the Multi-active Instance Mode and the Multi-tenant Mode",
|
||
"uri":"mrs_01_1944.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"718"
|
||
},
|
||
{
|
||
"desc":"Functions such as UI, EventLog, and dynamic resource scheduling in Spark are implemented through event transfer. Events include SparkListenerJobStart and SparkListenerJob",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Size of the Event Queue",
|
||
"uri":"mrs_01_1945.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"719"
|
||
},
|
||
{
|
||
"desc":"When the executor off-heap memory is too small, or processes with higher priority preempt resources, the physical memory usage will exceed the maximal value. To prevent t",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Executor Off-Heap Memory",
|
||
"uri":"mrs_01_1947.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"720"
|
||
},
|
||
{
|
||
"desc":"A large amount of memory is required when Spark SQL executes a query, especially during Aggregate and Join operations. If the memory is limited, OutOfMemoryError may occu",
|
||
"product_code":"mrs",
|
||
"title":"Enhancing Stability in a Limited Memory Condition",
|
||
"uri":"mrs_01_1948.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"721"
|
||
},
|
||
{
|
||
"desc":"When yarn.log-aggregation-enable of Yarn is set to true, the container log aggregation function is enabled. Log aggregation indicates that after applications are run on Y",
|
||
"product_code":"mrs",
|
||
"title":"Viewing Aggregated Container Logs on the Web UI",
|
||
"uri":"mrs_01_1949.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"722"
|
||
},
|
||
{
|
||
"desc":"SQL statements executed by users may contain sensitive information (such as passwords). Disclosure of such information may incur security risks. You can configure the spa",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Whether to Display Spark SQL Statements Containing Sensitive Words",
|
||
"uri":"mrs_01_1950.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"723"
|
||
},
|
||
{
|
||
"desc":"Values of some configuration parameters of Spark client vary depending on its work mode (YARN-Client or YARN-Cluster). If you switch Spark client between different modes ",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Environment Variables in Yarn-Client and Yarn-Cluster Modes",
|
||
"uri":"mrs_01_1951.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"724"
|
||
},
|
||
{
|
||
"desc":"By default, SparkSQL divides data into 200 data blocks during shuffle. In data-intensive scenarios, each data block may have excessive size. If a single data block of a t",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Default Number of Data Blocks Divided by SparkSQL",
|
||
"uri":"mrs_01_1952.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"725"
|
||
},
|
||
{
|
||
"desc":"The compression format of a Parquet table can be configured as follows:If the Parquet table is a partitioned one, set the parquet.compression parameter of the Parquet tab",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Compression Format of a Parquet Table",
|
||
"uri":"mrs_01_1953.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"726"
|
||
},
|
||
{
|
||
"desc":"In Spark WebUI, the Executor page can display information about Lost Executor. Executors are dynamically recycled. If the JDBCServer tasks are large, there may be too man",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Number of Lost Executors Displayed in WebUI",
|
||
"uri":"mrs_01_1954.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"727"
|
||
},
|
||
{
|
||
"desc":"In some scenarios, to locate problems or check information by changing the log level,you can add the -Dlog4j.configuration.watch=true parameter to the JVM parameter of a ",
|
||
"product_code":"mrs",
|
||
"title":"Setting the Log Level Dynamically",
|
||
"uri":"mrs_01_1957.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"728"
|
||
},
|
||
{
|
||
"desc":"When Spark is used to submit tasks, the driver obtains tokens from HBase by default. To access HBase, you need to configure the jaas.conf file for security authentication",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Whether Spark Obtains HBase Tokens",
|
||
"uri":"mrs_01_1958.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"729"
|
||
},
|
||
{
|
||
"desc":"If the Spark Streaming application is connected to Kafka, after the Spark Streaming application is terminated abnormally and restarted from the checkpoint, the system pre",
|
||
"product_code":"mrs",
|
||
"title":"Configuring LIFO for Kafka",
|
||
"uri":"mrs_01_1959.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"730"
|
||
},
|
||
{
|
||
"desc":"When the Spark Streaming application is connected to Kafka and the application is restarted, the application reads data from Kafka based on the last read topic offset and",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Reliability for Connected Kafka",
|
||
"uri":"mrs_01_1960.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"731"
|
||
},
|
||
{
|
||
"desc":"When a query statement is executed, the returned result may be large (containing more than 100,000 records). In this case, JDBCServer out of memory (OOM) may occur. There",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Streaming Reading of Driver Execution Results",
|
||
"uri":"mrs_01_1961.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"732"
|
||
},
|
||
{
|
||
"desc":"When you perform the select query in Hive partitioned tables, the FileNotFoundException exception is displayed if a specified partition path does not exist in HDFS. To av",
|
||
"product_code":"mrs",
|
||
"title":"Filtering Partitions without Paths in Partitioned Tables",
|
||
"uri":"mrs_01_1962.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"733"
|
||
},
|
||
{
|
||
"desc":"Users need to implement security protection for Spark2x web UI when some data on the UI cannot be viewed by other users. Once a user attempts to log in to the UI, Spark2x",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Spark2x Web UI ACLs",
|
||
"uri":"mrs_01_1963.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"734"
|
||
},
|
||
{
|
||
"desc":"ORC is a column-based storage format in the Hadoop ecosystem. It originates from Apache Hive and is used to reduce the Hadoop data storage space and accelerate the Hive q",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Vector-based ORC Data Reading",
|
||
"uri":"mrs_01_1964.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"735"
|
||
},
|
||
{
|
||
"desc":"In earlier versions, the predicate for pruning Hive table partitions is pushed down. Only comparison expressions between column names and integers or character strings ca",
|
||
"product_code":"mrs",
|
||
"title":"Broaden Support for Hive Partition Pruning Predicate Pushdown",
|
||
"uri":"mrs_01_1965.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"736"
|
||
},
|
||
{
|
||
"desc":"In earlier versions, when the insert overwrite syntax is used to overwrite partition tables, only partitions with specified expressions are matched, and partitions withou",
|
||
"product_code":"mrs",
|
||
"title":"Hive Dynamic Partition Overwriting Syntax",
|
||
"uri":"mrs_01_1966.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"737"
|
||
},
|
||
{
|
||
"desc":"The execution plan for SQL statements is optimized in Spark. Common optimization rules are heuristic optimization rules. Heuristic optimization rules are provided based o",
|
||
"product_code":"mrs",
|
||
"title":"Configuring the Column Statistics Histogram to Enhance the CBO Accuracy",
|
||
"uri":"mrs_01_1967.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"738"
|
||
},
|
||
{
|
||
"desc":"JobHistory can use local disks to cache the historical data of Spark applications to prevent the JobHistory memory from loading a large amount of application data, reduci",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Local Disk Cache for JobHistory",
|
||
"uri":"mrs_01_1969.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"739"
|
||
},
|
||
{
|
||
"desc":"The Spark SQL adaptive execution feature enables Spark SQL to optimize subsequent execution processes based on intermediate results to improve overall execution efficienc",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Spark SQL to Enable the Adaptive Execution Feature",
|
||
"uri":"mrs_01_1970.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"740"
|
||
},
|
||
{
|
||
"desc":"When the event log mode is enabled for Spark, that is, spark.eventLog.enabled is set to true, events are written to a configured log file to record the program running pr",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Event Log Rollover",
|
||
"uri":"mrs_01_24170.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"715",
|
||
"code":"741"
|
||
},
|
||
{
|
||
"desc":"When Ranger is used as the permission management service of Spark SQL, the certificate in the cluster is required for accessing RangerAdmin. If you use a third-party JDK ",
|
||
"product_code":"mrs",
|
||
"title":"Adapting to the Third-party JDK When Ranger Is Used",
|
||
"uri":"mrs_01_2317.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"703",
|
||
"code":"742"
|
||
},
|
||
{
|
||
"desc":"Log paths:Executor run log: ${BIGDATA_DATA_HOME}/hadoop/data${i}/nm/containerlogs/application_${appid}/container_{$contid}The logs of running tasks are stored in the prec",
|
||
"product_code":"mrs",
|
||
"title":"Spark2x Logs",
|
||
"uri":"mrs_01_1971.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"702",
|
||
"code":"743"
|
||
},
|
||
{
|
||
"desc":"Container logs of running Spark applications are distributed on multiple nodes. This section describes how to quickly obtain container logs.You can run the yarn logs comm",
|
||
"product_code":"mrs",
|
||
"title":"Obtaining Container Logs of a Running Spark Application",
|
||
"uri":"mrs_01_1972.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"702",
|
||
"code":"744"
|
||
},
|
||
{
|
||
"desc":"In a large-scale Hadoop production cluster, HDFS metadata is stored in the NameNode memory, and the cluster scale is restricted by the memory limitation of each NameNode.",
|
||
"product_code":"mrs",
|
||
"title":"Small File Combination Tools",
|
||
"uri":"mrs_01_1973.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"702",
|
||
"code":"745"
|
||
},
|
||
{
|
||
"desc":"The first query of CarbonData is slow, which may cause a delay for nodes that have high requirements on real-time performance.The tool provides the following functions:Pr",
|
||
"product_code":"mrs",
|
||
"title":"Using CarbonData for First Query",
|
||
"uri":"mrs_01_2362.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"702",
|
||
"code":"746"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Spark2x Performance Tuning",
|
||
"uri":"mrs_01_1974.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"702",
|
||
"code":"747"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Spark Core Tuning",
|
||
"uri":"mrs_01_1975.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"747",
|
||
"code":"748"
|
||
},
|
||
{
|
||
"desc":"Spark supports the following types of serialization:JavaSerializerKryoSerializerData serialization affects the Spark application performance. In specific data format, Kry",
|
||
"product_code":"mrs",
|
||
"title":"Data Serialization",
|
||
"uri":"mrs_01_1976.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"748",
|
||
"code":"749"
|
||
},
|
||
{
|
||
"desc":"Spark is a memory-based computing frame. If the memory is insufficient during computing, the Spark execution efficiency will be adversely affected. You can determine whet",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing Memory Configuration",
|
||
"uri":"mrs_01_1977.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"748",
|
||
"code":"750"
|
||
},
|
||
{
|
||
"desc":"The degree of parallelism (DOP) specifies the number of tasks to be executed concurrently. It determines the number of data blocks after the shuffle operation. Configure ",
|
||
"product_code":"mrs",
|
||
"title":"Setting the DOP",
|
||
"uri":"mrs_01_1978.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"748",
|
||
"code":"751"
|
||
},
|
||
{
|
||
"desc":"Broadcast distributes data sets to each node. It allows data to be obtained locally when a data set is needed during a Spark task. If broadcast is not used, data serializ",
|
||
"product_code":"mrs",
|
||
"title":"Using Broadcast Variables",
|
||
"uri":"mrs_01_1979.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"748",
|
||
"code":"752"
|
||
},
|
||
{
|
||
"desc":"When the Spark system runs applications that contain a shuffle process, an executor process also writes shuffle data and provides shuffle data for other executors in addi",
|
||
"product_code":"mrs",
|
||
"title":"Using the external shuffle service to improve performance",
|
||
"uri":"mrs_01_1980.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"748",
|
||
"code":"753"
|
||
},
|
||
{
|
||
"desc":"Resources are a key factor that affects Spark execution efficiency. When a long-running service (such as the JDBCServer) is allocated with multiple executors without task",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Dynamic Resource Scheduling in Yarn Mode",
|
||
"uri":"mrs_01_1981.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"748",
|
||
"code":"754"
|
||
},
|
||
{
|
||
"desc":"There are three processes in Spark on Yarn mode: driver, ApplicationMaster, and executor. The Driver and Executor handle the scheduling and running of the task. The Appli",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Process Parameters",
|
||
"uri":"mrs_01_1982.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"748",
|
||
"code":"755"
|
||
},
|
||
{
|
||
"desc":"Optimal program structure helps increase execution efficiency. During application programming, avoid shuffle operations and combine narrow-dependency operations.This topi",
|
||
"product_code":"mrs",
|
||
"title":"Designing the Direction Acyclic Graph (DAG)",
|
||
"uri":"mrs_01_1983.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"748",
|
||
"code":"756"
|
||
},
|
||
{
|
||
"desc":"If the overhead of each record is high, for example:Use mapPartitions to calculate data by partition.Use mapPartitions to flexibly operate data. For example, to calculate",
|
||
"product_code":"mrs",
|
||
"title":"Experience",
|
||
"uri":"mrs_01_1984.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"748",
|
||
"code":"757"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Spark SQL and DataFrame Tuning",
|
||
"uri":"mrs_01_1985.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"747",
|
||
"code":"758"
|
||
},
|
||
{
|
||
"desc":"When two tables are joined in Spark SQL, the broadcast function (see section \"Using Broadcast Variables\") can be used to broadcast tables to each node. This minimizes shu",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing the Spark SQL Join Operation",
|
||
"uri":"mrs_01_1986.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"758",
|
||
"code":"759"
|
||
},
|
||
{
|
||
"desc":"When multiple tables are joined in Spark SQL, skew occurs in join keys and the data volume in some Hash buckets is much higher than that in other buckets. As a result, so",
|
||
"product_code":"mrs",
|
||
"title":"Improving Spark SQL Calculation Performance Under Data Skew",
|
||
"uri":"mrs_01_1987.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"758",
|
||
"code":"760"
|
||
},
|
||
{
|
||
"desc":"A Spark SQL table may have many small files (far smaller than an HDFS block), each of which maps to a partition on the Spark by default. In other words, each small file i",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing Spark SQL Performance in the Small File Scenario",
|
||
"uri":"mrs_01_1988.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"758",
|
||
"code":"761"
|
||
},
|
||
{
|
||
"desc":"The INSERT...SELECT operation needs to be optimized if any of the following conditions is true:Many small files need to be queried.A few large files need to be queried.Th",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing the INSERT...SELECT Operation",
|
||
"uri":"mrs_01_1989.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"758",
|
||
"code":"762"
|
||
},
|
||
{
|
||
"desc":"Multiple clients can be connected to JDBCServer at the same time. However, if the number of concurrent tasks is too large, the default configuration of JDBCServer must be",
|
||
"product_code":"mrs",
|
||
"title":"Multiple JDBC Clients Concurrently Connecting to JDBCServer",
|
||
"uri":"mrs_01_1990.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"758",
|
||
"code":"763"
|
||
},
|
||
{
|
||
"desc":"When SparkSQL inserts data to dynamic partitioned tables, the more partitions there are, the more HDFS files a single task generates and the more memory metadata occupies",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing Memory when Data Is Inserted into Dynamic Partitioned Tables",
|
||
"uri":"mrs_01_1992.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"758",
|
||
"code":"764"
|
||
},
|
||
{
|
||
"desc":"A Spark SQL table may have many small files (far smaller than an HDFS block), each of which maps to a partition on the Spark by default. In other words, each small file i",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing Small Files",
|
||
"uri":"mrs_01_1995.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"758",
|
||
"code":"765"
|
||
},
|
||
{
|
||
"desc":"Spark SQL supports hash aggregate algorithm. Namely, use fast aggregate hashmap as cache to improve aggregate performance. The hashmap replaces the previous ColumnarBatch",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing the Aggregate Algorithms",
|
||
"uri":"mrs_01_1996.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"758",
|
||
"code":"766"
|
||
},
|
||
{
|
||
"desc":"Save the partition information about the datasource table to the Metastore and process partition information in the Metastore.Optimize the datasource tables, support synt",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing Datasource Tables",
|
||
"uri":"mrs_01_1997.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"758",
|
||
"code":"767"
|
||
},
|
||
{
|
||
"desc":"Spark SQL supports rule-based optimization by default. However, the rule-based optimization cannot ensure that Spark selects the optimal query plan. Cost-Based Optimizer ",
|
||
"product_code":"mrs",
|
||
"title":"Merging CBO",
|
||
"uri":"mrs_01_1998.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"758",
|
||
"code":"768"
|
||
},
|
||
{
|
||
"desc":"This section describes how to enable or disable the query optimization for inter-source complex SQL.(Optional) Prepare for connecting to the MPPDB data source.If the data",
|
||
"product_code":"mrs",
|
||
"title":"Optimizing SQL Query of Data of Multiple Sources",
|
||
"uri":"mrs_01_1999.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"758",
|
||
"code":"769"
|
||
},
|
||
{
|
||
"desc":"This section describes the optimization suggestions for SQL statements in multi-level nesting and hybrid join scenarios.The following provides an example of complex query",
|
||
"product_code":"mrs",
|
||
"title":"SQL Optimization for Multi-level Nesting and Hybrid Join",
|
||
"uri":"mrs_01_2000.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"758",
|
||
"code":"770"
|
||
},
|
||
{
|
||
"desc":"Streaming is a mini-batch streaming processing framework that features second-level delay and high throughput. To optimize Streaming is to improve its throughput while ma",
|
||
"product_code":"mrs",
|
||
"title":"Spark Streaming Tuning",
|
||
"uri":"mrs_01_2001.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"747",
|
||
"code":"771"
|
||
},
|
||
{
|
||
"desc":"In the scenario where a small number of requests are frequently sent from Spark on OBS to OBS, you can disable OBS monitoring to improve performance.Modify the configurat",
|
||
"product_code":"mrs",
|
||
"title":"Spark on OBS Tuning",
|
||
"uri":"mrs_01_24056.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"747",
|
||
"code":"772"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Common Issues About Spark2x",
|
||
"uri":"mrs_01_2002.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"702",
|
||
"code":"773"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Spark Core",
|
||
"uri":"mrs_01_2003.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"773",
|
||
"code":"774"
|
||
},
|
||
{
|
||
"desc":"How do I view the aggregated container logs on the page when the log aggregation function is enabled on YARN?For details, see Viewing Aggregated Container Logs on the Web",
|
||
"product_code":"mrs",
|
||
"title":"How Do I View Aggregated Spark Application Logs?",
|
||
"uri":"mrs_01_2004.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"775"
|
||
},
|
||
{
|
||
"desc":"Communication between ApplicationMaster and ResourceManager remains abnormal for a long time. Why is the driver return code inconsistent with application status on Resour",
|
||
"product_code":"mrs",
|
||
"title":"Why Is the Return Code of Driver Inconsistent with Application State Displayed on ResourceManager WebUI?",
|
||
"uri":"mrs_01_2005.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"776"
|
||
},
|
||
{
|
||
"desc":"Why cannot exit the Driver process after running the yarn application -kill applicationID command to stop the Spark Streaming application?Running the yarn application -ki",
|
||
"product_code":"mrs",
|
||
"title":"Why Cannot Exit the Driver Process?",
|
||
"uri":"mrs_01_2006.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"777"
|
||
},
|
||
{
|
||
"desc":"On a large cluster of 380 nodes, run the ScalaSort test case in the HiBench test that runs the 29T data, and configure Executor as --executor-cores 4. The following abnor",
|
||
"product_code":"mrs",
|
||
"title":"Why Does FetchFailedException Occur When the Network Connection Is Timed out",
|
||
"uri":"mrs_01_2007.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"778"
|
||
},
|
||
{
|
||
"desc":"How to configure the event queue size if the following Driver log information is displayed indicating that the event queue overflows?Common applicationsDropping SparkList",
|
||
"product_code":"mrs",
|
||
"title":"How to Configure Event Queue Size If Event Queue Overflows?",
|
||
"uri":"mrs_01_2008.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"779"
|
||
},
|
||
{
|
||
"desc":"During Spark application execution, if the driver fails to connect to ResourceManager, the following error is reported and it does not exit for a long time. What can I do",
|
||
"product_code":"mrs",
|
||
"title":"What Can I Do If the getApplicationReport Exception Is Recorded in Logs During Spark Application Execution and the Application Does Not Exit for a Long Time?",
|
||
"uri":"mrs_01_2009.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"780"
|
||
},
|
||
{
|
||
"desc":"When Spark executes an application, an error similar to the following is reported and the application ends. What can I do?Symptom: The value of spark.rpc.io.connectionTim",
|
||
"product_code":"mrs",
|
||
"title":"What Can I Do If \"Connection to ip:port has been quiet for xxx ms while there are outstanding requests\" Is Reported When Spark Executes an Application and the Application Ends?",
|
||
"uri":"mrs_01_2010.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"781"
|
||
},
|
||
{
|
||
"desc":"If the NodeManager is shut down with the Executor dynamic allocation enabled, the Executors on the node where the NodeManeger is shut down fail to be removed from the dri",
|
||
"product_code":"mrs",
|
||
"title":"Why Do Executors Fail to be Removed After the NodeManeger Is Shut Down?",
|
||
"uri":"mrs_01_2011.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"782"
|
||
},
|
||
{
|
||
"desc":"ExternalShuffle is enabled for the application that runs Spark. Task loss occurs in the application because the message \"java.lang.NullPointerException: Password cannot b",
|
||
"product_code":"mrs",
|
||
"title":"What Can I Do If the Message \"Password cannot be null if SASL is enabled\" Is Displayed?",
|
||
"uri":"mrs_01_2012.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"783"
|
||
},
|
||
{
|
||
"desc":"When inserting data into the dynamic partition table, a large number of shuffle files are damaged due to the disk disconnection, node error, and the like. In this case, w",
|
||
"product_code":"mrs",
|
||
"title":"What Should I Do If the Message \"Failed to CREATE_FILE\" Is Displayed in the Restarted Tasks When Data Is Inserted Into the Dynamic Partition Table?",
|
||
"uri":"mrs_01_2013.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"784"
|
||
},
|
||
{
|
||
"desc":"When Hash shuffle is used to run a job that consists of 1000000 map tasks x 100000 reduce tasks, run logs report many message failures and Executor heartbeat timeout, lea",
|
||
"product_code":"mrs",
|
||
"title":"Why Tasks Fail When Hash Shuffle Is Used?",
|
||
"uri":"mrs_01_2014.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"785"
|
||
},
|
||
{
|
||
"desc":"When the http(s)://<spark ip>:<spark port> mode is used to access the Spark JobHistory page, if the displayed Spark JobHistory page is not the page of FusionInsight Manag",
|
||
"product_code":"mrs",
|
||
"title":"What Can I Do If the Error Message \"DNS query failed\" Is Displayed When I Access the Aggregated Logs Page of Spark Applications?",
|
||
"uri":"mrs_01_2015.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"786"
|
||
},
|
||
{
|
||
"desc":"When I execute a 100 TB TPC-DS test suite in the JDBCServer mode, the \"Timeout waiting for task\" is displayed. As a result, shuffle fetch fails, the stage keeps retrying,",
|
||
"product_code":"mrs",
|
||
"title":"What Can I Do If Shuffle Fetch Fails Due to the \"Timeout Waiting for Task\" Exception?",
|
||
"uri":"mrs_01_2016.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"787"
|
||
},
|
||
{
|
||
"desc":"When I run Spark tasks with a large data volume, for example, 100 TB TPCDS test suite, why does the Stage retry due to Executor loss sometimes? The message \"Executor 532 ",
|
||
"product_code":"mrs",
|
||
"title":"Why Does the Stage Retry due to the Crash of the Executor?",
|
||
"uri":"mrs_01_2017.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"788"
|
||
},
|
||
{
|
||
"desc":"When more than 50 terabytes of data is shuffled, some executors fail to register shuffle services due to timeout. The shuffle tasks then fail. Why? The error log is as fo",
|
||
"product_code":"mrs",
|
||
"title":"Why Do the Executors Fail to Register Shuffle Services During the Shuffle of a Large Amount of Data?",
|
||
"uri":"mrs_01_2018.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"789"
|
||
},
|
||
{
|
||
"desc":"During the execution of Spark applications, if the YARN External Shuffle service is enabled and there are too many shuffle tasks, the java.lang.OutofMemoryError: Direct b",
|
||
"product_code":"mrs",
|
||
"title":"Why Does the Out of Memory Error Occur in NodeManager During the Execution of Spark Applications",
|
||
"uri":"mrs_01_2019.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"790"
|
||
},
|
||
{
|
||
"desc":"Execution of the sparkbench task (for example, Wordcount) of HiBench6 fails. The bench.log indicates that the Yarn task fails to be executed. The failure information disp",
|
||
"product_code":"mrs",
|
||
"title":"Why Does the Realm Information Fail to Be Obtained When SparkBench is Run on HiBench for the Cluster in Security Mode?",
|
||
"uri":"mrs_01_2021.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"774",
|
||
"code":"791"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Spark SQL and DataFrame",
|
||
"uri":"mrs_01_2022.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"773",
|
||
"code":"792"
|
||
},
|
||
{
|
||
"desc":"Suppose that there is a table src(d1, d2, m) with the following data:The results for statement \"select d1, sum(d1) from src group by d1, d2 with rollup\" are shown as belo",
|
||
"product_code":"mrs",
|
||
"title":"What Do I have to Note When Using Spark SQL ROLLUP and CUBE?",
|
||
"uri":"mrs_01_2023.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"793"
|
||
},
|
||
{
|
||
"desc":"Why temporary tables of the previous database are displayed after the database is switched?Create a temporary DataSource table, for example:create temporary table ds_parq",
|
||
"product_code":"mrs",
|
||
"title":"Why Spark SQL Is Displayed as a Temporary Table in Different Databases?",
|
||
"uri":"mrs_01_2024.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"794"
|
||
},
|
||
{
|
||
"desc":"Is it possible to assign parameter values through Spark commands, in addition to through a user interface or a configuration file?Spark configuration options can be defin",
|
||
"product_code":"mrs",
|
||
"title":"How to Assign a Parameter Value in a Spark Command?",
|
||
"uri":"mrs_01_2025.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"795"
|
||
},
|
||
{
|
||
"desc":"The following error information is displayed when a new user creates a table using SparkSQL:When you create a table using Spark SQL, the interface of Hive is called by th",
|
||
"product_code":"mrs",
|
||
"title":"What Directory Permissions Do I Need to Create a Table Using SparkSQL?",
|
||
"uri":"mrs_01_2026.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"796"
|
||
},
|
||
{
|
||
"desc":"Why do I fail to delete the UDF using another service, for example, delete the UDF created by Hive using Spark SQL.The UDF can be created using any of the following servi",
|
||
"product_code":"mrs",
|
||
"title":"Why Do I Fail to Delete the UDF Using Another Service?",
|
||
"uri":"mrs_01_2027.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"797"
|
||
},
|
||
{
|
||
"desc":"Why cannot I query newly inserted data in a parquet Hive table using SparkSQL? This problem occurs in the following scenarios:For partitioned tables and non-partitioned t",
|
||
"product_code":"mrs",
|
||
"title":"Why Cannot I Query Newly Inserted Data in a Parquet Hive Table Using SparkSQL?",
|
||
"uri":"mrs_01_2028.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"798"
|
||
},
|
||
{
|
||
"desc":"What is cache table used for? Which point should I pay attention to while using cache table?Spark SQL caches tables into memory so that data can be directly read from mem",
|
||
"product_code":"mrs",
|
||
"title":"How to Use Cache Table?",
|
||
"uri":"mrs_01_2029.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"799"
|
||
},
|
||
{
|
||
"desc":"During the repartition operation, the number of blocks (spark.sql.shuffle.partitions) is set to 4,500, and the number of keys used by repartition exceeds 4,000. It is exp",
|
||
"product_code":"mrs",
|
||
"title":"Why Are Some Partitions Empty During Repartition?",
|
||
"uri":"mrs_01_2030.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"800"
|
||
},
|
||
{
|
||
"desc":"When the default configuration is used, 16 terabytes of text data fails to be converted into 4 terabytes of parquet data, and the error information below is displayed. Wh",
|
||
"product_code":"mrs",
|
||
"title":"Why Does 16 Terabytes of Text Data Fails to Be Converted into 4 Terabytes of Parquet Data?",
|
||
"uri":"mrs_01_2031.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"801"
|
||
},
|
||
{
|
||
"desc":"When the table name is set to table, why the error information similar to the following is displayed after the drop table table command or other command is run?The word t",
|
||
"product_code":"mrs",
|
||
"title":"Why the Operation Fails When the Table Name Is TABLE?",
|
||
"uri":"mrs_01_2033.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"802"
|
||
},
|
||
{
|
||
"desc":"When the analyze table statement is executed using spark-sql, the task is suspended and the information below is displayed. Why?When the statement is executed, the SQL st",
|
||
"product_code":"mrs",
|
||
"title":"Why Is a Task Suspended When the ANALYZE TABLE Statement Is Executed and Resources Are Insufficient?",
|
||
"uri":"mrs_01_2034.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"803"
|
||
},
|
||
{
|
||
"desc":"If I access a parquet table on which I do not have permission, why a job is run before \"Missing Privileges\" is displayed?The execution sequence of Spark SQL statement par",
|
||
"product_code":"mrs",
|
||
"title":"If I Access a parquet Table on Which I Do not Have Permission, Why a Job Is Run Before \"Missing Privileges\" Is Displayed?",
|
||
"uri":"mrs_01_2035.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"804"
|
||
},
|
||
{
|
||
"desc":"When do I fail to modify the metadata in the datasource and Spark on HBase table by running the Hive command?The current Spark version does not support modifying the meta",
|
||
"product_code":"mrs",
|
||
"title":"Why Do I Fail to Modify MetaData by Running the Hive Command?",
|
||
"uri":"mrs_01_2036.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"805"
|
||
},
|
||
{
|
||
"desc":"After successfully running Spark tasks with large data volume, for example, 2-TB TPCDS test suite, why is the abnormal stack information \"RejectedExecutionException\" disp",
|
||
"product_code":"mrs",
|
||
"title":"Why Is \"RejectedExecutionException\" Displayed When I Exit Spark SQL?",
|
||
"uri":"mrs_01_2037.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"806"
|
||
},
|
||
{
|
||
"desc":"During a health check, if the concurrent statements exceed the threshold of the thread pool, the health check statements fail to be executed, the health check program tim",
|
||
"product_code":"mrs",
|
||
"title":"What Should I Do If the JDBCServer Process is Mistakenly Killed During a Health Check?",
|
||
"uri":"mrs_01_2038.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"807"
|
||
},
|
||
{
|
||
"desc":"Why no result is found when 2016-6-30 is set in the date field as the filter condition?As shown in the following figure, trx_dte_par in the select count (*) from trxfintr",
|
||
"product_code":"mrs",
|
||
"title":"Why No Result Is found When 2016-6-30 Is Set in the Date Field as the Filter Condition?",
|
||
"uri":"mrs_01_2039.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"808"
|
||
},
|
||
{
|
||
"desc":"Why does the --hivevaroption I specified in the command for starting spark-beeline fail to take effect?In the V100R002C60 version, if I use the --hivevar <VAR_NAME>=<var_",
|
||
"product_code":"mrs",
|
||
"title":"Why Does the \"--hivevar\" Option I Specified in the Command for Starting spark-beeline Fail to Take Effect?",
|
||
"uri":"mrs_01_2040.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"809"
|
||
},
|
||
{
|
||
"desc":"In normal mode, when I create a temporary table or view in spark-beeline, the error message \"Permission denied\" is displayed, indicating that I have no permissions on the",
|
||
"product_code":"mrs",
|
||
"title":"Why Does the \"Permission denied\" Exception Occur When I Create a Temporary Table or View in Spark-beeline?",
|
||
"uri":"mrs_01_2041.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"810"
|
||
},
|
||
{
|
||
"desc":"When I run a complex SQL statement, for example, SQL statements with multiple layers of nesting statements and a single layer statement contains a large number of logic c",
|
||
"product_code":"mrs",
|
||
"title":"Why Is the \"Code of method ... grows beyond 64 KB\" Error Message Displayed When I Run Complex SQL Statements?",
|
||
"uri":"mrs_01_2042.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"811"
|
||
},
|
||
{
|
||
"desc":"When the driver memory is set to 10 GB and the 10 TB TPCDS test suites are continuously run in Beeline/JDBCServer mode, SQL statements fail to be executed due to insuffic",
|
||
"product_code":"mrs",
|
||
"title":"Why Is Memory Insufficient if 10 Terabytes of TPCDS Test Suites Are Consecutively Run in Beeline/JDBCServer Mode?",
|
||
"uri":"mrs_01_2043.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"812"
|
||
},
|
||
{
|
||
"desc":"Scenario 1I set up permanent functions using the add jar statement. After Beeline connects to different JDBCServer or JDBCServer is restarted, I have to run the add jar ",
|
||
"product_code":"mrs",
|
||
"title":"Why Are Some Functions Not Available when Another JDBCServer Is Connected?",
|
||
"uri":"mrs_01_2044.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"813"
|
||
},
|
||
{
|
||
"desc":"When Spark2x accesses the DataSource table created by Spark1.5, a message is displayed indicating that schema information cannot be obtained. As a result, the table canno",
|
||
"product_code":"mrs",
|
||
"title":"Why Does Spark2x Have No Access to DataSource Tables Created by Spark1.5?",
|
||
"uri":"mrs_01_2046.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"814"
|
||
},
|
||
{
|
||
"desc":"Why does \"Failed to create ThriftService instance\" occur when spark beeline fails to run?Beeline logs are as follows:In addition, the \"Timed out waiting for client to con",
|
||
"product_code":"mrs",
|
||
"title":"Why Does Spark-beeline Fail to Run and Error Message \"Failed to create ThriftService instance\" Is Displayed?",
|
||
"uri":"mrs_01_2047.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"792",
|
||
"code":"815"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Spark Streaming",
|
||
"uri":"mrs_01_2048.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"773",
|
||
"code":"816"
|
||
},
|
||
{
|
||
"desc":"After a Spark Streaming task is run and data is input, no processing result is displayed. Open the web page to view the Spark job execution status. The following figure s",
|
||
"product_code":"mrs",
|
||
"title":"What Can I Do If Spark Streaming Tasks Are Blocked?",
|
||
"uri":"mrs_01_2050.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"816",
|
||
"code":"817"
|
||
},
|
||
{
|
||
"desc":"When Spark Streaming tasks are running, the data processing performance does not improve significantly as the number of executors increases. What should I pay attention t",
|
||
"product_code":"mrs",
|
||
"title":"What Should I Pay Attention to When Optimizing Spark Streaming Task Parameters?",
|
||
"uri":"mrs_01_2051.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"816",
|
||
"code":"818"
|
||
},
|
||
{
|
||
"desc":"Change the validity period of the Kerberos ticket and HDFS token to 5 minutes, set dfs.namenode.delegation.token.renew-interval to a value less than 60 seconds, and submi",
|
||
"product_code":"mrs",
|
||
"title":"Why Does the Spark Streaming Application Fail to Be Submitted After the Token Validity Period Expires?",
|
||
"uri":"mrs_01_2052.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"816",
|
||
"code":"819"
|
||
},
|
||
{
|
||
"desc":"Spark Streaming application creates one input stream without output logic. The application fails to restart from checkpoint and an error will be shown like below:When Str",
|
||
"product_code":"mrs",
|
||
"title":"Why does Spark Streaming Application Fail to Restart from Checkpoint When It Creates an Input Stream Without Output Logic?",
|
||
"uri":"mrs_01_2053.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"816",
|
||
"code":"820"
|
||
},
|
||
{
|
||
"desc":"When the Kafka is restarted during the execution of the Spark Streaming application, the application cannot obtain the topic offset from the Kafka. As a result, the job f",
|
||
"product_code":"mrs",
|
||
"title":"Why Is the Input Size Corresponding to Batch Time on the Web UI Set to 0 Records When Kafka Is Restarted During Spark Streaming Running?",
|
||
"uri":"mrs_01_2054.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"816",
|
||
"code":"821"
|
||
},
|
||
{
|
||
"desc":"The job information obtained from the restful interface of an ended Spark application is incorrect: the value of numActiveTasks is negative, as shown in Figure 1:numActiv",
|
||
"product_code":"mrs",
|
||
"title":"Why the Job Information Obtained from the restful Interface of an Ended Spark Application Is Incorrect?",
|
||
"uri":"mrs_01_2055.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"773",
|
||
"code":"822"
|
||
},
|
||
{
|
||
"desc":"In FusionInsight, the Spark application is run in yarn-client mode on the client. The following error occurs during the switch from the Yarn web UI to the application web",
|
||
"product_code":"mrs",
|
||
"title":"Why Cannot I Switch from the Yarn Web UI to the Spark Web UI?",
|
||
"uri":"mrs_01_2056.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"773",
|
||
"code":"823"
|
||
},
|
||
{
|
||
"desc":"An error occurs when I access a Spark application page on the HistoryServer page.Check the HistoryServer logs. The \"FileNotFound\" exception is found. The related logs are",
|
||
"product_code":"mrs",
|
||
"title":"What Can I Do If an Error Occurs when I Access the Application Page Because the Application Cached by HistoryServer Is Recycled?",
|
||
"uri":"mrs_01_2057.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"773",
|
||
"code":"824"
|
||
},
|
||
{
|
||
"desc":"When I run an application with an empty part file in HDFS with the log grouping function enabled, why is not the application displayed on the homepage of JobHistory?On th",
|
||
"product_code":"mrs",
|
||
"title":"Why Is not an Application Displayed When I Run the Application with the Empty Part File?",
|
||
"uri":"mrs_01_2058.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"773",
|
||
"code":"825"
|
||
},
|
||
{
|
||
"desc":"The following code fails to be executed on spark-shell of Spark2x:In Spark2x, the duplicate field name of the join statement is checked. You need to modify the code to en",
|
||
"product_code":"mrs",
|
||
"title":"Why Does Spark2x Fail to Export a Table with the Same Field Name?",
|
||
"uri":"mrs_01_2059.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"773",
|
||
"code":"826"
|
||
},
|
||
{
|
||
"desc":"Why JRE fatal error after running Spark application multiple times?When you run Spark application multiple times, JRE fatal error occurs and this is due to the problem wi",
|
||
"product_code":"mrs",
|
||
"title":"Why JRE fatal error after running Spark application multiple times?",
|
||
"uri":"mrs_01_2060.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"773",
|
||
"code":"827"
|
||
},
|
||
{
|
||
"desc":"Occasionally, Internet Explorer 9, Explorer 10, or Explorer 11 fails to access the native Spark2x UI.Internet Explorer 9, Explorer 10, or Explorer 11 fails to access the ",
|
||
"product_code":"mrs",
|
||
"title":"\"This page can't be displayed\" Is Displayed When Internet Explorer Fails to Access the Native Spark2x UI",
|
||
"uri":"mrs_01_2061.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"773",
|
||
"code":"828"
|
||
},
|
||
{
|
||
"desc":"There are two clusters, cluster 1 and cluster 2. How do I use Spark2x in cluster 1 to access HDFS, Hive, HBase, Solr, and Kafka components in cluster 2?Components in two ",
|
||
"product_code":"mrs",
|
||
"title":"How Does Spark2x Access External Cluster Components?",
|
||
"uri":"mrs_01_2062.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"773",
|
||
"code":"829"
|
||
},
|
||
{
|
||
"desc":"Assume there is a data file path named /test_data_path. User A creates a foreign table named tableA for the directory, and user B creates a foreign table named tableB for",
|
||
"product_code":"mrs",
|
||
"title":"Why Does the Foreign Table Query Fail When Multiple Foreign Tables Are Created in the Same Directory?",
|
||
"uri":"mrs_01_2063.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"773",
|
||
"code":"830"
|
||
},
|
||
{
|
||
"desc":"After a Spark application that contains a job with millions of tasks. After the application creation is complete, if you access the native page of the application in JobH",
|
||
"product_code":"mrs",
|
||
"title":"What Should I Do If the Native Page of an Application of Spark2x JobHistory Fails to Display During Access to the Page",
|
||
"uri":"mrs_01_2064.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"773",
|
||
"code":"831"
|
||
},
|
||
{
|
||
"desc":"In some scenarios, the following exception occurs in the Spark shuffle phase:For JDBC:Log in to FusionInsight Manager, change the value of the JDBCServer parameter spark.",
|
||
"product_code":"mrs",
|
||
"title":"Spark Shuffle Exception Handling",
|
||
"uri":"mrs_01_24176.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"773",
|
||
"code":"832"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using Tez",
|
||
"uri":"mrs_01_2067.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"",
|
||
"code":"833"
|
||
},
|
||
{
|
||
"desc":"On Manager, choose Cluster > Service > Tez > Configuration > All Configurations. Enter a parameter name in the search box.",
|
||
"product_code":"mrs",
|
||
"title":"Common Tez Parameters",
|
||
"uri":"mrs_01_2069.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"833",
|
||
"code":"834"
|
||
},
|
||
{
|
||
"desc":"Tez displays the Tez task execution process on a GUI. You can view the task execution details on the GUI.The TimelineServer instance of the Yarn service has been installe",
|
||
"product_code":"mrs",
|
||
"title":"Accessing TezUI",
|
||
"uri":"mrs_01_2070.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"833",
|
||
"code":"835"
|
||
},
|
||
{
|
||
"desc":"Log path: The default save path of Tez logs is /var/log/Bigdata/tez/role name.TezUI: /var/log/Bigdata/tez/tezui (run logs) and /var/log/Bigdata/audit/tez/tezui (audit log",
|
||
"product_code":"mrs",
|
||
"title":"Log Overview",
|
||
"uri":"mrs_01_2071.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"833",
|
||
"code":"836"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Common Issues",
|
||
"uri":"mrs_01_2072.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"833",
|
||
"code":"837"
|
||
},
|
||
{
|
||
"desc":"After a user logs in to Manager and switches to the Tez web UI, the submitted Tez tasks are not displayed.The Tez task data displayed on the Tez WebUI requires the suppor",
|
||
"product_code":"mrs",
|
||
"title":"TezUI Cannot Display Tez Task Execution Details",
|
||
"uri":"mrs_01_2073.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"837",
|
||
"code":"838"
|
||
},
|
||
{
|
||
"desc":"When a user logs in to Manager and switches to the Tez web UI, error 404 or 503 is displayed.The Tez web UI depends on the TimelineServer instance of Yarn. Therefore, Tim",
|
||
"product_code":"mrs",
|
||
"title":"Error Occurs When a User Switches to the Tez Web UI",
|
||
"uri":"mrs_01_2074.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"837",
|
||
"code":"839"
|
||
},
|
||
{
|
||
"desc":"A user logs in to the Tez web UI and clicks Logs, but the Yarn log page fails to be displayed and data cannot be loaded.Currently, the hostname is used for the access to ",
|
||
"product_code":"mrs",
|
||
"title":"Yarn Logs Cannot Be Viewed on the TezUI Page",
|
||
"uri":"mrs_01_2075.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"837",
|
||
"code":"840"
|
||
},
|
||
{
|
||
"desc":"A user logs in to Manager and switches to the Tez web UI page, but no data for the submitted task is displayed on the Hive Queries page.To display Hive Queries task data ",
|
||
"product_code":"mrs",
|
||
"title":"Table Data Is Empty on the TezUI HiveQueries Page",
|
||
"uri":"mrs_01_2076.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"837",
|
||
"code":"841"
|
||
},
|
||
{
|
||
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
|
||
"product_code":"mrs",
|
||
"title":"Using Yarn",
|
||
"uri":"mrs_01_0851.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"",
|
||
"code":"842"
|
||
},
|
||
{
|
||
"desc":"The Yarn service provides one queue (default) for users. Users allocate system resources to each queue. After the configuration is complete, you can click Refresh Queue o",
|
||
"product_code":"mrs",
|
||
"title":"Common Yarn Parameters",
|
||
"uri":"mrs_01_0852.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"842",
|
||
"code":"843"
|
||
},
|
||
{
|
||
"desc":"This section describes how to create and configure a Yarn role. The Yarn role can be assigned with Yarn administrator permission and manage Yarn queue resources.If the cu",
|
||
"product_code":"mrs",
|
||
"title":"Creating Yarn Roles",
|
||
"uri":"mrs_01_0853.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"842",
|
||
"code":"844"
|
||
},
|
||
{
|
||
"desc":"This section guides users to use a Yarn client in an O&M or service scenario.The client has been installed.For example, the installation directory is /opt/hadoopclient. T",
|
||
"product_code":"mrs",
|
||
"title":"Using the Yarn Client",
|
||
"uri":"mrs_01_0854.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"842",
|
||
"code":"845"
|
||
},
|
||
{
|
||
"desc":"If the hardware resources (such as the number of CPU cores and memory size) of the nodes for deploying NodeManagers are different but the NodeManager available hardware r",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Resources for a NodeManager Role Instance",
|
||
"uri":"mrs_01_0855.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"842",
|
||
"code":"846"
|
||
},
|
||
{
|
||
"desc":"If the storage directories defined by the Yarn NodeManager are incorrect or the Yarn storage plan changes, the system administrator needs to modify the NodeManager storag",
|
||
"product_code":"mrs",
|
||
"title":"Changing NodeManager Storage Directories",
|
||
"uri":"mrs_01_0856.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"842",
|
||
"code":"847"
|
||
},
|
||
{
|
||
"desc":"In the multi-tenant scenario in security mode, a cluster can be used by multiple users, and tasks of multiple users can be submitted and executed. Users are invisible to ",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Strict Permission Control for Yarn",
|
||
"uri":"mrs_01_0857.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"842",
|
||
"code":"848"
|
||
},
|
||
{
|
||
"desc":"Yarn provides the container log aggregation function to collect logs generated by containers on each node to HDFS to release local disk space. You can collect logs in eit",
|
||
"product_code":"mrs",
|
||
"title":"Configuring Container Log Aggregation",
|
||
"uri":"mrs_01_0858.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"842",
|
||
"code":"849"
|
||
},
|
||
{
|
||
"desc":"CGroups is a Linux kernel feature. In YARN this feature allows containers to be limited in their resource usage (example, CPU usage). Without CGroups, it is hard to limit",
|
||
"product_code":"mrs",
|
||
"title":"Using CGroups with YARN",
|
||
"uri":"mrs_01_0859.html",
|
||
"doc_type":"cmpntguide-lts",
|
||
"p_code":"842",
|
||
"code":"850"
|
||
},
|
||
{
"desc":"When resources are insufficient or ApplicationMaster fails to start, a client probably encounters running errors.Go to the All Configurations page of Yarn and enter a par",
"product_code":"mrs",
"title":"Configuring the Number of ApplicationMaster Retries",
"uri":"mrs_01_0860.html",
"doc_type":"cmpntguide-lts",
"p_code":"842",
"code":"851"
},
{
"desc":"During the process of starting the configuration, when the ApplicationMaster creates a container, the allocated memory is automatically adjusted according to the total nu",
"product_code":"mrs",
"title":"Configure the ApplicationMaster to Automatically Adjust the Allocated Memory",
"uri":"mrs_01_0861.html",
"doc_type":"cmpntguide-lts",
"p_code":"842",
"code":"852"
},
{
"desc":"The value of the yarn.http.policy parameter must be consistent on both the server and clients. Web UIs on clients will be garbled if an inconsistency exists, for example,",
"product_code":"mrs",
"title":"Configuring the Access Channel Protocol",
"uri":"mrs_01_0862.html",
"doc_type":"cmpntguide-lts",
"p_code":"842",
"code":"853"
},
{
"desc":"If memory usage of the submitted application cannot be estimated, you can modify the configuration on the server to determine whether to check the memory usage.If the mem",
"product_code":"mrs",
"title":"Configuring Memory Usage Detection",
"uri":"mrs_01_0863.html",
"doc_type":"cmpntguide-lts",
"p_code":"842",
"code":"854"
},
{
"desc":"If the custom scheduler is set in ResourceManager, you can set the corresponding web page and other Web applications for the custom scheduler.Go to the All Configurations",
"product_code":"mrs",
"title":"Configuring the Additional Scheduler WebUI",
"uri":"mrs_01_0864.html",
"doc_type":"cmpntguide-lts",
"p_code":"842",
"code":"855"
},
{
"desc":"The Yarn Restart feature includes ResourceManager Restart and NodeManager Restart.When ResourceManager Restart is enabled, the new active ResourceManager node loads the i",
"product_code":"mrs",
"title":"Configuring Yarn Restart",
"uri":"mrs_01_0865.html",
"doc_type":"cmpntguide-lts",
"p_code":"842",
"code":"856"
},
{
"desc":"In YARN, ApplicationMasters run on NodeManagers just like every other container (ignoring unmanaged ApplicationMasters in this context). ApplicationMasters may break down",
"product_code":"mrs",
"title":"Configuring ApplicationMaster Work Preserving",
"uri":"mrs_01_0866.html",
"doc_type":"cmpntguide-lts",
"p_code":"842",
"code":"857"
},
{
"desc":"The default log level of localized container is INFO. You can change the log level by configuring yarn.nodemanager.container-localizer.java.opts.On Manager, choose Cluste",
"product_code":"mrs",
"title":"Configuring the Localized Log Levels",
"uri":"mrs_01_0867.html",
"doc_type":"cmpntguide-lts",
"p_code":"842",
"code":"858"
},
{
"desc":"Currently, YARN allows the user that starts the NodeManager to run the task submitted by all other users, or the users to run the task submitted by themselves.On Manager,",
"product_code":"mrs",
"title":"Configuring Users That Run Tasks",
"uri":"mrs_01_0868.html",
"doc_type":"cmpntguide-lts",
"p_code":"842",
"code":"859"
},
{
"desc":"The default paths for saving Yarn logs are as follows:ResourceManager: /var/log/Bigdata/yarn/rm (run logs) and /var/log/Bigdata/audit/yarn/rm (audit logs)NodeManager: /va",
"product_code":"mrs",
"title":"Yarn Log Overview",
"uri":"mrs_01_0870.html",
"doc_type":"cmpntguide-lts",
"p_code":"842",
"code":"860"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Yarn Performance Tuning",
"uri":"mrs_01_0871.html",
"doc_type":"cmpntguide-lts",
"p_code":"842",
"code":"861"
},
{
"desc":"The capacity scheduler of ResourceManager implements job preemption to simplify job running in queues and improve resource utilization. The process is as follows:Assume t",
"product_code":"mrs",
"title":"Preempting a Task",
"uri":"mrs_01_0872.html",
"doc_type":"cmpntguide-lts",
"p_code":"861",
"code":"862"
},
{
"desc":"The resource contention scenarios of a cluster are as follows:Submit two jobs (Job 1 and Job 2) with lower priorities.Some tasks of running Job 1 and Job 2 are in the run",
"product_code":"mrs",
"title":"Setting the Task Priority",
"uri":"mrs_01_0873.html",
"doc_type":"cmpntguide-lts",
"p_code":"861",
"code":"863"
},
{
"desc":"After the scheduler of a big data cluster is properly configured, you can adjust the available memory, CPU resources, and local disk of each node to optimize the performa",
"product_code":"mrs",
"title":"Optimizing Node Configuration",
"uri":"mrs_01_0874.html",
"doc_type":"cmpntguide-lts",
"p_code":"861",
"code":"864"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Issues About Yarn",
"uri":"mrs_01_2077.html",
"doc_type":"cmpntguide-lts",
"p_code":"842",
"code":"865"
},
{
"desc":"Why mounted directory for Container is not cleared after the completion of the job while using CGroups?The mounted path for the Container should be cleared even if job is",
"product_code":"mrs",
"title":"Why Mounted Directory for Container is Not Cleared After the Completion of the Job While Using CGroups?",
"uri":"mrs_01_2078.html",
"doc_type":"cmpntguide-lts",
"p_code":"865",
"code":"866"
},
{
"desc":"Why is the HDFS_DELEGATION_TOKEN expired exception reported when a job fails in security mode?HDFS_DELEGATION_TOKEN expires because the token is not updated or it is acce",
"product_code":"mrs",
"title":"Why the Job Fails with HDFS_DELEGATION_TOKEN Expired Exception?",
"uri":"mrs_01_2079.html",
"doc_type":"cmpntguide-lts",
"p_code":"865",
"code":"867"
},
{
"desc":"If Yarn is restarted in either of the following scenarios, local logs will not be deleted as scheduled and will be retained permanently:When Yarn is restarted during task",
"product_code":"mrs",
"title":"Why Are Local Logs Not Deleted After YARN Is Restarted?",
"uri":"mrs_01_2080.html",
"doc_type":"cmpntguide-lts",
"p_code":"865",
"code":"868"
},
{
"desc":"Why the task does not fail even though AppAttempts restarts due to failure for more than two times?During the task execution process, if the ContainerExitStatus returns v",
"product_code":"mrs",
"title":"Why the Task Does Not Fail Even Though AppAttempts Restarts for More Than Two Times?",
"uri":"mrs_01_2081.html",
"doc_type":"cmpntguide-lts",
"p_code":"865",
"code":"869"
},
{
"desc":"After I moved an application from one queue to another, why is it moved back to the original queue after ResourceManager restarts?This problem is caused by the constraint",
"product_code":"mrs",
"title":"Why Is an Application Moved Back to the Original Queue After ResourceManager Restarts?",
"uri":"mrs_01_2082.html",
"doc_type":"cmpntguide-lts",
"p_code":"865",
"code":"870"
},
{
"desc":"Why does Yarn not release the blacklist even all nodes are added to the blacklist?In Yarn, when the number of application nodes added to the blacklist by ApplicationMaste",
"product_code":"mrs",
"title":"Why Does Yarn Not Release the Blacklist Even All Nodes Are Added to the Blacklist?",
"uri":"mrs_01_2083.html",
"doc_type":"cmpntguide-lts",
"p_code":"865",
"code":"871"
},
{
"desc":"The switchover of ResourceManager occurs continuously when multiple, for example 2,000, tasks are running concurrently, causing the Yarn service unavailable.The cause is ",
"product_code":"mrs",
"title":"Why Does the Switchover of ResourceManager Occur Continuously?",
"uri":"mrs_01_2084.html",
"doc_type":"cmpntguide-lts",
"p_code":"865",
"code":"872"
},
{
"desc":"Why does a new application fail if a NodeManager has been in unhealthy status for 10 minutes?When nodeSelectPolicy is set to SEQUENCE and the first NodeManager connected ",
"product_code":"mrs",
"title":"Why Does a New Application Fail If a NodeManager Has Been in Unhealthy Status for 10 Minutes?",
"uri":"mrs_01_2085.html",
"doc_type":"cmpntguide-lts",
"p_code":"865",
"code":"873"
},
{
"desc":"If a user belongs to multiple user groups with different default queue configurations, which queue will be selected as the default queue when an application is submitted?",
"product_code":"mrs",
"title":"What Is the Queue Replacement Policy?",
"uri":"mrs_01_2086.html",
"doc_type":"cmpntguide-lts",
"p_code":"865",
"code":"874"
},
{
"desc":"Why does an error occur when I query the applicationID of a completed or non-existing application using the RESTful APIs?The Superior scheduler only stores the applicatio",
"product_code":"mrs",
"title":"Why Does an Error Occur When I Query the ApplicationID of a Completed or Non-existing Application Using the RESTful APIs?",
"uri":"mrs_01_2087.html",
"doc_type":"cmpntguide-lts",
"p_code":"865",
"code":"875"
},
{
"desc":"In Superior scheduling mode, if a single NodeManager is faulty, why may the MapReduce tasks fail?In normal cases, when the attempt of a single task of an application fail",
"product_code":"mrs",
"title":"Why May A Single NodeManager Fault Cause MapReduce Task Failures in the Superior Scheduling Mode?",
"uri":"mrs_01_2088.html",
"doc_type":"cmpntguide-lts",
"p_code":"865",
"code":"876"
},
{
"desc":"When a queue is deleted when there are applications running in it, these applications are moved to the \"lost_and_found\" queue. When these applications are moved back to a",
"product_code":"mrs",
"title":"Why Are Applications Suspended After They Are Moved From Lost_and_Found Queue to Another Queue?",
"uri":"mrs_01_2089.html",
"doc_type":"cmpntguide-lts",
"p_code":"865",
"code":"877"
},
{
"desc":"How do I limit the size of application diagnostic messages stored in the ZKstore?In some cases, it has been observed that diagnostic messages may grow infinitely. Because",
"product_code":"mrs",
"title":"How Do I Limit the Size of Application Diagnostic Messages Stored in the ZKstore?",
"uri":"mrs_01_2090.html",
"doc_type":"cmpntguide-lts",
"p_code":"865",
"code":"878"
},
{
"desc":"Why does a MapReduce job fail to run when a non-ViewFS file system is configured as ViewFS?When a non-ViewFS file system is configured as a ViewFS using cluster, the user",
"product_code":"mrs",
"title":"Why Does a MapReduce Job Fail to Run When a Non-ViewFS File System Is Configured as ViewFS?",
"uri":"mrs_01_2091.html",
"doc_type":"cmpntguide-lts",
"p_code":"865",
"code":"879"
},
{
"desc":"After the Native Task feature is enabled, Reduce tasks fail to run in some OSs.When -Dmapreduce.job.map.output.collector.class=org.apache.hadoop.mapred.nativetask.NativeM",
"product_code":"mrs",
"title":"Why Do Reduce Tasks Fail to Run in Some OSs After the Native Task Feature is Enabled?",
"uri":"mrs_01_24051.html",
"doc_type":"cmpntguide-lts",
"p_code":"865",
"code":"880"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using ZooKeeper",
"uri":"mrs_01_2092.html",
"doc_type":"cmpntguide-lts",
"p_code":"",
"code":"881"
},
{
"desc":"ZooKeeper is an open-source, highly reliable, and distributed consistency coordination service. ZooKeeper is designed to solve the problem that data consistency cannot be",
"product_code":"mrs",
"title":"Using ZooKeeper from Scratch",
"uri":"mrs_01_2093.html",
"doc_type":"cmpntguide-lts",
"p_code":"881",
"code":"882"
},
{
"desc":"Navigation path for setting parameters:Go to the All Configurations page of ZooKeeper by referring to Modifying Cluster Service Configuration Parameters. Enter a paramete",
"product_code":"mrs",
"title":"Common ZooKeeper Parameters",
"uri":"mrs_01_2094.html",
"doc_type":"cmpntguide-lts",
"p_code":"881",
"code":"883"
},
{
"desc":"Use a ZooKeeper client in an O&M scenario or service scenario.You have installed the client. For example, the installation directory is /opt/client. The client directory ",
"product_code":"mrs",
"title":"Using a ZooKeeper Client",
"uri":"mrs_01_2095.html",
"doc_type":"cmpntguide-lts",
"p_code":"881",
"code":"884"
},
{
"desc":"Configure znode permission of ZooKeeper.ZooKeeper uses an access control list (ACL) to implement znode access control. The ZooKeeper client specifies a znode ACL, and the",
"product_code":"mrs",
"title":"Configuring the ZooKeeper Permissions",
"uri":"mrs_01_2097.html",
"doc_type":"cmpntguide-lts",
"p_code":"881",
"code":"885"
},
{
"desc":"When the defined storage directory of ZooKeeper is incorrect, or when the storage plan of ZooKeeper is changed, log in to FusionInsight Manager to change the storage dire",
"product_code":"mrs",
"title":"Changing the ZooKeeper Storage Directory",
"uri":"mrs_01_2096.html",
"doc_type":"cmpntguide-lts",
"p_code":"881",
"code":"886"
},
{
"desc":"ZooKeeper has maxClientCnxn configuration at the server side, and this configuration will verify the connections from each client IP address. But many clients can create ",
"product_code":"mrs",
"title":"Configuring the ZooKeeper Connection",
"uri":"mrs_01_2098.html",
"doc_type":"cmpntguide-lts",
"p_code":"881",
"code":"887"
},
{
"desc":"The ZooKeeper client uses the FIFO queue to send a request to the server and waits for a response from the server. The client maintains the FIFO queue until it acknowledg",
"product_code":"mrs",
"title":"Configuring ZooKeeper Response Timeout Interval",
"uri":"mrs_01_2099.html",
"doc_type":"cmpntguide-lts",
"p_code":"881",
"code":"888"
},
{
"desc":"To prevent multiple IP nodes, bind the current ZooKeeper client to any available IP address. The data flow layer, management layer, and other network layers in the produc",
"product_code":"mrs",
"title":"Binding the Client to an IP Address",
"uri":"mrs_01_2100.html",
"doc_type":"cmpntguide-lts",
"p_code":"881",
"code":"889"
},
{
"desc":"When the ZooKeeper client is started, it is bound to a random port. In most cases, you want to bind the ZooKeeper client to a specific port. For example, for the client c",
"product_code":"mrs",
"title":"Configuring the Port Range Bound to the Client",
"uri":"mrs_01_2101.html",
"doc_type":"cmpntguide-lts",
"p_code":"881",
"code":"890"
},
{
"desc":"Currently, ZooKeeper client properties can be configured only through Java system properties. Therefore, all clients in the same JVM have the same configuration. In some ",
"product_code":"mrs",
"title":"Performing Special Configuration on ZooKeeper Clients in the Same JVM",
"uri":"mrs_01_2102.html",
"doc_type":"cmpntguide-lts",
"p_code":"881",
"code":"891"
},
{
"desc":"Set a quota for Znodes in ZooKeeper of a security cluster in O&M scenarios or service scenarios to restrict the quantity and byte space of Znodes and subnodes.Two modes a",
"product_code":"mrs",
"title":"Configuring a Quota for a Znode",
"uri":"mrs_01_2104.html",
"doc_type":"cmpntguide-lts",
"p_code":"881",
"code":"892"
},
{
"desc":"Log path: /var/log/Bigdata/zookeeper/quorumpeer (Run log), /var/log/Bigdata/audit/zookeeper/quorumpeer (Audit log)Log archive rule: The automatic ZooKeeper log compressio",
"product_code":"mrs",
"title":"ZooKeeper Log Overview",
"uri":"mrs_01_2106.html",
"doc_type":"cmpntguide-lts",
"p_code":"881",
"code":"893"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Common Issues About ZooKeeper",
"uri":"mrs_01_2107.html",
"doc_type":"cmpntguide-lts",
"p_code":"881",
"code":"894"
},
{
"desc":"After a large number of znodes are created, ZooKeeper servers in the ZooKeeper cluster become faulty and cannot be automatically recovered or restarted.Logs of followers:",
"product_code":"mrs",
"title":"Why Do ZooKeeper Servers Fail to Start After Many znodes Are Created?",
"uri":"mrs_01_2108.html",
"doc_type":"cmpntguide-lts",
"p_code":"894",
"code":"895"
},
{
"desc":"After a large number of znodes are created in a parent directory, the ZooKeeper client will fail to fetch all child nodes of this parent directory in a single request.Log",
"product_code":"mrs",
"title":"Why Does the ZooKeeper Server Display the java.io.IOException: Len Error Log?",
"uri":"mrs_01_2109.html",
"doc_type":"cmpntguide-lts",
"p_code":"894",
"code":"896"
},
{
"desc":"Why four letter commands do not work with linux netcat command when secure netty configurations are enabled at Zookeeper server?For example,echo stat |netcat host portLin",
"product_code":"mrs",
"title":"Why Four Letter Commands Don't Work With Linux netcat Command When Secure Netty Configurations Are Enabled at Zookeeper Server?",
"uri":"mrs_01_2110.html",
"doc_type":"cmpntguide-lts",
"p_code":"894",
"code":"897"
},
{
"desc":"How to check whether the role of a ZooKeeper instance is a leader or follower.Log in to Manager and choose Cluster > Name of the desired cluster > Service > ZooKeeper > I",
"product_code":"mrs",
"title":"How Do I Check Which ZooKeeper Instance Is a Leader?",
"uri":"mrs_01_2111.html",
"doc_type":"cmpntguide-lts",
"p_code":"894",
"code":"898"
},
{
"desc":"When the IBM JDK is used, the client fails to connect to ZooKeeper.The possible cause is that the jaas.conf file format of the IBM JDK is different from that of the commo",
"product_code":"mrs",
"title":"Why Cannot the Client Connect to ZooKeeper using the IBM JDK?",
"uri":"mrs_01_2112.html",
"doc_type":"cmpntguide-lts",
"p_code":"894",
"code":"899"
},
{
"desc":"The ZooKeeper client fails to refresh a TGT and therefore ZooKeeper cannot be accessed. The error message is as follows:ZooKeeper uses the system command kinit – R to ref",
"product_code":"mrs",
"title":"What Should I Do When the ZooKeeper Client Fails to Refresh a TGT?",
"uri":"mrs_01_2113.html",
"doc_type":"cmpntguide-lts",
"p_code":"894",
"code":"900"
},
{
"desc":"When the client connects to a non-leader instance, run the deleteall command to delete a large number of znodes, the error message \"Node does not exist\" is displayed, but",
"product_code":"mrs",
"title":"Why Is Message \"Node does not exist\" Displayed when A Large Number of Znodes Are Deleted Using the deleteallCommand",
"uri":"mrs_01_2114.html",
"doc_type":"cmpntguide-lts",
"p_code":"894",
"code":"901"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Appendix",
"uri":"mrs_01_2122.html",
"doc_type":"cmpntguide-lts",
"p_code":"",
"code":"902"
},
{
"desc":"Modify the configuration parameters of each service on FusionInsight Manager.The Basic Configuration tab page is displayed by default. To modify more parameters, click th",
"product_code":"mrs",
"title":"Modifying Cluster Service Configuration Parameters",
"uri":"mrs_01_1293.html",
"doc_type":"cmpntguide-lts",
"p_code":"902",
"code":"903"
},
{
"desc":"FusionInsight Manager is used to monitor, configure, and manage clusters. After the cluster is installed, you can use the account to log in to FusionInsight Manager.Curre",
"product_code":"mrs",
"title":"Accessing FusionInsight Manager",
"uri":"mrs_01_2124.html",
"doc_type":"cmpntguide-lts",
"p_code":"902",
"code":"904"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Using an MRS Client",
"uri":"mrs_01_0787.html",
"doc_type":"cmpntguide-lts",
"p_code":"902",
"code":"905"
},
{
"desc":"Before using the client, you need to install the client. For example, the installation directory is /opt/hadoopclient.cd /opt/hadoopclientsource bigdata_envkinit MRS clus",
"product_code":"mrs",
"title":"Using an MRS Client on Nodes Inside a MRS Cluster",
"uri":"mrs_01_0788.html",
"doc_type":"cmpntguide-lts",
"p_code":"905",
"code":"906"
},
{
"desc":"After a client is installed, you can use the client on a node outside an MRS cluster.A Linux ECS has been prepared. For details about the OS and its version of the ECS, s",
"product_code":"mrs",
"title":"Using an MRS Client on Nodes Outside a MRS Cluster",
"uri":"mrs_01_0800.html",
"doc_type":"cmpntguide-lts",
"p_code":"905",
"code":"907"
},
{
"desc":"HUAWEI CLOUD Help Center presents technical documents to help you quickly get started with HUAWEI CLOUD services. The technical documents include Service Overview, Price Details, Purchase Guide, User Guide, API Reference, Best Practices, FAQs, and Videos.",
"product_code":"mrs",
"title":"Change History",
"uri":"en-us_topic_0000001298722056.html",
"doc_type":"cmpntguide-lts",
"p_code":"",
"code":"908"
}
]