MRS provides enterprise-level big data clusters on the cloud. It contains the HDFS, Hive, and Spark components and is suitable for analyzing massive amounts of enterprise data.
Hive supports SQL, which helps users perform extraction, transformation, and loading (ETL) operations on large-scale data sets. However, queries on large-scale data sets can take a long time. In many scenarios, you can create Hive partitions to reduce the total amount of data scanned by each query, which significantly improves query performance.
Hive partitions are implemented using the HDFS subdirectory function: each subdirectory is named after a partition column and its value, so a table with many partitions has many HDFS subdirectories. Loading external data into each partition of a Hive table is tedious without a tool. With CDM, you can easily load data from external data sources (relational databases, object storage services, and file systems) into Hive partition tables.
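To illustrate the manual alternative, the following HiveQL sketch shows how each partition maps to an HDFS subdirectory and how loading external files by hand requires one statement per partition. The table name, file path, and partition value here are hypothetical and used only for illustration.

```sql
-- Minimal sketch (hypothetical table, path, and value): each Hive partition
-- maps to an HDFS subdirectory named after the partition column and value,
-- for example .../sales/dt=20180511/.
CREATE TABLE sales (id INT, amount DOUBLE) PARTITIONED BY (dt INT);

-- Loading external files by hand requires one statement per partition.
LOAD DATA INPATH '/tmp/sales_20180511.csv'
  INTO TABLE sales PARTITION (dt = 20180511);
```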
This section describes how to migrate data from the MySQL database to the MRS Hive partition table.
Suppose that there is a trip_data table in the MySQL database. The table stores cycling records such as the start time, end time, start station, end station, and bike ID of each ride. For details about the fields in the trip_data table, see Figure 1.
The following describes how to use CDM to import the trip_data table in the MySQL database to the MRS Hive partition table. The procedure is as follows:
create table trip_data(
  TripID int,
  Duration int,
  StartDate timestamp,
  StartStation varchar(64),
  StartTerminal int,
  EndDate timestamp,
  EndStation varchar(64),
  EndTerminal int,
  Bike int,
  SubscriberType varchar(32),
  ZipCodev varchar(10)
) partitioned by (y int, ym int, ymd int);
The trip_data partition table has three partition fields: the year (y), the year and month (ym), and the year, month, and day (ymd) of the start time of a ride. For example, if a ride starts at 9:40 on May 11, 2018, the record is saved in the trip_data/2018/201805/20180511 partition. When the records in the trip_data table are aggregated, only part of the data needs to be scanned, which greatly improves performance.
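As a sketch of the partition pruning this enables (the query and date below are only an example), a summary restricted to one day's rides needs to read only a single subdirectory:

```sql
-- Example only: because ymd is a partition column, Hive prunes the scan to the
-- y=2018/ym=201805/ymd=20180511 subdirectory instead of reading the whole table.
SELECT COUNT(*) AS rides, AVG(Duration) AS avg_duration
FROM trip_data
WHERE ymd = 20180511;
```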
The key configurations are as follows:
If SSL encryption is configured for the access channel of a local data source, CDM cannot connect to the data source using the EIP.
Click Show Advanced Attributes to view the optional parameters. For details, see Link to a Common Relational Database. In this example, retain the default values of the optional parameters and configure the mandatory parameters according to Table 1.
| Parameter | Description | Example Value |
|---|---|---|
| Name | Unique link name | mysqllink |
| Database Server | IP address or domain name of the MySQL database server | 192.168.1.110 |
| Port | MySQL database port | 3306 |
| Database Name | Name of the MySQL database | sqoop |
| Username | User who has the read, write, and delete permissions on the MySQL database | admin |
| Password | Password of the user | - |
| Use Agent | Whether to extract data from the data source through an agent | Yes |
| Agent | Click Select to select the agent created in Connecting to an Agent. | - |
If an error occurs when you save the link, the security settings of the MySQL database may be incorrect. In this case, you need to allow the EIP of the CDM cluster to access the MySQL database.
Table 2 describes the link parameters. Configure them based on the actual situation.
| Parameter | Description | Example Value |
|---|---|---|
| Name | Link name, which should be defined based on the data source type so that it is easier to remember what the link is for | hivelink |
| Manager IP | Floating IP address of MRS Manager. Click Select next to the Manager IP text box to select an MRS cluster. CDM automatically fills in the authentication information. | 127.0.0.1 |
| Authentication Method | Authentication method used for accessing MRS | SIMPLE |
| HIVE Version | Set this to the Hive version on the server. | HIVE_3_X |
| Username | If Authentication Method is set to KERBEROS, you must provide the username and password used for logging in to MRS Manager. If you need to create a snapshot when exporting a directory from HDFS, the user configured here must have the administrator permission on HDFS. NOTE: To create a data connection for an MRS security cluster, do not use user admin. The admin user is the default management page user and cannot be used as the authentication user of the security cluster. You can create an MRS user and set Username and Password to the username and password of the created user when creating the MRS data connection. | cdm |
| Password | Password used for logging in to MRS Manager | - |
| OBS storage support | The server must support OBS storage. When creating a Hive table, you can store the table in OBS. | No |
| Run Mode | This parameter is used only when the Hive version is HIVE_3_X. | EMBEDDED |
| Use Cluster Config | You can use the cluster configuration to simplify parameter settings for the Hadoop connection. | No |
| Cluster Config Name | This parameter is valid only when Use Cluster Config is set to Yes. Select a cluster configuration that has been created. For details, see Managing Cluster Configurations. | hive_01 |
Set Clear Data Before Import to Yes, so that the data in the Hive table will be cleared before data import.
Map the fields of the MySQL table to those of the Hive table. The Hive table has three fields, y, ym, and ymd, that the MySQL table does not have; they are the Hive partition fields. Because these fields have no direct counterparts in the source table, you need to configure expressions that derive them from the StartDate field of the source table.
The expressions for the y, ym, and ymd fields are as follows:
DateUtils.format(DateUtils.parseDate(row[2],"yyyy-MM-dd HH:mm:ss.SSS"),"yyyy")
DateUtils.format(DateUtils.parseDate(row[2],"yyyy-MM-dd HH:mm:ss.SSS"),"yyyyMM")
DateUtils.format(DateUtils.parseDate(row[2],"yyyy-MM-dd HH:mm:ss.SSS"),"yyyyMMdd")
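For example, assuming the StartDate value carried in row[2] is 2018-05-11 09:40:00.000 (the sample ride mentioned earlier), the three expressions evaluate to 2018, 201805, and 20180511 respectively, so the record is written to the y=2018/ym=201805/ymd=20180511 partition.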
The expressions in CDM support field conversion of common character strings, dates, and numeric values.
On the Historical Record page, click Log to view the job logs.
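If you also want to verify the result in Hive after the job succeeds (an optional check, not part of the procedure above), the partitions and loaded data can be inspected with standard HiveQL:

```sql
-- Optional check (example only): list the partitions created by the job and
-- count the rows loaded into one of them.
SHOW PARTITIONS trip_data;
SELECT COUNT(*) FROM trip_data WHERE ymd = 20180511;
```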