Data backup scenarios are classified as follows based on the regions of the source and destination clusters and the network connectivity between them:
If the source cluster and destination cluster are in the same region, set up a network transmission channel between them and use the DistCp tool to copy the HDFS, HBase, and Hive data files, as well as the Hive metadata backup files, from the source cluster to the destination cluster by running the following command:
$HADOOP_HOME/bin/hadoop distcp -p <src> <dst>
In the preceding command, <src> specifies the source path to be copied, <dst> specifies the destination path, and -p preserves file attributes such as the permission, owner, and group information during the copy.
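As a sketch, the same-region copy can be wrapped in a small script that assembles the command before running it. The namenode addresses and paths below are hypothetical, and the script only prints the command (a dry run) so it can be reviewed before execution.

```shell
# Hypothetical wrapper for a same-region DistCp copy. The source and
# destination namenode addresses and paths are illustrative only.
SRC=hdfs://source-namenode:8020/user/hive/warehouse
DST=hdfs://dest-namenode:8020/user/hive/warehouse

# -p preserves permission, owner, and group information during the copy.
CMD="\$HADOOP_HOME/bin/hadoop distcp -p $SRC $DST"
echo "$CMD"   # dry run: print the command; remove the echo to execute it
```

Removing the echo hands the assembled command to the shell for execution once the paths have been verified.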
If the source cluster and destination cluster are in different regions, use the DistCp tool to copy the source cluster data to OBS, and then use the OBS cross-region replication function to copy the data to OBS in the region where the destination cluster resides. (For details, see Object Storage Service > Console Operation Guide > Cross-Region Replication.) Note that DistCp cannot set the permission, owner, and group information for files on OBS. Therefore, export and copy the HDFS metadata while exporting the data to prevent the loss of HDFS file property information.
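The first hop of the cross-region path can be sketched the same way: DistCp copies the HDFS data into an OBS bucket, and cross-region replication then moves it to the destination region. The bucket name, paths, and the obs:// scheme prefix below are assumptions; the actual scheme depends on the OBS connector configured in the cluster.

```shell
# Hypothetical first hop of a cross-region migration: HDFS -> OBS.
# Bucket name, paths, and the obs:// scheme are assumptions.
SRC=hdfs://source-namenode:8020/user/hive/warehouse
DST=obs://backup-bucket/warehouse-backup

# No -p here: owner/group information cannot be set on OBS files,
# which is why the HDFS metadata must be exported separately.
CMD="\$HADOOP_HOME/bin/hadoop distcp $SRC $DST"
echo "$CMD"   # dry run; remove the echo to execute it
```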
You can migrate data from an offline cluster to the cloud as follows:
Create a Direct Connect connection between the source cluster and destination cluster, enable network connectivity between the offline cluster egress gateway and the online VPC, and then run DistCp to copy the data by referring to the method provided in Same Region.
The HDFS metadata to be exported includes file and folder permissions and owner/group information. You can run the following command on the HDFS client to export the metadata:
$HADOOP_HOME/bin/hdfs dfs -ls -R <migrating_path> > /tmp/hdfs_meta.txt
In the preceding command, <migrating_path> specifies the HDFS path whose data is to be migrated, and /tmp/hdfs_meta.txt specifies the local file that stores the exported metadata.
If the source cluster can communicate with the destination cluster and you run the hadoop distcp command as a super administrator to copy data, you can add the -p parameter so that DistCp restores the metadata of the corresponding files in the destination cluster while copying the data. In this case, skip this step.
Based on the exported permission information, run the following HDFS commands on the destination cluster to restore the file permission and owner/group information:
$HADOOP_HOME/bin/hdfs dfs -chmod <MODE> <path>
$HADOOP_HOME/bin/hdfs dfs -chown <OWNER> <path>
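Running these commands by hand for every path is error-prone, so the exported listing can be turned into the chmod/chown commands mechanically. The sketch below (not part of the product documentation) parses one sample line in the format produced by hdfs dfs -ls -R; the sample file name and its contents are illustrative, and owner and group are restored together with a single -chown OWNER:GROUP call, which the hdfs shell supports.

```shell
# Convert a symbolic permission string such as "-rw-r--r--" to octal.
perm_to_octal() {
  local p=$1 oct="" i d
  for i in 1 4 7; do            # skip the leading file-type character
    d=0
    [ "${p:$i:1}" = r ] && d=$((d+4))
    [ "${p:$((i+1)):1}" = w ] && d=$((d+2))
    case "${p:$((i+2)):1}" in x|s|t) d=$((d+1));; esac
    oct="$oct$d"
  done
  printf '%s' "$oct"
}

# One sample line in the format produced by `hdfs dfs -ls -R`
# (permissions, replication, owner, group, size, date, time, path).
printf '%s\n' '-rw-r--r--   3 hadoop supergroup 1024 2023-05-01 12:00 /data/file1' \
  > /tmp/hdfs_meta_sample.txt

# Emit one chmod and one chown command per listed path.
cmds=""
while read -r perms _ owner group _ _ _ path; do
  [ -n "$path" ] || continue
  mode=$(perm_to_octal "$perms")
  cmds="$cmds\$HADOOP_HOME/bin/hdfs dfs -chmod $mode $path
\$HADOOP_HOME/bin/hdfs dfs -chown $owner:$group $path
"
done < /tmp/hdfs_meta_sample.txt
printf '%s' "$cmds"
```

In practice the generated commands would be reviewed and then fed to the destination cluster's HDFS client, with /tmp/hdfs_meta.txt from the export step as the input file instead of the sample.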