In big data scenarios, you often need to migrate a Hive database, together with all tables in it, to another cluster. You can run the Hive database export and import commands to migrate a complete database.
This section applies to MRS 3.2.0 or later.
The Hive database import and export function does not support importing or exporting encrypted tables, transaction tables, HBase external tables, Hudi tables, views, or materialized views.
Log in to FusionInsight Manager of the source cluster, click Cluster, choose Services > Hive, and click Configuration. On the displayed page, search for hdfs.site.customized.configs and add two custom parameters: set dfs.namenode.rpc-address.haclusterX to the service IP address of the active NameNode instance node in the destination cluster followed by the RPC port (IP address:RPC port), and set dfs.namenode.rpc-address.haclusterX1 to the service IP address of the standby NameNode instance node in the destination cluster followed by the RPC port. The default NameNode RPC port is 25000. After saving the configuration, perform a rolling restart of the Hive service.
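For example, if the active and standby NameNodes of the destination cluster used the hypothetical addresses 192.168.0.11 and 192.168.0.12 and the default RPC port, the two custom parameters would look like this:

```
dfs.namenode.rpc-address.haclusterX=192.168.0.11:25000
dfs.namenode.rpc-address.haclusterX1=192.168.0.12:25000
```

Replace the IP addresses with the actual service IP addresses of the NameNode instances in your destination cluster.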
Log in to the node where the client is installed, load the environment variables, authenticate as a Hive service user, and start beeline:

cd /opt/client

source bigdata_env

kinit Hive service user

beeline
Create a database to be exported, create a sample table in it, and insert a test row:

create database dump_db;

use dump_db;

create table test(id int);

insert into test values(123);
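If you want to confirm the sample data before exporting, you can run a quick query in the same beeline session (the database and table names follow the example above):

```sql
-- Confirm that the test row exists in dump_db.test
select * from test;
```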
Before exporting the database, set the repl.source.for property on it to mark it as a replication source:

alter database dump_db set dbproperties ('repl.source.for'='replpolicy1');

After the migration is complete, you can clear the property:

alter database dump_db set dbproperties ('repl.source.for'='');
Export the database to the HDFS of the destination cluster. Here haclusterX is the NameService added in the custom parameters above, and hive.repl.rootdir specifies the directory where the dump is written:

repl dump dump_db with ('hive.repl.rootdir'='hdfs://haclusterX/user/hive/test');

Then, on the destination cluster, import the dump into a new database. The path passed to repl load must be the dump directory specified by hive.repl.rootdir during the export:

repl load load_db from '/user/hive/test';
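After the import finishes, a simple sanity check on the destination cluster can confirm that the database was migrated. The database and table names below follow the example above:

```sql
-- Verify that the imported database contains the expected table and data
use load_db;
show tables;
select * from test;
```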
When you run the repl load command to import a database, note the following points about the database name: