Configuring the Drop Partition Command to Support Batch Deletion
Scenario
Currently, the Drop Partition command in Spark supports partition deletion only with the equal sign (=). This configuration enables additional filter criteria, such as <, <=, >, >=, !>, and !<, so that multiple partitions can be deleted in a single batch.
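The following is a minimal sketch of what a batch drop might look like once the feature is enabled. The table name web_logs and partition column dt are hypothetical, and the exact syntax accepted may vary by Spark2x version.

```sql
-- Hypothetical partitioned table 'web_logs' with a string partition column 'dt'.
-- With batch deletion enabled, a range predicate removes every matching partition
-- in one statement, instead of one DROP per partition using '='.
ALTER TABLE web_logs DROP IF EXISTS PARTITION (dt < '2021-01-01');
```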
Configuration
Log in to FusionInsight Manager and choose Cluster. Click the name of the desired cluster and choose Services > Spark2x. On the displayed page, click the Configurations tab, click All Configurations, and search for the following parameters.
| Parameter | Description | Default Value |
| --- | --- | --- |
| spark.sql.dropPartitionsInBatch.enabled | If this parameter is set to true, the Drop Partition command supports the following filter criteria: <, <=, >, >=, !>, and !<. | true |
| spark.sql.dropPartitionsInBatch.limit | Maximum number of partitions that can be dropped in one batch. | 1000 |
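As a sketch, the parameters above could be checked or adjusted before running a batch drop. Whether they can be changed per session with SET, rather than only through the Manager configuration described above, may depend on your Spark2x version; the values shown are illustrative.

```sql
-- Illustrative values only; parameter names are taken from the table above.
SET spark.sql.dropPartitionsInBatch.enabled=true;  -- allow range filters in Drop Partition
SET spark.sql.dropPartitionsInBatch.limit=500;     -- caps how many partitions one batch may drop

-- Same hypothetical table and partition column as in the earlier example.
ALTER TABLE web_logs DROP IF EXISTS PARTITION (dt < '2021-01-01');
```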