Reviewed-by: Hasko, Vladimir <vladimir.hasko@t-systems.com> Co-authored-by: Yang, Tong <yangtong2@huawei.com> Co-committed-by: Yang, Tong <yangtong2@huawei.com>
<a name="mrs_01_1970"></a>
<h1 class="topictitle1">Configuring Spark SQL to Enable the Adaptive Execution Feature</h1>
<div id="body1595920207464"><div class="section" id="mrs_01_1970__section10411113113617"><h4 class="sectiontitle">Scenario</h4><p id="mrs_01_1970__p10893165811599">The Spark SQL adaptive execution feature enables Spark SQL to optimize subsequent execution processes based on intermediate results to improve overall execution efficiency. The following features have been implemented:</p>
<ol id="mrs_01_1970__ol112005119116"><li id="mrs_01_1970__li13199111916">Automatic configuration of the number of shuffle partitions<p id="mrs_01_1970__p17893658185914"><a name="mrs_01_1970__li13199111916"></a><a name="li13199111916"></a>Without adaptive execution, Spark SQL sets the number of partitions for every shuffle process through the single <strong id="mrs_01_1970__b1478417199452">spark.sql.shuffle.partitions</strong> parameter. This one-size-fits-all setting lacks flexibility when an application runs multiple SQL queries and cannot guarantee optimal performance in all scenarios. With adaptive execution enabled, Spark SQL automatically determines the number of partitions for each shuffle process individually, so that a suitable partition count is used for every shuffle.</p>
</li></ol><ol start="2" id="mrs_01_1970__ol141995118119"><li id="mrs_01_1970__li419971218">Dynamic adjustment of the join execution plan<p id="mrs_01_1970__p58940586599"><a name="mrs_01_1970__li419971218"></a><a name="li419971218"></a>Without adaptive execution, Spark SQL creates an execution plan based on the results of rule-based optimization (RBO) and cost-based optimization (CBO). This approach ignores how intermediate result sets change at run time. For example, when a view created from a large table is joined with other large tables, the plan cannot be switched to BroadcastJoin even if the view's result set turns out to be small. With adaptive execution enabled, Spark SQL can dynamically adjust the execution plan based on the execution result of the previous stage to obtain better performance.</p>
</li></ol><ol start="3" id="mrs_01_1970__ol187817109118"><li id="mrs_01_1970__li11781710817">Automatic processing of data skew<p id="mrs_01_1970__p28941458115915"><a name="mrs_01_1970__li11781710817"></a><a name="li11781710817"></a>If data skew occurs during SQL statement execution, it can cause executor out-of-memory errors or slow task execution. With adaptive execution enabled, Spark SQL handles data skew automatically: multiple tasks are started for a skewed partition, each task reads a subset of the shuffle output files, and the join results of these tasks are unioned to eliminate the skew.</p>
</li></ol>
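The shuffle-partition coalescing described in the first feature can be illustrated with a short, standalone Python sketch. This is not Spark source code; the function name, the greedy strategy, and the sizes are simplified assumptions that only mirror the idea of merging contiguous small partitions up to an advisory target size (compare <strong>spark.sql.adaptive.advisoryPartitionSizeInBytes</strong> in the parameter table below):

```python
# Illustrative sketch only (not Spark source code): greedily merge
# contiguous shuffle-partition sizes so that each merged partition
# stays close to an advisory target size, mirroring what
# spark.sql.adaptive.coalescePartitions.enabled does at runtime.
def coalesce_partitions(sizes, target_bytes):
    merged = []   # sizes of the coalesced partitions
    current = 0   # bytes accumulated for the partition being built
    for size in sizes:
        # Close the current partition if adding this one would overshoot.
        if current > 0 and current + size > target_bytes:
            merged.append(current)
            current = 0
        current += size
    if current > 0:
        merged.append(current)
    return merged

if __name__ == "__main__":
    mb = 1024 * 1024
    # Ten 8 MB map outputs collapse into one 64 MB partition plus a 16 MB tail.
    print(coalesce_partitions([8 * mb] * 10, target_bytes=64 * mb))
```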
</div>
<div class="section" id="mrs_01_1970__section149351351145714"><h4 class="sectiontitle">Parameters</h4><p id="mrs_01_1970__p18420142117236">Log in to FusionInsight Manager, choose <span id="mrs_01_1970__text18330192912598"><strong id="mrs_01_1970__b1864019272917">Cluster</strong> > </span><strong id="mrs_01_1970__b377193010507">Services</strong> > <strong id="mrs_01_1970__b1753533115014">Spark2x</strong> > <strong id="mrs_01_1970__b16138203516509">Configurations</strong>, click <strong id="mrs_01_1970__b12231638134919">All Configurations</strong>, and search for the following parameters.</p>
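These settings can also be supplied per job rather than through cluster-wide configuration. A minimal, illustrative fragment that enables the feature (standard Spark configuration syntax; the parameter names come from the table below, and the values are examples, not product-specific recommendations):

```properties
# spark-defaults.conf, or pass each line as --conf key=value to spark-submit
spark.sql.adaptive.enabled                        true
spark.sql.adaptive.coalescePartitions.enabled     true
spark.sql.adaptive.advisoryPartitionSizeInBytes   64MB
spark.sql.adaptive.skewJoin.enabled               true
```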
<div class="tablenoborder"><table cellpadding="4" cellspacing="0" summary="" id="mrs_01_1970__t18af5fbd3aeb4bda8d13de81ed31f812" frame="border" border="1" rules="all"><thead align="left"><tr id="mrs_01_1970__r74395b638f4442e3aab246ff89b4f414"><th align="left" class="cellrowborder" valign="top" width="32.129999999999995%" id="mcps1.3.2.3.1.4.1.1"><p id="mrs_01_1970__a1f516e63d9e642069c06eb1b4f809563"><strong id="mrs_01_1970__b13341193812509">Parameter</strong></p>
</th>
<th align="left" class="cellrowborder" valign="top" width="49.5%" id="mcps1.3.2.3.1.4.1.2"><p id="mrs_01_1970__af9566f38e89e419f83b61605b532f7ad"><strong id="mrs_01_1970__b14467405504">Description</strong></p>
</th>
<th align="left" class="cellrowborder" valign="top" width="18.37%" id="mcps1.3.2.3.1.4.1.3"><p id="mrs_01_1970__ac817ca09e627493cb99f5b57d2943bd6"><strong id="mrs_01_1970__b84140417509">Default Value</strong></p>
</th>
</tr>
</thead>
<tbody><tr id="mrs_01_1970__rd52556c4569841b3a5e5a2ebdfcb3b56"><td class="cellrowborder" valign="top" width="32.129999999999995%" headers="mcps1.3.2.3.1.4.1.1 "><p id="mrs_01_1970__p44616111023">spark.sql.adaptive.enabled</p>
</td>
<td class="cellrowborder" valign="top" width="49.5%" headers="mcps1.3.2.3.1.4.1.2 "><p id="mrs_01_1970__p5461111211">Specifies whether to enable the adaptive execution feature.</p>
<p id="mrs_01_1970__p17918584318">Note: If adaptive query execution (AQE) and dynamic partition pruning (DPP) are enabled at the same time, DPP takes precedence over AQE during Spark SQL task execution. As a result, AQE does not take effect.</p>
</td>
<td class="cellrowborder" valign="top" width="18.37%" headers="mcps1.3.2.3.1.4.1.3 "><p id="mrs_01_1970__p10461011426">false</p>
</td>
</tr>
<tr id="mrs_01_1970__row782249193512"><td class="cellrowborder" valign="top" width="32.129999999999995%" headers="mcps1.3.2.3.1.4.1.1 "><p id="mrs_01_1970__p1483164913352">spark.sql.optimizer.dynamicPartitionPruning.enabled</p>
</td>
<td class="cellrowborder" valign="top" width="49.5%" headers="mcps1.3.2.3.1.4.1.2 "><p id="mrs_01_1970__p6831249103510">Specifies whether to enable dynamic partition pruning (DPP).</p>
</td>
<td class="cellrowborder" valign="top" width="18.37%" headers="mcps1.3.2.3.1.4.1.3 "><p id="mrs_01_1970__p15832497350">true</p>
</td>
</tr>
<tr id="mrs_01_1970__rf534b3c540af486495ea96f84d00a611"><td class="cellrowborder" valign="top" width="32.129999999999995%" headers="mcps1.3.2.3.1.4.1.1 "><p id="mrs_01_1970__p12461611923">spark.sql.adaptive.coalescePartitions.enabled</p>
</td>
<td class="cellrowborder" valign="top" width="49.5%" headers="mcps1.3.2.3.1.4.1.2 "><p id="mrs_01_1970__p2302328214">If this parameter is set to <strong id="mrs_01_1970__b1084301317156">true</strong> and <strong id="mrs_01_1970__b15202172511512">spark.sql.adaptive.enabled</strong> is set to <strong id="mrs_01_1970__b878215280151">true</strong>, Spark coalesces contiguous shuffle partitions into larger ones based on the target size (specified by <strong id="mrs_01_1970__b1988518418210">spark.sql.adaptive.advisoryPartitionSizeInBytes</strong>) to prevent too many small tasks from being executed.</p>
</td>
<td class="cellrowborder" valign="top" width="18.37%" headers="mcps1.3.2.3.1.4.1.3 "><p id="mrs_01_1970__p4467111127">true</p>
</td>
</tr>
<tr id="mrs_01_1970__row41031841117"><td class="cellrowborder" valign="top" width="32.129999999999995%" headers="mcps1.3.2.3.1.4.1.1 "><p id="mrs_01_1970__p78411328172410">spark.sql.adaptive.coalescePartitions.initialPartitionNum</p>
</td>
<td class="cellrowborder" valign="top" width="49.5%" headers="mcps1.3.2.3.1.4.1.2 "><p id="mrs_01_1970__p1946151113217">Initial number of shuffle partitions before coalescing. The default value is the same as the value of <strong id="mrs_01_1970__b1855518496206">spark.sql.shuffle.partitions</strong>. This parameter is valid only when <strong id="mrs_01_1970__b07521830192219">spark.sql.adaptive.enabled</strong> and <strong id="mrs_01_1970__b116523348224">spark.sql.adaptive.coalescePartitions.enabled</strong> are set to <strong id="mrs_01_1970__b34141937122220">true</strong>. This parameter is optional, and its value must be a positive number.</p>
</td>
<td class="cellrowborder" valign="top" width="18.37%" headers="mcps1.3.2.3.1.4.1.3 "><p id="mrs_01_1970__p1685717911125">200</p>
<p id="mrs_01_1970__p84341337257"></p>
</td>
</tr>
<tr id="mrs_01_1970__row1910412411417"><td class="cellrowborder" valign="top" width="32.129999999999995%" headers="mcps1.3.2.3.1.4.1.1 "><p id="mrs_01_1970__p15365131616222">spark.sql.adaptive.coalescePartitions.minPartitionNum</p>
</td>
<td class="cellrowborder" valign="top" width="49.5%" headers="mcps1.3.2.3.1.4.1.2 "><p id="mrs_01_1970__p92814112315">Minimum number of shuffle partitions after coalescing. If this parameter is not set, the default parallelism of the Spark cluster is used. This parameter is valid only when <strong id="mrs_01_1970__b2270135618478">spark.sql.adaptive.enabled</strong> and <strong id="mrs_01_1970__b1227155618476">spark.sql.adaptive.coalescePartitions.enabled</strong> are set to <strong id="mrs_01_1970__b927111566471">true</strong>. This parameter is optional, and its value must be a positive number.</p>
</td>
<td class="cellrowborder" valign="top" width="18.37%" headers="mcps1.3.2.3.1.4.1.3 "><p id="mrs_01_1970__p7713191431215">1</p>
<p id="mrs_01_1970__p174610111222"></p>
</td>
</tr>
<tr id="mrs_01_1970__row1389112476118"><td class="cellrowborder" valign="top" width="32.129999999999995%" headers="mcps1.3.2.3.1.4.1.1 "><p id="mrs_01_1970__p17463111429">spark.sql.adaptive.shuffle.targetPostShuffleInputSize</p>
</td>
<td class="cellrowborder" valign="top" width="49.5%" headers="mcps1.3.2.3.1.4.1.2 "><p id="mrs_01_1970__p10463112210">Target size of a partition after shuffling. This parameter is no longer supported in Spark 3.0 and later versions; use <strong>spark.sql.adaptive.advisoryPartitionSizeInBytes</strong> instead.</p>
</td>
<td class="cellrowborder" valign="top" width="18.37%" headers="mcps1.3.2.3.1.4.1.3 "><p id="mrs_01_1970__p662671714111">64MB</p>
</td>
</tr>
<tr id="mrs_01_1970__row138919476117"><td class="cellrowborder" valign="top" width="32.129999999999995%" headers="mcps1.3.2.3.1.4.1.1 "><p id="mrs_01_1970__p9121125552619">spark.sql.adaptive.advisoryPartitionSizeInBytes</p>
</td>
<td class="cellrowborder" valign="top" width="49.5%" headers="mcps1.3.2.3.1.4.1.2 "><p id="mrs_01_1970__p399871811277">Size of a shuffle partition (unit: byte) during adaptive optimization (<strong id="mrs_01_1970__b1305067501">spark.sql.adaptive.enabled</strong> is set to <strong id="mrs_01_1970__b166161813500">true</strong>). This parameter takes effect when Spark aggregates small shuffle partitions or splits shuffle partitions where skew occurs.</p>
</td>
<td class="cellrowborder" valign="top" width="18.37%" headers="mcps1.3.2.3.1.4.1.3 "><p id="mrs_01_1970__p16461911527">64MB</p>
</td>
</tr>
<tr id="mrs_01_1970__row14911033183114"><td class="cellrowborder" valign="top" width="32.129999999999995%" headers="mcps1.3.2.3.1.4.1.1 "><p id="mrs_01_1970__p124911033153115">spark.sql.adaptive.fetchShuffleBlocksInBatch</p>
</td>
<td class="cellrowborder" valign="top" width="49.5%" headers="mcps1.3.2.3.1.4.1.2 "><p id="mrs_01_1970__p14911933203116">Whether to fetch contiguous shuffle blocks in batches. For the same map task, fetching contiguous shuffle blocks in a single request instead of one by one reduces I/O and improves performance. Note that a single fetch request contains multiple contiguous blocks only when <strong id="mrs_01_1970__b48241110524">spark.sql.adaptive.enabled</strong> and <strong id="mrs_01_1970__b11984101425217">spark.sql.adaptive.coalescePartitions.enabled</strong> are set to <strong id="mrs_01_1970__b1948191714522">true</strong>. This feature also requires a relocatable serializer, a compression codec that supports concatenation of serialized streams, and the new version of the shuffle fetch protocol.</p>
</td>
<td class="cellrowborder" valign="top" width="18.37%" headers="mcps1.3.2.3.1.4.1.3 "><p id="mrs_01_1970__p1749112335315">true</p>
</td>
</tr>
<tr id="mrs_01_1970__row9893163643112"><td class="cellrowborder" valign="top" width="32.129999999999995%" headers="mcps1.3.2.3.1.4.1.1 "><p id="mrs_01_1970__p1589333633119">spark.sql.adaptive.localShuffleReader.enabled</p>
</td>
<td class="cellrowborder" valign="top" width="49.5%" headers="mcps1.3.2.3.1.4.1.2 "><p id="mrs_01_1970__p12893173633117">If the value of this parameter is <strong id="mrs_01_1970__b156121719205419">true</strong> and the value of <strong id="mrs_01_1970__b7333132345413">spark.sql.adaptive.enabled</strong> is <strong id="mrs_01_1970__b4627182911544">true</strong>, Spark attempts to use the local shuffle reader to read shuffle data when shuffling of partitions is not required, for example, after sort-merge join is converted to broadcast-hash join.</p>
</td>
<td class="cellrowborder" valign="top" width="18.37%" headers="mcps1.3.2.3.1.4.1.3 "><p id="mrs_01_1970__p489353612317">true</p>
</td>
</tr>
<tr id="mrs_01_1970__row188914471315"><td class="cellrowborder" valign="top" width="32.129999999999995%" headers="mcps1.3.2.3.1.4.1.1 "><p id="mrs_01_1970__p1146151113214">spark.sql.adaptive.skewJoin.enabled</p>
</td>
<td class="cellrowborder" valign="top" width="49.5%" headers="mcps1.3.2.3.1.4.1.2 "><p id="mrs_01_1970__p14615111220">Specifies whether to automatically handle data skew in join operations. This function is enabled when this parameter is set to <strong id="mrs_01_1970__b165232019155114">true</strong> and <strong id="mrs_01_1970__b952817195518">spark.sql.adaptive.enabled</strong> is set to <strong id="mrs_01_1970__b1152821925116">true</strong>.</p>
</td>
<td class="cellrowborder" valign="top" width="18.37%" headers="mcps1.3.2.3.1.4.1.3 "><p id="mrs_01_1970__p146141118211">true</p>
</td>
</tr>
<tr id="mrs_01_1970__row18643555515"><td class="cellrowborder" valign="top" width="32.129999999999995%" headers="mcps1.3.2.3.1.4.1.1 "><p id="mrs_01_1970__p1829595712279">spark.sql.adaptive.skewJoin.skewedPartitionFactor</p>
</td>
<td class="cellrowborder" valign="top" width="49.5%" headers="mcps1.3.2.3.1.4.1.2 "><p id="mrs_01_1970__p34791114215">A multiplier used to determine whether a partition is skewed. A partition is considered skewed if its size exceeds this factor multiplied by the median partition size and also exceeds the value of <strong id="mrs_01_1970__b546493319166">spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes</strong>.</p>
</td>
<td class="cellrowborder" valign="top" width="18.37%" headers="mcps1.3.2.3.1.4.1.3 "><p id="mrs_01_1970__p144714111221">5</p>
</td>
</tr>
<tr id="mrs_01_1970__row4643115515114"><td class="cellrowborder" valign="top" width="32.129999999999995%" headers="mcps1.3.2.3.1.4.1.1 "><p id="mrs_01_1970__p74711115218">spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes</p>
</td>
<td class="cellrowborder" valign="top" width="49.5%" headers="mcps1.3.2.3.1.4.1.2 "><p id="mrs_01_1970__p7565172222812">A partition is considered skewed if its size (unit: byte) exceeds both this threshold and the product of the <strong id="mrs_01_1970__b25621127342">spark.sql.adaptive.skewJoin.skewedPartitionFactor</strong> value and the median partition size. Ideally, this value should be greater than that of <strong id="mrs_01_1970__b1653312408417">spark.sql.adaptive.advisoryPartitionSizeInBytes</strong>.</p>
</td>
<td class="cellrowborder" valign="top" width="18.37%" headers="mcps1.3.2.3.1.4.1.3 "><p id="mrs_01_1970__p396712233287">256MB</p>
</td>
</tr>
<tr id="mrs_01_1970__row089254712119"><td class="cellrowborder" valign="top" width="32.129999999999995%" headers="mcps1.3.2.3.1.4.1.1 "><p id="mrs_01_1970__p13601462310">spark.sql.adaptive.nonEmptyPartitionRatioForBroadcastJoin</p>
</td>
<td class="cellrowborder" valign="top" width="49.5%" headers="mcps1.3.2.3.1.4.1.2 "><p id="mrs_01_1970__p10311101493117">If the ratio of non-empty partitions of a join side is lower than the value of this parameter, that side is not converted to a broadcast hash join regardless of its size. This parameter is valid only when <strong id="mrs_01_1970__b0689251507">spark.sql.adaptive.enabled</strong> is set to <strong id="mrs_01_1970__b11690195601">true</strong>.</p>
</td>
<td class="cellrowborder" valign="top" width="18.37%" headers="mcps1.3.2.3.1.4.1.3 "><p id="mrs_01_1970__p17430918113116">0.2</p>
</td>
</tr>
</tbody>
</table>
</div>
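The interaction between <strong>spark.sql.adaptive.skewJoin.skewedPartitionFactor</strong> and <strong>spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes</strong> in the table above can be sketched in standalone Python. This is an illustrative approximation, not Spark source code; the function name and sample sizes are invented for the example:

```python
# Illustrative sketch only (not Spark source code): a partition is
# treated as skewed when its size exceeds BOTH
#   skewedPartitionFactor * median(partition sizes)   and
#   skewedPartitionThresholdInBytes.
from statistics import median

def is_skewed(size_bytes, all_sizes, factor=5, threshold_bytes=256 * 1024 * 1024):
    return size_bytes > factor * median(all_sizes) and size_bytes > threshold_bytes

if __name__ == "__main__":
    mb = 1024 * 1024
    sizes = [64 * mb] * 9 + [1024 * mb]  # one 1 GB partition among 64 MB ones
    # With the default factor and threshold, only the 1 GB partition
    # satisfies both conditions.
    print([is_skewed(s, sizes) for s in sizes])
```

Note that both conditions must hold: a partition that is large in absolute terms but close to the median, or far above the median but below the byte threshold, is not split.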
</div>
</div>
<div>
<div class="familylinks">
<div class="parentlink"><strong>Parent topic:</strong> <a href="mrs_01_1941.html">Scenario-Specific Configuration</a></div>
</div>
</div>