[ https://issues.apache.org/jira/browse/SPARK-29203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuming Wang updated SPARK-29203:
--------------------------------
    Description: 
spark.sql.shuffle.partitions=200 (default):
{noformat}
[info] - subquery/in-subquery/in-joins.sql (6 minutes, 19 seconds)
[info] - subquery/in-subquery/not-in-joins.sql (2 minutes, 17 seconds)
[info] - subquery/scalar-subquery/scalar-subquery-predicate.sql (45 seconds, 763 milliseconds)
{noformat}


spark.sql.shuffle.partitions=5:
{noformat}
[info] - subquery/in-subquery/in-joins.sql (1 minute, 12 seconds)
[info] - subquery/in-subquery/not-in-joins.sql (27 seconds, 541 milliseconds)
[info] - subquery/scalar-subquery/scalar-subquery-predicate.sql (17 seconds, 360 milliseconds)
{noformat}
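
For reference, a minimal sketch of how a lower shuffle partition count can be applied when building a test SparkSession. This is not the actual SQLQueryTestSuite change; the object name and the toy data are illustrative, but the builder API and the spark.sql.shuffle.partitions key are standard Spark:
{code:scala}
import org.apache.spark.sql.SparkSession

// Illustrative only: a local session with a reduced shuffle partition count.
object LowShufflePartitionsExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("LowShufflePartitionsExample")
      // Each shuffle stage now launches 5 tasks instead of the default 200,
      // cutting per-task scheduling overhead for small test datasets.
      .config("spark.sql.shuffle.partitions", "5")
      .getOrCreate()

    import spark.implicits._

    // A toy aggregation that triggers a shuffle; with tiny inputs the cost
    // is dominated by task overhead, which the lower setting reduces.
    val df = Seq((1, "a"), (2, "b"), (1, "c")).toDF("k", "v")
    df.groupBy("k").count().show()

    spark.stop()
  }
}
{code}
The timings above suggest why this matters for the test suite: the inputs in the subquery tests are tiny, so most of the wall-clock time at 200 partitions is task scheduling rather than actual data processing.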

> Reduce shuffle partitions in SQLQueryTestSuite
> ----------------------------------------------
>
>                 Key: SPARK-29203
>                 URL: https://issues.apache.org/jira/browse/SPARK-29203
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL, Tests
>    Affects Versions: 3.0.0
>            Reporter: Yuming Wang
>            Priority: Major
>


