[jira] [Updated] (SPARK-32542) Add an optimizer rule to split an Expand into multiple Expands for aggregates
[ https://issues.apache.org/jira/browse/SPARK-32542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takeshi Yamamuro updated SPARK-32542:
-------------------------------------
    Fix Version/s: (was: 3.0.0)

> Add an optimizer rule to split an Expand into multiple Expands for aggregates
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-32542
>                 URL: https://issues.apache.org/jira/browse/SPARK-32542
>             Project: Spark
>          Issue Type: Improvement
>          Components: Optimizer
>    Affects Versions: 3.0.0
>            Reporter: karl wang
>            Priority: Major
>
> Split one large Expand into several smaller Expands, each containing the
> specified number of projections.
> For instance, consider this SQL: select a, b, c, d, count(1) from table1
> group by a, b, c, d with cube. The cube over four columns expands the data
> to 2^4 = 16 times its original size.
> If we set spark.sql.optimizer.projections.size=4, the single Expand is
> split into 2^4 / 4 = 4 smaller Expands. This can reduce shuffle pressure
> and improve performance for multidimensional analysis when the data is huge.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
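The arithmetic in the description can be sketched outside Spark: a CUBE over four grouping columns yields 2^4 = 16 grouping-set projections inside one Expand, and with spark.sql.optimizer.projections.size=4 those projections would be partitioned into 16 / 4 = 4 smaller Expands. A minimal Python sketch of that split (the helper names below are hypothetical illustrations, not Spark APIs):

```python
# Illustrative sketch (not Spark code): a CUBE over n grouping columns
# produces 2^n projections inside a single Expand operator. The proposed
# rule splits those projections into chunks of a configured size, so each
# smaller Expand carries only chunk_size projections.
from itertools import combinations

def cube_projections(columns):
    """Enumerate all 2^n grouping sets produced by CUBE(columns)."""
    sets = []
    for r in range(len(columns) + 1):
        for combo in combinations(columns, r):
            sets.append(list(combo))
    return sets

def split_expand(projections, chunk_size):
    """Split one list of projections into several smaller groups,
    one per resulting Expand."""
    return [projections[i:i + chunk_size]
            for i in range(0, len(projections), chunk_size)]

cols = ["a", "b", "c", "d"]
projs = cube_projections(cols)    # 2^4 = 16 grouping sets
expands = split_expand(projs, 4)  # 16 / 4 = 4 smaller Expands

print(len(projs))    # 16
print(len(expands))  # 4
```

Each smaller Expand then feeds a partial aggregate, so no single shuffle stage has to materialize all 16 projections of every input row at once.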
Takeshi Yamamuro updated SPARK-32542:
-------------------------------------
      Component/s: (was: Optimizer)
                   SQL
Takeshi Yamamuro updated SPARK-32542:
-------------------------------------
Target Version/s: (was: 3.0.0)
karl wang updated SPARK-32542:
------------------------------
Target Version/s: 3.0.0
     Fix Version/s: 3.0.0
karl wang updated SPARK-32542:
------------------------------
         Shepherd: karl wang
karl wang updated SPARK-32542:
------------------------------
         Shepherd: (was: karl wang)
karl wang updated SPARK-32542:
------------------------------
          Summary: Add an optimizer rule to split an Expand into multiple Expands for aggregates  (was: add a batch for optimizing logicalPlan)