[ https://issues.apache.org/jira/browse/SPARK-18591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16160388#comment-16160388 ]

Takeshi Yamamuro commented on SPARK-18591:
------------------------------------------

Just a heads-up for the discussion on this thread: since we already have
LogicalPlanVisitor, we could easily realize bottom-up transformation in
SparkStrategies, as in
https://github.com/apache/spark/compare/master...maropu:SPARK-18591. I'm not
sure now is a good time to change the transformation order (probably many
committers and qualified developers are spending much time on Dataset API v2
reviews and other work), but I think it'd be better to make this change in
the future, because bottom-up transformation lets Catalyst select better
physical plans based on the conditions of the bottom sub-trees (costs and
partition/sort conditions). For example, it would make it easy to fix
SPARK-12978 and this ticket; a toy sketch of the idea follows below.
cc: [~smilegator]
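
As a rough illustration (a toy model with made-up types, not Catalyst's
actual planner API), a bottom-up strategy can consult the already-planned
child sub-tree when picking the aggregate implementation:

{code}
// Toy sketch only: Plan, Scan, HashAggregate, and SortAggregate are
// hypothetical stand-ins, not Spark's SparkPlan/HashAggregateExec classes.
sealed trait Plan { def outputOrdering: Seq[String] }
case class Scan(sortedBy: Seq[String]) extends Plan {
  def outputOrdering: Seq[String] = sortedBy
}
case class HashAggregate(child: Plan, keys: Seq[String]) extends Plan {
  def outputOrdering: Seq[String] = Nil
}
case class SortAggregate(child: Plan, keys: Seq[String]) extends Plan {
  def outputOrdering: Seq[String] = keys
}

// Because the child sub-tree is planned first (bottom-up), its output
// ordering is known here, so the sort-based variant can be chosen whenever
// that ordering already covers the grouping keys.
def planAggregate(child: Plan, keys: Seq[String]): Plan =
  if (child.outputOrdering.startsWith(keys)) SortAggregate(child, keys)
  else HashAggregate(child, keys)

planAggregate(Scan(Seq("key")), Seq("key"))  // => SortAggregate(...)
planAggregate(Scan(Nil), Seq("key"))         // => HashAggregate(...)
{code}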

> Replace hash-based aggregates with sort-based ones if inputs already sorted
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-18591
>                 URL: https://issues.apache.org/jira/browse/SPARK-18591
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.0.2
>            Reporter: Takeshi Yamamuro
>
> Spark currently uses sort-based aggregates only under limited conditions:
> the cases where it cannot use partial aggregates and hash-based ones.
> However, if the input ordering already satisfies the requirements of
> sort-based aggregates, they seem to be faster than hash-based ones.
> {code}
> ./bin/spark-shell --conf spark.sql.shuffle.partitions=1
> val df = spark.range(10000000).selectExpr("id AS key", "id % 10 AS value").sort($"key").cache
> def timer[R](block: => R): R = {
>   val t0 = System.nanoTime()
>   val result = block
>   val t1 = System.nanoTime()
>   println("Elapsed time: " + ((t1 - t0 + 0.0) / 1000000000.0)+ "s")
>   result
> }
> timer {
>   df.groupBy("key").count().count
> }
> // codegen'd hash aggregate
> Elapsed time: 7.116962977s
> // non-codegen'd sort aggregate
> Elapsed time: 3.088816662s
> {code}
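> To double-check which physical aggregate was chosen, one can inspect the
> physical plan and look for HashAggregate vs. SortAggregate nodes:
> {code}
> df.groupBy("key").count().explain()
> // Prints "== Physical Plan ==" followed by the operator tree; in Spark 2.x
> // a leading '*' marks operators compiled by whole-stage codegen.
> {code}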
> If codegen'd sort-based aggregates are supported (SPARK-16844), the
> performance gap seems to get even bigger:
> {code}
> // codegen'd sort aggregate
> Elapsed time: 1.645234684s
> {code}
> Therefore, it'd be better to use sort-based ones in this case.


