[GitHub] spark pull request #14000: [SPARK-16331] [SQL] Reduce code generation time

2016-06-30 Thread inouehrs
GitHub user inouehrs opened a pull request:

https://github.com/apache/spark/pull/14000

[SPARK-16331] [SQL] Reduce code generation time 

## What changes were proposed in this pull request?
During code generation, a `LocalRelation` often carries a huge `Vector` 
object as its `data`. In the simple example below, the `LocalRelation` has a `Vector` 
with 100 elements of `UnsafeRow`. 

```scala
// `benchmark` is an org.apache.spark.util.Benchmark("compilationTime", numRows)
val numRows = 100
val ds = (1 to numRows).toDS().persist()
benchmark.addCase("filter+reduce") { iter =>
  ds.filter(a => (a & 1) == 0).reduce(_ + _)
}
```

In `TreeNode.transformChildren`, all elements of the vector are 
unnecessarily iterated to check whether any children exist in it, since 
`Vector` is a `Traversable`. This traversal significantly increases code generation time.
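To illustrate the cost (a standalone sketch, not the actual `TreeNode` code): a pattern match on `Traversable[_]` happily matches the `data` `Vector` of a leaf relation and walks every element, even though none of them can be a child node:

```scala
// A Vector matches Traversable[_], so a generic "transform the children"
// pattern match iterates over every element even when none can be a child.
val data: Any = Vector.fill(100)("row")
val visited = scala.collection.mutable.ArrayBuffer.empty[Any]
data match {
  case args: Traversable[_] => args.foreach(visited += _)  // 100 iterations
  case _ =>
}
assert(visited.size == 100)
```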

This patch avoids the overhead by checking the number of children before 
iterating over all elements; a `LocalRelation` has no children since it extends 
`LeafNode`.
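The idea of the guard can be sketched as follows (a simplified model with hypothetical names; the real patch adds an equivalent `children` check inside `TreeNode.mapChildren`/`transformChildren`):

```scala
// Simplified model: a tree node whose constructor args may contain child nodes.
abstract class Node(val args: Seq[Any]) {
  def children: Seq[Node]

  // Before the patch: every Traversable arg (e.g. a leaf relation's data
  // Vector) was scanned element-by-element looking for children.
  def transformArgsNaive(f: Node => Node): Seq[Any] = args.map {
    case s: Traversable[_] => s.map {
      case n: Node => f(n)
      case other   => other
    }
    case other => other
  }

  // After the patch: skip the scan entirely when the node has no children,
  // so a huge data Vector held by a leaf node is never iterated.
  def transformArgs(f: Node => Node): Seq[Any] =
    if (children.isEmpty) args else transformArgsNaive(f)
}
```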

The performance of the above example:
```
without this patch
Java HotSpot(TM) 64-Bit Server VM 1.8.0_91-b14 on Mac OS X 10.11.5
Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz
compilationTime:          Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
--------------------------------------------------------------------------------
filter+reduce                   4426 / 4533          0.2        4426.0       1.0X

with this patch
compilationTime:          Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
--------------------------------------------------------------------------------
filter+reduce                   3117 / 3391          0.3        3116.6       1.0X
```


## How was this patch tested?

Tested using existing unit tests.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/inouehrs/spark compilation-time-reduction

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/14000.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #14000


commit 5afd079f38e9618c4b3cd8be33d73724844a7cfd
Author: Hiroshi Inoue 
Date:   2016-06-30T15:52:40Z

Merge branch 'apache/master'

commit 153e170fe5a478d04559d430f566478a6e48528f
Author: Hiroshi Inoue 
Date:   2016-06-30T17:25:24Z

add check of # children to avoid redundant iteration in transformChildren




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #14000: [SPARK-16331] [SQL] Reduce code generation time

2016-06-30 Thread rxin
Github user rxin commented on a diff in the pull request:

https://github.com/apache/spark/pull/14000#discussion_r69219668
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreeNode.scala ---
@@ -342,33 +326,54 @@ abstract class TreeNode[BaseType <: TreeNode[BaseType]] extends Product {
       } else {
         arg
       }
-    case other => other
-  }.view.force // `mapValues` is lazy and we need to force it to materialize
-  case d: DataType => d // Avoid unpacking Structs
-  case args: Traversable[_] => args.map {
-    case arg: TreeNode[_] if containsChild(arg) =>
+    case Some(arg: TreeNode[_]) if containsChild(arg) =>
       val newChild = nextOperation(arg.asInstanceOf[BaseType], rule)
       if (!(newChild fastEquals arg)) {
         changed = true
-        newChild
+        Some(newChild)
       } else {
-        arg
+        Some(arg)
       }
-    case tuple @ (arg1: TreeNode[_], arg2: TreeNode[_]) =>
-      val newChild1 = nextOperation(arg1.asInstanceOf[BaseType], rule)
-      val newChild2 = nextOperation(arg2.asInstanceOf[BaseType], rule)
-      if (!(newChild1 fastEquals arg1) || !(newChild2 fastEquals arg2)) {
-        changed = true
-        (newChild1, newChild2)
-      } else {
-        tuple
-      }
-    case other => other
+    case m: Map[_, _] => m.mapValues {
+      case arg: TreeNode[_] if containsChild(arg) =>
+        val newChild = nextOperation(arg.asInstanceOf[BaseType], rule)
+        if (!(newChild fastEquals arg)) {
+          changed = true
+          newChild
+        } else {
+          arg
+        }
+      case other => other
+    }.view.force // `mapValues` is lazy and we need to force it to materialize
+    case d: DataType => d // Avoid unpacking Structs
+    case args: Traversable[_] => args.map {
+      case arg: TreeNode[_] if containsChild(arg) =>
+        val newChild = nextOperation(arg.asInstanceOf[BaseType], rule)
+        if (!(newChild fastEquals arg)) {
+          changed = true
+          newChild
+        } else {
+          arg
+        }
+      case tuple @ (arg1: TreeNode[_], arg2: TreeNode[_]) =>
+        val newChild1 = nextOperation(arg1.asInstanceOf[BaseType], rule)
+        val newChild2 = nextOperation(arg2.asInstanceOf[BaseType], rule)
+        if (!(newChild1 fastEquals arg1) || !(newChild2 fastEquals arg2)) {
+          changed = true
+          (newChild1, newChild2)
+        } else {
+          tuple
+        }
+      case other => other
+    }
+    case nonChild: AnyRef => nonChild
+    case null => null
   }
-  case nonChild: AnyRef => nonChild
-  case null => null
+  if (changed) makeCopy(newArgs) else this
+}
--- End diff --

a small style nit:
```
} else {
  this
}
```



[GitHub] spark pull request #14000: [SPARK-16331] [SQL] Reduce code generation time

2016-06-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/14000

