[GitHub] spark pull request #13887: [SPARK-16186][SQL] Support partition batch prunin...

2016-06-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/13887


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #13887: [SPARK-16186][SQL] Support partition batch prunin...

2016-06-24 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/13887#discussion_r68475255
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala ---
@@ -79,6 +79,11 @@ private[sql] case class InMemoryTableScanExec(
 
 case IsNull(a: Attribute) => statsFor(a).nullCount > 0
 case IsNotNull(a: Attribute) => statsFor(a).count - statsFor(a).nullCount > 0
+
+case In(a: AttributeReference, list: Seq[Expression])
+  if list.length <= inMemoryPartitionPruningMaxInSize && list.forall(_.isInstanceOf[Literal]) =>
--- End diff --

Sure! I'll remove the option-related stuff.





[GitHub] spark pull request #13887: [SPARK-16186][SQL] Support partition batch prunin...

2016-06-24 Thread davies
Github user davies commented on a diff in the pull request:

https://github.com/apache/spark/pull/13887#discussion_r68474760
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala ---
@@ -79,6 +79,11 @@ private[sql] case class InMemoryTableScanExec(
 
 case IsNull(a: Attribute) => statsFor(a).nullCount > 0
 case IsNotNull(a: Attribute) => statsFor(a).count - statsFor(a).nullCount > 0
+
+case In(a: AttributeReference, list: Seq[Expression])
+  if list.length <= inMemoryPartitionPruningMaxInSize && list.forall(_.isInstanceOf[Literal]) =>
--- End diff --

Can we not have this config? Another optimizer rule will guarantee that the 
number of expressions will not be big.





[GitHub] spark pull request #13887: [SPARK-16186][SQL] Support partition batch prunin...

2016-06-24 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/13887#discussion_r68383690
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala ---
@@ -79,6 +79,11 @@ private[sql] case class InMemoryTableScanExec(
 
 case IsNull(a: Attribute) => statsFor(a).nullCount > 0
 case IsNotNull(a: Attribute) => statsFor(a).count - statsFor(a).nullCount > 0
+
+case In(a: AttributeReference, list: Seq[Expression])
+  if list.length <= inMemoryPartitionPruningMaxInSize =>
+  list.map(l => statsFor(a).lowerBound <= l.asInstanceOf[Literal] &&
--- End diff --

Oh sorry, I read the code wrong; yeah, the config is different.





[GitHub] spark pull request #13887: [SPARK-16186][SQL] Support partition batch prunin...

2016-06-24 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/13887#discussion_r68381115
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala ---
@@ -79,6 +79,11 @@ private[sql] case class InMemoryTableScanExec(
 
 case IsNull(a: Attribute) => statsFor(a).nullCount > 0
 case IsNotNull(a: Attribute) => statsFor(a).count - statsFor(a).nullCount > 0
+
+case In(a: AttributeReference, list: Seq[Expression])
+  if list.length <= inMemoryPartitionPruningMaxInSize =>
+  list.map(l => statsFor(a).lowerBound <= l.asInstanceOf[Literal] &&
--- End diff --

But that configuration is a minimum threshold for `InSet`, so the meaning is 
quite different.





[GitHub] spark pull request #13887: [SPARK-16186][SQL] Support partition batch prunin...

2016-06-24 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/13887#discussion_r68380891
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala ---
@@ -79,6 +79,11 @@ private[sql] case class InMemoryTableScanExec(
 
 case IsNull(a: Attribute) => statsFor(a).nullCount > 0
 case IsNotNull(a: Attribute) => statsFor(a).count - statsFor(a).nullCount > 0
+
+case In(a: AttributeReference, list: Seq[Expression])
+  if list.length <= inMemoryPartitionPruningMaxInSize =>
+  list.map(l => statsFor(a).lowerBound <= l.asInstanceOf[Literal] &&
--- End diff --

How about we do this optimization for `InSet`? It guarantees the list is all 
literals, and the max length by default is 10. Then we can save the new 
config.
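
The suggestion above can be sketched with a simplified standalone model. These are illustrative stand-ins, not Spark's actual `InSet` expression or batch statistics; the point is only that a concrete value set needs no literal checks or extra size config:

```scala
// Simplified stand-ins; Spark's real InSet carries a child expression and a
// hash set of values, and the bounds come from columnar batch statistics.
case class Stats(lowerBound: Int, upperBound: Int)
case class InSetStandIn(hset: Set[Int])

// Keep a batch only if at least one value of the set can fall inside the
// batch's [lowerBound, upperBound] range; otherwise it is safe to skip.
def mayMatchBatch(stats: Stats, p: InSetStandIn): Boolean =
  p.hset.exists(v => stats.lowerBound <= v && v <= stats.upperBound)
```

Matching on `InSet` instead of `In` would make the literal-only guard and the length check in the diff above unnecessary, which is the point of the comment.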





[GitHub] spark pull request #13887: [SPARK-16186][SQL] Support partition batch prunin...

2016-06-24 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/13887#discussion_r68380170
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala ---
@@ -79,6 +79,11 @@ private[sql] case class InMemoryTableScanExec(
 
 case IsNull(a: Attribute) => statsFor(a).nullCount > 0
 case IsNotNull(a: Attribute) => statsFor(a).count - statsFor(a).nullCount > 0
+
+case In(a: AttributeReference, list: Seq[Expression])
+  if list.length <= inMemoryPartitionPruningMaxInSize =>
+  list.map(l => statsFor(a).lowerBound <= l.asInstanceOf[Literal] &&
--- End diff --

Oh, right. I missed that. I'll fix it by adding a check.
Thank you for the review, @cloud-fan !





[GitHub] spark pull request #13887: [SPARK-16186][SQL] Support partition batch prunin...

2016-06-24 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/13887#discussion_r68379466
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala ---
@@ -79,6 +79,11 @@ private[sql] case class InMemoryTableScanExec(
 
 case IsNull(a: Attribute) => statsFor(a).nullCount > 0
 case IsNotNull(a: Attribute) => statsFor(a).count - statsFor(a).nullCount > 0
+
+case In(a: AttributeReference, list: Seq[Expression])
+  if list.length <= inMemoryPartitionPruningMaxInSize =>
+  list.map(l => statsFor(a).lowerBound <= l.asInstanceOf[Literal] &&
--- End diff --

Where do we make sure `l` is always a literal?





[GitHub] spark pull request #13887: [SPARK-16186][SQL] Support partition batch prunin...

2016-06-24 Thread dongjoon-hyun
GitHub user dongjoon-hyun opened a pull request:

https://github.com/apache/spark/pull/13887

[SPARK-16186][SQL] Support partition batch pruning with `IN` predicate in 
InMemoryTableScanExec

## What changes were proposed in this pull request?

One of the most frequent usage patterns for Spark SQL is using **cached 
tables**. This PR improves `InMemoryTableScanExec` to handle `IN` predicates 
efficiently by pruning partition batches. Of course, the performance 
improvement varies across queries and datasets, but for the following simple 
query, the query duration in the Spark UI goes from 9 seconds to 50~90ms: 
about 100 times faster.
```scala
$ bin/spark-shell --driver-memory 6G
scala> val df = spark.range(20)
scala> df.createOrReplaceTempView("t")
scala> spark.catalog.cacheTable("t")
scala> sql("select id from t where id = 1").collect()        // About 2 mins
scala> sql("select id from t where id = 1").collect()        // Less than 90ms
scala> sql("select id from t where id in (1,2,3)").collect() // 9 seconds
scala> spark.conf.set("spark.sql.inMemoryColumnarStorage.partitionPruningMaxInSize", 10) // Enable. (Just to show this example; currently the default value is 10.)
scala> sql("select id from t where id in (1,2,3)").collect() // Less than 90ms
scala> spark.conf.set("spark.sql.inMemoryColumnarStorage.partitionPruningMaxInSize", 0)  // Disable
scala> sql("select id from t where id in (1,2,3)").collect() // 9 seconds
```

This PR impacts over 35 TPC-DS queries when the tables are cached.
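
The pruning idea can be sketched with a simplified standalone model. `BatchStats` and `pruneBatches` are illustrative names only, not the actual `InMemoryTableScanExec` internals, which work on per-batch column statistics:

```scala
// Hypothetical, simplified model of partition-batch pruning for an IN
// predicate over a numeric column.
case class BatchStats(lowerBound: Long, upperBound: Long)

// A batch may satisfy `id IN (v1, ..., vn)` only if some value falls within
// the batch's [lowerBound, upperBound] range; every other batch is skipped.
def pruneBatches(batches: Seq[BatchStats], inList: Seq[Long]): Seq[BatchStats] =
  batches.filter(b => inList.exists(v => b.lowerBound <= v && v <= b.upperBound))

object PruneDemo extends App {
  // E.g. a cached range of 20 ids split into two batches of ten values each.
  val batches = Seq(BatchStats(0L, 9L), BatchStats(10L, 19L))
  val kept = pruneBatches(batches, Seq(1L, 2L, 3L))
  assert(kept == Seq(BatchStats(0L, 9L))) // only the first batch survives
  println(s"scanned ${kept.size} of ${batches.size} batches")
}
```

The speedup in the transcript above comes from exactly this effect: batches whose min/max range cannot contain any of the `IN` values are never scanned.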

## How was this patch tested?

Pass the Jenkins tests (including new test cases).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dongjoon-hyun/spark SPARK-16186

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/13887.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #13887


commit 3b36e9cfb033762205900200a2249b8da3ba11bd
Author: Dongjoon Hyun 
Date:   2016-06-24T08:30:36Z

[SPARK-16186][SQL] Support partition batch pruning with `IN` predicate in 
InMemoryTableScanExec



