[GitHub] spark pull request: [SPARK-3537][SQL] Refines in-memory columnar t...

2014-10-20 Thread liancheng
GitHub user liancheng opened a pull request:

https://github.com/apache/spark/pull/2860

[SPARK-3537][SQL] Refines in-memory columnar table statistics

This PR refines in-memory columnar table statistics:

1. Adds three more statistics for in-memory table columns, `count`, `nullCount`,
   and `sizeInBytes`, plus filter pushdown support for `IS NULL` and `IS NOT NULL`.
2. Caches and propagates statistics in `InMemoryRelation` once the
   underlying cached RDD is materialized.

   Statistics are collected on the driver side with an accumulator.
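
As a rough, hedged illustration (not the code in this patch; `SimpleColumnStats`
and the two predicate helpers are hypothetical names), the following self-contained
Scala sketch shows the kind of per-column bookkeeping these statistics imply, and
how `count`/`nullCount` let a cached batch be skipped for `IS NULL` / `IS NOT NULL`:

```scala
// Hypothetical, simplified sketch -- not the actual Spark implementation.
object ColumnStatsSketch {

  // Gathers the three new statistics for a single string column.
  final class SimpleColumnStats {
    var count: Int = 0          // total rows seen in this batch
    var nullCount: Int = 0      // rows whose value was null
    var sizeInBytes: Long = 0L  // approximate size of the column data

    def gatherStats(value: String): Unit = {
      if (value == null) {
        nullCount += 1
      } else {
        sizeInBytes += value.getBytes("UTF-8").length  // walks the whole string
      }
      count += 1
    }
  }

  // A batch can be skipped for `IS NULL` when it contains no nulls,
  // and for `IS NOT NULL` when every row in it is null.
  def mayContainNulls(stats: SimpleColumnStats): Boolean =
    stats.nullCount > 0
  def mayContainNonNulls(stats: SimpleColumnStats): Boolean =
    stats.count > stats.nullCount

  def main(args: Array[String]): Unit = {
    val stats = new SimpleColumnStats
    Seq("spark", null, "sql").foreach(stats.gatherStats)
    println(s"count=${stats.count} nullCount=${stats.nullCount} sizeInBytes=${stats.sizeInBytes}")
    println(s"worth scanning for IS NULL? ${mayContainNulls(stats)}")        // true
    println(s"worth scanning for IS NOT NULL? ${mayContainNonNulls(stats)}") // true
  }
}
```

In the actual patch the per-batch values would then be merged on the driver through
an accumulator; the sketch only covers the per-column bookkeeping itself.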

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/liancheng/spark propagates-in-mem-stats

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/2860.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2860


commit 7dc6a34166ad915e07438795ce6b6ea67b3fdee6
Author: Cheng Lian l...@databricks.com
Date:   2014-10-20T17:13:59Z

Adds more in-memory table statistics and propagates them properly






[GitHub] spark pull request: [SPARK-3537][SQL] Refines in-memory columnar t...

2014-10-20 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/2860#discussion_r19099520
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/columnar/ColumnStats.scala ---
@@ -24,11 +24,13 @@ import org.apache.spark.sql.catalyst.expressions.{AttributeMap, Attribute, Attri
 import org.apache.spark.sql.catalyst.types._
 
 private[sql] class ColumnStatisticsSchema(a: Attribute) extends Serializable {
-  val upperBound = AttributeReference(a.name + ".upperBound", a.dataType, nullable = false)()
-  val lowerBound = AttributeReference(a.name + ".lowerBound", a.dataType, nullable = false)()
-  val nullCount =  AttributeReference(a.name + ".nullCount", IntegerType, nullable = false)()
+  val upperBound = AttributeReference(a.name + ".upperBound", a.dataType, nullable = true)()
+  val lowerBound = AttributeReference(a.name + ".lowerBound", a.dataType, nullable = true)()
--- End diff --

Upper/lower bound can be null for types like string.
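
For illustration only (a hypothetical sketch, not code from this patch): if a
string column batch happens to contain only nulls, no comparison ever runs, so
the bounds are never assigned and stay null, which is why the attributes must
be declared `nullable = true`:

```scala
// Hypothetical sketch: bounds of an all-null string batch are never assigned.
final class StringBoundsSketch {
  var upperBound: String = null
  var lowerBound: String = null

  def gatherStats(value: String): Unit = {
    if (value != null) {
      if (upperBound == null || value.compareTo(upperBound) > 0) upperBound = value
      if (lowerBound == null || value.compareTo(lowerBound) < 0) lowerBound = value
    }
  }
}
// Feed it only nulls and both bounds remain null, hence nullable = true above.
```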




[GitHub] spark pull request: [SPARK-3537][SQL] Refines in-memory columnar t...

2014-10-20 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/2860#discussion_r19099771
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/columnar/ColumnStats.scala ---
@@ -185,15 +196,16 @@ private[sql] class StringColumnStats extends ColumnStats {
     } else {
       nullCount += 1
     }
+    count += 1
+    sizeInBytes += STRING.actualSize(row, ordinal)
--- End diff --

This can potentially slow down the caching process for string columns, because
the `.getBytes("utf-8")` call within `actualSize` traverses the whole string.
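
One possible mitigation, purely as a hypothetical sketch (the `utf8Length` helper
below is not part of this patch or of Spark), would be to compute the UTF-8 size
without materializing the encoded byte array; it still walks the string once, but
avoids the extra allocation and copy that `getBytes` performs:

```scala
// Hypothetical helper: exact UTF-8 byte count without allocating a byte array.
object Utf8SizeSketch {
  def utf8Length(s: String): Int = {
    var bytes = 0
    var i = 0
    while (i < s.length) {
      val cp = s.codePointAt(i)
      bytes += (if (cp < 0x80) 1 else if (cp < 0x800) 2 else if (cp < 0x10000) 3 else 4)
      i += Character.charCount(cp)
    }
    bytes
  }

  def main(args: Array[String]): Unit = {
    println(utf8Length("café"))               // 5
    println("café".getBytes("UTF-8").length)  // 5, but allocates a temporary array
  }
}
```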




[GitHub] spark pull request: [SPARK-3537][SQL] Refines in-memory columnar t...

2014-10-20 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/2860#issuecomment-59806948
  
  [QA tests have started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21923/consoleFull) for PR 2860 at commit [`7dc6a34`](https://github.com/apache/spark/commit/7dc6a34166ad915e07438795ce6b6ea67b3fdee6).
 * This patch merges cleanly.

