[spark] branch branch-2.4 updated: [SPARK-31306][DOCS] update rand() function documentation to indicate exclusive upper bound

2020-03-30 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new e226f68  [SPARK-31306][DOCS] update rand() function documentation to 
indicate exclusive upper bound
e226f68 is described below

commit e226f687c172c63ce9ae6531772af9df124c9454
Author: Ben Ryves 
AuthorDate: Tue Mar 31 15:16:17 2020 +0900

[SPARK-31306][DOCS] update rand() function documentation to indicate 
exclusive upper bound

### What changes were proposed in this pull request?
A small documentation change to clarify that the `rand()` function produces 
values in `[0.0, 1.0)`.

### Why are the changes needed?
`rand()` uses `Rand()` - which generates values in [0, 1) ([documented 
here](https://github.com/apache/spark/blob/a1dbcd13a3eeaee50cc1a46e909f9478d6d55177/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/randomExpressions.scala#L71)).
 The existing documentation suggests that 1.0 is a possible value returned by 
rand (i.e., for a distribution written as `X ~ U(a, b)`, x can be a or b, so 
`U[0.0, 1.0]` suggests the value returned could include 1.0).
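The half-open contract described above can be illustrated with Python's standard `random.random()`, which follows the same `[0.0, 1.0)` convention (an illustrative sketch only, not part of this commit; exercising Spark's `rand()` itself would require a live Spark session):

```python
import random

# random.random(), like Spark's rand(), draws i.i.d. samples uniformly
# from the half-open interval [0.0, 1.0): 0.0 is a possible return
# value, 1.0 is not.
samples = [random.random() for _ in range(100_000)]
assert all(0.0 <= x < 1.0 for x in samples)

# Scaling to U[a, b) preserves the exclusive upper bound, which is why
# the interval notation matters when documenting such functions.
a, b = 2.0, 5.0
scaled = [a + (b - a) * x for x in samples]
assert all(a <= x < b for x in scaled)
```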

### Does this PR introduce any user-facing change?
Only documentation changes.

### How was this patch tested?
Documentation changes only.

Closes #28071 from Smeb/master.

Authored-by: Ben Ryves 
Signed-off-by: HyukjinKwon 
---
 R/pkg/R/functions.R  | 2 +-
 python/pyspark/sql/functions.py  | 2 +-
 sql/core/src/main/scala/org/apache/spark/sql/functions.scala | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/R/pkg/R/functions.R b/R/pkg/R/functions.R
index e914dd3..09b0a21 100644
--- a/R/pkg/R/functions.R
+++ b/R/pkg/R/functions.R
@@ -2614,7 +2614,7 @@ setMethod("lpad", signature(x = "Column", len = 
"numeric", pad = "character"),
 
 #' @details
 #' \code{rand}: Generates a random column with independent and identically 
distributed (i.i.d.)
-#' samples from U[0.0, 1.0].
+#' samples uniformly distributed in [0.0, 1.0).
 #' Note: the function is non-deterministic in general case.
 #'
 #' @rdname column_nonaggregate_functions
diff --git a/python/pyspark/sql/functions.py b/python/pyspark/sql/functions.py
index b964980..c305529 100644
--- a/python/pyspark/sql/functions.py
+++ b/python/pyspark/sql/functions.py
@@ -553,7 +553,7 @@ def nanvl(col1, col2):
 @since(1.4)
 def rand(seed=None):
 """Generates a random column with independent and identically distributed 
(i.i.d.) samples
-from U[0.0, 1.0].
+uniformly distributed in [0.0, 1.0).
 
 .. note:: The function is non-deterministic in general case.
 
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/functions.scala 
b/sql/core/src/main/scala/org/apache/spark/sql/functions.scala
index f419a38..21ad1fd 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/functions.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/functions.scala
@@ -1224,7 +1224,7 @@ object functions {
 
   /**
* Generate a random column with independent and identically distributed 
(i.i.d.) samples
-   * from U[0.0, 1.0].
+   * uniformly distributed in [0.0, 1.0).
*
* @note The function is non-deterministic in general case.
*
@@ -1235,7 +1235,7 @@ object functions {
 
   /**
* Generate a random column with independent and identically distributed 
(i.i.d.) samples
-   * from U[0.0, 1.0].
+   * uniformly distributed in [0.0, 1.0).
*
* @note The function is non-deterministic in general case.
*


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org




[spark] branch master updated: [SPARK-31306][DOCS] update rand() function documentation to indicate exclusive upper bound

2020-03-30 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new fa37856  [SPARK-31306][DOCS] update rand() function documentation to 
indicate exclusive upper bound
fa37856 is described below

commit fa378567105ec9d9bbe30edf4b74b09c3df27658
Author: Ben Ryves 
AuthorDate: Tue Mar 31 15:16:17 2020 +0900

[SPARK-31306][DOCS] update rand() function documentation to indicate 
exclusive upper bound

### What changes were proposed in this pull request?
A small documentation change to clarify that the `rand()` function produces 
values in `[0.0, 1.0)`.

### Why are the changes needed?
`rand()` uses `Rand()` - which generates values in [0, 1) ([documented 
here](https://github.com/apache/spark/blob/a1dbcd13a3eeaee50cc1a46e909f9478d6d55177/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/randomExpressions.scala#L71)).
 The existing documentation suggests that 1.0 is a possible value returned by 
rand (i.e., for a distribution written as `X ~ U(a, b)`, x can be a or b, so 
`U[0.0, 1.0]` suggests the value returned could include 1.0).

### Does this PR introduce any user-facing change?
Only documentation changes.

### How was this patch tested?
Documentation changes only.

Closes #28071 from Smeb/master.

Authored-by: Ben Ryves 
Signed-off-by: HyukjinKwon 
---
 R/pkg/R/functions.R  | 2 +-
 python/pyspark/sql/functions.py  | 2 +-
 sql/core/src/main/scala/org/apache/spark/sql/functions.scala | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/R/pkg/R/functions.R b/R/pkg/R/functions.R
index 3d30ce1..2baf3aa 100644
--- a/R/pkg/R/functions.R
+++ b/R/pkg/R/functions.R
@@ -2975,7 +2975,7 @@ setMethod("lpad", signature(x = "Column", len = 
"numeric", pad = "character"),
 
 #' @details
 #' \code{rand}: Generates a random column with independent and identically 
distributed (i.i.d.)
-#' samples from U[0.0, 1.0].
+#' samples uniformly distributed in [0.0, 1.0).
 #' Note: the function is non-deterministic in general case.
 #'
 #' @rdname column_nonaggregate_functions
diff --git a/python/pyspark/sql/functions.py b/python/pyspark/sql/functions.py
index 4b51dc1..de0d38e 100644
--- a/python/pyspark/sql/functions.py
+++ b/python/pyspark/sql/functions.py
@@ -652,7 +652,7 @@ def percentile_approx(col, percentage, accuracy=1):
 @since(1.4)
 def rand(seed=None):
 """Generates a random column with independent and identically distributed 
(i.i.d.) samples
-from U[0.0, 1.0].
+uniformly distributed in [0.0, 1.0).
 
 .. note:: The function is non-deterministic in general case.
 
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/functions.scala 
b/sql/core/src/main/scala/org/apache/spark/sql/functions.scala
index 1a0244f..8d8638d 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/functions.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/functions.scala
@@ -1227,7 +1227,7 @@ object functions {
 
   /**
* Generate a random column with independent and identically distributed 
(i.i.d.) samples
-   * from U[0.0, 1.0].
+   * uniformly distributed in [0.0, 1.0).
*
* @note The function is non-deterministic in general case.
*
@@ -1238,7 +1238,7 @@ object functions {
 
   /**
* Generate a random column with independent and identically distributed 
(i.i.d.) samples
-   * from U[0.0, 1.0].
+   * uniformly distributed in [0.0, 1.0).
*
* @note The function is non-deterministic in general case.
*


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.0 updated: [SPARK-31306][DOCS] update rand() function documentation to indicate exclusive upper bound

2020-03-30 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new 1caca7d  [SPARK-31306][DOCS] update rand() function documentation to 
indicate exclusive upper bound
1caca7d is described below

commit 1caca7d97a03ab9ac99597e1ef9fa3890da90743
Author: Ben Ryves 
AuthorDate: Tue Mar 31 15:16:17 2020 +0900

[SPARK-31306][DOCS] update rand() function documentation to indicate 
exclusive upper bound

### What changes were proposed in this pull request?
A small documentation change to clarify that the `rand()` function produces 
values in `[0.0, 1.0)`.

### Why are the changes needed?
`rand()` uses `Rand()` - which generates values in [0, 1) ([documented 
here](https://github.com/apache/spark/blob/a1dbcd13a3eeaee50cc1a46e909f9478d6d55177/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/randomExpressions.scala#L71)).
 The existing documentation suggests that 1.0 is a possible value returned by 
rand (i.e., for a distribution written as `X ~ U(a, b)`, x can be a or b, so 
`U[0.0, 1.0]` suggests the value returned could include 1.0).

### Does this PR introduce any user-facing change?
Only documentation changes.

### How was this patch tested?
Documentation changes only.

Closes #28071 from Smeb/master.

Authored-by: Ben Ryves 
Signed-off-by: HyukjinKwon 
(cherry picked from commit fa378567105ec9d9bbe30edf4b74b09c3df27658)
Signed-off-by: HyukjinKwon 
---
 R/pkg/R/functions.R  | 2 +-
 python/pyspark/sql/functions.py  | 2 +-
 sql/core/src/main/scala/org/apache/spark/sql/functions.scala | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/R/pkg/R/functions.R b/R/pkg/R/functions.R
index d8b0450..173dbc4 100644
--- a/R/pkg/R/functions.R
+++ b/R/pkg/R/functions.R
@@ -2888,7 +2888,7 @@ setMethod("lpad", signature(x = "Column", len = 
"numeric", pad = "character"),
 
 #' @details
 #' \code{rand}: Generates a random column with independent and identically 
distributed (i.i.d.)
-#' samples from U[0.0, 1.0].
+#' samples uniformly distributed in [0.0, 1.0).
 #' Note: the function is non-deterministic in general case.
 #'
 #' @rdname column_nonaggregate_functions
diff --git a/python/pyspark/sql/functions.py b/python/pyspark/sql/functions.py
index 1ade21c..476aab4 100644
--- a/python/pyspark/sql/functions.py
+++ b/python/pyspark/sql/functions.py
@@ -599,7 +599,7 @@ def nanvl(col1, col2):
 @since(1.4)
 def rand(seed=None):
 """Generates a random column with independent and identically distributed 
(i.i.d.) samples
-from U[0.0, 1.0].
+uniformly distributed in [0.0, 1.0).
 
 .. note:: The function is non-deterministic in general case.
 
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/functions.scala 
b/sql/core/src/main/scala/org/apache/spark/sql/functions.scala
index 8a89a3b..fd4e77f 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/functions.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/functions.scala
@@ -1204,7 +1204,7 @@ object functions {
 
   /**
* Generate a random column with independent and identically distributed 
(i.i.d.) samples
-   * from U[0.0, 1.0].
+   * uniformly distributed in [0.0, 1.0).
*
* @note The function is non-deterministic in general case.
*
@@ -1215,7 +1215,7 @@ object functions {
 
   /**
* Generate a random column with independent and identically distributed 
(i.i.d.) samples
-   * from U[0.0, 1.0].
+   * uniformly distributed in [0.0, 1.0).
*
* @note The function is non-deterministic in general case.
*


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (fc5d67f -> 4fc8ee7)

2020-03-30 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from fc5d67f  [SPARK-31282][DOC] Supplement version for configuration 
appear in security doc
 add 4fc8ee7  [SPARK-31295][DOC] Supplement version for configuration 
appear in doc

No new revisions were added by this update.

Summary of changes:
 docs/spark-standalone.md | 13 -
 docs/sql-data-sources-avro.md| 21 +
 docs/sql-data-sources-orc.md | 16 +---
 docs/sql-data-sources-parquet.md |  9 -
 4 files changed, 50 insertions(+), 9 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (4fc8ee7 -> 47c810f)

2020-03-30 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 4fc8ee7  [SPARK-31295][DOC] Supplement version for configuration 
appear in doc
 add 47c810f  [SPARK-31279][SQL][DOC] Add version information to the 
configuration of Hive

No new revisions were added by this update.

Summary of changes:
 docs/sql-data-sources-hive-tables.md  |  6 +-
 .../src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala  | 11 +++
 2 files changed, 16 insertions(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (18b73a5 -> fc5d67f)

2020-03-30 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 18b73a5  [SPARK-31269][DOC] Supplement version for configuration only 
appear in configuration doc
 add fc5d67f  [SPARK-31282][DOC] Supplement version for configuration 
appear in security doc

No new revisions were added by this update.

Summary of changes:
 docs/security.md | 43 +++
 1 file changed, 31 insertions(+), 12 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (bed2177 -> 18b73a5)

2020-03-30 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from bed2177  [SPARK-31215][SQL][DOC] Add version information to the static 
configuration of SQL
 add 18b73a5  [SPARK-31269][DOC] Supplement version for configuration only 
appear in configuration doc

No new revisions were added by this update.

Summary of changes:
 docs/configuration.md | 88 ---
 1 file changed, 69 insertions(+), 19 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated: [SPARK-31215][SQL][DOC] Add version information to the static configuration of SQL

2020-03-30 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new bed2177  [SPARK-31215][SQL][DOC] Add version information to the static 
configuration of SQL
bed2177 is described below

commit bed21770af67f99f7a1b49a078604abfd0c3e8d6
Author: beliefer 
AuthorDate: Tue Mar 31 12:31:25 2020 +0900

[SPARK-31215][SQL][DOC] Add version information to the static configuration 
of SQL

### What changes were proposed in this pull request?
Add version information to the static configuration of `SQL`.

I sorted out some information, shown below.

Item name | Since version | JIRA ID | Commit ID | Note
-- | -- | -- | -- | --
spark.sql.warehouse.dir | 2.0.0 | SPARK-14994 | 054f991c4350af1350af7a4109ee77f4a34822f0#diff-32bb9518401c0948c5ea19377b5069ab |
spark.sql.catalogImplementation | 2.0.0 | SPARK-14720 and SPARK-13643 | 8fc267ab3322e46db81e725a5cb1adb5a71b2b4d#diff-6bdad48cfc34314e89599655442ff210 |
spark.sql.globalTempDatabase | 2.1.0 | SPARK-17338 | 23ddff4b2b2744c3dc84d928e144c541ad5df376#diff-6bdad48cfc34314e89599655442ff210 |
spark.sql.sources.schemaStringLengthThreshold | 1.3.1 | SPARK-6024 | 6200f0709c5c8440decae8bf700d7859f32ac9d5#diff-41ef65b9ef5b518f77e2a03559893f4d | 1.3
spark.sql.filesourceTableRelationCacheSize | 2.2.0 | SPARK-19265 | 9d9d67c7957f7cbbdbe889bdbc073568b2bfbb16#diff-32bb9518401c0948c5ea19377b5069ab |
spark.sql.codegen.cache.maxEntries | 2.4.0 | SPARK-24727 | b2deef64f604ddd9502a31105ed47cb63470ec85#diff-5081b9388de3add800b6e4a6ddf55c01 |
spark.sql.codegen.comments | 2.0.0 | SPARK-15680 | f0e8738c1ec0e4c5526aeada6f50cf76428f9afd#diff-8bcc5aea39c73d4bf38aef6f6951d42c |
spark.sql.debug | 2.1.0 | SPARK-17899 | db8784feaa605adcbd37af4bc8b7146479b631f8#diff-32bb9518401c0948c5ea19377b5069ab |
spark.sql.hive.thriftServer.singleSession | 1.6.0 | SPARK-11089 | 167ea61a6a604fd9c0b00122a94d1bc4b1de24ff#diff-ff50aea397a607b79df9bec6f2a841db |
spark.sql.extensions | 2.2.0 | SPARK-18127 | f0de600797ff4883927d0c70732675fd8629e239#diff-5081b9388de3add800b6e4a6ddf55c01 |
spark.sql.queryExecutionListeners | 2.3.0 | SPARK-19558 | bd4eb9ce57da7bacff69d9ed958c94f349b7e6fb#diff-5081b9388de3add800b6e4a6ddf55c01 |
spark.sql.streaming.streamingQueryListeners | 2.4.0 | SPARK-24479 | 7703b46d2843db99e28110c4c7ccf60934412504#diff-5081b9388de3add800b6e4a6ddf55c01 |
spark.sql.ui.retainedExecutions | 1.5.0 | SPARK-8861 and SPARK-8862 | ebc3aad272b91cf58e2e1b4aa92b49b8a947a045#diff-81764e4d52817f83bdd5336ef1226bd9 |
spark.sql.broadcastExchange.maxThreadThreshold | 3.0.0 | SPARK-26601 | 126310ca68f2f248ea8b312c4637eccaba2fdc2b#diff-5081b9388de3add800b6e4a6ddf55c01 |
spark.sql.subquery.maxThreadThreshold | 2.4.6 | SPARK-30556 | 2fc562cafd71ec8f438f37a28b65118906ab2ad2#diff-5081b9388de3add800b6e4a6ddf55c01 |
spark.sql.event.truncate.length | 3.0.0 | SPARK-27045 | e60d8fce0b0cf2a6d766ea2fc5f994546550570a#diff-5081b9388de3add800b6e4a6ddf55c01 |
spark.sql.legacy.sessionInitWithConfigDefaults | 3.0.0 | SPARK-27253 | 83f628b57da39ad9732d1393aebac373634a2eb9#diff-5081b9388de3add800b6e4a6ddf55c01 |
spark.sql.defaultUrlStreamHandlerFactory.enabled | 3.0.0 | SPARK-25694 | 8469614c0513fbed87977d4e741649db3fdd8add#diff-5081b9388de3add800b6e4a6ddf55c01 |
spark.sql.streaming.ui.enabled | 3.0.0 | SPARK-29543 | f9b86370cb04b72a4f00cbd4d60873960aa2792c#diff-5081b9388de3add800b6e4a6ddf55c01 |
spark.sql.streaming.ui.retainedProgressUpdates | 3.0.0 | SPARK-29543 | f9b86370cb04b72a4f00cbd4d60873960aa2792c#diff-5081b9388de3add800b6e4a6ddf55c01 |
spark.sql.streaming.ui.retainedQueries | 3.0.0 | SPARK-29543 | f9b86370cb04b72a4f00cbd4d60873960aa2792c#diff-5081b9388de3add800b6e4a6ddf55c01 |
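For quick programmatic lookup, a few rows of the mapping above can be transcribed into a small table (a convenience sketch only; the config names and versions are copied from the commit message, and the `since` helper is hypothetical, not part of the patch):

```python
# "Since" versions for a sample of the static SQL configs listed above,
# transcribed from this commit message's table.
STATIC_CONF_SINCE = {
    "spark.sql.warehouse.dir": "2.0.0",
    "spark.sql.catalogImplementation": "2.0.0",
    "spark.sql.globalTempDatabase": "2.1.0",
    "spark.sql.extensions": "2.2.0",
    "spark.sql.ui.retainedExecutions": "1.5.0",
}

def since(conf_name: str) -> str:
    """Return the Spark version that introduced a static SQL config."""
    return STATIC_CONF_SINCE[conf_name]

assert since("spark.sql.extensions") == "2.2.0"
```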

### Why are the changes needed?
Supplemental configuration version information.

### Does this PR introduce any user-facing change?
'No'.

### How was this patch tested?
Existing UTs.

Closes #27981 from beliefer/add-version-to-sql-static-config.

Authored-by: beliefer 
Signed-off-by: HyukjinKwon 
---
 docs/configuration.md  |  1 +
 .../apache/spark/sql/internal/StaticSQLConf.scala  | 25 --
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/docs/configuration.md b/docs/configuration.md
index a7a1477..4835336 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -1233,6 +1233,7 @@ Apart from these, the following properties are also 
available, and may be useful
   
 How many finished executions the Spark UI and status APIs remember before 
garbage collecting.
   
+  1.5.0
 
 
   spark.streaming.ui.retainedBatches
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark

[spark] branch master updated (cda2e30 -> 1dce6c1)

2020-03-30 Thread ruifengz
This is an automated email from the ASF dual-hosted git repository.

ruifengz pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from cda2e30  Revert "[SPARK-31280][SQL] Perform propagating empty relation 
after RewritePredicateSubquery"
 add 1dce6c1  [SPARK-31222][ML] Make ANOVATest Sparsity-Aware

No new revisions were added by this update.

Summary of changes:
 .../scala/org/apache/spark/ml/stat/ANOVATest.scala | 190 ++---
 .../apache/spark/mllib/stat/test/ChiSqTest.scala   |  52 +++---
 .../spark/ml/feature/ANOVASelectorSuite.scala  |   3 -
 .../org/apache/spark/ml/stat/ANOVATestSuite.scala  |  40 +++--
 4 files changed, 173 insertions(+), 112 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (aa98ac5 -> cda2e30)

2020-03-30 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from aa98ac5  [SPARK-30775][DOC] Improve the description of executor 
metrics in the monitoring documentation
 add cda2e30  Revert "[SPARK-31280][SQL] Perform propagating empty relation 
after RewritePredicateSubquery"

No new revisions were added by this update.

Summary of changes:
 .../spark/sql/catalyst/optimizer/Optimizer.scala   |  2 --
 .../catalyst/optimizer/RewriteSubquerySuite.scala  | 17 +++---
 .../apache/spark/sql/execution/PlannerSuite.scala  | 36 --
 3 files changed, 4 insertions(+), 51 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (1d0fc9a -> aa98ac5)

2020-03-30 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 1d0fc9a  [SPARK-29574][K8S][FOLLOWUP] Fix bash comparison error in 
Docker entrypoint.sh
 add aa98ac5  [SPARK-30775][DOC] Improve the description of executor 
metrics in the monitoring documentation

No new revisions were added by this update.

Summary of changes:
 docs/monitoring.md | 58 +++---
 1 file changed, 51 insertions(+), 7 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (0d997e5 -> 1d0fc9a)

2020-03-30 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 0d997e5  [SPARK-31219][YARN] Enable closeIdleConnections in 
YarnShuffleService
 add 1d0fc9a  [SPARK-29574][K8S][FOLLOWUP] Fix bash comparison error in 
Docker entrypoint.sh

No new revisions were added by this update.

Summary of changes:
 .../kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.0 updated: [SPARK-31219][YARN] Enable closeIdleConnections in YarnShuffleService

2020-03-30 Thread tgraves
This is an automated email from the ASF dual-hosted git repository.

tgraves pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new 7329c25  [SPARK-31219][YARN] Enable closeIdleConnections in 
YarnShuffleService
7329c25 is described below

commit 7329c256c6d02cbc700d367320ef20d215bca8aa
Author: manuzhang 
AuthorDate: Mon Mar 30 12:44:46 2020 -0500

[SPARK-31219][YARN] Enable closeIdleConnections in YarnShuffleService

### What changes were proposed in this pull request?
Close idle connections on the shuffle server side when an `IdleStateEvent` is 
triggered after `spark.shuffle.io.connectionTimeout` or `spark.network.timeout` 
elapses. This is based on the following investigations.

1. We found connections on our clusters building up continuously (> 10k for 
some nodes). Is that normal? We don't think so.
2. We looked into the connections on one node and found a lot of half-open 
connections (the connections existed on only one node).
3. We also checked that those connections were very old (> 21 hours). (FYI, 
https://superuser.com/questions/565991/how-to-determine-the-socket-connection-up-time-on-linux)
4. Looking at the code, TransportContext registers an IdleStateHandler, which 
should fire an IdleStateEvent on timeout. We did a heap dump of the 
YarnShuffleService and checked the attributes of IdleStateHandler. It turned 
out firstAllIdleEvent of many IdleStateHandlers was already false, so 
IdleStateEvents had already fired.
5. Finally, we realized the IdleStateEvent would not be handled, since 
closeIdleConnections is hardcoded to false for YarnShuffleService.
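The behavioural change can be modelled with a toy tracker (purely illustrative; the real mechanism is Netty's `IdleStateHandler` firing `IdleStateEvent`s inside Spark's `TransportContext`, and the class below is a hypothetical sketch, not Spark code):

```python
class ConnectionTracker:
    """Toy model of idle-connection reaping. With close_idle_connections
    False (the old YarnShuffleService behaviour), idle events are
    ignored and connections linger; with True, stale ones are closed."""

    def __init__(self, idle_timeout_s: float, close_idle_connections: bool):
        self.idle_timeout_s = idle_timeout_s
        self.close_idle = close_idle_connections
        self.last_active = {}  # connection id -> last activity timestamp

    def touch(self, conn: str, now: float) -> None:
        # Record activity on a connection (open, read, or write).
        self.last_active[conn] = now

    def on_idle_check(self, now: float) -> None:
        # Fired periodically, like Netty's IdleStateEvent.
        if not self.close_idle:
            return  # event fired but unhandled: half-open conns accumulate
        for conn, ts in list(self.last_active.items()):
            if now - ts > self.idle_timeout_s:
                del self.last_active[conn]  # close the idle connection

# Without reaping, a stale connection survives the idle check.
old = ConnectionTracker(idle_timeout_s=120, close_idle_connections=False)
old.touch("conn-1", now=0)
old.on_idle_check(now=1000)
assert "conn-1" in old.last_active

# With the fix (closeIdleConnections enabled), it is closed.
new = ConnectionTracker(idle_timeout_s=120, close_idle_connections=True)
new.touch("conn-1", now=0)
new.on_idle_check(now=1000)
assert "conn-1" not in new.last_active
```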

### Why are the changes needed?
Idle connections to YarnShuffleService are never closed, so they accumulate 
and take up memory and file descriptors.

### Does this PR introduce any user-facing change?
No.

### How was this patch tested?
Existing tests.

Closes #27998 from manuzhang/spark-31219.

Authored-by: manuzhang 
Signed-off-by: Thomas Graves 
(cherry picked from commit 0d997e5156a751c99cd6f8be1528ed088a585d1f)
Signed-off-by: Thomas Graves 
---
 .../src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
 
b/common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
index 815a56d..c41efba 100644
--- 
a/common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
+++ 
b/common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
@@ -188,7 +188,7 @@ public class YarnShuffleService extends AuxiliaryService {
 
   int port = conf.getInt(
 SPARK_SHUFFLE_SERVICE_PORT_KEY, DEFAULT_SPARK_SHUFFLE_SERVICE_PORT);
-  transportContext = new TransportContext(transportConf, blockHandler);
+  transportContext = new TransportContext(transportConf, blockHandler, 
true);
   shuffleServer = transportContext.createServer(port, bootstraps);
   // the port should normally be fixed, but for tests its useful to find 
an open port
   port = shuffleServer.getPort();


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated: [SPARK-31219][YARN] Enable closeIdleConnections in YarnShuffleService

2020-03-30 Thread tgraves

tgraves pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 0d997e5  [SPARK-31219][YARN] Enable closeIdleConnections in 
YarnShuffleService
0d997e5 is described below

commit 0d997e5156a751c99cd6f8be1528ed088a585d1f
Author: manuzhang 
AuthorDate: Mon Mar 30 12:44:46 2020 -0500

[SPARK-31219][YARN] Enable closeIdleConnections in YarnShuffleService

### What changes were proposed in this pull request?
Close idle connections at the shuffle server side when an `IdleStateEvent` is
triggered after `spark.shuffle.io.connectionTimeout` or `spark.network.timeout`
has elapsed. This is based on the following investigations.

1. We found connections on our clusters building up continuously (> 10k on
some nodes), which is not normal.
2. We looked into the connections on one node and found a lot of half-open
connections (the connections existed only on one node).
3. We also checked that those connections were very old (> 21 hours). (FYI,
https://superuser.com/questions/565991/how-to-determine-the-socket-connection-up-time-on-linux)
4. Looking at the code, TransportContext registers an IdleStateHandler, which
should fire an IdleStateEvent on timeout. We did a heap dump of the
YarnShuffleService and checked the attributes of IdleStateHandler; it turned
out that firstAllIdleEvent of many IdleStateHandlers was already false, so
IdleStateEvents had already been fired.
5. Finally, we realized the IdleStateEvent would never be handled, since
closeIdleConnections is hardcoded to false for YarnShuffleService.

### Why are the changes needed?
Idle connections to the YarnShuffleService are never closed, so they
accumulate and take up memory and file descriptors.

### Does this PR introduce any user-facing change?
No.

### How was this patch tested?
Existing tests.

Closes #27998 from manuzhang/spark-31219.

Authored-by: manuzhang 
Signed-off-by: Thomas Graves 
---
 .../src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
 
b/common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
index 815a56d..c41efba 100644
--- 
a/common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
+++ 
b/common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
@@ -188,7 +188,7 @@ public class YarnShuffleService extends AuxiliaryService {
 
   int port = conf.getInt(
 SPARK_SHUFFLE_SERVICE_PORT_KEY, DEFAULT_SPARK_SHUFFLE_SERVICE_PORT);
-  transportContext = new TransportContext(transportConf, blockHandler);
+  transportContext = new TransportContext(transportConf, blockHandler, 
true);
   shuffleServer = transportContext.createServer(port, bootstraps);
   // the port should normally be fixed, but for tests its useful to find 
an open port
   port = shuffleServer.getPort();





svn commit: r38740 - /dev/spark/v3.0.0-rc1-bin/

2020-03-30 Thread rxin
Author: rxin
Date: Mon Mar 30 16:00:46 2020
New Revision: 38740

Log:
Apache Spark v3.0.0-rc1

Added:
dev/spark/v3.0.0-rc1-bin/
dev/spark/v3.0.0-rc1-bin/SparkR_3.0.0.tar.gz   (with props)
dev/spark/v3.0.0-rc1-bin/SparkR_3.0.0.tar.gz.asc
dev/spark/v3.0.0-rc1-bin/SparkR_3.0.0.tar.gz.sha512
dev/spark/v3.0.0-rc1-bin/pyspark-3.0.0.tar.gz   (with props)
dev/spark/v3.0.0-rc1-bin/pyspark-3.0.0.tar.gz.asc
dev/spark/v3.0.0-rc1-bin/pyspark-3.0.0.tar.gz.sha512
dev/spark/v3.0.0-rc1-bin/spark-3.0.0-bin-hadoop2.7-hive1.2.tgz   (with 
props)
dev/spark/v3.0.0-rc1-bin/spark-3.0.0-bin-hadoop2.7-hive1.2.tgz.asc
dev/spark/v3.0.0-rc1-bin/spark-3.0.0-bin-hadoop2.7-hive1.2.tgz.sha512
dev/spark/v3.0.0-rc1-bin/spark-3.0.0-bin-hadoop2.7.tgz   (with props)
dev/spark/v3.0.0-rc1-bin/spark-3.0.0-bin-hadoop2.7.tgz.asc
dev/spark/v3.0.0-rc1-bin/spark-3.0.0-bin-hadoop2.7.tgz.sha512
dev/spark/v3.0.0-rc1-bin/spark-3.0.0-bin-hadoop3.2.tgz   (with props)
dev/spark/v3.0.0-rc1-bin/spark-3.0.0-bin-hadoop3.2.tgz.asc
dev/spark/v3.0.0-rc1-bin/spark-3.0.0-bin-hadoop3.2.tgz.sha512
dev/spark/v3.0.0-rc1-bin/spark-3.0.0-bin-without-hadoop.tgz   (with props)
dev/spark/v3.0.0-rc1-bin/spark-3.0.0-bin-without-hadoop.tgz.asc
dev/spark/v3.0.0-rc1-bin/spark-3.0.0-bin-without-hadoop.tgz.sha512
dev/spark/v3.0.0-rc1-bin/spark-3.0.0.tgz   (with props)
dev/spark/v3.0.0-rc1-bin/spark-3.0.0.tgz.asc
dev/spark/v3.0.0-rc1-bin/spark-3.0.0.tgz.sha512

Added: dev/spark/v3.0.0-rc1-bin/SparkR_3.0.0.tar.gz
==
Binary file - no diff available.

Propchange: dev/spark/v3.0.0-rc1-bin/SparkR_3.0.0.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/spark/v3.0.0-rc1-bin/SparkR_3.0.0.tar.gz.asc
==
--- dev/spark/v3.0.0-rc1-bin/SparkR_3.0.0.tar.gz.asc (added)
+++ dev/spark/v3.0.0-rc1-bin/SparkR_3.0.0.tar.gz.asc Mon Mar 30 16:00:46 2020
@@ -0,0 +1,17 @@
+-BEGIN PGP SIGNATURE-
+
+iQJEBAABCgAuFiEESovaSObiEqc0YyUC3qlj4uk0fWYFAl6CCPMQHHJ4aW5AYXBh
+Y2hlLm9yZwAKCRDeqWPi6TR9Zr8LD/9WOO4mDufkmhhXk78zWAyhRjJpG0Kjuvla
+KEnx8MK4MUtr77cQsmVLgj+FXFwmUvtZTZXHJX704Jk6xAAFXzii4EwIfk46wka0
+CY0arEleHJ6MBohLbOVW3sp86LduQBBd+dmBbIh7spJjd054RRqsAe8sVx0uqezD
+y4Fv+LM0B7kQhHdhsYymVClAwgwKOwecdks0l9PonE9YwyJixMEOZwxxk4aaRNwR
+VUH6X4mHlpWiQ+zHWTAmE7aOvjOwxQqciqtmgzLLRlDjuTtz160XLthUneoOVoDw
+spphs7pMpj8r4T9BZQCeIiuRvE5VeT6037Uz03X56xhzEvna9+0/frHR/Vb88gW8
+U5YJio4p8h286vLwb0X48K7lyfd60VM0kyfh31xl1ZppdAFXhV9qA7435wn6R4NU
+1zi/oXnHOgAWW037C+QFXpPnKzCY3BpmLw3uAGMgYRA+2NqrAT2HE8vmnlxJkrBS
+JT3OlJCCkIw2yitPN5zZaWZLpbvT07wFEH8KFoh7Wgs4FBl1mDeyGT53RhbSHjy1
++i85E6g9366CZNoD3bSUlPlY9iOtP4QK4Qp+VOn1j13Bu3BE9Fpuprani1ESsGME
+16qzwf5It3TVWK9czXqa8HBJvlrjaEInloWThmSysYFweKIRT+8CEu9+KyakTKVL
+fnGKXfbXzQ==
+=0ZBt
+-END PGP SIGNATURE-

Added: dev/spark/v3.0.0-rc1-bin/SparkR_3.0.0.tar.gz.sha512
==
--- dev/spark/v3.0.0-rc1-bin/SparkR_3.0.0.tar.gz.sha512 (added)
+++ dev/spark/v3.0.0-rc1-bin/SparkR_3.0.0.tar.gz.sha512 Mon Mar 30 16:00:46 2020
@@ -0,0 +1,3 @@
+SparkR_3.0.0.tar.gz: A4828C8D BA3BA1AA 116EEA62 D7028B85 85FF87AE 8AE9F0B5
+ 421F1A3E E5E04F19 F1D4F0A6 144CEF29 8D690FC8 D9836830
+ 4518FF9E 96004114 1083326B 84B5C0EC

Added: dev/spark/v3.0.0-rc1-bin/pyspark-3.0.0.tar.gz
==
Binary file - no diff available.

Propchange: dev/spark/v3.0.0-rc1-bin/pyspark-3.0.0.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/spark/v3.0.0-rc1-bin/pyspark-3.0.0.tar.gz.asc
==
--- dev/spark/v3.0.0-rc1-bin/pyspark-3.0.0.tar.gz.asc (added)
+++ dev/spark/v3.0.0-rc1-bin/pyspark-3.0.0.tar.gz.asc Mon Mar 30 16:00:46 2020
@@ -0,0 +1,17 @@
+-BEGIN PGP SIGNATURE-
+
+iQJEBAABCgAuFiEESovaSObiEqc0YyUC3qlj4uk0fWYFAl6CCPUQHHJ4aW5AYXBh
+Y2hlLm9yZwAKCRDeqWPi6TR9ZmRGD/9UkePDo4IawkYALJoaqpwnjp1Md3RP5dbK
+l/x1VLfHzAkbYQo+tKe692koHo45tE0izt+99humvZT7SjP4sVPHuR16Ik0gE6h0
+Yn8CG4Qsof30Se9feg6EllACBDEvueGlcchHN+aPyYJoLjajAzfH/5P6fC9rHe5Z
+d3aYd93cqYtIKbDtQ6fxnI387wTmWkVKAXWNB7K5iEB8KFjzCjGeyac5JbnYBC6G
+Y9uWcxqQ+3XV2SIfDQuxFuj421RBx2IIu56qJLgVEzcs8yLh4APM29DfYv7YcRGg
+ILex3j8SWjgqG1rdDhc2U/SeakR/rErJ+oebxD9dTC19wMTnp37cgS0HgtWLHaU2
+RvxaMdAvF3GjN2LFhSRht/uZV350O3EI+L6ye9WauXzaK4iD7Mi5x7BIBN1csNWn
+MW0B+goqTpzvC78h5R2ETCw1xmAarjKmdLKf3AUuqGeobv/7+4sLuwq+PSyrTgUi
+BHPIgkYYk+EhHryB6wLkKYRXWKKmMyGCl+5HLYPuY4GyZm4rwc2et8v1pX3RvcCF
+NoOcg/TZgn6+Tz0OjUm4TARs9RkbJEhKk1EWKCFvPalhenLbHHOvDJJPoqp3LNVT
+/HQ1f1JRWqXWfc/O1BR9CRFNbZTxKorPxMXIEYn583lufZyvWiyAnYKD6ev0UAdB
+/iwwQeeM/Q=

[spark] branch branch-3.0 updated: [SPARK-31296][SQL][TESTS] Benchmark date-time rebasing in Parquet datasource

2020-03-30 Thread wenchen

wenchen pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new 3ab2f88  [SPARK-31296][SQL][TESTS] Benchmark date-time rebasing in 
Parquet datasource
3ab2f88 is described below

commit 3ab2f8877e4618031b3258cb90eb249145077e08
Author: Maxim Gekk 
AuthorDate: Mon Mar 30 16:46:31 2020 +0800

[SPARK-31296][SQL][TESTS] Benchmark date-time rebasing in Parquet datasource

### What changes were proposed in this pull request?
In the PR, I propose to add a new benchmark, `DateTimeRebaseBenchmark`, which
measures the performance of rebasing dates/timestamps between the hybrid
calendar (Julian+Gregorian) and the Proleptic Gregorian calendar:
1. In write, it saves dates and timestamps before and after the year 1582
separately, w/ and w/o rebasing.
2. In read, it loads the previously saved parquet files with the vectorized
reader and with the regular reader.

Here is the summary of benchmarking:
- Saving timestamps is **~6 times slower**
- Loading timestamps w/ vectorized **off** is **~4 times slower**
- Loading timestamps w/ vectorized **on** is **~10 times slower**
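
For context on what "rebasing" means here: the same calendar-date label (e.g.
1582-10-04) maps to different epoch days under the hybrid Julian+Gregorian
calendar (java.util.GregorianCalendar's default behavior) and the Proleptic
Gregorian calendar (java.time), so the stored day count must be shifted to
keep the label stable. A small self-contained Java sketch of the shift
(illustrative only, not the benchmark's code):

```java
import java.time.LocalDate;
import java.util.GregorianCalendar;
import java.util.TimeZone;

public class RebaseSketch {
  static final long MILLIS_PER_DAY = 86_400_000L;

  // Epoch day of a date label under the hybrid Julian+Gregorian calendar
  // (java.util.GregorianCalendar's default, with the cutover at 1582-10-15).
  static long hybridEpochDay(int year, int month, int day) {
    GregorianCalendar cal = new GregorianCalendar(TimeZone.getTimeZone("UTC"));
    cal.clear();
    cal.set(year, month - 1, day);  // Calendar months are 0-based
    return Math.floorDiv(cal.getTimeInMillis(), MILLIS_PER_DAY);
  }

  // Epoch day of the same label under the Proleptic Gregorian calendar
  // (java.time applies Gregorian leap rules to all dates, even before 1582).
  static long prolepticEpochDay(int year, int month, int day) {
    return LocalDate.of(year, month, day).toEpochDay();
  }

  public static void main(String[] args) {
    // Before the cutover the calendars disagree, so preserving the label
    // "1582-10-04" requires shifting the stored epoch day by 10.
    System.out.println(
        hybridEpochDay(1582, 10, 4) - prolepticEpochDay(1582, 10, 4));  // 10
    // After the cutover they agree and rebasing is a no-op.
    System.out.println(
        hybridEpochDay(2020, 3, 30) - prolepticEpochDay(2020, 3, 30));  // 0
  }
}
```

The benchmark above measures the cost of applying exactly this kind of
per-value shift while writing and reading Parquet files.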

### Why are the changes needed?
To know the impact of date-time rebasing introduced by #27915, #27953, 
#27807.

### Does this PR introduce any user-facing change?
No

### How was this patch tested?
Run the `DateTimeRebaseBenchmark` benchmark using Amazon EC2:

| Item | Description |
|  | |
| Region | us-west-2 (Oregon) |
| Instance | r3.xlarge |
| AMI | ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20190722.1 
(ami-06f2f779464715dc5) |
| Java | OpenJDK8/11 |

Closes #28057 from MaxGekk/rebase-bechmark.

Lead-authored-by: Maxim Gekk 
Co-authored-by: Max Gekk 
Signed-off-by: Wenchen Fan 
(cherry picked from commit a1dbcd13a3eeaee50cc1a46e909f9478d6d55177)
Signed-off-by: Wenchen Fan 
---
 .../DateTimeRebaseBenchmark-jdk11-results.txt  |  53 +++
 .../benchmarks/DateTimeRebaseBenchmark-results.txt |  53 +++
 .../benchmark/DateTimeRebaseBenchmark.scala| 161 +
 3 files changed, 267 insertions(+)

diff --git a/sql/core/benchmarks/DateTimeRebaseBenchmark-jdk11-results.txt 
b/sql/core/benchmarks/DateTimeRebaseBenchmark-jdk11-results.txt
new file mode 100644
index 000..52522f8
--- /dev/null
+++ b/sql/core/benchmarks/DateTimeRebaseBenchmark-jdk11-results.txt
@@ -0,0 +1,53 @@
+================================================================================
+Rebasing dates/timestamps in Parquet datasource
+================================================================================
+
+OpenJDK 64-Bit Server VM 11.0.6+10-post-Ubuntu-1ubuntu118.04.1 on Linux 4.15.0-1058-aws
+Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
+Save dates to parquet:                    Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
+------------------------------------------------------------------------------------------------------------------------
+after 1582, noop                                   9272           9272           0         10.8          92.7       1.0X
+before 1582, noop                                  9142           9142           0         10.9          91.4       1.0X
+after 1582, rebase off                            21841          21841           0          4.6         218.4       0.4X
+after 1582, rebase on                             58245          58245           0          1.7         582.4       0.2X
+before 1582, rebase off                           19813          19813           0          5.0         198.1       0.5X
+before 1582, rebase on                            63737          63737           0          1.6         637.4       0.1X
+
+OpenJDK 64-Bit Server VM 11.0.6+10-post-Ubuntu-1ubuntu118.04.1 on Linux 4.15.0-1058-aws
+Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
+Load dates from parquet:                  Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
+------------------------------------------------------------------------------------------------------------------------
+after 1582, vec off, rebase off                   13004          13063          67          7.7         130.0       1.0X
+after 1582, vec off, rebase on                    36224          36253          26          2.8         362.2       0.4X
+after 1582, vec on, rebase off                     3596           3654          54         27.8          36.0       3.6X
+after 1582, vec on, rebase on                     26144          26253         112          3.8         261.4       0.5X
+before 1582, vec off, rebase off                  12872          12914          51          7.8         128.7       1.0X

[spark] branch master updated (34d6b90 -> a1dbcd1)

2020-03-30 Thread wenchen

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 34d6b90  [SPARK-31283][ML] Simplify ChiSq by adding a common method
 add a1dbcd1  [SPARK-31296][SQL][TESTS] Benchmark date-time rebasing in 
Parquet datasource

No new revisions were added by this update.

Summary of changes:
 .../DateTimeRebaseBenchmark-jdk11-results.txt  |  53 +++
 .../benchmarks/DateTimeRebaseBenchmark-results.txt |  53 +++
 .../benchmark/DateTimeRebaseBenchmark.scala| 161 +
 3 files changed, 267 insertions(+)
 create mode 100644 
sql/core/benchmarks/DateTimeRebaseBenchmark-jdk11-results.txt
 create mode 100644 sql/core/benchmarks/DateTimeRebaseBenchmark-results.txt
 create mode 100644 
sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/DateTimeRebaseBenchmark.scala





[spark] 01/01: Preparing development version 3.0.1-SNAPSHOT

2020-03-30 Thread rxin

rxin pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git

commit fc5079841907443369af98b17c20f1ac24b3727d
Author: Reynold Xin 
AuthorDate: Mon Mar 30 08:42:27 2020 +

Preparing development version 3.0.1-SNAPSHOT
---
 R/pkg/DESCRIPTION  | 2 +-
 assembly/pom.xml   | 2 +-
 common/kvstore/pom.xml | 2 +-
 common/network-common/pom.xml  | 2 +-
 common/network-shuffle/pom.xml | 2 +-
 common/network-yarn/pom.xml| 2 +-
 common/sketch/pom.xml  | 2 +-
 common/tags/pom.xml| 2 +-
 common/unsafe/pom.xml  | 2 +-
 core/pom.xml   | 2 +-
 docs/_config.yml   | 4 ++--
 examples/pom.xml   | 2 +-
 external/avro/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml  | 2 +-
 external/kafka-0-10-assembly/pom.xml   | 2 +-
 external/kafka-0-10-sql/pom.xml| 2 +-
 external/kafka-0-10-token-provider/pom.xml | 2 +-
 external/kafka-0-10/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml  | 2 +-
 external/kinesis-asl/pom.xml   | 2 +-
 external/spark-ganglia-lgpl/pom.xml| 2 +-
 graphx/pom.xml | 2 +-
 hadoop-cloud/pom.xml   | 2 +-
 launcher/pom.xml   | 2 +-
 mllib-local/pom.xml| 2 +-
 mllib/pom.xml  | 2 +-
 pom.xml| 2 +-
 python/pyspark/version.py  | 2 +-
 repl/pom.xml   | 2 +-
 resource-managers/kubernetes/core/pom.xml  | 2 +-
 resource-managers/kubernetes/integration-tests/pom.xml | 2 +-
 resource-managers/mesos/pom.xml| 2 +-
 resource-managers/yarn/pom.xml | 2 +-
 sql/catalyst/pom.xml   | 2 +-
 sql/core/pom.xml   | 2 +-
 sql/hive-thriftserver/pom.xml  | 2 +-
 sql/hive/pom.xml   | 2 +-
 streaming/pom.xml  | 2 +-
 tools/pom.xml  | 2 +-
 39 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index c8cb1c3..3eff30b 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 3.0.0
+Version: 3.0.1
 Title: R Front End for 'Apache Spark'
 Description: Provides an R Front end for 'Apache Spark' 
.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 0a52a00..8bef9d8 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   
 org.apache.spark
 spark-parent_2.12
-3.0.0
+3.0.1-SNAPSHOT
 ../pom.xml
   
 
diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index fa4fcb1f..fc1441d 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.12
-3.0.0
+3.0.1-SNAPSHOT
 ../../pom.xml
   
 
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index 14a1b7d..de2a6fb 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.12
-3.0.0
+3.0.1-SNAPSHOT
 ../../pom.xml
   
 
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index e75a843..6c0c016 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.12
-3.0.0
+3.0.1-SNAPSHOT
 ../../pom.xml
   
 
diff --git a/common/network-yarn/pom.xml b/common/network-yarn/pom.xml
index 004af0a..b8df191 100644
--- a/common/network-yarn/pom.xml
+++ b/common/network-yarn/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.12
-3.0.0
+3.0.1-SNAPSHOT
 ../../pom.xml
   
 
diff --git a/common/sketch/pom.xml b/common/sketch/pom.xml
index a35156a..8119709 100644
--- a/common/sketch/pom.xml
+++ b/common/sketch/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.12
-3.0.0
+3.0.1-SNAPSHOT
 ../../pom.xml
   
 
diff --git a/common/tags/pom.xml b/common/tags/pom.xml
inde

[spark] branch branch-3.0 updated (5687b31 -> fc50798)

2020-03-30 Thread rxin

rxin pushed a change to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 5687b31  [SPARK-30532] DataFrameStatFunctions to work with 
TABLE.COLUMN syntax
 add 6550d0d  Preparing Spark release v3.0.0-rc1
 new fc50798  Preparing development version 3.0.1-SNAPSHOT

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 R/pkg/DESCRIPTION  | 2 +-
 assembly/pom.xml   | 2 +-
 common/kvstore/pom.xml | 2 +-
 common/network-common/pom.xml  | 2 +-
 common/network-shuffle/pom.xml | 2 +-
 common/network-yarn/pom.xml| 2 +-
 common/sketch/pom.xml  | 2 +-
 common/tags/pom.xml| 2 +-
 common/unsafe/pom.xml  | 2 +-
 core/pom.xml   | 2 +-
 docs/_config.yml   | 4 ++--
 examples/pom.xml   | 2 +-
 external/avro/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml  | 2 +-
 external/kafka-0-10-assembly/pom.xml   | 2 +-
 external/kafka-0-10-sql/pom.xml| 2 +-
 external/kafka-0-10-token-provider/pom.xml | 2 +-
 external/kafka-0-10/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml  | 2 +-
 external/kinesis-asl/pom.xml   | 2 +-
 external/spark-ganglia-lgpl/pom.xml| 2 +-
 graphx/pom.xml | 2 +-
 hadoop-cloud/pom.xml   | 2 +-
 launcher/pom.xml   | 2 +-
 mllib-local/pom.xml| 2 +-
 mllib/pom.xml  | 2 +-
 pom.xml| 2 +-
 python/pyspark/version.py  | 2 +-
 repl/pom.xml   | 2 +-
 resource-managers/kubernetes/core/pom.xml  | 2 +-
 resource-managers/kubernetes/integration-tests/pom.xml | 2 +-
 resource-managers/mesos/pom.xml| 2 +-
 resource-managers/yarn/pom.xml | 2 +-
 sql/catalyst/pom.xml   | 2 +-
 sql/core/pom.xml   | 2 +-
 sql/hive-thriftserver/pom.xml  | 2 +-
 sql/hive/pom.xml   | 2 +-
 streaming/pom.xml  | 2 +-
 tools/pom.xml  | 2 +-
 39 files changed, 40 insertions(+), 40 deletions(-)





[spark] tag v3.0.0-rc1 created (now 6550d0d)

2020-03-30 Thread rxin

rxin pushed a change to tag v3.0.0-rc1
in repository https://gitbox.apache.org/repos/asf/spark.git.


  at 6550d0d  (commit)
This tag includes the following new commits:

 new 6550d0d  Preparing Spark release v3.0.0-rc1

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






[spark] 01/01: Preparing Spark release v3.0.0-rc1

2020-03-30 Thread rxin

rxin pushed a commit to tag v3.0.0-rc1
in repository https://gitbox.apache.org/repos/asf/spark.git

commit 6550d0d5283efdbbd838f3aeaf0476c7f52a0fb1
Author: Reynold Xin 
AuthorDate: Mon Mar 30 08:42:10 2020 +

Preparing Spark release v3.0.0-rc1
---
 assembly/pom.xml   | 2 +-
 common/kvstore/pom.xml | 2 +-
 common/network-common/pom.xml  | 2 +-
 common/network-shuffle/pom.xml | 2 +-
 common/network-yarn/pom.xml| 2 +-
 common/sketch/pom.xml  | 2 +-
 common/tags/pom.xml| 2 +-
 common/unsafe/pom.xml  | 2 +-
 core/pom.xml   | 2 +-
 docs/_config.yml   | 2 +-
 examples/pom.xml   | 2 +-
 external/avro/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml  | 2 +-
 external/kafka-0-10-assembly/pom.xml   | 2 +-
 external/kafka-0-10-sql/pom.xml| 2 +-
 external/kafka-0-10-token-provider/pom.xml | 2 +-
 external/kafka-0-10/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml  | 2 +-
 external/kinesis-asl/pom.xml   | 2 +-
 external/spark-ganglia-lgpl/pom.xml| 2 +-
 graphx/pom.xml | 2 +-
 hadoop-cloud/pom.xml   | 2 +-
 launcher/pom.xml   | 2 +-
 mllib-local/pom.xml| 2 +-
 mllib/pom.xml  | 2 +-
 pom.xml| 2 +-
 python/pyspark/version.py  | 2 +-
 repl/pom.xml   | 2 +-
 resource-managers/kubernetes/core/pom.xml  | 2 +-
 resource-managers/kubernetes/integration-tests/pom.xml | 2 +-
 resource-managers/mesos/pom.xml| 2 +-
 resource-managers/yarn/pom.xml | 2 +-
 sql/catalyst/pom.xml   | 2 +-
 sql/core/pom.xml   | 2 +-
 sql/hive-thriftserver/pom.xml  | 2 +-
 sql/hive/pom.xml   | 2 +-
 streaming/pom.xml  | 2 +-
 tools/pom.xml  | 2 +-
 38 files changed, 38 insertions(+), 38 deletions(-)

diff --git a/assembly/pom.xml b/assembly/pom.xml
index 193ad3d..0a52a00 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   
 org.apache.spark
 spark-parent_2.12
-3.0.0-SNAPSHOT
+3.0.0
 ../pom.xml
   
 
diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index a1c8a8e..fa4fcb1f 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.12
-3.0.0-SNAPSHOT
+3.0.0
 ../../pom.xml
   
 
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index 163c250..14a1b7d 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.12
-3.0.0-SNAPSHOT
+3.0.0
 ../../pom.xml
   
 
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index a6d9981..e75a843 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.12
-3.0.0-SNAPSHOT
+3.0.0
 ../../pom.xml
   
 
diff --git a/common/network-yarn/pom.xml b/common/network-yarn/pom.xml
index 76a402b..004af0a 100644
--- a/common/network-yarn/pom.xml
+++ b/common/network-yarn/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.12
-3.0.0-SNAPSHOT
+3.0.0
 ../../pom.xml
   
 
diff --git a/common/sketch/pom.xml b/common/sketch/pom.xml
index 3c3c0d2..a35156a 100644
--- a/common/sketch/pom.xml
+++ b/common/sketch/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.12
-3.0.0-SNAPSHOT
+3.0.0
 ../../pom.xml
   
 
diff --git a/common/tags/pom.xml b/common/tags/pom.xml
index 883b73a..dedc7df 100644
--- a/common/tags/pom.xml
+++ b/common/tags/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.12
-3.0.0-SNAPSHOT
+3.0.0
 ../../pom.xml
   
 
diff --git a/common/unsafe/pom.xml b/common/unsafe/pom.xml
index 93a4f67..ebb0525 100644
--- a/common/unsafe/pom.xml
+++ b/common/unsafe/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.12
-3.0.0-SNAPSHOT
+3.0.0
 ../../pom.xml
   
 
diff --git a/cor

svn commit: r38725 - /dev/spark/KEYS

2020-03-30 Thread rxin
Author: rxin
Date: Mon Mar 30 07:26:00 2020
New Revision: 38725

Log:
Update KEYS

Modified:
dev/spark/KEYS

Modified: dev/spark/KEYS
==
--- dev/spark/KEYS (original)
+++ dev/spark/KEYS Mon Mar 30 07:26:00 2020
@@ -1167,3 +1167,61 @@ rMA+YcuC9o2K7dKjVv3KinQ2Tiv4TVxyTjcyZurg
 0TbepIdiQlc=
 =wdlY
 -END PGP PUBLIC KEY BLOCK-
+
+pub   rsa4096 2020-03-30 [SC]
+  4A8BDA48E6E212A734632502DEA963E2E9347D66
+uid   [ultimate] Reynold Xin (CODE SIGNING KEY) 
+sub   rsa4096 2020-03-30 [E]
+
+-BEGIN PGP PUBLIC KEY BLOCK-
+
+mQINBF6BkJkBEACmRKcV6c575E6jOyZBwLteV7hJsETNYx9jMkENiyeyTFJ3A8Hg
++gPAmoU6jvzugR98qgVSH0uj/HZH1zEkJx049+OHwBcZ48mGJakIaKcg3k1CPRTL
+VDRWg7M4P7nQisMHsPHrdGPJFVBE7Mn6pafuRZ46gtnXf2Ec1EsvMBOYjRNt6nSg
+GvoQdiv5SjUuwxfrw7CICj1agxwLarBcWpIF6PMU7yG+XjTIrSM63KuuV+fOZvKM
+AdjwwUNNj2aOkprPHfmFIgSnEMsxvoJQNqYTaWzwT8WAyW1qTd0LhYYDTnb4J+j2
+BxgG5ASHYpsLQ1Moy+lYsTxWsoZMvqTqv/h+Mlb8fiUTiYppeMnLzxtI/t8Trvt8
+rXNGSkNd8dM5uqJ9Ba2MS6UB6EZUd5e7aPy8z5ThlhygRjLk0527O4BYAWlZw5F8
+egq/X0liCeRHoFUsyNnuQYSqo2spdTIV2ExKo/hEF1FgbXF6s1v/TcfzS0PkSYEH
+5yhKYoEkYOXIneIjUasy8xM9O2578NsVu1GH0n+E29KDA0w+QKwpbjgb9VWKCjk1
+CPvK7oi3DKA4A28w/h5jI9Xzb343L0gb+IhdgL5lNWp2HoSy+y7Smnbz6IchjAP7
+zCtQ9ZJCLdXgCtDlXUeF+TXzEfKUYwa0jnha/fArM3PVGvQlWdpVhe/oLQARAQAB
+tDBSZXlub2xkIFhpbiAoQ09ERSBTSUdOSU5HIEtFWSkgPHJ4aW5AYXBhY2hlLm9y
+Zz6JAk4EEwEIADgWIQRKi9pI5uISpzRjJQLeqWPi6TR9ZgUCXoGQmQIbAwULCQgH
+AgYVCgkICwIEFgIDAQIeAQIXgAAKCRDeqWPi6TR9ZrBJEACW92VdruNL+dYYH0Cu
+9oxZx0thCE1twc/6rvgvIj//0kZ4ZA6RoDId8vSmKSkB0GwMT7daIoeIvRTiEdMQ
+Wai7zqvNEdT1qdNn7MfN1rveN1tBNVndzbZ8S8Nz4sqZ/8R3wG90c2XLwno3joXA
+FhFRfVa+TWI1Ux84/ZXuzD14f54dorVo0CT51CnU67ERBAijl7UugPM3Fs7ApU/o
+SWCMq7ScPde81jmgMqBDLcj/hueCOTU5m8irOGGY439qEF+H41I+IB60yzAS4Gez
+xZl55Mv7ZKdwWtCcwtUYIm4R8NNu4alTxUpxw4ttRW3Kzue78TOIMTWTwRKrP5t2
+yq9bMT1fSO7h/Ntn8dXUL0EM/h+6k5py5Kr0+mrV/s0Z530Fit6AC/ReWV6hSGdk
+F1Z1ECa4AoUHqtoQKL+CNgO2qlJn/sKj3g10NiSwqUdUuxCSOpsY72udRLG9tfkB
+OwW3lTKLp66gYYE3nYaHzJKGdRs7aJ8RRALMQkadsyqpdVMp+Yvbj/3Hn3uB3jTt
+S+RolH545toeuhXaiIWlm2434oHW6QjzpPwaNp5AiWm+vMfPkhhCX6WT0jv9nEtM
+kJJVgwlWNKYEW9nLaIRMWWONSy9aJapZfLW0XDiKidibPHqNFih9z49eDVLobi5e
+mzmOFkKFxs9D4sg9oVmId6Y9SbkCDQRegZCZARAA5ZMv1ki5mKJVpASRGfTHVH5o
+9HixwJOinkHjSK3zFpuvh0bs+rKZL2+TUXci9Em64xXuYbiGH3YgH061H9tgAMaN
+iSIFGPlbBPbduJjdiUALqauOjjCIoWJLyuAC25zSGCeAwzQiRXN6VJUYwjQnDMDG
+8iUyL+IdXjq2T6vFVZGR/uVteRqqvEcg9km6IrFmXefqfry4hZ5a7SbmThCHqGxx
+5Oy+VkWw1IP7fHIUdC9ie45X6n08yC2BfWI4+RBny8906pSXEN/ag0Yw7vWkiyuK
+wZsoe0pRczV8mx6QF2+oJjRMtziKYW72jKE9a/DXXzQ3Luq5gyZeq0cluYNGHVdj
+ijA2ORNLloAfGjVGRKVznUFN8LMkcxm4jiiHKRkZEcjgm+1tRzGPufFidyhQIYO2
+YCOpnPQh5IXznb3RZ0JqJcXdne+7Nge85URTEMmMyx5kXvD03ZmUObshDL12YoM3
+bGzObo6jYg+h38Xlx9+9QAwGkf+gApIPI8KqPAVyP6s60AR4iR6iehEOciz7h6/b
+T9bKMw0w9cvyJzY1IJsy2sQYFwNyHYWQkyDciRAmIwriHhBDfXdBodF95V3uGbIp
+DZw3jVxcgJWKZ3y65N1aCguEI1fyy9JU12++GMBa+wuv9kdhSoj2qgInFB1VXGC7
+bBlRnHB44tsFTBEqqOcAEQEAAYkCNgQYAQgAIBYhBEqL2kjm4hKnNGMlAt6pY+Lp
+NH1mBQJegZCZAhsMAAoJEN6pY+LpNH1mwIYQAIRqbhEjL6uMxM19OMPDydbhiWoI
+8BmoqzsvRNF9VidjPRicYJ5JL5FFvvTyT6g87L8aRhiAdX/la92PdJ9DTS3sfIKF
+pIcUDFybKgk4pmGWl0fNIwEjHewf6HlndCFmVuPe32V/ZkCwb58dro15xzxblckB
+kgsqb0Xbfz/3Iwlqr5eTKH5iPrDFcYKy1ODcFmXS+udMm5uwn+d/RNmj8B3kgwrw
+brs53264qdWbfsxGPC1ZkDNNSRyIy6wGvc/diRm4TSV/Lmd5OoDX4UkPJ++JhGoO
+cYKxc2KzrEZxzMgJ3xFRs3zeymOwtgXUU1GBCuD7uxr1vacFwUV+9ymTeyUdTxB3
++/DzxYOJGQL/3IXlyQ2azoCWUpCjW0MFM1OolragOFJeQ+V0xrlOiXXAFfHo0KPG
+y0QdK810Ok+XYR6U9Y7yb6tYDgi+w9r46XjurdiZnUxxLUpFG++tSgBQ5X4y2UGw
+C4n0T8/jn6KIUZ0kx51ZZ6CEChjBt+AU+HCnw2sZfgq8Nlos95tw2MT6kn8BrY68
+n297ev/1T6B0OasQaw3Itw29+T+FdzdU4c6XW/rC6VAlBikWIS5zCT//vAeBacxL
+HYoqwKL52HzG121lfWXhx5vNF4bg/fKrFEOy2Wp1fMG6nRcuUUROvieD6ZU4ZrLA
+NjpTIP+lOkfxRwUi
+=rggH
+-END PGP PUBLIC KEY BLOCK-


