svn commit: r23090 - in /dev/spark: ./ spark-2.2.1-rc1-bin/ spark-2.2.1-rc1-docs/ spark-2.2.1-rc1-docs/_site/ spark-2.2.1-rc1-docs/_site/api/ spark-2.2.1-rc1-docs/_site/api/R/ spark-2.2.1-rc1-docs/_si

2017-11-13 Thread felixcheung
Author: felixcheung
Date: Tue Nov 14 04:30:04 2017
New Revision: 23090

Log:
Apache Spark spark-2.2.1-rc1


[This commit notification would consist of 1388 parts, 
which exceeds the limit of 50, so it was shortened to this summary.]

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-21911][ML][FOLLOW-UP] Fix doc for parallel ML Tuning in PySpark

2017-11-13 Thread jkbradley
Repository: spark
Updated Branches:
  refs/heads/master c8b7f97b8 -> d8741b2b0


[SPARK-21911][ML][FOLLOW-UP] Fix doc for parallel ML Tuning in PySpark

## What changes were proposed in this pull request?

Fix the doc issue mentioned here: 
https://github.com/apache/spark/pull/19122#issuecomment-340111834

## How was this patch tested?

N/A

Author: WeichenXu 

Closes #19641 from WeichenXu123/fix_doc.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/d8741b2b
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/d8741b2b
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/d8741b2b

Branch: refs/heads/master
Commit: d8741b2b0fe8b8da74f120859e969326fb170629
Parents: c8b7f97
Author: WeichenXu 
Authored: Mon Nov 13 17:00:51 2017 -0800
Committer: Joseph K. Bradley 
Committed: Mon Nov 13 17:00:51 2017 -0800

--
 docs/ml-tuning.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/d8741b2b/docs/ml-tuning.md
--
diff --git a/docs/ml-tuning.md b/docs/ml-tuning.md
index 64dc46c..54d9cd2 100644
--- a/docs/ml-tuning.md
+++ b/docs/ml-tuning.md
@@ -55,7 +55,7 @@ for multiclass problems. The default metric used to choose 
the best `ParamMap` c
 method in each of these evaluators.
 
 To help construct the parameter grid, users can use the 
[`ParamGridBuilder`](api/scala/index.html#org.apache.spark.ml.tuning.ParamGridBuilder)
 utility.
-By default, sets of parameters from the parameter grid are evaluated in 
serial. Parameter evaluation can be done in parallel by setting `parallelism` 
with a value of 2 or more (a value of 1 will be serial) before running model 
selection with `CrossValidator` or `TrainValidationSplit` (NOTE: this is not 
yet supported in Python).
+By default, sets of parameters from the parameter grid are evaluated in 
serial. Parameter evaluation can be done in parallel by setting `parallelism` 
with a value of 2 or more (a value of 1 will be serial) before running model 
selection with `CrossValidator` or `TrainValidationSplit`.
 The value of `parallelism` should be chosen carefully to maximize parallelism 
without exceeding cluster resources, and larger values may not always lead to 
improved performance.  Generally speaking, a value up to 10 should be 
sufficient for most clusters.
 
 # Cross-Validation
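
For illustration, a minimal PySpark sketch of the `parallelism` setting described 
in the updated paragraph above (assumes Spark 2.3+, an active SparkSession, and a 
hypothetical DataFrame `training` with features/label columns):

```python
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

lr = LogisticRegression(maxIter=10)
grid = (ParamGridBuilder()
        .addGrid(lr.regParam, [0.01, 0.1, 1.0])
        .build())

# parallelism=2 evaluates two ParamMaps at a time; 1 would be serial
cv = CrossValidator(estimator=lr,
                    estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(),
                    numFolds=3,
                    parallelism=2)
# cvModel = cv.fit(training)  # 'training' is the assumed input DataFrame
```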





spark git commit: [SPARK-22377][BUILD] Use /usr/sbin/lsof if lsof does not exist in release-build.sh

2017-11-13 Thread gurwls223
Repository: spark
Updated Branches:
  refs/heads/branch-2.1 ca19271cc -> 7bdad58e2


[SPARK-22377][BUILD] Use /usr/sbin/lsof if lsof does not exist in release-build.sh

## What changes were proposed in this pull request?

This PR proposes to use `/usr/sbin/lsof` if `lsof` is missing from the PATH, to 
fix the nightly snapshot Jenkins jobs. Please refer to 
https://github.com/apache/spark/pull/19359#issuecomment-340139557:

> Looks like some of the snapshot builds are having lsof issues:
>
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/job/spark-branch-2.1-maven-snapshots/182/console
>
>https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/job/spark-branch-2.2-maven-snapshots/134/console
>
>spark-build/dev/create-release/release-build.sh: line 344: lsof: command not 
>found
>usage: kill [ -s signal | -p ] [ -a ] pid ...
>kill -l [ signal ]

To my knowledge, the full path of `lsof` is required for non-root users on a 
few OSes.

## How was this patch tested?

Manually tested as below:

```bash
#!/usr/bin/env bash

LSOF=lsof
if ! hash $LSOF 2>/dev/null; then
  echo "a"
  LSOF=/usr/sbin/lsof
fi

$LSOF -P | grep "a"
```
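
The same PATH-probe-with-fallback idea, sketched in Python for comparison (a 
hedged sketch, not part of release-build.sh; `shutil.which` plays the role of 
the shell's `hash`):

```python
import shutil
import subprocess

# Prefer `lsof` from the PATH; fall back to the absolute path, since
# /usr/sbin may not be on a non-root user's PATH on some systems.
lsof = shutil.which("lsof") or "/usr/sbin/lsof"

# e.g. list open files without resolving port names, as the script does
subprocess.run([lsof, "-P"], check=False)
```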

Author: hyukjinkwon 

Closes #19695 from HyukjinKwon/SPARK-22377.

(cherry picked from commit c8b7f97b8a58bf4a9f6e3a07dd6e5b0f646d8d99)
Signed-off-by: hyukjinkwon 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7bdad58e
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/7bdad58e
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/7bdad58e

Branch: refs/heads/branch-2.1
Commit: 7bdad58e2baac98e7b77f17aaa6c88de230a220e
Parents: ca19271
Author: hyukjinkwon 
Authored: Tue Nov 14 08:28:13 2017 +0900
Committer: hyukjinkwon 
Committed: Tue Nov 14 08:28:43 2017 +0900

--
 dev/create-release/release-build.sh | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/7bdad58e/dev/create-release/release-build.sh
--
diff --git a/dev/create-release/release-build.sh 
b/dev/create-release/release-build.sh
index ad32c31..eefd864 100755
--- a/dev/create-release/release-build.sh
+++ b/dev/create-release/release-build.sh
@@ -121,6 +121,13 @@ else
   fi
 fi
 
+# This is a band-aid fix to avoid the failure of Maven nightly snapshot in 
some Jenkins
+# machines by explicitly calling /usr/sbin/lsof. Please see SPARK-22377 and 
the discussion
+# in its pull request.
+LSOF=lsof
+if ! hash $LSOF 2>/dev/null; then
+  LSOF=/usr/sbin/lsof
+fi
 
 if [ -z "$SPARK_PACKAGE_VERSION" ]; then
   SPARK_PACKAGE_VERSION="${SPARK_VERSION}-$(date +%Y_%m_%d_%H_%M)-${git_hash}"
@@ -341,7 +348,7 @@ if [[ "$1" == "publish-snapshot" ]]; then
 -DskipTests $PUBLISH_PROFILES clean deploy
 
   # Clean-up Zinc nailgun process
-  lsof -P |grep $ZINC_PORT | grep LISTEN | awk '{ print $2; }' | xargs kill
+  $LSOF -P |grep $ZINC_PORT | grep LISTEN | awk '{ print $2; }' | xargs kill
 
   rm $tmp_settings
   cd ..
@@ -379,7 +386,7 @@ if [[ "$1" == "publish-release" ]]; then
 -DskipTests $PUBLISH_PROFILES clean install
 
   # Clean-up Zinc nailgun process
-  lsof -P |grep $ZINC_PORT | grep LISTEN | awk '{ print $2; }' | xargs kill
+  $LSOF -P |grep $ZINC_PORT | grep LISTEN | awk '{ print $2; }' | xargs kill
 
   ./dev/change-version-to-2.10.sh
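
For contrast, the Zinc-cleanup pipeline patched in the hunks above (find the 
process listening on $ZINC_PORT and kill it) can be expressed without lsof at 
all; a sketch assuming the third-party psutil package, with an illustrative 
port number:

```python
import psutil  # third-party; assumed available

ZINC_PORT = 3030  # illustrative; the real script picks its own port

# Equivalent of: lsof -P | grep $ZINC_PORT | grep LISTEN | awk '{print $2}' | xargs kill
for conn in psutil.net_connections(kind="tcp"):
    if conn.status == psutil.CONN_LISTEN and conn.laddr.port == ZINC_PORT:
        if conn.pid is not None:
            psutil.Process(conn.pid).terminate()
```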
 





spark git commit: [SPARK-22377][BUILD] Use /usr/sbin/lsof if lsof does not exist in release-build.sh

2017-11-13 Thread gurwls223
Repository: spark
Updated Branches:
  refs/heads/master f7534b37e -> c8b7f97b8


[SPARK-22377][BUILD] Use /usr/sbin/lsof if lsof does not exist in release-build.sh

## What changes were proposed in this pull request?

This PR proposes to use `/usr/sbin/lsof` if `lsof` is missing from the PATH, to 
fix the nightly snapshot Jenkins jobs. Please refer to 
https://github.com/apache/spark/pull/19359#issuecomment-340139557:

> Looks like some of the snapshot builds are having lsof issues:
>
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/job/spark-branch-2.1-maven-snapshots/182/console
>
>https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/job/spark-branch-2.2-maven-snapshots/134/console
>
>spark-build/dev/create-release/release-build.sh: line 344: lsof: command not 
>found
>usage: kill [ -s signal | -p ] [ -a ] pid ...
>kill -l [ signal ]

To my knowledge, the full path of `lsof` is required for non-root users on a 
few OSes.

## How was this patch tested?

Manually tested as below:

```bash
#!/usr/bin/env bash

LSOF=lsof
if ! hash $LSOF 2>/dev/null; then
  echo "a"
  LSOF=/usr/sbin/lsof
fi

$LSOF -P | grep "a"
```

Author: hyukjinkwon 

Closes #19695 from HyukjinKwon/SPARK-22377.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/c8b7f97b
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/c8b7f97b
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/c8b7f97b

Branch: refs/heads/master
Commit: c8b7f97b8a58bf4a9f6e3a07dd6e5b0f646d8d99
Parents: f7534b3
Author: hyukjinkwon 
Authored: Tue Nov 14 08:28:13 2017 +0900
Committer: hyukjinkwon 
Committed: Tue Nov 14 08:28:13 2017 +0900

--
 dev/create-release/release-build.sh | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/c8b7f97b/dev/create-release/release-build.sh
--
diff --git a/dev/create-release/release-build.sh 
b/dev/create-release/release-build.sh
index 7e8d5c7..5b43f9b 100755
--- a/dev/create-release/release-build.sh
+++ b/dev/create-release/release-build.sh
@@ -130,6 +130,13 @@ else
   fi
 fi
 
+# This is a band-aid fix to avoid the failure of Maven nightly snapshot in 
some Jenkins
+# machines by explicitly calling /usr/sbin/lsof. Please see SPARK-22377 and 
the discussion
+# in its pull request.
+LSOF=lsof
+if ! hash $LSOF 2>/dev/null; then
+  LSOF=/usr/sbin/lsof
+fi
 
 if [ -z "$SPARK_PACKAGE_VERSION" ]; then
   SPARK_PACKAGE_VERSION="${SPARK_VERSION}-$(date +%Y_%m_%d_%H_%M)-${git_hash}"
@@ -345,7 +352,7 @@ if [[ "$1" == "publish-snapshot" ]]; then
   #  -DskipTests $SCALA_2_12_PROFILES $PUBLISH_PROFILES clean deploy
 
   # Clean-up Zinc nailgun process
-  lsof -P |grep $ZINC_PORT | grep LISTEN | awk '{ print $2; }' | xargs kill
+  $LSOF -P |grep $ZINC_PORT | grep LISTEN | awk '{ print $2; }' | xargs kill
 
   rm $tmp_settings
   cd ..
@@ -382,7 +389,7 @@ if [[ "$1" == "publish-release" ]]; then
  #  -DskipTests $SCALA_2_12_PROFILES $PUBLISH_PROFILES clean install
 
   # Clean-up Zinc nailgun process
-  lsof -P |grep $ZINC_PORT | grep LISTEN | awk '{ print $2; }' | xargs kill
+  $LSOF -P |grep $ZINC_PORT | grep LISTEN | awk '{ print $2; }' | xargs kill
 
   #./dev/change-scala-version.sh 2.11
 





spark git commit: [SPARK-22377][BUILD] Use /usr/sbin/lsof if lsof does not exist in release-build.sh

2017-11-13 Thread gurwls223
Repository: spark
Updated Branches:
  refs/heads/branch-2.2 d905e85d2 -> 3ea6fd0c4


[SPARK-22377][BUILD] Use /usr/sbin/lsof if lsof does not exist in release-build.sh

## What changes were proposed in this pull request?

This PR proposes to use `/usr/sbin/lsof` if `lsof` is missing from the PATH, to 
fix the nightly snapshot Jenkins jobs. Please refer to 
https://github.com/apache/spark/pull/19359#issuecomment-340139557:

> Looks like some of the snapshot builds are having lsof issues:
>
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/job/spark-branch-2.1-maven-snapshots/182/console
>
>https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/job/spark-branch-2.2-maven-snapshots/134/console
>
>spark-build/dev/create-release/release-build.sh: line 344: lsof: command not 
>found
>usage: kill [ -s signal | -p ] [ -a ] pid ...
>kill -l [ signal ]

To my knowledge, the full path of `lsof` is required for non-root users on a 
few OSes.

## How was this patch tested?

Manually tested as below:

```bash
#!/usr/bin/env bash

LSOF=lsof
if ! hash $LSOF 2>/dev/null; then
  echo "a"
  LSOF=/usr/sbin/lsof
fi

$LSOF -P | grep "a"
```

Author: hyukjinkwon 

Closes #19695 from HyukjinKwon/SPARK-22377.

(cherry picked from commit c8b7f97b8a58bf4a9f6e3a07dd6e5b0f646d8d99)
Signed-off-by: hyukjinkwon 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/3ea6fd0c
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/3ea6fd0c
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/3ea6fd0c

Branch: refs/heads/branch-2.2
Commit: 3ea6fd0c4610cd5cd0762802e88ac392c92d631c
Parents: d905e85
Author: hyukjinkwon 
Authored: Tue Nov 14 08:28:13 2017 +0900
Committer: hyukjinkwon 
Committed: Tue Nov 14 08:28:28 2017 +0900

--
 dev/create-release/release-build.sh | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/3ea6fd0c/dev/create-release/release-build.sh
--
diff --git a/dev/create-release/release-build.sh 
b/dev/create-release/release-build.sh
index 819f325..1272b6d 100755
--- a/dev/create-release/release-build.sh
+++ b/dev/create-release/release-build.sh
@@ -121,6 +121,13 @@ else
   fi
 fi
 
+# This is a band-aid fix to avoid the failure of Maven nightly snapshot in 
some Jenkins
+# machines by explicitly calling /usr/sbin/lsof. Please see SPARK-22377 and 
the discussion
+# in its pull request.
+LSOF=lsof
+if ! hash $LSOF 2>/dev/null; then
+  LSOF=/usr/sbin/lsof
+fi
 
 if [ -z "$SPARK_PACKAGE_VERSION" ]; then
   SPARK_PACKAGE_VERSION="${SPARK_VERSION}-$(date +%Y_%m_%d_%H_%M)-${git_hash}"
@@ -337,7 +344,7 @@ if [[ "$1" == "publish-snapshot" ]]; then
 -DskipTests $PUBLISH_PROFILES clean deploy
 
   # Clean-up Zinc nailgun process
-  lsof -P |grep $ZINC_PORT | grep LISTEN | awk '{ print $2; }' | xargs kill
+  $LSOF -P |grep $ZINC_PORT | grep LISTEN | awk '{ print $2; }' | xargs kill
 
   rm $tmp_settings
   cd ..
@@ -375,7 +382,7 @@ if [[ "$1" == "publish-release" ]]; then
 -DskipTests $PUBLISH_PROFILES clean install
 
   # Clean-up Zinc nailgun process
-  lsof -P |grep $ZINC_PORT | grep LISTEN | awk '{ print $2; }' | xargs kill
+  $LSOF -P |grep $ZINC_PORT | grep LISTEN | awk '{ print $2; }' | xargs kill
 
   ./dev/change-version-to-2.10.sh
 





spark git commit: [SPARK-22471][SQL] SQLListener consumes too much memory, causing OutOfMemoryError

2017-11-13 Thread vanzin
Repository: spark
Updated Branches:
  refs/heads/branch-2.2 af0b1855f -> d905e85d2


[SPARK-22471][SQL] SQLListener consumes too much memory, causing OutOfMemoryError

## What changes were proposed in this pull request?

This PR addresses 
[SPARK-22471](https://issues.apache.org/jira/browse/SPARK-22471). The modified 
version of `SQLListener` respects the setting `spark.ui.retainedStages` and 
keeps the number of tracked stages within the specified limit, so the hash map 
`_stageIdToStageMetrics` no longer outgrows that limit and overall memory 
consumption no longer grows over time.

This is a 2.2-compatible fix; it may be incompatible with 2.3 due to #19681.

## How was this patch tested?

A new unit test covers this fix; see `SQLListenerMemorySuite.scala`.

Author: Arseniy Tashoyan 

Closes #19711 from tashoyan/SPARK-22471-branch-2.2.
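
The heart of the fix is an insertion-ordered map plus eviction of the oldest 
entries when a stage completes. A simplified sketch of that pattern (Python's 
`OrderedDict` standing in for Scala's `mutable.LinkedHashMap`; names and 
numbers are illustrative, not Spark's):

```python
from collections import OrderedDict

RETAINED_STAGES = 3                  # plays the role of spark.ui.retainedStages
stage_id_to_metrics = OrderedDict()  # insertion order == arrival order

def on_stage_completed(stage_id, metrics):
    stage_id_to_metrics[stage_id] = metrics
    extra = len(stage_id_to_metrics) - RETAINED_STAGES
    if extra > 0:
        # evict the oldest entries first, as the LinkedHashMap version does
        for key in list(stage_id_to_metrics)[:extra]:
            del stage_id_to_metrics[key]

for sid in range(5):
    on_stage_completed(sid, {"tasks": sid})
print(list(stage_id_to_metrics))  # [2, 3, 4]: the map never outgrows the limit
```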


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/d905e85d
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/d905e85d
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/d905e85d

Branch: refs/heads/branch-2.2
Commit: d905e85d2f2229fc26e8af8f74771de38a25c577
Parents: af0b185
Author: Arseniy Tashoyan 
Authored: Mon Nov 13 13:50:12 2017 -0800
Committer: Marcelo Vanzin 
Committed: Mon Nov 13 13:50:12 2017 -0800

--
 .../spark/sql/execution/ui/SQLListener.scala|  13 ++-
 .../execution/ui/SQLListenerMemorySuite.scala   | 106 +++
 .../sql/execution/ui/SQLListenerSuite.scala |  47 +---
 3 files changed, 119 insertions(+), 47 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/d905e85d/sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLListener.scala
--
diff --git 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLListener.scala 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLListener.scala
index b4a9123..e0c8cb3 100644
--- 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLListener.scala
+++ 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLListener.scala
@@ -101,6 +101,9 @@ class SQLListener(conf: SparkConf) extends SparkListener 
with Logging {
 
   private val retainedExecutions = 
conf.getInt("spark.sql.ui.retainedExecutions", 1000)
 
+  private val retainedStages = conf.getInt("spark.ui.retainedStages",
+SparkUI.DEFAULT_RETAINED_STAGES)
+
   private val activeExecutions = mutable.HashMap[Long, SQLExecutionUIData]()
 
   // Old data in the following fields must be removed in 
"trimExecutionsIfNecessary".
@@ -113,7 +116,7 @@ class SQLListener(conf: SparkConf) extends SparkListener 
with Logging {
*/
   private val _jobIdToExecutionId = mutable.HashMap[Long, Long]()
 
-  private val _stageIdToStageMetrics = mutable.HashMap[Long, SQLStageMetrics]()
+  private val _stageIdToStageMetrics = mutable.LinkedHashMap[Long, 
SQLStageMetrics]()
 
   private val failedExecutions = mutable.ListBuffer[SQLExecutionUIData]()
 
@@ -207,6 +210,14 @@ class SQLListener(conf: SparkConf) extends SparkListener 
with Logging {
 }
   }
 
+  override def onStageCompleted(stageCompleted: SparkListenerStageCompleted): 
Unit = synchronized {
+val extraStages = _stageIdToStageMetrics.size - retainedStages
+if (extraStages > 0) {
+  val toRemove = _stageIdToStageMetrics.take(extraStages).keys
+  _stageIdToStageMetrics --= toRemove
+}
+  }
+
   override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = synchronized {
 if (taskEnd.taskMetrics != null) {
   updateTaskAccumulatorValues(

http://git-wip-us.apache.org/repos/asf/spark/blob/d905e85d/sql/core/src/test/scala/org/apache/spark/sql/execution/ui/SQLListenerMemorySuite.scala
--
diff --git 
a/sql/core/src/test/scala/org/apache/spark/sql/execution/ui/SQLListenerMemorySuite.scala
 
b/sql/core/src/test/scala/org/apache/spark/sql/execution/ui/SQLListenerMemorySuite.scala
new file mode 100644
index 000..24a09f3
--- /dev/null
+++ 
b/sql/core/src/test/scala/org/apache/spark/sql/execution/ui/SQLListenerMemorySuite.scala
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed 

spark git commit: [SPARK-22487][SQL][FOLLOWUP] still keep spark.sql.hive.version

2017-11-13 Thread lixiao
Repository: spark
Updated Branches:
  refs/heads/master 176ae4d53 -> f7534b37e


[SPARK-22487][SQL][FOLLOWUP] still keep spark.sql.hive.version

## What changes were proposed in this pull request?

A follow-up of https://github.com/apache/spark/pull/19712: it adds back 
`spark.sql.hive.version`, so that users who read this config still get a 
default value instead of null.

## How was this patch tested?

N/A

Author: Wenchen Fan 

Closes #19719 from cloud-fan/minor.
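
As a rough sketch of the user-visible effect (assuming PyHive as the Thrift 
client and the default Spark Thrift server host/port; any JDBC/ODBC client 
would do):

```python
from pyhive import hive  # third-party client, used here only for illustration

conn = hive.connect(host="localhost", port=10000)  # Spark Thrift server defaults
cur = conn.cursor()
cur.execute("SET spark.sql.hive.version")
# With this patch the key resolves to the built-in Hive version instead of
# coming back unset.
print(cur.fetchall())  # e.g. [('spark.sql.hive.version', '1.2.1')]
```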


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f7534b37
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/f7534b37
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/f7534b37

Branch: refs/heads/master
Commit: f7534b37ee91be14e511ab29259c3f83c7ad50af
Parents: 176ae4d
Author: Wenchen Fan 
Authored: Mon Nov 13 13:10:13 2017 -0800
Committer: gatorsmile 
Committed: Mon Nov 13 13:10:13 2017 -0800

--
 .../sql/hive/thriftserver/SparkSQLEnv.scala |  1 +
 .../thriftserver/SparkSQLSessionManager.scala   |  1 +
 .../thriftserver/HiveThriftServer2Suites.scala  | 23 +++-
 .../org/apache/spark/sql/hive/HiveUtils.scala   |  8 +++
 4 files changed, 28 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/f7534b37/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLEnv.scala
--
diff --git 
a/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLEnv.scala
 
b/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLEnv.scala
index 5db93b2..6b19f97 100644
--- 
a/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLEnv.scala
+++ 
b/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLEnv.scala
@@ -55,6 +55,7 @@ private[hive] object SparkSQLEnv extends Logging {
   metadataHive.setOut(new PrintStream(System.out, true, "UTF-8"))
   metadataHive.setInfo(new PrintStream(System.err, true, "UTF-8"))
   metadataHive.setError(new PrintStream(System.err, true, "UTF-8"))
+  sparkSession.conf.set(HiveUtils.FAKE_HIVE_VERSION.key, 
HiveUtils.builtinHiveVersion)
 }
   }
 

http://git-wip-us.apache.org/repos/asf/spark/blob/f7534b37/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLSessionManager.scala
--
diff --git 
a/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLSessionManager.scala
 
b/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLSessionManager.scala
index 00920c2..48c0ebe 100644
--- 
a/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLSessionManager.scala
+++ 
b/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLSessionManager.scala
@@ -77,6 +77,7 @@ private[hive] class SparkSQLSessionManager(hiveServer: 
HiveServer2, sqlContext:
 } else {
   sqlContext.newSession()
 }
+ctx.setConf(HiveUtils.FAKE_HIVE_VERSION.key, HiveUtils.builtinHiveVersion)
 if (sessionConf != null && sessionConf.containsKey("use:database")) {
   ctx.sql(s"use ${sessionConf.get("use:database")}")
 }

http://git-wip-us.apache.org/repos/asf/spark/blob/f7534b37/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServer2Suites.scala
--
diff --git 
a/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServer2Suites.scala
 
b/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServer2Suites.scala
index b80596f..7289da7 100644
--- 
a/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServer2Suites.scala
+++ 
b/sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServer2Suites.scala
@@ -155,9 +155,9 @@ class HiveThriftBinaryServerSuite extends 
HiveThriftJdbcTest {
 
   test("Checks Hive version") {
 withJdbcStatement() { statement =>
-  val resultSet = statement.executeQuery("SET 
spark.sql.hive.metastore.version")
+  val resultSet = statement.executeQuery("SET spark.sql.hive.version")
   resultSet.next()
-  assert(resultSet.getString(1) === "spark.sql.hive.metastore.version")
+  assert(resultSet.getString(1) === "spark.sql.hive.version")
   assert(resultSet.getString(2) === HiveUtils.builtinHiveVersion)
 }
   }
@@ -521,7 +521,20 @@ class HiveThriftBinaryServerSuite extends 

[2/2] spark git commit: Preparing development version 2.2.2-SNAPSHOT

2017-11-13 Thread felixcheung
Preparing development version 2.2.2-SNAPSHOT


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/af0b1855
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/af0b1855
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/af0b1855

Branch: refs/heads/branch-2.2
Commit: af0b1855fe6d51317541dc50b3a9aef450304369
Parents: 41116ab
Author: Felix Cheung 
Authored: Mon Nov 13 19:04:34 2017 +
Committer: Felix Cheung 
Committed: Mon Nov 13 19:04:34 2017 +

--
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 38 files changed, 39 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/af0b1855/R/pkg/DESCRIPTION
--
diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 4ac45fc..380b3ef 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.2.1
+Version: 2.2.2
 Title: R Frontend for Apache Spark
 Description: Provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),

http://git-wip-us.apache.org/repos/asf/spark/blob/af0b1855/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index ded172d..eeb75e9 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   
 org.apache.spark
 spark-parent_2.11
-2.2.1
+2.2.2-SNAPSHOT
 ../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/af0b1855/common/network-common/pom.xml
--
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index 1a976a5..9d83ad8 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.11
-2.2.1
+2.2.2-SNAPSHOT
 ../../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/af0b1855/common/network-shuffle/pom.xml
--
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index 1159953..f841f93 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.11
-2.2.1
+2.2.2-SNAPSHOT
 ../../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/af0b1855/common/network-yarn/pom.xml
--
diff --git a/common/network-yarn/pom.xml b/common/network-yarn/pom.xml
index 1ad4afe..c1c2ebb 100644
--- a/common/network-yarn/pom.xml
+++ b/common/network-yarn/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.11
-2.2.1
+2.2.2-SNAPSHOT
 ../../pom.xml
   
 


[1/2] spark git commit: Preparing Spark release v2.2.1-rc1

2017-11-13 Thread felixcheung
Repository: spark
Updated Branches:
  refs/heads/branch-2.2 c68b4c54f -> af0b1855f


Preparing Spark release v2.2.1-rc1


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/41116ab7
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/41116ab7
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/41116ab7

Branch: refs/heads/branch-2.2
Commit: 41116ab7fca46db7255b01e8727e2e5d571a3e35
Parents: c68b4c5
Author: Felix Cheung 
Authored: Mon Nov 13 19:04:27 2017 +
Committer: Felix Cheung 
Committed: Mon Nov 13 19:04:27 2017 +

--
 assembly/pom.xml  | 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 2 +-
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 6 +++---
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 37 files changed, 39 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/41116ab7/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index da7b0c9..ded172d 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   
 org.apache.spark
 spark-parent_2.11
-2.2.1-SNAPSHOT
+2.2.1
 ../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/41116ab7/common/network-common/pom.xml
--
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index 303e25f..1a976a5 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.11
-2.2.1-SNAPSHOT
+2.2.1
 ../../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/41116ab7/common/network-shuffle/pom.xml
--
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index 25558da..1159953 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.11
-2.2.1-SNAPSHOT
+2.2.1
 ../../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/41116ab7/common/network-yarn/pom.xml
--
diff --git a/common/network-yarn/pom.xml b/common/network-yarn/pom.xml
index 310b4b8..1ad4afe 100644
--- a/common/network-yarn/pom.xml
+++ b/common/network-yarn/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.11
-2.2.1-SNAPSHOT
+2.2.1
 ../../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/41116ab7/common/sketch/pom.xml
--
diff --git a/common/sketch/pom.xml b/common/sketch/pom.xml
index 076d98a..ed7d842 100644
--- a/common/sketch/pom.xml
+++ b/common/sketch/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.11
-2.2.1-SNAPSHOT
+2.2.1
 ../../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/41116ab7/common/tags/pom.xml

[spark] Git Push Summary

2017-11-13 Thread felixcheung
Repository: spark
Updated Tags:  refs/tags/v2.2.1-rc1 [created] 41116ab7f




[spark] Git Push Summary

2017-11-13 Thread felixcheung
Repository: spark
Updated Tags:  refs/tags/v2.2.1-rc1 [deleted] 124b9a106




[spark] Git Push Summary [forced push!] [Forced Update!]

2017-11-13 Thread felixcheung
Repository: spark
Updated Branches:
  refs/heads/branch-2.2 b1f8c84ef -> c68b4c54f (forced update)




[spark] Git Push Summary

2017-11-13 Thread felixcheung
Repository: spark
Updated Tags:  refs/tags/v2.2.1-rc1 [created] 124b9a106




[2/2] spark git commit: Preparing development version 2.2.2-SNAPSHOT

2017-11-13 Thread felixcheung
Preparing development version 2.2.2-SNAPSHOT


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/b1f8c84e
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/b1f8c84e
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/b1f8c84e

Branch: refs/heads/branch-2.2
Commit: b1f8c84ef0a95df21cf30f85f1ca035db6712d19
Parents: 124b9a1
Author: Felix Cheung 
Authored: Mon Nov 13 17:57:12 2017 +
Committer: Felix Cheung 
Committed: Mon Nov 13 17:57:12 2017 +

--
 R/pkg/DESCRIPTION | 2 +-
 docs/_config.yml  | 4 ++--
 python/pyspark/version.py | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/b1f8c84e/R/pkg/DESCRIPTION
--
diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 4ac45fc..380b3ef 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.2.1
+Version: 2.2.2
 Title: R Frontend for Apache Spark
 Description: Provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),

http://git-wip-us.apache.org/repos/asf/spark/blob/b1f8c84e/docs/_config.yml
--
diff --git a/docs/_config.yml b/docs/_config.yml
index 77321fc..a78e2d5 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -14,8 +14,8 @@ include:
 
 # These allow the documentation to be updated with newer releases
 # of Spark, Scala, and Mesos.
-SPARK_VERSION: 2.2.1
-SPARK_VERSION_SHORT: 2.2.1
+SPARK_VERSION: 2.2.2-SNAPSHOT
+SPARK_VERSION_SHORT: 2.2.2
 SCALA_BINARY_VERSION: "2.11"
 SCALA_VERSION: "2.11.8"
 MESOS_VERSION: 1.0.0

http://git-wip-us.apache.org/repos/asf/spark/blob/b1f8c84e/python/pyspark/version.py
--
diff --git a/python/pyspark/version.py b/python/pyspark/version.py
index db4fc8e..5b36495 100644
--- a/python/pyspark/version.py
+++ b/python/pyspark/version.py
@@ -16,4 +16,4 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-__version__ = "2.2.1"
+__version__ = "2.2.2.dev0"





[1/2] spark git commit: Preparing Spark release v2.2.1-rc1

2017-11-13 Thread felixcheung
Repository: spark
Updated Branches:
  refs/heads/branch-2.2 c68b4c54f -> b1f8c84ef


Preparing Spark release v2.2.1-rc1


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/124b9a10
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/124b9a10
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/124b9a10

Branch: refs/heads/branch-2.2
Commit: 124b9a1067fcd339ba69ff84ba2823e3da5f27f6
Parents: c68b4c5
Author: Felix Cheung 
Authored: Mon Nov 13 17:57:08 2017 +
Committer: Felix Cheung 
Committed: Mon Nov 13 17:57:08 2017 +

--
 docs/_config.yml  | 2 +-
 python/pyspark/version.py | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/124b9a10/docs/_config.yml
--
diff --git a/docs/_config.yml b/docs/_config.yml
index b61455e..77321fc 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -14,7 +14,7 @@ include:
 
 # These allow the documentation to be updated with newer releases
 # of Spark, Scala, and Mesos.
-SPARK_VERSION: 2.2.1-SNAPSHOT
+SPARK_VERSION: 2.2.1
 SPARK_VERSION_SHORT: 2.2.1
 SCALA_BINARY_VERSION: "2.11"
 SCALA_VERSION: "2.11.8"

http://git-wip-us.apache.org/repos/asf/spark/blob/124b9a10/python/pyspark/version.py
--
diff --git a/python/pyspark/version.py b/python/pyspark/version.py
index c0bb196..db4fc8e 100644
--- a/python/pyspark/version.py
+++ b/python/pyspark/version.py
@@ -16,4 +16,4 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-__version__ = "2.2.1.dev0"
+__version__ = "2.2.1"





spark git commit: [MINOR][CORE] Using bufferedInputStream for dataDeserializeStream

2017-11-13 Thread srowen
Repository: spark
Updated Branches:
  refs/heads/branch-2.2 2f6dece03 -> c68b4c54f


[MINOR][CORE] Using bufferedInputStream for dataDeserializeStream

## What changes were proposed in this pull request?

Small fix: wrap the input stream for `dataDeserializeStream` in a `BufferedInputStream`.

## How was this patch tested?

Existing UT.

Author: Xianyang Liu 

Closes #19735 from ConeyLiu/smallfix.

(cherry picked from commit 176ae4d53e0269cfc2cfa62d3a2991e28f5a9182)
Signed-off-by: Sean Owen 
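
Why buffering matters here, in one self-contained sketch (Python's 
`io.BufferedReader` playing the role of Java's `BufferedInputStream`; the file 
and sizes are illustrative):

```python
import io

# create a small file so the sketch is self-contained
with open("/tmp/block.bin", "wb") as f:
    f.write(bytes(range(256)) * 16)

raw = io.FileIO("/tmp/block.bin", "r")            # unbuffered source stream
buffered = io.BufferedReader(raw, buffer_size=64 * 1024)

# Deserializers issue many tiny reads; the buffer coalesces them into a few
# large OS reads -- the same reason the patch wraps the block's input stream
# before handing it to deserializeStream.
while buffered.read(4):                           # many 4-byte reads, few syscalls
    pass
buffered.close()
```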


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/c68b4c54
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/c68b4c54
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/c68b4c54

Branch: refs/heads/branch-2.2
Commit: c68b4c54fe25ce81f6f5e01eb4c954a7fb0cdd2c
Parents: 2f6dece
Author: Xianyang Liu 
Authored: Mon Nov 13 06:19:13 2017 -0600
Committer: Sean Owen 
Committed: Mon Nov 13 06:19:21 2017 -0600

--
 .../main/scala/org/apache/spark/serializer/SerializerManager.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/c68b4c54/core/src/main/scala/org/apache/spark/serializer/SerializerManager.scala
--
diff --git 
a/core/src/main/scala/org/apache/spark/serializer/SerializerManager.scala 
b/core/src/main/scala/org/apache/spark/serializer/SerializerManager.scala
index 311383e..1d4b05c 100644
--- a/core/src/main/scala/org/apache/spark/serializer/SerializerManager.scala
+++ b/core/src/main/scala/org/apache/spark/serializer/SerializerManager.scala
@@ -206,7 +206,7 @@ private[spark] class SerializerManager(
 val autoPick = !blockId.isInstanceOf[StreamBlockId]
 getSerializer(classTag, autoPick)
   .newInstance()
-  .deserializeStream(wrapForCompression(blockId, inputStream))
+  .deserializeStream(wrapForCompression(blockId, stream))
   .asIterator.asInstanceOf[Iterator[T]]
   }
 }





spark git commit: [MINOR][CORE] Using bufferedInputStream for dataDeserializeStream

2017-11-13 Thread srowen
Repository: spark
Updated Branches:
  refs/heads/master 209b9361a -> 176ae4d53


[MINOR][CORE] Using bufferedInputStream for dataDeserializeStream

## What changes were proposed in this pull request?

Small fix: wrap the input stream for `dataDeserializeStream` in a `BufferedInputStream`.

## How was this patch tested?

Existing UT.

Author: Xianyang Liu 

Closes #19735 from ConeyLiu/smallfix.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/176ae4d5
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/176ae4d5
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/176ae4d5

Branch: refs/heads/master
Commit: 176ae4d53e0269cfc2cfa62d3a2991e28f5a9182
Parents: 209b936
Author: Xianyang Liu 
Authored: Mon Nov 13 06:19:13 2017 -0600
Committer: Sean Owen 
Committed: Mon Nov 13 06:19:13 2017 -0600

--
 .../main/scala/org/apache/spark/serializer/SerializerManager.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/176ae4d5/core/src/main/scala/org/apache/spark/serializer/SerializerManager.scala
--
diff --git 
a/core/src/main/scala/org/apache/spark/serializer/SerializerManager.scala 
b/core/src/main/scala/org/apache/spark/serializer/SerializerManager.scala
index 311383e..1d4b05c 100644
--- a/core/src/main/scala/org/apache/spark/serializer/SerializerManager.scala
+++ b/core/src/main/scala/org/apache/spark/serializer/SerializerManager.scala
@@ -206,7 +206,7 @@ private[spark] class SerializerManager(
 val autoPick = !blockId.isInstanceOf[StreamBlockId]
 getSerializer(classTag, autoPick)
   .newInstance()
-  .deserializeStream(wrapForCompression(blockId, inputStream))
+  .deserializeStream(wrapForCompression(blockId, stream))
   .asIterator.asInstanceOf[Iterator[T]]
   }
 }





spark git commit: [SPARK-22442][SQL][BRANCH-2.2][FOLLOWUP] ScalaReflection should produce correct field names for special characters

2017-11-13 Thread wenchen
Repository: spark
Updated Branches:
  refs/heads/branch-2.2 f73637798 -> 2f6dece03


[SPARK-22442][SQL][BRANCH-2.2][FOLLOWUP] ScalaReflection should produce correct 
field names for special characters

## What changes were proposed in this pull request?

`val TermName: TermNameExtractor` is new in Scala 2.11. For 2.10, we should use 
the deprecated `newTermName`.

## How was this patch tested?

Built locally with Scala 2.10.

Author: Liang-Chi Hsieh 

Closes #19736 from viirya/SPARK-22442-2.2-followup.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2f6dece0
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/2f6dece0
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/2f6dece0

Branch: refs/heads/branch-2.2
Commit: 2f6dece033f0e93c3969d94acbc3ad7d56c78b92
Parents: f736377
Author: Liang-Chi Hsieh 
Authored: Mon Nov 13 12:41:42 2017 +0100
Committer: Wenchen Fan 
Committed: Mon Nov 13 12:41:42 2017 +0100

--
 .../apache/spark/sql/catalyst/expressions/objects/objects.scala  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/2f6dece0/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
--
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
index 0b45dfe..c523766 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
@@ -27,7 +27,7 @@ import org.apache.spark.{SparkConf, SparkEnv}
 import org.apache.spark.serializer._
 import org.apache.spark.sql.Row
 import org.apache.spark.sql.catalyst.InternalRow
-import org.apache.spark.sql.catalyst.ScalaReflection.universe.TermName
+import org.apache.spark.sql.catalyst.ScalaReflection.universe.newTermName
 import org.apache.spark.sql.catalyst.encoders.RowEncoder
 import org.apache.spark.sql.catalyst.expressions._
 import org.apache.spark.sql.catalyst.expressions.codegen.{CodegenContext, 
ExprCode}
@@ -190,7 +190,7 @@ case class Invoke(
   override def eval(input: InternalRow): Any =
 throw new UnsupportedOperationException("Only code-generated evaluation is 
supported.")
 
-  private lazy val encodedFunctionName = 
TermName(functionName).encodedName.toString
+  private lazy val encodedFunctionName = 
newTermName(functionName).encodedName.toString
 
   @transient lazy val method = targetObject.dataType match {
 case ObjectType(cls) =>

