svn commit: r26627 - in /dev/spark/2.4.0-SNAPSHOT-2018_05_01_20_01-e15850b-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-05-01 Thread pwendell
Author: pwendell
Date: Wed May  2 03:16:45 2018
New Revision: 26627

Log:
Apache Spark 2.4.0-SNAPSHOT-2018_05_01_20_01-e15850b docs


[This commit notification would consist of 1460 parts, 
which exceeds the limit of 50 parts, so it was shortened to this summary.]

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-24131][PYSPARK] Add majorMinorVersion API to PySpark for determining Spark versions

2018-05-01 Thread gurwls223
Repository: spark
Updated Branches:
  refs/heads/master 6782359a0 -> e15850be6


[SPARK-24131][PYSPARK] Add majorMinorVersion API to PySpark for determining Spark versions

## What changes were proposed in this pull request?

We need to determine Spark major and minor versions in PySpark. This adds a 
`majorMinorVersion` API to PySpark, similar to the Scala API 
`VersionUtils.majorMinorVersion`.
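
A minimal usage sketch (the guard below is illustrative, not part of the patch; it 
assumes a build containing this commit):

    from pyspark.util import majorMinorVersion

    # Parse a Spark version string into a (major, minor) tuple.
    assert majorMinorVersion("2.4.0") == (2, 4)

    # Malformed strings return None, so guard before comparing.
    parsed = majorMinorVersion("abc")
    if parsed is None:
        print("unrecognized Spark version string")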

## How was this patch tested?

Added tests.

Author: Liang-Chi Hsieh 

Closes #21203 from viirya/SPARK-24131.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/e15850be
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/e15850be
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/e15850be

Branch: refs/heads/master
Commit: e15850be6e0210614a734a307f5b83bdf44e2456
Parents: 6782359
Author: Liang-Chi Hsieh 
Authored: Wed May 2 10:55:01 2018 +0800
Committer: hyukjinkwon 
Committed: Wed May 2 10:55:01 2018 +0800

--
 python/pyspark/util.py | 21 +
 1 file changed, 21 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/e15850be/python/pyspark/util.py
--
diff --git a/python/pyspark/util.py b/python/pyspark/util.py
index 49afc13..04df835 100644
--- a/python/pyspark/util.py
+++ b/python/pyspark/util.py
@@ -16,6 +16,7 @@
 # limitations under the License.
 #
 
+import re
 import sys
 import inspect
 from py4j.protocol import Py4JJavaError
@@ -61,6 +62,26 @@ def _get_argspec(f):
     return argspec
 
 
+def majorMinorVersion(version):
+    """
+    Get major and minor version numbers for given Spark version string.
+
+    >>> version = "2.4.0"
+    >>> majorMinorVersion(version)
+    (2, 4)
+
+    >>> version = "abc"
+    >>> majorMinorVersion(version) is None
+    True
+
+    """
+    m = re.search('^(\d+)\.(\d+)(\..*)?$', version)
+    if m is None:
+        return None
+    else:
+        return (int(m.group(1)), int(m.group(2)))
+
+
 if __name__ == "__main__":
     import doctest
     (failure_count, test_count) = doctest.testmod()


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



svn commit: r26620 - in /dev/spark/2.3.1-SNAPSHOT-2018_05_01_10_01-682f05d-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-05-01 Thread pwendell
Author: pwendell
Date: Tue May  1 17:16:40 2018
New Revision: 26620

Log:
Apache Spark 2.3.1-SNAPSHOT-2018_05_01_10_01-682f05d docs


[This commit notification would consist of 1443 parts, 
which exceeds the limit of 50 parts, so it was shortened to this summary.]

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-23941][MESOS] Mesos task failed on specific spark app name

2018-05-01 Thread vanzin
Repository: spark
Updated Branches:
  refs/heads/branch-2.2 e77d62a72 -> 4f1ae3af9


[SPARK-23941][MESOS] Mesos task failed on specific spark app name

## What changes were proposed in this pull request?
Shell-escape the name passed to spark-submit, and change how conf attributes 
are shell-escaped.
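
As a rough illustration of the idea (a Python sketch using shlex, not the Scala 
change itself; the helper and values here are hypothetical), each finished 
"--conf key=value" token is escaped once at the end, rather than quoting the 
value and re-wrapping the pair:

    import shlex

    def build_options(conf):
        # Build the raw "--conf key=value" tokens first...
        options = []
        for key, value in conf.items():
            options += ["--conf", "%s=%s" % (key, value)]
        # ...then shell-escape every token exactly once.
        return [shlex.quote(opt) for opt in options]

    # An app name containing shell metacharacters stays inert:
    print(build_options({"spark.app.name": "my app; echo pwned"}))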

## How was this patch tested?
Tested manually with Hive-on-Spark on Mesos, and with the use case described 
in the issue: the SparkPi application run with a custom name containing 
illegal shell characters.

With this PR, Hive-on-Spark on Mesos works like a charm with Hive 
3.0.0-SNAPSHOT.

I state that this contribution is my original work and that I license the work 
to the project under the project's open source license.

Author: Bounkong Khamphousone 

Closes #21014 from tiboun/fix/SPARK-23941.

(cherry picked from commit 6782359a04356e4cde32940861bf2410ef37f445)
Signed-off-by: Marcelo Vanzin 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/4f1ae3af
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/4f1ae3af
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/4f1ae3af

Branch: refs/heads/branch-2.2
Commit: 4f1ae3af9e11dec3d359e35df5bd299d2c7c4fd0
Parents: e77d62a
Author: Bounkong Khamphousone 
Authored: Tue May 1 08:28:21 2018 -0700
Committer: Marcelo Vanzin 
Committed: Tue May 1 08:28:49 2018 -0700

--
 .../spark/scheduler/cluster/mesos/MesosClusterScheduler.scala| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/4f1ae3af/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
--
diff --git 
a/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
 
b/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
index 9354a3b..41f351a 100644
--- 
a/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
+++ 
b/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
@@ -485,9 +485,9 @@ private[spark] class MesosClusterScheduler(
       .filter { case (key, _) => !replicatedOptionsBlacklist.contains(key) }
       .toMap
     (defaultConf ++ driverConf).foreach { case (key, value) =>
-      options ++= Seq("--conf", s""""$key=${shellEscape(value)}"""".stripMargin) }
+      options ++= Seq("--conf", s"${key}=${value}") }
 
-    options
+    options.map(shellEscape)
   }
 
   /**


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-23941][MESOS] Mesos task failed on specific spark app name

2018-05-01 Thread vanzin
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 52a420ff6 -> 682f05d63


[SPARK-23941][MESOS] Mesos task failed on specific spark app name

## What changes were proposed in this pull request?
Shell-escape the name passed to spark-submit, and change how conf attributes 
are shell-escaped.

## How was this patch tested?
Tested manually with Hive-on-Spark on Mesos, and with the use case described 
in the issue: the SparkPi application run with a custom name containing 
illegal shell characters.

With this PR, Hive-on-Spark on Mesos works like a charm with Hive 
3.0.0-SNAPSHOT.

I state that this contribution is my original work and that I license the work 
to the project under the project's open source license.

Author: Bounkong Khamphousone 

Closes #21014 from tiboun/fix/SPARK-23941.

(cherry picked from commit 6782359a04356e4cde32940861bf2410ef37f445)
Signed-off-by: Marcelo Vanzin 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/682f05d6
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/682f05d6
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/682f05d6

Branch: refs/heads/branch-2.3
Commit: 682f05d638a67b869d0e170f6c99683445e2449d
Parents: 52a420f
Author: Bounkong Khamphousone 
Authored: Tue May 1 08:28:21 2018 -0700
Committer: Marcelo Vanzin 
Committed: Tue May 1 08:28:34 2018 -0700

--
 .../spark/scheduler/cluster/mesos/MesosClusterScheduler.scala| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/682f05d6/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
--
diff --git 
a/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
 
b/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
index d224a73..b36f464 100644
--- 
a/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
+++ 
b/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
@@ -530,9 +530,9 @@ private[spark] class MesosClusterScheduler(
       .filter { case (key, _) => !replicatedOptionsBlacklist.contains(key) }
       .toMap
     (defaultConf ++ driverConf).foreach { case (key, value) =>
-      options ++= Seq("--conf", s""""$key=${shellEscape(value)}"""".stripMargin) }
+      options ++= Seq("--conf", s"${key}=${value}") }
 
-    options
+    options.map(shellEscape)
   }
 
   /**


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-23941][MESOS] Mesos task failed on specific spark app name

2018-05-01 Thread vanzin
Repository: spark
Updated Branches:
  refs/heads/master 7bbec0dce -> 6782359a0


[SPARK-23941][MESOS] Mesos task failed on specific spark app name

## What changes were proposed in this pull request?
Shell-escape the name passed to spark-submit, and change how conf attributes 
are shell-escaped.

## How was this patch tested?
Tested manually with Hive-on-Spark on Mesos, and with the use case described 
in the issue: the SparkPi application run with a custom name containing 
illegal shell characters.

With this PR, Hive-on-Spark on Mesos works like a charm with Hive 
3.0.0-SNAPSHOT.

I state that this contribution is my original work and that I license the work 
to the project under the project's open source license.

Author: Bounkong Khamphousone 

Closes #21014 from tiboun/fix/SPARK-23941.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/6782359a
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/6782359a
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/6782359a

Branch: refs/heads/master
Commit: 6782359a04356e4cde32940861bf2410ef37f445
Parents: 7bbec0d
Author: Bounkong Khamphousone 
Authored: Tue May 1 08:28:21 2018 -0700
Committer: Marcelo Vanzin 
Committed: Tue May 1 08:28:21 2018 -0700

--
 .../spark/scheduler/cluster/mesos/MesosClusterScheduler.scala| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/6782359a/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
--
diff --git 
a/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
 
b/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
index d224a73..b36f464 100644
--- 
a/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
+++ 
b/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
@@ -530,9 +530,9 @@ private[spark] class MesosClusterScheduler(
       .filter { case (key, _) => !replicatedOptionsBlacklist.contains(key) }
       .toMap
     (defaultConf ++ driverConf).foreach { case (key, value) =>
-      options ++= Seq("--conf", s""""$key=${shellEscape(value)}"""".stripMargin) }
+      options ++= Seq("--conf", s"${key}=${value}") }
 
-    options
+    options.map(shellEscape)
   }
 
   /**


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-24061][SS] Add TypedFilter support for continuous processing

2018-05-01 Thread zsxwing
Repository: spark
Updated Branches:
  refs/heads/master b857fb549 -> 7bbec0dce


[SPARK-24061][SS] Add TypedFilter support for continuous processing

## What changes were proposed in this pull request?

Add TypedFilter support for continuous processing applications.
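
(TypedFilter is the logical plan node produced by typed Dataset filters in 
Scala. The PySpark sketch below is illustrative only, not from this patch; it 
shows the shape of a continuous-processing query whose filtering stage the 
checker must accept. Source and sink settings are arbitrary.)

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("continuous-filter").getOrCreate()

    # Rate source and console sink both support continuous processing.
    stream = (spark.readStream
              .format("rate")
              .option("rowsPerSecond", 10)
              .load()
              .filter("value % 2 = 0"))

    query = (stream.writeStream
             .format("console")
             .trigger(continuous="1 second")  # continuous-processing trigger
             .start())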

## How was this patch tested?

unit tests

Author: wangyanlin01 

Closes #21136 from yanlin-Lynn/SPARK-24061.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7bbec0dc
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/7bbec0dc
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/7bbec0dc

Branch: refs/heads/master
Commit: 7bbec0dced35aeed79c1a24b6f7a1e0a3508b0fb
Parents: b857fb5
Author: wangyanlin01 
Authored: Tue May 1 16:22:52 2018 +0800
Committer: Shixiong Zhu 
Committed: Tue May 1 16:22:52 2018 +0800

--
 .../analysis/UnsupportedOperationChecker.scala  |  3 ++-
 .../analysis/UnsupportedOperationsSuite.scala   | 23 
 2 files changed, 25 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/7bbec0dc/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationChecker.scala
--
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationChecker.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationChecker.scala
index ff9d6d7..d3d6c63 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationChecker.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationChecker.scala
@@ -345,7 +345,8 @@ object UnsupportedOperationChecker {
     plan.foreachUp { implicit subPlan =>
       subPlan match {
         case (_: Project | _: Filter | _: MapElements | _: MapPartitions |
-              _: DeserializeToObject | _: SerializeFromObject | _: SubqueryAlias) =>
+              _: DeserializeToObject | _: SerializeFromObject | _: SubqueryAlias |
+              _: TypedFilter) =>
         case node if node.nodeName == "StreamingRelationV2" =>
         case node =>
           throwError(s"Continuous processing does not support ${node.nodeName} operations.")

http://git-wip-us.apache.org/repos/asf/spark/blob/7bbec0dc/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationsSuite.scala
--
diff --git 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationsSuite.scala
 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationsSuite.scala
index 60d1351..cb487c8 100644
--- 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationsSuite.scala
+++ 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationsSuite.scala
@@ -621,6 +621,13 @@ class UnsupportedOperationsSuite extends SparkFunSuite {
     outputMode = Append,
     expectedMsgs = Seq("monotonically_increasing_id"))
 
+  assertSupportedForContinuousProcessing(
+    "TypedFilter", TypedFilter(
+      null,
+      null,
+      null,
+      null,
+      new TestStreamingRelationV2(attribute)), OutputMode.Append())
 
   /*
 
===
@@ -771,6 +778,16 @@ class UnsupportedOperationsSuite extends SparkFunSuite {
     }
   }
 
+  /** Assert that the logical plan is supported for continuous processing mode */
+  def assertSupportedForContinuousProcessing(
+      name: String,
+      plan: LogicalPlan,
+      outputMode: OutputMode): Unit = {
+    test(s"continuous processing - $name: supported") {
+      UnsupportedOperationChecker.checkForContinuous(plan, outputMode)
+    }
+  }
+
   /**
* Assert that the logical plan is not supported inside a streaming plan.
*
@@ -840,4 +857,10 @@ class UnsupportedOperationsSuite extends SparkFunSuite {
     def this(attribute: Attribute) = this(Seq(attribute))
     override def isStreaming: Boolean = true
   }
+
+  case class TestStreamingRelationV2(output: Seq[Attribute]) extends LeafNode {
+    def this(attribute: Attribute) = this(Seq(attribute))
+    override def isStreaming: Boolean = true
+    override def nodeName: String = "StreamingRelationV2"
+  }
 }


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org