[spark] branch branch-3.0 updated (9ca5934 -> cd71fbf)

2020-06-14 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git.


  from 9ca5934  [SPARK-31983][WEBUI][3.0] Fix sorting for duration column in structured streaming tab
   add cd71fbf  [SPARK-31926][SQL][TEST-HIVE1.2][TEST-MAVEN] Fix concurrency issue for ThriftCLIService to getPortNumber

No new revisions were added by this update.

Summary of changes:
 project/SparkBuild.scala                           |  1 -
 .../src/test/resources/log4j.properties            |  2 +-
 .../sql/hive/thriftserver/SharedThriftServer.scala | 50 --
 .../thriftserver/ThriftServerQueryTestSuite.scala  |  3 ++
 .../ThriftServerWithSparkContextSuite.scala        | 11 -
 .../service/cli/thrift/ThriftBinaryCLIService.java | 11 -
 .../hive/service/cli/thrift/ThriftCLIService.java  |  3 ++
 .../service/cli/thrift/ThriftHttpCLIService.java   | 21 ++---
 .../service/cli/thrift/ThriftBinaryCLIService.java | 11 -
 .../hive/service/cli/thrift/ThriftCLIService.java  |  3 ++
 .../service/cli/thrift/ThriftHttpCLIService.java   | 21 ++---
 11 files changed, 106 insertions(+), 31 deletions(-)
 copy sql/{hive => hive-thriftserver}/src/test/resources/log4j.properties (96%)
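The race SPARK-31926 addresses can be illustrated outside Spark: a caller must not read a server's port number before the server thread has actually bound its socket. A minimal sketch of one way to synchronize that publication (hypothetical names, not Spark's ThriftCLIService code):

```java
import java.net.ServerSocket;
import java.util.concurrent.CountDownLatch;

// Sketch of the race being fixed: getPortNumber() must not run before the
// server thread has bound its socket, so the port is published through a
// latch only after bind() has completed.
public class PortPublishingServer {
    private volatile int portNumber = -1;
    private final CountDownLatch bound = new CountDownLatch(1);

    public void startInBackground() {
        Thread t = new Thread(() -> {
            // Port 0 asks the OS for an ephemeral port, as test servers do.
            try (ServerSocket socket = new ServerSocket(0)) {
                portNumber = socket.getLocalPort();
                bound.countDown();            // publish only after bind
                Thread.sleep(100);            // stand-in for serving requests
            } catch (Exception e) {
                bound.countDown();            // never leave callers hanging
            }
        });
        t.setDaemon(true);
        t.start();
    }

    public int getPortNumber() {
        try {
            bound.await();                    // block until the socket is bound
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return portNumber;
    }

    public static void main(String[] args) {
        PortPublishingServer server = new PortPublishingServer();
        server.startInBackground();
        System.out.println(server.getPortNumber() > 0); // prints true
    }
}
```

Without the latch, a test thread could observe the uninitialized port value, which is the kind of intermittent failure concurrency fixes like this one target.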


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (8282bbf -> a0187cd)

2020-06-14 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


  from 8282bbf  [SPARK-27633][SQL] Remove redundant aliases in NestedColumnAliasing
   add a0187cd  [SPARK-31926][SQL][TEST-HIVE1.2][TEST-MAVEN] Fix concurrency issue for ThriftCLIService to getPortNumber

No new revisions were added by this update.

Summary of changes:
 project/SparkBuild.scala                           |  1 -
 .../src/test/resources/log4j.properties            |  2 +-
 .../sql/hive/thriftserver/SharedThriftServer.scala | 50 --
 .../thriftserver/ThriftServerQueryTestSuite.scala  |  3 ++
 .../ThriftServerWithSparkContextSuite.scala        | 11 -
 .../service/cli/thrift/ThriftBinaryCLIService.java | 11 -
 .../hive/service/cli/thrift/ThriftCLIService.java  |  3 ++
 .../service/cli/thrift/ThriftHttpCLIService.java   | 21 ++---
 .../service/cli/thrift/ThriftBinaryCLIService.java | 11 -
 .../hive/service/cli/thrift/ThriftCLIService.java  |  3 ++
 .../service/cli/thrift/ThriftHttpCLIService.java   | 21 ++---
 11 files changed, 106 insertions(+), 31 deletions(-)
 copy sql/{hive => hive-thriftserver}/src/test/resources/log4j.properties (96%)





[spark] tag v3.0.0 created (now 3fdfce3)

2020-06-14 Thread rxin
This is an automated email from the ASF dual-hosted git repository.

rxin pushed a change to tag v3.0.0
in repository https://gitbox.apache.org/repos/asf/spark.git.


  at 3fdfce3  (commit)
No new revisions were added by this update.





[spark] branch master updated (f5f6eee -> 8282bbf)

2020-06-14 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


  from f5f6eee  [SPARK-31642][FOLLOWUP] Fix Sorting for duration column and make Status column sortable
   add 8282bbf  [SPARK-27633][SQL] Remove redundant aliases in NestedColumnAliasing

No new revisions were added by this update.

Summary of changes:
 .../catalyst/optimizer/NestedColumnAliasing.scala  | 13 -
 .../optimizer/NestedColumnAliasingSuite.scala  | 31 ++
 2 files changed, 43 insertions(+), 1 deletion(-)





[spark] branch branch-2.4 updated: Revert "[SPARK-29152][CORE][2.4] Executor Plugin shutdown when dynamic allocation is enabled"

2020-06-14 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new a89a674  Revert "[SPARK-29152][CORE][2.4] Executor Plugin shutdown when dynamic allocation is enabled"
a89a674 is described below

commit a89a674553b4e91fd7b5c95816d1be36d35e4fb5
Author: Dongjoon Hyun 
AuthorDate: Sun Jun 14 18:48:17 2020 -0700

Revert "[SPARK-29152][CORE][2.4] Executor Plugin shutdown when dynamic allocation is enabled"

This reverts commit 90e928c05073561d8f2ee40ebe50b9f7c5208754.
---
 .../scala/org/apache/spark/executor/Executor.scala | 40 +-
 1 file changed, 16 insertions(+), 24 deletions(-)
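For context, the guard this revert removes is the standard idempotent-shutdown pattern: an `AtomicBoolean` ensures the cleanup body of `stop()` runs at most once even when invoked both directly and from a JVM shutdown hook. A minimal sketch of that pattern (illustrative names, not Spark's Executor code):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// AtomicBoolean.getAndSet(true) returns false for exactly one caller, so the
// cleanup body executes at most once no matter how many threads call stop().
public class IdempotentStop {
    private final AtomicBoolean stopped = new AtomicBoolean(false);
    private int cleanupRuns = 0;

    public void stop() {
        if (!stopped.getAndSet(true)) {
            cleanupRuns++;                    // stand-in for real cleanup work
        }
    }

    public int cleanupRuns() {
        return cleanupRuns;
    }

    public static void main(String[] args) {
        IdempotentStop s = new IdempotentStop();
        s.stop();
        s.stop();                             // second call is a no-op
        System.out.println(s.cleanupRuns());  // prints 1
    }
}
```

The diff below removes exactly this kind of guard (and the shutdown hook that relied on it), restoring the pre-SPARK-29152 behavior where `stop()` assumes a single caller.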

diff --git a/core/src/main/scala/org/apache/spark/executor/Executor.scala b/core/src/main/scala/org/apache/spark/executor/Executor.scala
index d142e43..f7ff0b8 100644
--- a/core/src/main/scala/org/apache/spark/executor/Executor.scala
+++ b/core/src/main/scala/org/apache/spark/executor/Executor.scala
@@ -24,7 +24,6 @@ import java.net.{URI, URL}
 import java.nio.ByteBuffer
 import java.util.Properties
 import java.util.concurrent._
-import java.util.concurrent.atomic.AtomicBoolean
 import javax.annotation.concurrent.GuardedBy
 
 import scala.collection.JavaConverters._
@@ -64,11 +63,6 @@ private[spark] class Executor(
 
   logInfo(s"Starting executor ID $executorId on host $executorHostname")
 
-  private val executorShutdown = new AtomicBoolean(false)
-  ShutdownHookManager.addShutdownHook(
-    () => stop()
-  )
-
   // Application dependencies (added through SparkContext) that we've fetched so far on this node.
   // Each map holds the master's timestamp for the version of that file or JAR we got.
   private val currentFiles: HashMap[String, Long] = new HashMap[String, Long]()
@@ -250,26 +244,24 @@ private[spark] class Executor(
   }
 
   def stop(): Unit = {
-    if (!executorShutdown.getAndSet(true)) {
-      env.metricsSystem.report()
-      heartbeater.shutdown()
-      heartbeater.awaitTermination(10, TimeUnit.SECONDS)
-      threadPool.shutdown()
-
-      // Notify plugins that executor is shutting down so they can terminate cleanly
-      Utils.withContextClassLoader(replClassLoader) {
-        executorPlugins.foreach { plugin =>
-          try {
-            plugin.shutdown()
-          } catch {
-            case e: Exception =>
-              logWarning("Plugin " + plugin.getClass().getCanonicalName() + " shutdown failed", e)
-          }
+    env.metricsSystem.report()
+    heartbeater.shutdown()
+    heartbeater.awaitTermination(10, TimeUnit.SECONDS)
+    threadPool.shutdown()
+
+    // Notify plugins that executor is shutting down so they can terminate cleanly
+    Utils.withContextClassLoader(replClassLoader) {
+      executorPlugins.foreach { plugin =>
+        try {
+          plugin.shutdown()
+        } catch {
+          case e: Exception =>
+            logWarning("Plugin " + plugin.getClass().getCanonicalName() + " shutdown failed", e)
         }
       }
-      if (!isLocal) {
-        env.stop()
-      }
+    }
+    if (!isLocal) {
+      env.stop()
     }
   }
 





[spark] branch branch-3.0 updated: [SPARK-31983][WEBUI][3.0] Fix sorting for duration column in structured streaming tab

2020-06-14 Thread sarutak
This is an automated email from the ASF dual-hosted git repository.

sarutak pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new 9ca5934  [SPARK-31983][WEBUI][3.0] Fix sorting for duration column in structured streaming tab
9ca5934 is described below

commit 9ca5934cb6bf8257d91a33fcf6f2738822fae34a
Author: iRakson 
AuthorDate: Mon Jun 15 10:39:55 2020 +0900

[SPARK-31983][WEBUI][3.0] Fix sorting for duration column in structured streaming tab

### What changes were proposed in this pull request?
Sorting results for the duration column in the tables of the structured streaming tab are sometimes wrong.
Screenshot (before): https://user-images.githubusercontent.com/15366835/84572178-10755700-adb6-11ea-9131-338e8ba7fb24.png

We are sorting on strings, which causes this behaviour. `sorttable_numeric` and `sorttable_customkey` are used to fix this.

Refer to [this](https://github.com/apache/spark/pull/28752#issuecomment-643451586) and [this](https://github.com/apache/spark/pull/28752#issuecomment-643569254).

Screenshot (after): https://user-images.githubusercontent.com/15366835/84572299-a8734080-adb6-11ea-9aa3-b4bc594de4cf.png

### Why are the changes needed?
Sorting results are wrong for the duration column in the tables of the structured streaming tab.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Screenshots attached.

Closes #28823 from iRakson/testsort.

Authored-by: iRakson 
Signed-off-by: Kousuke Saruta 
---
 .../sql/streaming/ui/StreamingQueryPage.scala  | 23 --
 1 file changed, 17 insertions(+), 6 deletions(-)
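The underlying bug generalizes well beyond Spark: sorting formatted duration strings lexicographically misorders the values, which is why the fix sorts on the raw millisecond count instead. A small illustration (the `fmt` helper is hypothetical, not Spark's `formatDurationVerbose`):

```java
import java.util.Arrays;

// Lexicographic sort of formatted durations vs numeric sort of raw millis.
public class DurationSort {
    static String fmt(long ms) {
        // Hypothetical formatter: seconds below one minute, minutes above.
        return ms < 60_000 ? (ms / 1000) + " s" : (ms / 60_000.0) + " min";
    }

    public static void main(String[] args) {
        String[] asStrings = {fmt(90_000), fmt(5_000), fmt(600_000)};
        Arrays.sort(asStrings);               // "10.0 min" sorts before "5 s"
        System.out.println(Arrays.toString(asStrings)); // [1.5 min, 10.0 min, 5 s]

        long[] asMillis = {90_000, 5_000, 600_000};
        Arrays.sort(asMillis);                // correct numeric order
        System.out.println(Arrays.toString(asMillis));  // [5000, 90000, 600000]
    }
}
```

The diff below applies the same idea: keep the duration as a number for sorting and only format it for display, marking the column numeric for the table sorter.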

diff --git a/sql/core/src/main/scala/org/apache/spark/sql/streaming/ui/StreamingQueryPage.scala b/sql/core/src/main/scala/org/apache/spark/sql/streaming/ui/StreamingQueryPage.scala
index 7336765..43b93a3 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/streaming/ui/StreamingQueryPage.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/streaming/ui/StreamingQueryPage.scala
@@ -57,12 +57,12 @@ private[ui] class StreamingQueryPage(parent: StreamingQueryTab)
     val name = UIUtils.getQueryName(query)
     val status = UIUtils.getQueryStatus(query)
     val duration = if (queryActive) {
-      SparkUIUtils.formatDurationVerbose(System.currentTimeMillis() - query.startTimestamp)
+      System.currentTimeMillis() - query.startTimestamp
     } else {
       withNoProgress(query, {
         val endTimeMs = query.lastProgress.timestamp
-        SparkUIUtils.formatDurationVerbose(parseProgressTimestamp(endTimeMs) - query.startTimestamp)
-      }, "-")
+        parseProgressTimestamp(endTimeMs) - query.startTimestamp
+      }, 0)
     }
 
 
@@ -71,7 +71,9 @@ private[ui] class StreamingQueryPage(parent: StreamingQueryTab)
       <td> {query.id} </td>
       <td> {query.runId} </td>
       <td> {SparkUIUtils.formatDate(query.startTimestamp)} </td>
-      <td> {duration} </td>
+      <td sorttable_customkey={duration.toString}>
+        {SparkUIUtils.formatDurationVerbose(duration)}
+      </td>
       <td> {withNoProgress(query, {
         (query.recentProgress.map(p => withNumberInvalid(p.inputRowsPerSecond)).sum /
           query.recentProgress.length).formatted("%.2f") }, "NaN")}
@@ -93,8 +95,13 @@ private[ui] class StreamingQueryPage(parent: StreamingQueryTab)
         "Name", "Status", "Id", "Run ID", "Start Time", "Duration", "Avg Input /sec",
         "Avg Process /sec", "Lastest Batch")
 
+      val headerCss = Seq("", "", "", "", "", "sorttable_numeric", "sorttable_numeric",
+        "sorttable_numeric", "")
+      // header classes size must be equal to header row size
+      assert(headerRow.size == headerCss.size)
+
       Some(SparkUIUtils.listingTable(headerRow, generateDataRow(request, queryActive = true),
-        activeQueries, true, Some("activeQueries-table"), Seq(null), false))
+        activeQueries, true, Some("activeQueries-table"), headerCss, false))
     } else {
       None
     }
@@ -104,8 +111,12 @@ private[ui] class StreamingQueryPage(parent: StreamingQueryTab)
         "Name", "Status", "Id", "Run ID", "Start Time", "Duration", "Avg Input /sec",
         "Avg Process /sec", "Lastest Batch", "Error")
 
+      val headerCss = Seq("", "", "", "", "", "sorttable_numeric", "sorttable_numeric",
+        "sorttable_numeric", "", "")
+      assert(headerRow.size == headerCss.size)
+
       Some(SparkUIUtils.listingTable(headerRow, generateDataRow(request, queryActive = false),
-        inactiveQueries, true, Some("completedQueries-table"), Seq(null), false))
+        inactiveQueries, true, Some("completedQueries-table"), headerCss, false))
     } else {
       None
     }





[spark] branch branch-3.0 updated: [SPARK-31983][WEBUI][3.0] Fix sorting for duration column in structured streaming tab

2020-06-14 Thread sarutak
This is an automated email from the ASF dual-hosted git repository.

sarutak pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new 9ca5934  [SPARK-31983][WEBUI][3.0] Fix sorting for duration column in 
structured streaming tab
9ca5934 is described below

commit 9ca5934cb6bf8257d91a33fcf6f2738822fae34a
Author: iRakson 
AuthorDate: Mon Jun 15 10:39:55 2020 +0900

[SPARK-31983][WEBUI][3.0] Fix sorting for duration column in structured 
streaming tab

### What changes were proposed in this pull request?
Sorting result for duration column in tables of structured streaming tab is 
wrong sometimes.
https://user-images.githubusercontent.com/15366835/84572178-10755700-adb6-11ea-9131-338e8ba7fb24.png";>

We are sorting on string, which results in this behaviour.
`sorttable_numeric` and `sorttable_customkey` is used to fix this.

Refer 
[this](https://github.com/apache/spark/pull/28752#issuecomment-643451586) and 
[this](https://github.com/apache/spark/pull/28752#issuecomment-643569254)

After changes :
https://user-images.githubusercontent.com/15366835/84572299-a8734080-adb6-11ea-9aa3-b4bc594de4cf.png";>

### Why are the changes needed?
Sorting results are wrong for duration column in tables of structured 
streaming tab.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Screenshots attached.

Closes #28823 from iRakson/testsort.

Authored-by: iRakson 
Signed-off-by: Kousuke Saruta 
---
 .../sql/streaming/ui/StreamingQueryPage.scala  | 23 --
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/sql/core/src/main/scala/org/apache/spark/sql/streaming/ui/StreamingQueryPage.scala b/sql/core/src/main/scala/org/apache/spark/sql/streaming/ui/StreamingQueryPage.scala
index 7336765..43b93a3 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/streaming/ui/StreamingQueryPage.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/streaming/ui/StreamingQueryPage.scala
@@ -57,12 +57,12 @@ private[ui] class StreamingQueryPage(parent: StreamingQueryTab)
     val name = UIUtils.getQueryName(query)
     val status = UIUtils.getQueryStatus(query)
     val duration = if (queryActive) {
-      SparkUIUtils.formatDurationVerbose(System.currentTimeMillis() - query.startTimestamp)
+      System.currentTimeMillis() - query.startTimestamp
     } else {
       withNoProgress(query, {
         val endTimeMs = query.lastProgress.timestamp
-        SparkUIUtils.formatDurationVerbose(parseProgressTimestamp(endTimeMs) - query.startTimestamp)
-      }, "-")
+        parseProgressTimestamp(endTimeMs) - query.startTimestamp
+      }, 0)
     }

@@ -71,7 +71,9 @@ private[ui] class StreamingQueryPage(parent: StreamingQueryTab)
    {query.id}
    {query.runId}
    {SparkUIUtils.formatDate(query.startTimestamp)}
-   {duration}
+
+    {SparkUIUtils.formatDurationVerbose(duration)}
+
    {withNoProgress(query, {
     (query.recentProgress.map(p => withNumberInvalid(p.inputRowsPerSecond)).sum /
       query.recentProgress.length).formatted("%.2f") }, "NaN")}
@@ -93,8 +95,13 @@ private[ui] class StreamingQueryPage(parent: StreamingQueryTab)
     "Name", "Status", "Id", "Run ID", "Start Time", "Duration", "Avg Input /sec",
     "Avg Process /sec", "Lastest Batch")

+  val headerCss = Seq("", "", "", "", "", "sorttable_numeric", "sorttable_numeric",
+    "sorttable_numeric", "")
+  // header classes size must be equal to header row size
+  assert(headerRow.size == headerCss.size)
+
   Some(SparkUIUtils.listingTable(headerRow, generateDataRow(request, queryActive = true),
-    activeQueries, true, Some("activeQueries-table"), Seq(null), false))
+    activeQueries, true, Some("activeQueries-table"), headerCss, false))
 } else {
   None
 }
@@ -104,8 +111,12 @@ private[ui] class StreamingQueryPage(parent: StreamingQueryTab)
     "Name", "Status", "Id", "Run ID", "Start Time", "Duration", "Avg Input /sec",
     "Avg Process /sec", "Lastest Batch", "Error")

+  val headerCss = Seq("", "", "", "", "", "sorttable_numeric", "sorttable_numeric",
+    "sorttable_numeric", "", "")
+  assert(headerRow.size == headerCss.size)
+
   Some(SparkUIUtils.listingTable(headerRow, generateDataRow(request, queryActive = false),
-    inactiveQueries, true, Some("completedQueries-table"), Seq(null), false))
+    inactiveQueries, true, Some("completedQueries-table"), headerCss, false))
 } else {
   None
 }
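The gist of the fix: sort on the raw millisecond value and only format the string for display. A minimal, self-contained Java sketch of the failure mode; `format` here is a hypothetical stand-in for `SparkUIUtils.formatDurationVerbose`, not Spark's implementation:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class DurationSortDemo {
    // Hypothetical stand-in for SparkUIUtils.formatDurationVerbose: renders
    // a millisecond duration as a human-readable string such as "13 s".
    static String format(long ms) {
        return (ms / 1000) + " s";
    }

    public static void main(String[] args) {
        List<Long> durationsMs = List.of(13000L, 9000L, 110000L);

        // Sorting the formatted strings is lexicographic: '1' < '9', so
        // "110 s" sorts before "13 s", which sorts before "9 s".
        List<String> byString = durationsMs.stream()
                .map(DurationSortDemo::format)
                .sorted()
                .collect(Collectors.toList());
        System.out.println(byString);  // [110 s, 13 s, 9 s] -- wrong order

        // Sorting on the raw millisecond value (what sorttable_customkey
        // hands to the sorter) yields the intended numeric order.
        List<String> byNumber = durationsMs.stream()
                .sorted(Comparator.naturalOrder())
                .map(DurationSortDemo::format)
                .collect(Collectors.toList());
        System.out.println(byNumber);  // [9 s, 13 s, 110 s] -- correct
    }
}
```

This is why the patch keeps the numeric duration in a `sorttable_customkey` attribute while the cell still displays the verbose string.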





[spark] branch branch-3.0 updated (30637a8 -> 9ca5934)

2020-06-14 Thread sarutak
This is an automated email from the ASF dual-hosted git repository.

sarutak pushed a change to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 30637a8  [SPARK-31593][SS] Remove unnecessary streaming query progress update
 add 9ca5934  [SPARK-31983][WEBUI][3.0] Fix sorting for duration column in structured streaming tab

No new revisions were added by this update.

Summary of changes:
 .../sql/streaming/ui/StreamingQueryPage.scala  | 23 --
 1 file changed, 17 insertions(+), 6 deletions(-)





[spark] branch branch-2.4 updated: [SPARK-29152][CORE][2.4] Executor Plugin shutdown when dynamic allocation is enabled

2020-06-14 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
 new 90e928c  [SPARK-29152][CORE][2.4] Executor Plugin shutdown when dynamic allocation is enabled
90e928c is described below

commit 90e928c05073561d8f2ee40ebe50b9f7c5208754
Author: iRakson 
AuthorDate: Sun Jun 14 14:51:27 2020 -0700

    [SPARK-29152][CORE][2.4] Executor Plugin shutdown when dynamic allocation is enabled

### What changes were proposed in this pull request?
Added a Shutdown Hook in `executor.scala` which will ensure that executor's `stop()` method is always called.

### Why are the changes needed?
In case executors are not going down gracefully, their `stop()` is not called.

### Does this PR introduce any user-facing change?
No.

### How was this patch tested?
Manually

Closes #26901 from iRakson/SPARK-29152_2.4.

Authored-by: iRakson 
Signed-off-by: Dongjoon Hyun 
---
 .../scala/org/apache/spark/executor/Executor.scala | 40 +-
 1 file changed, 24 insertions(+), 16 deletions(-)

diff --git a/core/src/main/scala/org/apache/spark/executor/Executor.scala b/core/src/main/scala/org/apache/spark/executor/Executor.scala
index f7ff0b8..d142e43 100644
--- a/core/src/main/scala/org/apache/spark/executor/Executor.scala
+++ b/core/src/main/scala/org/apache/spark/executor/Executor.scala
@@ -24,6 +24,7 @@ import java.net.{URI, URL}
 import java.nio.ByteBuffer
 import java.util.Properties
 import java.util.concurrent._
+import java.util.concurrent.atomic.AtomicBoolean
 import javax.annotation.concurrent.GuardedBy

 import scala.collection.JavaConverters._
@@ -63,6 +64,11 @@ private[spark] class Executor(

   logInfo(s"Starting executor ID $executorId on host $executorHostname")

+  private val executorShutdown = new AtomicBoolean(false)
+  ShutdownHookManager.addShutdownHook(
+    () => stop()
+  )
+
   // Application dependencies (added through SparkContext) that we've fetched so far on this node.
   // Each map holds the master's timestamp for the version of that file or JAR we got.
   private val currentFiles: HashMap[String, Long] = new HashMap[String, Long]()
@@ -244,24 +250,26 @@
   }

   def stop(): Unit = {
-    env.metricsSystem.report()
-    heartbeater.shutdown()
-    heartbeater.awaitTermination(10, TimeUnit.SECONDS)
-    threadPool.shutdown()
-
-    // Notify plugins that executor is shutting down so they can terminate cleanly
-    Utils.withContextClassLoader(replClassLoader) {
-      executorPlugins.foreach { plugin =>
-        try {
-          plugin.shutdown()
-        } catch {
-          case e: Exception =>
-            logWarning("Plugin " + plugin.getClass().getCanonicalName() + " shutdown failed", e)
+    if (!executorShutdown.getAndSet(true)) {
+      env.metricsSystem.report()
+      heartbeater.shutdown()
+      heartbeater.awaitTermination(10, TimeUnit.SECONDS)
+      threadPool.shutdown()
+
+      // Notify plugins that executor is shutting down so they can terminate cleanly
+      Utils.withContextClassLoader(replClassLoader) {
+        executorPlugins.foreach { plugin =>
+          try {
+            plugin.shutdown()
+          } catch {
+            case e: Exception =>
+              logWarning("Plugin " + plugin.getClass().getCanonicalName() + " shutdown failed", e)
+          }
         }
       }
-    }
-    if (!isLocal) {
-      env.stop()
+      if (!isLocal) {
+        env.stop()
+      }
     }
   }
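The pattern the patch applies can be sketched outside Spark: a `stop()` guarded by `AtomicBoolean.getAndSet` is idempotent, so it is safe to register the same method as a JVM shutdown hook and still call it on the normal shutdown path. A minimal sketch, assuming illustrative names (this is not Spark's actual `Executor`):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class IdempotentStopDemo {
    private final AtomicBoolean stopped = new AtomicBoolean(false);
    final AtomicInteger cleanups = new AtomicInteger(0);  // counts how often cleanup ran

    IdempotentStopDemo() {
        // Mirrors the patch's idea: register a shutdown hook so stop() runs
        // even when the process is torn down abruptly (e.g. an executor
        // removed by dynamic allocation).
        Runtime.getRuntime().addShutdownHook(new Thread(this::stop));
    }

    void stop() {
        // getAndSet makes the guard atomic: only the first caller (explicit
        // shutdown or the hook) performs cleanup; later calls are no-ops.
        if (!stopped.getAndSet(true)) {
            cleanups.incrementAndGet();  // stand-in for plugin.shutdown(), env.stop(), ...
        }
    }

    public static void main(String[] args) {
        IdempotentStopDemo executor = new IdempotentStopDemo();
        executor.stop();
        executor.stop();  // second call is a no-op
        System.out.println(executor.cleanups.get());  // prints 1
    }
}
```

Without the guard, the new shutdown hook plus the existing explicit `stop()` call would run the plugin shutdown and metrics reporting twice.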
 





[spark] branch branch-2.4 updated (e44190a -> 90e928c)

2020-06-14 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git.


from e44190a  [SPARK-31968][SQL] Duplicate partition columns check when writing data
 add 90e928c  [SPARK-29152][CORE][2.4] Executor Plugin shutdown when dynamic allocation is enabled

No new revisions were added by this update.

Summary of changes:
 .../scala/org/apache/spark/executor/Executor.scala | 40 +-
 1 file changed, 24 insertions(+), 16 deletions(-)





[spark] branch master updated (54e702c -> f5f6eee)

2020-06-14 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 54e702c  [SPARK-31970][CORE] Make MDC configuration step be consistent between setLocalProperty and log4j.properties
 add f5f6eee  [SPARK-31642][FOLLOWUP] Fix Sorting for duration column and make Status column sortable

No new revisions were added by this update.

Summary of changes:
 .../apache/spark/sql/streaming/ui/StreamingQueryPage.scala  | 13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)





[spark] branch master updated (1e40bcc -> 54e702c)

2020-06-14 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 1e40bcc  [SPARK-31593][SS] Remove unnecessary streaming query progress update
 add 54e702c  [SPARK-31970][CORE] Make MDC configuration step be consistent between setLocalProperty and log4j.properties

No new revisions were added by this update.

Summary of changes:
 core/src/main/scala/org/apache/spark/executor/Executor.scala | 7 ++-
 docs/configuration.md| 8 
 2 files changed, 6 insertions(+), 9 deletions(-)




