[jira] [Updated] (SPARK-4497) HiveThriftServer2 does not exit properly on failure

2015-04-14 Thread Yin Huai (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yin Huai updated SPARK-4497:

Description: 
Start the thrift server with
{{sbin/start-thriftserver.sh --master ...}}

If there is an error (in my case the namenode was in standby mode), the driver
shuts down properly:
{code}
14/11/19 16:32:58 ERROR HiveThriftServer2: Error starting HiveThriftServer2

14/11/19 16:32:59 INFO SparkUI: Stopped Spark web UI at http://myip:4040
14/11/19 16:32:59 INFO DAGScheduler: Stopping DAGScheduler
14/11/19 16:32:59 INFO SparkDeploySchedulerBackend: Shutting down all executors
14/11/19 16:32:59 INFO SparkDeploySchedulerBackend: Asking each executor to 
shut down
14/11/19 16:33:00 INFO MapOutputTrackerMasterActor: MapOutputTrackerActor 
stopped!
14/11/19 16:33:00 INFO MemoryStore: MemoryStore cleared
14/11/19 16:33:00 INFO BlockManager: BlockManager stopped
14/11/19 16:33:00 INFO BlockManagerMaster: BlockManagerMaster stopped
14/11/19 16:33:00 INFO SparkContext: Successfully stopped SparkContext
{code}

but trying to run {{sbin/start-thriftserver.sh --master ... }} again results in
an error that the Thriftserver is already running.

{{ps -aef|grep }} shows
{code}
root 32334 1  0 16:32 ?  00:00:00 /usr/local/bin/java
org.apache.spark.deploy.SparkSubmitDriverBootstrapper --class
org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --master
spark://myip:7077 --conf -spark.executor.extraJavaOptions=-verbose:gc
-XX:-PrintGCDetails -XX:+PrintGCTimeStamps spark-internal --hiveconf
hive.root.logger=INFO,console
{code}
This is problematic since we have a process that tries to restart the driver if
it dies.
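A restart wrapper like the one mentioned above has to cope with this leftover driver JVM before it can relaunch the server. Below is a minimal, hypothetical sketch of such a cleanup check: the driver class name is taken from the ps output above, but the pgrep/kill approach and the policy of killing the stale process are illustrative assumptions, not part of the original report or of Spark's own scripts.

```shell
#!/bin/sh
# Hypothetical cleanup step for a restart wrapper (not part of Spark itself).
# The driver class name comes from the ps output in the report; the pgrep/kill
# policy and variable names are assumptions for illustration.

# The [o] bracket is a regex trick that stops pgrep -f from matching this
# script's own command line while still matching the real driver process.
PATTERN='[o]rg.apache.spark.sql.hive.thriftserver.HiveThriftServer2'

# pgrep -f matches against the full command line, so the JVM's --class
# argument is enough to find a leftover SparkSubmitDriverBootstrapper/driver.
pids=$(pgrep -f "$PATTERN" || true)

if [ -n "$pids" ]; then
  echo "killing stale thrift server process(es): $pids"
  # word splitting of $pids is intended: one argument per PID
  kill $pids
else
  echo "no stale thrift server process found"
fi
```

Whether killing the stale driver (rather than, say, alerting and reusing it) is the right policy depends on why it survived; the sketch only shows how a supervisor could detect it before calling {{sbin/start-thriftserver.sh}} again.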

  was:
start thriftserver with 
 sbin/start-thriftserver.sh --master ...

If there is an error (in my case namenode is in standby mode) the driver shuts 
down properly:

14/11/19 16:32:58 ERROR HiveThriftServer2: Error starting HiveThriftServer2

14/11/19 16:32:59 INFO SparkUI: Stopped Spark web UI at http://myip:4040
14/11/19 16:32:59 INFO DAGScheduler: Stopping DAGScheduler
14/11/19 16:32:59 INFO SparkDeploySchedulerBackend: Shutting down all executors
14/11/19 16:32:59 INFO SparkDeploySchedulerBackend: Asking each executor to 
shut down
14/11/19 16:33:00 INFO MapOutputTrackerMasterActor: MapOutputTrackerActor 
stopped!
14/11/19 16:33:00 INFO MemoryStore: MemoryStore cleared
14/11/19 16:33:00 INFO BlockManager: BlockManager stopped
14/11/19 16:33:00 INFO BlockManagerMaster: BlockManagerMaster stopped
14/11/19 16:33:00 INFO SparkContext: Successfully stopped SparkContext


but trying to run  sbin/start-thriftserver.sh --master ... again results in an
error that the Thriftserver is already running.

ps -aef|grep  shows

root 32334 1  0 16:32 ?00:00:00 /usr/local/bin/java 
org.apache.spark.deploy.SparkSubmitDriverBootstrapper --class 
org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --master 
spark://myip:7077 --conf -spark.executor.extraJavaOptions=-verbose:gc 
-XX:-PrintGCDetails -XX:+PrintGCTimeStamps spark-internal --hiveconf 
hive.root.logger=INFO,console

This is problematic since we have a process that tries to restart the driver if 
it dies


> HiveThriftServer2 does not exit properly on failure
> ---
>
> Key: SPARK-4497
> URL: https://issues.apache.org/jira/browse/SPARK-4497
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 1.2.0
>Reporter: Yana Kadiyska
>Priority: Critical
>

[jira] [Updated] (SPARK-4497) HiveThriftServer2 does not exit properly on failure

2015-05-27 Thread Yin Huai (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yin Huai updated SPARK-4497:

Target Version/s:   (was: 1.4.0)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-4497) HiveThriftServer2 does not exit properly on failure

2014-11-19 Thread Michael Armbrust (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Armbrust updated SPARK-4497:

Priority: Critical  (was: Major)
Target Version/s: 1.3.0







[jira] [Updated] (SPARK-4497) HiveThriftServer2 does not exit properly on failure

2015-02-02 Thread Michael Armbrust (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Armbrust updated SPARK-4497:

Target Version/s: 1.4.0  (was: 1.3.0)



