[GitHub] spark issue #17620: [SPARK-20305][Spark Core]Master may keep in the state of...
Github user jiangxb1987 commented on the issue: https://github.com/apache/spark/pull/17620 @lvdongr Would you please close this and submit another PR to address the root cause? Thank you! --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. --- - To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For additional commands, e-mail: reviews-h...@spark.apache.org
Github user jiangxb1987 commented on the issue: https://github.com/apache/spark/pull/17620 Agree, let's close this and see whether we can reproduce the root cause of the failure. Thanks! @jerryshao
Github user jerryshao commented on the issue: https://github.com/apache/spark/pull/17620 @jiangxb1987 according to what @lvdongr described, there seems to be an issue in the state transition of a recovered master:

> This happened when the previous master leader removed a dead worker and cleared the worker's node from the persistence engine (we use ZooKeeper), but before the worker node was removed from ZooKeeper, the leader changed. The new master leader recovered from ZooKeeper and read the dead worker's node. The new leader then found the worker dead, tried to remove it, and attempted to clear the node in ZooKeeper, but the node had already been removed by the previous leader, so an exception was thrown and the recovery failed. The leader then stays in the COMPLETING_RECOVERY state forever, and none of the registered applications can get resources.

Based on the description, it looks like a contention issue, but the fix here is simply a `try catch` solution. IMHO we'd better fix the root cause if we want to move this issue forward.
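One root-cause-style alternative to a blanket `try catch`, sketched here under assumptions: `IdempotentEngine` below is a hypothetical stand-in, not Spark's `ZooKeeperPersistenceEngine`. The idea is to make node removal idempotent at the persistence layer, so a node already deleted by the previous leader is a no-op rather than a recovery-aborting error:

```scala
import scala.collection.mutable

// Hypothetical persistence layer; the real engine talks to ZooKeeper instead.
class IdempotentEngine {
  private val nodes = mutable.Set("worker-1")

  // Returns true if the node existed. A missing node is treated as already
  // cleaned up (e.g. by the previous leader), not as a failure.
  def unpersist(name: String): Boolean = nodes.remove(name)
}

val engine = new IdempotentEngine
val firstRemove  = engine.unpersist("worker-1") // normal cleanup: node existed
val secondRemove = engine.unpersist("worker-1") // raced cleanup: no-op, no exception thrown
```

With this contract the second, racing removal cannot abort `completeRecovery()`, which is narrower than swallowing every exception with a `try catch`.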
Github user jiangxb1987 commented on the issue: https://github.com/apache/spark/pull/17620 Should we move forward with this PR or should we close this? @jerryshao
Github user lvdongr commented on the issue: https://github.com/apache/spark/pull/17620 You can see the main method in Master.scala:

```scala
def main(argStrings: Array[String]) {
  Utils.initDaemon(log)
  val conf = new SparkConf
  val args = new MasterArguments(argStrings, conf)
  val (rpcEnv, _, _) = startRpcEnvAndEndpoint(args.host, args.port, args.webUiPort, conf)
  rpcEnv.awaitTermination()
}
```

When the rpcEnv is shut down, the main method finishes and the Master process stops, as I have already tested. I chose this approach because the onStop method is called before the Master stops, so the services inside the Master, such as the web UI, metrics, and the persistenceEngine, are also closed. I think it is safer. Thank you for your last reply @jerryshao
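The claim above, that shutting down the RpcEnv unblocks `awaitTermination()` and lets `main` return, can be modelled in miniature. `FakeRpcEnv` below is a hypothetical latch-based stand-in for Spark's `RpcEnv`, sketching only the shutdown/awaitTermination contract:

```scala
import java.util.concurrent.CountDownLatch

// Hypothetical stand-in for Spark's RpcEnv, modelling only the
// shutdown()/awaitTermination() contract discussed above.
class FakeRpcEnv {
  private val terminated = new CountDownLatch(1)
  def shutdown(): Unit = terminated.countDown() // onStop-style cleanup would run here
  def awaitTermination(): Unit = terminated.await()
}

val rpcEnv = new FakeRpcEnv
// Simulate another thread (e.g. a failed-recovery path) shutting down the env.
new Thread(() => rpcEnv.shutdown()).start()
rpcEnv.awaitTermination()  // blocks until shutdown() has run, then returns
val mainExited = true      // control reaches here, i.e. "main" can return and the JVM exits
```

This matches the behaviour lvdongr describes: nothing keeps the process alive after `awaitTermination()` returns, so the Master exits rather than being orphaned.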
Github user lvdongr commented on the issue: https://github.com/apache/spark/pull/17620 This happened when the previous master leader removed a dead worker and cleared the worker's node from the persistence engine (we use ZooKeeper), but before the worker node was removed from ZooKeeper, the leader changed. The new master leader recovered from ZooKeeper and read the dead worker's node. The new leader then found the worker dead, tried to remove it, and attempted to clear the node in ZooKeeper, but the node had already been removed by the previous leader, so an exception was thrown and the recovery failed. The leader then stays in the COMPLETING_RECOVERY state forever, and none of the registered applications can get resources.

![failfetchresource](https://cloud.githubusercontent.com/assets/25652150/25209181/f7e31528-25ab-11e7-9eb2-e2f15db2dcac.png)
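The sequence described above can be reproduced in miniature. `InMemoryEngine` below is a hypothetical stand-in for the ZooKeeper-backed persistence engine; like a ZooKeeper delete, its `unpersist` fails when the node is already gone:

```scala
import scala.collection.mutable

// Hypothetical stand-in for the ZooKeeper-backed persistence engine.
class InMemoryEngine {
  private val nodes = mutable.Set[String]()
  def persist(name: String): Unit = nodes += name
  // Like a ZooKeeper delete, removing a missing node throws.
  def unpersist(name: String): Unit =
    if (!nodes.remove(name)) throw new NoSuchElementException(s"$name already removed")
}

val engine = new InMemoryEngine
engine.persist("worker-1")
engine.unpersist("worker-1") // old leader's cleanup completes first

// The new leader, which read the stale node during recovery, retries the
// delete; the exception here is what aborts recovery and leaves the master
// stuck in COMPLETING_RECOVERY.
val recoveryFailed =
  try { engine.unpersist("worker-1"); false }
  catch { case _: NoSuchElementException => true }
```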
Github user jerryshao commented on the issue: https://github.com/apache/spark/pull/17620 I'm still not sure what issue you met during recovery, and what happens when it occurs. From the fix you provided, what you mainly do is shut down the rpcEnv; what happens after that, will the master process be orphaned or just exit?
Github user lvdongr commented on the issue: https://github.com/apache/spark/pull/17620 Excuse me, can this issue be closed, or are there some other problems? @jerryshao
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/17620 Can one of the admins verify this patch?