Github user rekhajoshm commented on the pull request:

    https://github.com/apache/spark/pull/6973#issuecomment-115518782
  
    Thanks @vanzin for your comments. Updated the title. Besides SparkContext, 
SparkEnv stop can be invoked from the Scheduler, or, if the driver commands a 
shutdown, Executor stop will invoke SparkEnv stop; hence SparkEnv is made 
idempotent as well.
    
    On SPARK-2645 (org.apache.spark.ServerStateException: Server is already 
stopped): from my understanding, the unhandled exception surfaces as an 
uncaught exception (exit code 50).
    The issue may be erratic; it could stem from async code whose state is not 
updated correctly, a shutdown hook, a (wild guess) Scala compiler optimization 
between steps, or it may only show up in machine/usage-specific variants :-)
    From my very quick analysis, the one apparent culprit was stop() not being 
idempotent on SparkEnv; the fix is to prevent an already-stopped env from being 
stopped again.
    
    @squito @srowen good point; I was overly keen to avoid unhandled exceptions, 
especially ones I am not expecting or that only appear in erratic scenarios, 
but I completely agree with you. Updated.
    
    Thanks again @vanzin @squito @srowen !

