GitHub user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7839#discussion_r36105847
  
    --- Diff: network/yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java ---
    @@ -100,11 +119,33 @@ private boolean isAuthenticationEnabled() {
        */
       @Override
       protected void serviceInit(Configuration conf) {
    +
    +    // In case this NM was killed while there were running Spark applications, we need to restore
    +    // lost state for the existing executors.  We look for an existing file in the NM's local dirs.
    +    // If we don't find one, then we choose a file to use to save the state next time.  However, we
    +    // do *not* immediately register all the executors in that file, just in case the application
    +    // was terminated while the NM was restarting.  We wait until YARN tells the service about the
    +    // app again via #initializeApplication, so we know it's still running.  That is important
    +    // for preventing a leak where the app data would stick around *forever*.  This does leave
    +    // a small race -- if the NM restarts *again*, after only some of the existing apps have been
    +    // re-registered, their info will be lost.
    +    registeredExecutorFile =
    +      findRegisteredExecutorFile(conf.get("yarn.nodemanager.local-dirs").split(","));
    +    try {
    +      reloadRegisteredExecutors();
    +    } catch (Exception e) {
    +      logger.error("Failed to load previously registered executors", e);
    +    }
    +
         TransportConf transportConf = new TransportConf(new HadoopConfigProvider(conf));
         // If authentication is enabled, set up the shuffle server to use a
         // special RPC handler that filters out unauthenticated fetch requests
         boolean authEnabled = conf.getBoolean(SPARK_AUTHENTICATE_KEY, DEFAULT_SPARK_AUTHENTICATE);
    -    blockHandler = new ExternalShuffleBlockHandler(transportConf);
    +    try {
    +      blockHandler = new ExternalShuffleBlockHandler(transportConf, registeredExecutorFile);
    +    } catch (Exception e) {
    +      logger.error("Failed to initialize external shuffle service", e);
    --- End diff --
    
    Is logging enough? Shouldn't this bubble up the exception?
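
    As a rough sketch of the alternative (hypothetical stand-in names, not the PR's code):
    keep the log line for operators, but rethrow so the failure bubbles up instead of
    leaving the service half-initialized. If the overridden serviceInit may declare
    "throws Exception" (Hadoop's AbstractService#serviceInit does), a plain "throw e;"
    would be enough; otherwise it could be wrapped, roughly like this:

        // Hedged sketch only -- hypothetical names, not the PR's classes.
        public class BubbleUpExample {

          // Stand-in for ExternalShuffleBlockHandler's constructor, which may throw
          // (e.g. if the registered-executor state file is unreadable or corrupt).
          static Object createBlockHandler(String registeredExecutorFile) throws Exception {
            if (registeredExecutorFile == null) {
              throw new java.io.IOException("cannot open registered executor file");
            }
            return new Object();
          }

          // Mirrors the shape of serviceInit: catch, log, and rethrow (wrapped here,
          // since this signature does not declare "throws Exception").
          static Object initService(String registeredExecutorFile) {
            try {
              return createBlockHandler(registeredExecutorFile);
            } catch (Exception e) {
              System.err.println("Failed to initialize external shuffle service: " + e);
              throw new RuntimeException("Failed to initialize external shuffle service", e);
            }
          }

          public static void main(String[] args) {
            try {
              initService(null);
            } catch (RuntimeException e) {
              System.out.println("init failure propagated, cause: " + e.getCause());
            }
          }
        }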

