Github user jerryshao commented on the issue:

    https://github.com/apache/spark/pull/19399
  
    I agree with @squito that the criteria for defining an application's success should be carefully considered. In your current code, the application is marked as successful only if all of its jobs succeed; isn't that too strict, since it leaves no room for job failure and retry? Besides, if an application runs all of its Spark jobs successfully but fails in its own code (e.g., saving to a DB) and exits with a non-zero code, should we mark the application as succeeded or failed?
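    
    To make that second case concrete, here is a minimal sketch (standard Spark APIs; the scenario itself is illustrative): every Spark job succeeds, yet the driver exits non-zero because user code fails after the last job.
    
    ```scala
    import org.apache.spark.sql.SparkSession
    
    object ExitCodeAmbiguity {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("exit-code-ambiguity").getOrCreate()
        // This Spark job runs and completes successfully.
        spark.sparkContext.parallelize(1 to 100).count()
        spark.stop()
        // User code fails after all jobs finished (e.g. a DB write),
        // so the JVM exits non-zero. Under an "all jobs succeeded"
        // criterion this application would still look successful.
        sys.exit(1)
      }
    }
    ```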
    
    Also, the `jobToStatus` structure used to track all the jobs will grow memory usage without bound in a long-running application.
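    
    One common way to bound this, sketched below with hypothetical names (`BoundedJobStatus`, `maxRetained`), is to cap the map and evict the oldest entries, similar in spirit to what `spark.ui.retainedJobs` does for the UI:
    
    ```scala
    import scala.collection.mutable
    
    class BoundedJobStatus(maxRetained: Int) {
      // Insertion-ordered map, so the head is always the oldest job.
      private val jobToStatus = mutable.LinkedHashMap.empty[Int, String]
    
      def update(jobId: Int, status: String): Unit = synchronized {
        jobToStatus(jobId) = status
        // Evict the oldest entries so memory stays bounded even in
        // long-running applications such as streaming jobs.
        while (jobToStatus.size > maxRetained) {
          jobToStatus -= jobToStatus.head._1
        }
      }
    }
    ```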
    
    Besides, with your changes I can see that page loading time will increase; for applications with many jobs (like Spark Streaming), the problem will be severe.
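    
    A cheap way to keep that check O(1) at render time, regardless of job count, is to maintain aggregate counters instead of scanning every tracked job. A minimal sketch, with hypothetical listener hooks:
    
    ```scala
    import java.util.concurrent.atomic.AtomicLong
    
    class JobOutcomeCounters {
      private val succeeded = new AtomicLong(0)
      private val failed = new AtomicLong(0)
    
      // Hypothetical hooks, to be called from a job-end listener.
      def onJobSucceeded(): Unit = succeeded.incrementAndGet()
      def onJobFailed(): Unit = failed.incrementAndGet()
    
      // O(1) at page-render time, independent of the number of jobs.
      def allJobsSucceeded: Boolean = failed.get() == 0
    }
    ```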

