Github user bhlx3lyx7 commented on a diff in the pull request:

    https://github.com/apache/incubator-griffin/pull/448#discussion_r228469933
  
    --- Diff: measure/src/main/scala/org/apache/griffin/measure/Application.scala ---
    @@ -104,12 +106,16 @@ object Application extends Loggable {
           case Success(_) =>
             info("process end success")
           case Failure(ex) =>
    -        error(s"process end error: ${ex.getMessage}")
    +        error(s"process end error: ${ex.getMessage}", ex)
             shutdown
             sys.exit(-5)
         }
     
         shutdown
    +
    +    if (!success) {
    +      sys.exit(-5)
    +    }
    --- End diff --
    
    If any rule step fails, explicitly shutting down the application lets the Spark application exit with a FAILED state. It makes sense.
    Actually, we can also think about how to get the job state. Through Livy or YARN we can only get the lifecycle state of the Spark job; to find the detailed error we still need to check the logs, which is much more difficult for common users. We could define some job states in the calculation, like `starting`, `loading data source`, `pre-processing`, `rule step N success`, and let the Spark job report its state through a configured notify method, such as an http request to the service side. In this way, the service can manage the states of job instances.
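
    A minimal sketch of what such state reporting could look like is below. To be clear, every name in it (`JobState`, `StateReporter`, `HttpStateReporter`, the endpoint URL) is hypothetical and only illustrates the idea; none of it is an existing griffin API.

    ```scala
    import java.io.OutputStream
    import java.net.{HttpURLConnection, URL}
    import java.nio.charset.StandardCharsets

    // Hypothetical job states a measure job could report during its lifecycle.
    sealed trait JobState { def name: String }
    case object Starting extends JobState { val name = "starting" }
    case object LoadingDataSource extends JobState { val name = "loading data source" }
    case object PreProcessing extends JobState { val name = "pre-processing" }
    case class RuleStepSuccess(step: Int) extends JobState {
      val name = s"rule step $step success"
    }

    // A configurable notify method; an http request to the service side is one option.
    trait StateReporter { def report(state: JobState): Unit }

    class HttpStateReporter(serviceUrl: String) extends StateReporter {
      def report(state: JobState): Unit = {
        try {
          val conn = new URL(serviceUrl).openConnection().asInstanceOf[HttpURLConnection]
          conn.setRequestMethod("POST")
          conn.setRequestProperty("Content-Type", "application/json")
          conn.setDoOutput(true)
          val body = s"""{"state": "${state.name}"}""".getBytes(StandardCharsets.UTF_8)
          val out: OutputStream = conn.getOutputStream
          try out.write(body) finally out.close()
          conn.getResponseCode  // trigger the request and read the status
          conn.disconnect()
        } catch {
          // Best-effort notification: failing to report should not fail the job itself.
          case e: Exception => Console.err.println(s"state report failed: ${e.getMessage}")
        }
      }
    }
    ```

    The measure process would then call something like `reporter.report(RuleStepSuccess(n))` after each rule step, and the service side could persist these reported states per job instance.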
    
    I think job state management is the final solution; explicitly exiting the application could work as a temporary solution for now.

