Ufuk Celebi created FLINK-21928:
-----------------------------------

             Summary: DuplicateJobSubmissionException after JobManager failover
                 Key: FLINK-21928
                 URL: https://issues.apache.org/jira/browse/FLINK-21928
             Project: Flink
          Issue Type: Bug
          Components: Runtime / Coordination
    Affects Versions: 1.12.2, 1.11.3, 1.10.3
         Environment: StandaloneApplicationClusterEntryPoint using a fixed job 
ID, High Availability enabled
            Reporter: Ufuk Celebi


Consider the following scenario:
 * Environment: StandaloneApplicationClusterEntryPoint using a fixed job ID, high availability enabled
 * The Flink job reaches a globally terminal state
 * The Flink job is marked as finished in the high-availability service's RunningJobsRegistry
 * The JobManager fails over

On recovery, the [Dispatcher throws DuplicateJobSubmissionException, because 
the job is marked as done in the 
RunningJobsRegistry|https://github.com/apache/flink/blob/release-1.12.2/flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java#L332-L340].

When this happens, users cannot get out of the situation without manually 
redeploying the JobManager process and changing the job ID^1^.

The desired semantics are that a job that has already reached a globally 
terminal state must not be re-executed. In this particular case, we know that 
the job has reached such a state (it has been marked as done in the registry). 
Therefore, we could handle this case by executing the regular termination 
sequence instead of throwing a DuplicateJobSubmissionException.
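The proposed change could be sketched as follows. This is a standalone sketch, not Flink's actual Dispatcher code; the registry and status names are simplified stand-ins modeled on RunningJobsRegistry.JobSchedulingStatus:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the duplicate-submission check on recovery.
// Not the real Dispatcher implementation.
public class DuplicateSubmissionSketch {

    enum JobSchedulingStatus { PENDING, RUNNING, DONE }

    // Simplified stand-in for the HA service's RunningJobsRegistry.
    static final Map<String, JobSchedulingStatus> registry = new HashMap<>();

    // Current behavior: any non-PENDING entry is treated as a duplicate
    // submission, even when the job already finished before the failover.
    static String submitCurrent(String jobId) {
        JobSchedulingStatus status =
                registry.getOrDefault(jobId, JobSchedulingStatus.PENDING);
        if (status != JobSchedulingStatus.PENDING) {
            return "DuplicateJobSubmissionException";
        }
        registry.put(jobId, JobSchedulingStatus.RUNNING);
        return "started";
    }

    // Proposed behavior: a DONE entry means the job already reached a
    // globally terminal state, so run the regular termination sequence
    // instead of failing the submission.
    static String submitProposed(String jobId) {
        JobSchedulingStatus status =
                registry.getOrDefault(jobId, JobSchedulingStatus.PENDING);
        if (status == JobSchedulingStatus.DONE) {
            return "terminated-normally";
        }
        if (status == JobSchedulingStatus.RUNNING) {
            return "DuplicateJobSubmissionException";
        }
        registry.put(jobId, JobSchedulingStatus.RUNNING);
        return "started";
    }

    public static void main(String[] args) {
        // State after a JobManager failover: the fixed job ID is marked DONE.
        registry.put("fixed-job-id", JobSchedulingStatus.DONE);
        System.out.println(submitCurrent("fixed-job-id"));
        System.out.println(submitProposed("fixed-job-id"));
    }
}
```

With the registry entry marked DONE, submitCurrent fails with the exception while submitProposed completes the normal termination path.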

---

^1^ With ZooKeeper HA, the respective node is not ephemeral. With Kubernetes 
HA, there is no notion of ephemeral data tied to a session in the first 
place, as far as I know.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)