[ 
https://issues.apache.org/jira/browse/FLINK-9043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16477783#comment-16477783
 ] 

Aris Koliopoulos commented on FLINK-9043:
-----------------------------------------

To be honest I am not sure how Flink should be responsible for any kind of 
savepoint/checkpoint management across jobs and clusters. It would need to make 
assumptions about how the distributed file system in use is organised, how new 
versions are deployed, and so on.

From what I understand, the proposal here is to have Flink automatically look 
into "state.checkpoints.dir" for a usable externalized checkpoint, and to start 
from scratch if there is none.

We are kind of doing that, but the only reason it works for us is that we 
version the data as well as the code.

All paths on S3 contain the job name (essentially a name for the execution 
graph) and the data version. If we want to restore the state we use the same 
data version; otherwise we generate a new one. We don't really care whether the 
externalized checkpoint there is the latest one, because of the idempotent 
properties of the system.
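
Roughly, the convention looks like the sketch below (the bucket, job name and 
version values are purely illustrative, nothing Flink provides out of the box):

{code:scala}
// Sketch of the path convention described above; all names are illustrative.
object CheckpointPaths {

  // e.g. s3://my-bucket/checkpoints/my-job/v42
  def checkpointRoot(bucket: String, jobName: String, dataVersion: Int): String =
    s"s3://$bucket/checkpoints/$jobName/v$dataVersion"

  // Reuse the existing data version to restore state, or bump it to start fresh.
  def rootForRun(bucket: String, jobName: String,
                 lastVersion: Int, restore: Boolean): String = {
    val version = if (restore) lastVersion else lastVersion + 1
    checkpointRoot(bucket, jobName, version)
  }
}
{code}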

What I am getting at is that I cannot see how we can have a "one size fits all" 
solution here. There should be a CI process around Flink that manages this 
(anything from a bash script, to Ansible, to custom Java/Scala code).
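
The "custom code" variant can stay quite small. As an illustration only (it 
assumes a <checkpoints-dir>/chk-<n> layout and resumes via the standard 
"flink run -s <path>" option; adapt to your own setup):

{code:scala}
import java.io.File

object ResumeFromLatestCheckpoint {

  // Find the chk-<n> subdirectory with the highest n, if any.
  def latestCheckpoint(jobCheckpointDir: String): Option[File] = {
    val subDirs = Option(new File(jobCheckpointDir).listFiles())
      .getOrElse(Array.empty[File])
      .filter(f => f.isDirectory && f.getName.startsWith("chk-"))
    if (subDirs.isEmpty) None
    else Some(subDirs.maxBy(_.getName.stripPrefix("chk-").toInt))
  }

  def main(args: Array[String]): Unit = {
    val Array(checkpointDir, jarPath) = args
    val command = latestCheckpoint(checkpointDir) match {
      case Some(chk) => s"flink run -s ${chk.getAbsolutePath} $jarPath" // resume
      case None      => s"flink run $jarPath"                           // start fresh
    }
    // In a real pipeline this would be executed rather than printed.
    println(command)
  }
}
{code}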

I may be a bit off topic here; I will read the thread again tomorrow.

 

> Introduce a friendly way to resume the job from externalized checkpoints 
> automatically
> --------------------------------------------------------------------------------------
>
>                 Key: FLINK-9043
>                 URL: https://issues.apache.org/jira/browse/FLINK-9043
>             Project: Flink
>          Issue Type: New Feature
>            Reporter: godfrey johnson
>            Assignee: Sihua Zhou
>            Priority: Major
>
> I know a Flink job can recover from a checkpoint with a restart strategy, but 
> it cannot recover the way a Spark Streaming job does when the job is starting.
> Every time, the submitted Flink job is regarded as a new job, whereas a Spark 
> Streaming job can detect the checkpoint directory first and then recover from 
> the latest successful one. Flink, however, can only recover after the job has 
> failed first and is then retried according to the restart strategy.
>  
> So, would Flink support recovering from the checkpoint directly in a new job?
> h2. New description by [~sihuazhou]
> Currently, it is not very friendly for users to recover a job from an 
> externalized checkpoint: the user needs to find the dedicated directory for 
> the job, which is not an easy thing when there are many jobs. This ticket 
> intends to introduce a more friendly way to allow the user to use the 
> externalized checkpoint for recovery.
> The implementation steps are copied from the comments of [~StephanEwen]:
>  - We could make this an option where you pass a flag (-r) to automatically 
> look for the latest checkpoint in a given directory.
>  - If more than one job checkpointed there before, this operation would fail.
>  - We might also need a way to have jobs not create the UUID subdirectory, 
> otherwise the scanning for the latest checkpoint would not easily work.
>   



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
