[ https://issues.apache.org/jira/browse/FLINK-10074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16594625#comment-16594625 ]

ASF GitHub Bot commented on FLINK-10074:
----------------------------------------

yanghua commented on a change in pull request #6567: [FLINK-10074] Allowable number of checkpoint failures
URL: https://github.com/apache/flink/pull/6567#discussion_r213203249
 
 

 ##########
 File path: flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/CheckpointExceptionHandlerTest.java
 ##########
 @@ -49,6 +49,59 @@ public void testRethrowingHandler() {
 		Assert.assertNull(environment.getLastDeclinedCheckpointCause());
 	}
 
+	@Test
+	public void testRethrowingHandlerWithTolerableNumberTriggered() {
+		DeclineDummyEnvironment environment = new DeclineDummyEnvironment();
+		environment.getExecutionConfig().setTaskTolerableCheckpointFailuresNumber(3);
+		CheckpointExceptionHandlerFactory checkpointExceptionHandlerFactory = new CheckpointExceptionHandlerFactory();
+		CheckpointExceptionHandler exceptionHandler =
+			checkpointExceptionHandlerFactory.createCheckpointExceptionHandler(true, environment);
+
+		CheckpointMetaData failedCheckpointMetaData = new CheckpointMetaData(42L, 4711L);
+		Exception testException = new Exception("test");
+		try {
+			exceptionHandler.tryHandleCheckpointException(failedCheckpointMetaData, testException);
+			failedCheckpointMetaData = new CheckpointMetaData(43L, 4711L);
+			exceptionHandler.tryHandleCheckpointException(failedCheckpointMetaData, testException);
+			failedCheckpointMetaData = new CheckpointMetaData(44L, 4711L);
+			exceptionHandler.tryHandleCheckpointException(failedCheckpointMetaData, testException);
+			failedCheckpointMetaData = new CheckpointMetaData(45L, 4711L);
+			exceptionHandler.tryHandleCheckpointException(failedCheckpointMetaData, testException);
+
+			Assert.fail("Exception not rethrown.");
+		} catch (Exception e) {
+			Assert.assertEquals(testException, e);
+		}
+
+		Assert.assertNull(environment.getLastDeclinedCheckpointCause());
+	}
+
+	@Test
+	public void testRethrowingHandlerWithTolerableNumberNotTriggered() {
+		DeclineDummyEnvironment environment = new DeclineDummyEnvironment();
+		environment.getExecutionConfig().setTaskTolerableCheckpointFailuresNumber(3);
+		CheckpointExceptionHandlerFactory checkpointExceptionHandlerFactory = new CheckpointExceptionHandlerFactory();
+		CheckpointExceptionHandler exceptionHandler =
+			checkpointExceptionHandlerFactory.createCheckpointExceptionHandler(true, environment);
+
+		CheckpointMetaData failedCheckpointMetaData = new CheckpointMetaData(42L, 4711L);
+		Exception testException = new Exception("test");
+
+		try {
+			exceptionHandler.tryHandleCheckpointException(failedCheckpointMetaData, testException);
+			failedCheckpointMetaData = new CheckpointMetaData(43L, 4711L);
+			exceptionHandler.tryHandleCheckpointException(failedCheckpointMetaData, testException);
+			failedCheckpointMetaData = new CheckpointMetaData(44L, 4711L);
+			exceptionHandler.tryHandleCheckpointException(failedCheckpointMetaData, testException);
+			failedCheckpointMetaData = new CheckpointMetaData(46L, 4711L);
+			exceptionHandler.tryHandleCheckpointException(failedCheckpointMetaData, testException);
+		} catch (Exception e) {
+			Assert.assertNotEquals(testException, e);
 
 Review comment:
   Yes, we can just throw it. 
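   A minimal sketch of that suggestion, assuming the catch block from the
   diff above (the test method would then also need to declare throws
   Exception for this to compile):
 
 	} catch (Exception e) {
 		// Per the review suggestion: rethrow instead of asserting
 		// Assert.assertNotEquals(testException, e), so an unexpected
 		// exception fails the test directly with its original cause.
 		throw e;
 	}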

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Allowable number of checkpoint failures 
> ----------------------------------------
>
>                 Key: FLINK-10074
>                 URL: https://issues.apache.org/jira/browse/FLINK-10074
>             Project: Flink
>          Issue Type: Improvement
>          Components: State Backends, Checkpointing
>            Reporter: Thomas Weise
>            Assignee: vinoyang
>            Priority: Major
>              Labels: pull-request-available
>
> For intermittent checkpoint failures it is desirable to have a mechanism to
> avoid restarts. If, for example, a transient S3 error prevents checkpoint
> completion, the next checkpoint may very well succeed. The user may wish not
> to incur the expense of a restart in such a scenario, and this could be
> expressed with a failure threshold (the number of consecutive checkpoint
> failures to tolerate), possibly combined with a list of exceptions to tolerate.
>  
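
A minimal sketch of how such a threshold could be configured from user code,
assuming the setter name exercised in the PR's test above
(setTaskTolerableCheckpointFailuresNumber on ExecutionConfig; the final API
may change during review):

	import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

	public class TolerableCheckpointFailuresExample {
		public static void main(String[] args) throws Exception {
			StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
			// Checkpoint every 10 seconds.
			env.enableCheckpointing(10_000L);
			// Tolerate up to 3 consecutive checkpoint failures (e.g. a
			// transient S3 error) instead of restarting on the first one.
			// Setter name taken from the PR's test; subject to change.
			env.getConfig().setTaskTolerableCheckpointFailuresNumber(3);
			env.fromElements(1, 2, 3).print();
			env.execute("tolerable-checkpoint-failures-example");
		}
	}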



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
