[ https://issues.apache.org/jira/browse/FLINK-16550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057449#comment-17057449 ]

Nico Kruber commented on FLINK-16550:
-------------------------------------

This also showed up in an end-to-end Flink cluster setup: I could no longer download my savepoint from S3, and the following error appeared on job submission:

{code}
 The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
        at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
        at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
        at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
        at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
        at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
        at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
        at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:199)
        at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1741)
        at org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:94)
        at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:63)
        at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1620)
        at org.apache.flink.streaming.examples.windowing.TopSpeedWindowing.main(TopSpeedWindowing.java:96)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
        ... 8 more
Caused by: java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
        at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
        at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
        at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1736)
        ... 17 more
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
        at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$7(RestClusterClient.java:359)
        at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:884)
        at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:866)
        at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
        at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
        at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils.java:274)
        at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
        at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
        at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
        at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:575)
        at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:943)
        at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.
        at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$3(Dispatcher.java:336)
        at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
        at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
        at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
        at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
        at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
        at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
        ... 6 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
        at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:152)
        at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:84)
        at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$6(Dispatcher.java:379)
        at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
        ... 7 more
Caused by: java.io.FileNotFoundException: Cannot find checkpoint or savepoint file/directory '<path>' on file system 's3'.
        at org.apache.flink.runtime.state.filesystem.AbstractFsCheckpointStorage.resolveCheckpointPointer(AbstractFsCheckpointStorage.java:243)
        at org.apache.flink.runtime.state.filesystem.AbstractFsCheckpointStorage.resolveCheckpoint(AbstractFsCheckpointStorage.java:110)
        at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1152)
        at org.apache.flink.runtime.scheduler.SchedulerBase.tryRestoreExecutionGraphFromSavepoint(SchedulerBase.java:307)
        at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:240)
        at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:216)
        at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:120)
        at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:105)
        at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:278)
        at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:266)
        at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
        at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
        at org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:146)
        ... 10 more

End of exception on server side>]
        at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:390)
        at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:374)
        at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:966)
        at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:940)
        ... 4 more
{code}

The proposed patch fixes that scenario as well.
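
For anyone who wants to narrow this down outside of a job submission: the restore goes through the same pluggable FileSystem lookup that AbstractFsCheckpointStorage#resolveCheckpointPointer performs. The snippet below is only a minimal sketch (not part of the patch), with a placeholder path; if the S3 file system is deployed as a plugin, the FileSystem.initialize(Configuration, PluginManager) variant would be needed instead so that the plugin is discovered.

{code}
import org.apache.flink.configuration.GlobalConfiguration;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;

/**
 * Minimal sketch (not part of the patch): resolves an s3:// path through
 * Flink's pluggable FileSystem abstraction, i.e. the same lookup that
 * AbstractFsCheckpointStorage#resolveCheckpointPointer does before reporting
 * "Cannot find checkpoint or savepoint file/directory". The path below is a
 * placeholder.
 */
public class S3SavepointPathCheck {

    public static void main(String[] args) throws Exception {
        // Load flink-conf.yaml so that configured s3 options are applied.
        FileSystem.initialize(GlobalConfiguration.loadConfiguration());

        // Placeholder savepoint pointer; replace with the real one.
        Path savepointPath = new Path("s3://<bucket>/savepoints/savepoint-xyz");

        // Resolves the FileSystem registered for the "s3" scheme (s3-hadoop /
        // s3-presto). If that file system is broken, the failure shows up here
        // or in the subsequent metadata lookup.
        FileSystem fs = savepointPath.getFileSystem();
        System.out.println("savepoint path exists: " + fs.exists(savepointPath));
    }
}
{code}

If that lookup fails with a NullPointerException instead of a clean "file not found", it likely points at the same root cause as the test failures quoted below.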

> HadoopS3* tests fail with NullPointerException exceptions
> ---------------------------------------------------------
>
>                 Key: FLINK-16550
>                 URL: https://issues.apache.org/jira/browse/FLINK-16550
>             Project: Flink
>          Issue Type: Bug
>          Components: FileSystems
>    Affects Versions: 1.11.0
>            Reporter: Robert Metzger
>            Priority: Blocker
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Logs: https://travis-ci.org/github/apache/flink/jobs/660975486?utm_medium=notification
> All subsequent builds failed as well. It is likely that this commit (FLINK-16014) introduced the issue, as these tests depend on S3 credentials being available.
> {code}
> 09:38:48.022 [INFO] -------------------------------------------------------
> 09:38:48.025 [INFO]  T E S T S
> 09:38:48.026 [INFO] -------------------------------------------------------
> 09:38:48.657 [INFO] Running org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterExceptionITCase
> 09:38:48.669 [INFO] Running org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase
> 09:38:54.541 [ERROR] Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 5.88 s <<< FAILURE! - in org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterExceptionITCase
> 09:38:54.542 [ERROR] testResumeAfterCommit(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterExceptionITCase)  Time elapsed: 3.592 s  <<< ERROR!
> java.lang.Exception: Unexpected exception, expected<java.io.IOException> but was<java.lang.NullPointerException>
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterExceptionITCase.testResumeAfterCommit(HadoopS3RecoverableWriterExceptionITCase.java:162)
> 09:38:54.542 [ERROR] testResumeWithWrongOffset(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterExceptionITCase)  Time elapsed: 0.24 s  <<< ERROR!
> java.lang.Exception: Unexpected exception, expected<java.io.IOException> but was<java.lang.NullPointerException>
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterExceptionITCase.testResumeWithWrongOffset(HadoopS3RecoverableWriterExceptionITCase.java:182)
> 09:38:54.542 [ERROR] testExceptionWritingAfterCloseForCommit(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterExceptionITCase)  Time elapsed: 0.448 s  <<< ERROR!
> java.lang.Exception: Unexpected exception, expected<java.io.IOException> but was<java.lang.NullPointerException>
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterExceptionITCase.testExceptionWritingAfterCloseForCommit(HadoopS3RecoverableWriterExceptionITCase.java:144)
> 09:38:55.173 [INFO] Running org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase
> 09:38:58.737 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 10.066 s <<< FAILURE! - in org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase
> 09:38:58.737 [ERROR] testDirectoryListing(org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase)  Time elapsed: 3.448 s  <<< ERROR!
> java.io.FileNotFoundException: No such file or directory: s3://[secure]/temp/tests-f37db36e-c116-4c58-a16b-8ca241baae4b/testdir
> 09:38:59.447 [INFO] Running org.apache.flink.fs.s3hadoop.HadoopS3FileSystemBehaviorITCase
> 09:39:01.791 [ERROR] Tests run: 13, Failures: 0, Errors: 13, Skipped: 0, Time elapsed: 6.611 s <<< FAILURE! - in org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase
> 09:39:01.797 [ERROR] testCloseWithNoData(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase)  Time elapsed: 2.394 s  <<< ERROR!
> java.lang.NullPointerException
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testCloseWithNoData(HadoopS3RecoverableWriterITCase.java:186)
> 09:39:01.798 [ERROR] testCommitAfterPersist(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase)  Time elapsed: 0.191 s  <<< ERROR!
> java.lang.NullPointerException
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testCommitAfterPersist(HadoopS3RecoverableWriterITCase.java:208)
> 09:39:01.799 [ERROR] testRecoverWithEmptyState(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase)  Time elapsed: 0.235 s  <<< ERROR!
> java.lang.NullPointerException
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testResumeAfterMultiplePersist(HadoopS3RecoverableWriterITCase.java:384)
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testResumeAfterMultiplePersistWithSmallData(HadoopS3RecoverableWriterITCase.java:352)
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testRecoverWithEmptyState(HadoopS3RecoverableWriterITCase.java:302)
> 09:39:01.799 [ERROR] testRecoverFromIntermWithoutAdditionalState(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase)  Time elapsed: 0.181 s  <<< ERROR!
> java.lang.NullPointerException
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testResumeAfterMultiplePersist(HadoopS3RecoverableWriterITCase.java:384)
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testResumeAfterMultiplePersistWithSmallData(HadoopS3RecoverableWriterITCase.java:352)
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testRecoverFromIntermWithoutAdditionalState(HadoopS3RecoverableWriterITCase.java:316)
> 09:39:01.799 [ERROR] testCallingDeleteObjectTwiceDoesNotThroughException(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase)  Time elapsed: 0.181 s  <<< ERROR!
> java.lang.NullPointerException
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testCallingDeleteObjectTwiceDoesNotThroughException(HadoopS3RecoverableWriterITCase.java:245)
> 09:39:01.801 [ERROR] testCommitAfterNormalClose(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase)  Time elapsed: 0.174 s  <<< ERROR!
> java.lang.NullPointerException
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testCommitAfterNormalClose(HadoopS3RecoverableWriterITCase.java:196)
> 09:39:01.802 [ERROR] testRecoverWithStateWithMultiPart(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase)  Time elapsed: 0.338 s  <<< ERROR!
> java.lang.NullPointerException
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testResumeAfterMultiplePersist(HadoopS3RecoverableWriterITCase.java:384)
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testResumeAfterMultiplePersistWithMultiPartUploads(HadoopS3RecoverableWriterITCase.java:364)
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testRecoverWithStateWithMultiPart(HadoopS3RecoverableWriterITCase.java:330)
> 09:39:01.803 [ERROR] testRecoverFromIntermWithoutAdditionalStateWithMultiPart(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase)  Time elapsed: 0.486 s  <<< ERROR!
> java.lang.NullPointerException
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testResumeAfterMultiplePersist(HadoopS3RecoverableWriterITCase.java:384)
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testResumeAfterMultiplePersistWithMultiPartUploads(HadoopS3RecoverableWriterITCase.java:364)
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testRecoverFromIntermWithoutAdditionalStateWithMultiPart(HadoopS3RecoverableWriterITCase.java:337)
> 09:39:01.810 [ERROR] testRecoverWithState(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase)  Time elapsed: 0.199 s  <<< ERROR!
> java.lang.NullPointerException
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testResumeAfterMultiplePersist(HadoopS3RecoverableWriterITCase.java:384)
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testResumeAfterMultiplePersistWithSmallData(HadoopS3RecoverableWriterITCase.java:352)
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testRecoverWithState(HadoopS3RecoverableWriterITCase.java:309)
> 09:39:01.810 [ERROR] testCleanupRecoverableState(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase)  Time elapsed: 0.202 s  <<< ERROR!
> java.lang.Exception: Unexpected exception, expected<java.io.FileNotFoundException> but was<java.lang.NullPointerException>
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testCleanupRecoverableState(HadoopS3RecoverableWriterITCase.java:223)
> 09:39:01.810 [ERROR] testCommitAfterRecovery(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase)  Time elapsed: 0.26 s  <<< ERROR!
> java.lang.NullPointerException
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testCommitAfterRecovery(HadoopS3RecoverableWriterITCase.java:270)
> 09:39:01.810 [ERROR] testRecoverAfterMultiplePersistsState(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase)  Time elapsed: 0.165 s  <<< ERROR!
> java.lang.NullPointerException
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testResumeAfterMultiplePersist(HadoopS3RecoverableWriterITCase.java:384)
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testResumeAfterMultiplePersistWithSmallData(HadoopS3RecoverableWriterITCase.java:352)
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsState(HadoopS3RecoverableWriterITCase.java:323)
> 09:39:01.810 [ERROR] testRecoverAfterMultiplePersistsStateWithMultiPart(org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase)  Time elapsed: 0.735 s  <<< ERROR!
> java.lang.NullPointerException
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testResumeAfterMultiplePersist(HadoopS3RecoverableWriterITCase.java:384)
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testResumeAfterMultiplePersistWithMultiPartUploads(HadoopS3RecoverableWriterITCase.java:364)
>       at org.apache.flink.fs.s3hadoop.HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart(HadoopS3RecoverableWriterITCase.java:344)
> 09:39:14.711 [WARNING] Tests run: 8, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 15.262 s - in org.apache.flink.fs.s3hadoop.HadoopS3FileSystemBehaviorITCase
> 09:39:15.047 [INFO] 
> 09:39:15.047 [INFO] Results:
> 09:39:15.047 [INFO] 
> 09:39:15.047 [ERROR] Errors: 
> 09:39:15.047 [ERROR]   HadoopS3FileSystemITCase>AbstractHadoopFileSystemITTest.testDirectoryListing:127 » FileNotFound
> 09:39:15.047 [ERROR]   HadoopS3RecoverableWriterExceptionITCase.testExceptionWritingAfterCloseForCommit » 
> 09:39:15.047 [ERROR]   HadoopS3RecoverableWriterExceptionITCase.testResumeAfterCommit »  Unexpected e...
> 09:39:15.047 [ERROR]   HadoopS3RecoverableWriterExceptionITCase.testResumeWithWrongOffset »  Unexpect...
> 09:39:15.047 [ERROR]   HadoopS3RecoverableWriterITCase.testCallingDeleteObjectTwiceDoesNotThroughException:245 » NullPointer
> 09:39:15.047 [ERROR]   HadoopS3RecoverableWriterITCase.testCleanupRecoverableState »  Unexpected exce...
> 09:39:15.047 [ERROR]   HadoopS3RecoverableWriterITCase.testCloseWithNoData:186 » NullPointer
> 09:39:15.047 [ERROR]   HadoopS3RecoverableWriterITCase.testCommitAfterNormalClose:196 » NullPointer
> 09:39:15.047 [ERROR]   HadoopS3RecoverableWriterITCase.testCommitAfterPersist:208 » NullPointer
> 09:39:15.047 [ERROR]   HadoopS3RecoverableWriterITCase.testCommitAfterRecovery:270 » NullPointer
> 09:39:15.047 [ERROR]   HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsState:323->testResumeAfterMultiplePersistWithSmallData:352->testResumeAfterMultiplePersist:384 » NullPointer
> 09:39:15.047 [ERROR]   HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart:344->testResumeAfterMultiplePersistWithMultiPartUploads:364->testResumeAfterMultiplePersist:384 » NullPointer
> 09:39:15.047 [ERROR]   HadoopS3RecoverableWriterITCase.testRecoverFromIntermWithoutAdditionalState:316->testResumeAfterMultiplePersistWithSmallData:352->testResumeAfterMultiplePersist:384 » NullPointer
> 09:39:15.047 [ERROR]   HadoopS3RecoverableWriterITCase.testRecoverFromIntermWithoutAdditionalStateWithMultiPart:337->testResumeAfterMultiplePersistWithMultiPartUploads:364->testResumeAfterMultiplePersist:384 » NullPointer
> 09:39:15.047 [ERROR]   HadoopS3RecoverableWriterITCase.testRecoverWithEmptyState:302->testResumeAfterMultiplePersistWithSmallData:352->testResumeAfterMultiplePersist:384 » NullPointer
> 09:39:15.047 [ERROR]   HadoopS3RecoverableWriterITCase.testRecoverWithState:309->testResumeAfterMultiplePersistWithSmallData:352->testResumeAfterMultiplePersist:384 » NullPointer
> 09:39:15.047 [ERROR]   HadoopS3RecoverableWriterITCase.testRecoverWithStateWithMultiPart:330->testResumeAfterMultiplePersistWithMultiPartUploads:364->testResumeAfterMultiplePersist:384 » NullPointer
> 09:39:15.047 [INFO] 
> 09:39:15.047 [ERROR] Tests run: 26, Failures: 0, Errors: 17, Skipped: 2
> {code}
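
For context on the remark above that these tests depend on S3 credentials being available: the ITCases are meant to skip themselves when no credentials are configured, roughly along the lines of the sketch below. This is only a simplified illustration; the environment variable names are placeholders, not necessarily the exact ones the Flink ITCases read.

{code}
import org.junit.Assume;
import org.junit.BeforeClass;

/**
 * Simplified sketch of how an S3 ITCase can guard on credentials: if the
 * environment does not provide them, the whole test class is skipped instead
 * of failing. The variable names below are placeholders.
 */
public class S3CredentialsGuardExample {

    @BeforeClass
    public static void checkCredentials() {
        // Skip (rather than fail) when the CI environment has no S3 access.
        Assume.assumeNotNull(System.getenv("IT_CASE_S3_BUCKET"));
        Assume.assumeNotNull(System.getenv("IT_CASE_S3_ACCESS_KEY"));
        Assume.assumeNotNull(System.getenv("IT_CASE_S3_SECRET_KEY"));
    }
}
{code}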



