Thanks, Congxian. I checked and this file is not there. Compared with the chk directory of a normally running instance of the same job, this chk-167 directory contains far fewer files. At the time, we watched the checkpoint finish, cancelled the job, and then looked up this checkpoint directory path on HDFS and restarted the job from it.
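
(For reference, a minimal sketch of how such a directory can be double-checked, assuming a Hadoop client is on the classpath; the hdfs:*xxxxxxxxxx* prefix from the log stays redacted, so the path below is only a placeholder:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckpointMetadataCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder for the redacted checkpoint directory discussed in this thread.
        Path chkDir = new Path("hdfs:///<redacted>/flink/checkpoints/7226f43179649162e6bae2573a952e60/chk-167");
        FileSystem fs = chkDir.getFileSystem(new Configuration());

        // A checkpoint directory is only restorable if it contains the _metadata file.
        System.out.println("_metadata exists: " + fs.exists(new Path(chkDir, "_metadata")));

        // List the remaining files to compare against a healthy chk-* directory of the same job.
        for (FileStatus status : fs.listStatus(chkDir)) {
            System.out.println(status.getPath().getName() + "  " + status.getLen() + " bytes");
        }
    }
}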

On Wed, Aug 19, 2020 at 2:39 PM, Congxian Qiu <qcx978132...@gmail.com> wrote:

> Hi
>    1. The image did not come through.
>    2. Can you find the file hdfs:*xxxxxxxxxx*/flink/checkpoints/
> 7226f43179649162e6bae2573a952e60/chk-167/_metadata on HDFS?
> Best,
> Congxian
>
>
> On Mon, Aug 17, 2020 at 5:47 PM, Yang Peng <yangpengklf...@gmail.com> wrote:
>
> > Found them. The relevant log is as follows: 2020-08-13 19:45:21,932 ERROR
> > org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Fatal error
> > occurred in the cluster entrypoint.
> >
> > org.apache.flink.runtime.dispatcher.DispatcherException: Failed to take
> leadership with session id 98a2a688-266b-4929-9442-1f0b559ade43.
> >       at
> org.apache.flink.runtime.dispatcher.Dispatcher.lambda$null$30(Dispatcher.java:915)
> >       at
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
> >       at
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
> >       at
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> >       at
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> >       at
> org.apache.flink.runtime.concurrent.FutureUtils$WaitingConjunctFuture.handleCompletedFuture(FutureUtils.java:691)
> >       at
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
> >       at
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
> >       at
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> >       at
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
> >       at
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:739)
> >       at
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
> >       at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
> >       at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190)
> >       at
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
> >       at
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
> >       at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
> >       at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
> >       at
> scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> >       at
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> >       at
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
> >       at
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> >       at
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> >       at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
> >       at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
> >       at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
> >       at akka.actor.ActorCell.invoke(ActorCell.scala:561)
> >       at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
> >       at akka.dispatch.Mailbox.run(Mailbox.scala:225)
> >       at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
> >       at
> akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> >       at
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> >       at
> akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> >       at
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> > Caused by: java.lang.RuntimeException:
> org.apache.flink.runtime.client.JobExecutionException: Could not set up
> JobManager
> >       at
> org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
> >       at
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
> >       at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> >       at
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
> >       ... 4 more
> > Caused by: org.apache.flink.runtime.client.JobExecutionException: Could
> not set up JobManager
> >       at
> org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:152)
> >       at
> org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:83)
> >       at
> org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:375)
> >       at
> org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
> >       ... 7 more
> > Caused by: java.io.FileNotFoundException: Cannot find meta data file
> '_metadata' in directory
> 'hdfs:*xxxxxxxxxx*/flink/checkpoints/7226f43179649162e6bae2573a952e60/chk-167'.
> Please try to load the checkpoint/savepoint directly from the metadata file
> instead of the directory.
> >       at
> org.apache.flink.runtime.state.filesystem.AbstractFsCheckpointStorage.resolveCheckpointPointer(AbstractFsCheckpointStorage.java:258)
> >       at
> org.apache.flink.runtime.state.filesystem.AbstractFsCheckpointStorage.resolveCheckpoint(AbstractFsCheckpointStorage.java:110)
> >       at
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1129)
> >       at
> org.apache.flink.runtime.scheduler.LegacyScheduler.tryRestoreExecutionGraphFromSavepoint(LegacyScheduler.java:237)
> >       at
> org.apache.flink.runtime.scheduler.LegacyScheduler.createAndRestoreExecutionGraph(LegacyScheduler.java:196)
> >       at
> org.apache.flink.runtime.scheduler.LegacyScheduler.<init>(LegacyScheduler.java:176)
> >       at
> org.apache.flink.runtime.scheduler.LegacySchedulerFactory.createInstance(LegacySchedulerFactory.java:70)
> >       at
> org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:275)
> >       at
> org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:265)
> >       at
> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
> >       at
> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
> >       at
> org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:146)
> >       ... 10 more
> > 2020-08-13 19:45:21,941 INFO  org.apache.flink.runtime.blob.BlobServer
>                     - Stopped BLOB server at 0.0.0.0:39267
> >
> >
> > The log above says the checkpoint file cannot be found on HDFS, but when I search the HDFS directory I can see that this checkpoint directory does exist, and it contains sub-files.
> >
> > [image: IMG20200817_174506.png]
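> >
> > (A hedged sketch of what the exception message suggests, not the exact resubmission code: when the JobGraph is built for resubmission, the restore pointer can reference the _metadata file itself instead of the chk-167 directory; the redacted path is again only a placeholder:)
> >
> > import org.apache.flink.runtime.jobgraph.JobGraph;
> > import org.apache.flink.runtime.jobgraph.SavepointRestoreSettings;
> > import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> >
> > public class RestoreFromMetadataSketch {
> >     public static void main(String[] args) throws Exception {
> >         StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> >         // Trivial pipeline only so that a JobGraph can be built; the real job is assembled elsewhere.
> >         env.fromElements(1, 2, 3).print();
> >
> >         JobGraph jobGraph = env.getStreamGraph().getJobGraph();
> >         // Restore pointer referencing the metadata file directly, as the exception recommends.
> >         String pointer = "hdfs:///<redacted>/flink/checkpoints/7226f43179649162e6bae2573a952e60/chk-167/_metadata";
> >         jobGraph.setSavepointRestoreSettings(SavepointRestoreSettings.forPath(pointer, true));
> >         // The JobGraph would then be submitted through a cluster client (omitted here).
> >     }
> > }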
> >
> >
> > On Mon, Aug 17, 2020 at 11:36 AM, Congxian Qiu <qcx978132...@gmail.com> wrote:
> >
> >> Hi
> >>    As for the JM/TM logs: if the job runs on YARN and log aggregation is enabled [1], you should be able to retrieve them.
> >>    As far as I know, there is currently no known issue that makes incremental checkpoints unrecoverable. If you can confirm that the problem you hit causes incremental checkpoint
> >> restore to fail, please consider opening an issue.
> >>
> >> [1]
> >>
> >>
> https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/yarn_setup.html#log-files
> >> Best,
> >> Congxian
> >>
> >>
> >> On Mon, Aug 17, 2020 at 11:22 AM, Yang Peng <yangpengklf...@gmail.com> wrote:
> >>
> >> >
> >> > We submit jobs in detached mode from our in-house development platform, so once a job is submitted we cannot see any further logs. This problem occurred twice that day. Could using incremental checkpoints cause this kind of restore failure?
> >> >
> >> > On Mon, Aug 17, 2020 at 10:39 AM, Congxian Qiu <qcx978132...@gmail.com> wrote:
> >> >
> >> > > Hi
> >> > >    Do you still have the JM and TM logs of the failed job? If so, you can check them to figure out why the
> >> > > restore did not succeed. Since you said the code was not changed at all and the restore still failed, this is rather strange.
> >> > > Best,
> >> > > Congxian
> >> > >
> >> > >
> >> > > On Mon, Aug 17, 2020 at 10:25 AM, Yang Peng <yangpengklf...@gmail.com> wrote:
> >> > >
> >> > > > OK, thanks.
> >> > > >
> >> > > > On Fri, Aug 14, 2020 at 9:22 PM, JasonLee <17610775...@163.com> wrote:
> >> > > >
> >> > > > > Hi,
> >> > > > >
> >> > > > > Without the logs it is hard to pinpoint the cause of the failure, but if uids are not set, restoring from state after a restart can indeed fail. It is best to set an explicit uid on every operator.
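> >> > > > >
> >> > > > > (A minimal illustration of that advice, with made-up operator names: give every operator an explicit, stable uid so its state can be matched on restore:)
> >> > > > >
> >> > > > > import org.apache.flink.api.common.functions.MapFunction;
> >> > > > > import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> >> > > > >
> >> > > > > public class UidExample {
> >> > > > >     public static void main(String[] args) throws Exception {
> >> > > > >         StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> >> > > > >         env.socketTextStream("localhost", 9999)
> >> > > > >                 .uid("source-socket")          // stable uid for the source
> >> > > > >                 .map(new MapFunction<String, String>() {
> >> > > > >                     @Override
> >> > > > >                     public String map(String line) {
> >> > > > >                         return line.toLowerCase();
> >> > > > >                     }
> >> > > > >                 })
> >> > > > >                 .uid("map-lowercase")          // stable uid for the map operator
> >> > > > >                 .print()
> >> > > > >                 .uid("sink-print");            // stable uid for the sink
> >> > > > >         env.execute("uid-example");
> >> > > > >     }
> >> > > > > }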
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > > > --
> >> > > > > Sent from: http://apache-flink.147419.n8.nabble.com/
> >> > > >
> >> > >
> >> >
> >>
> >
>
