You can check the JobManager (JM) log to see whether the job id of the submitted job, or of the job recovered after failover, is 397a081a0313f462818575fc725b3582.
Best,
Yang

RS <tinyshr...@163.com> wrote on Mon, Nov 15, 2021 at 9:53 AM:

> Check the client log; it is usually under Flink's logs directory.
>
> On 2021-11-12 20:59:59, "sky" <sun-kaiy...@qq.com.INVALID> wrote:
> > I am using Flink on YARN. When I run:
> >
> >     flink run -m yarn-cluster ./examples/batch/WordCount.jar
> >
> > it fails with the following error:
> >
> > ------------------------------------------------------------
> >  The program finished with the following exception:
> >
> > org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: org.apache.flink.runtime.rest.util.RestClientException: [org.apache.flink.runtime.rest.handler.RestHandlerException: org.apache.flink.runtime.messages.FlinkJobNotFoundException: Could not find Flink job (397a081a0313f462818575fc725b3582)
> >     at org.apache.flink.runtime.rest.handler.job.JobExecutionResultHandler.propagateException(JobExecutionResultHandler.java:94)
> >     at org.apache.flink.runtime.rest.handler.job.JobExecutionResultHandler.lambda$handleRequest$1(JobExecutionResultHandler.java:84)
> >     at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
> >     ...
> >
> > Could you tell me what is causing this? My configuration file looks like this:
> >
> > #===============================================================================
> > high-availability: zookeeper
> > high-availability.storageDir: hdfs://mycluster/flink/ha/
> > high-availability.zookeeper.quorum: hadoop201:2181,hadoop202:2181,hadoop203:2181
> > high-availability.zookeeper.path.root: /flink
> > high-availability.cluster-id: /default_one # important: customize per cluster
> > # state backend for checkpoints
> > state.backend: filesystem
> > state.checkpoints.dir: hdfs://mycluster/flink/checkpoints
> > # default savepoint storage location
> > state.savepoints.dir: hdfs://mycluster/flink/savepoints
> > # the cluster name must be spelled correctly
> > jobmanager.archive.fs.dir: hdfs://mycluster/flink/completed-jobs/
> > historyserver.archive.fs.dir: hdfs://mycluster/flink/completed-jobs/
> > #===============================================================================
> >
> > Thanks!
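The log check suggested in the reply above can be sketched as a simple grep over a local copy of the JobManager log. The file name `jobmanager.log` and its sample contents are assumptions for illustration only, not real Flink log output; on a YARN deployment the actual JM log lives in the container log directory, or can be fetched with `yarn logs`.

```shell
JOB_ID="397a081a0313f462818575fc725b3582"

# Hypothetical stand-in for the JobManager log, so the command below
# runs anywhere; the two lines are placeholder text, not Flink's
# actual log format.
cat > jobmanager.log <<'EOF'
placeholder: job 397a081a0313f462818575fc725b3582 submitted
placeholder: job 397a081a0313f462818575fc725b3582 recovered after failover
EOF

# Print every line mentioning the job id, with line numbers. Both the
# original submission and the HA recovery should reference the same id.
grep -n "$JOB_ID" jobmanager.log
```

If the id from the client-side error never appears in the JM log, the client is polling a job the (possibly restarted) JobManager does not know about, which matches the FlinkJobNotFoundException below.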