Hello,

Has this issue been resolved? I'm running into the same problem and haven't located the root cause yet.
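
For what it's worth, the timestamps in the exception below look telling. Here's some plain arithmetic on the issueDate/maxDate values copied verbatim from the log (a quick sketch, nothing Flink-specific):

```java
// Sanity check on the HDFS_DELEGATION_TOKEN quoted in the stack trace below.
// Both constants are copied verbatim from the exception message.
public class TokenLifetimeCheck {
    public static void main(String[] args) {
        long issueDateMs = 1_689_734_389_821L; // issueDate (epoch millis), ~2023-07-19 10:39 CST
        long maxDateMs   = 1_690_339_189_821L; // maxDate (epoch millis)
        long lifetimeMs  = maxDateMs - issueDateMs;
        // Prints "604800000 ms = 7.0 days" -- exactly the Hadoop default
        // dfs.namenode.delegation.token.max-lifetime, so maxDate itself was not the problem.
        System.out.println(lifetimeMs + " ms = " + lifetimeMs / 86_400_000.0 + " days");
    }
}
```

The failure at 2023-07-20 11:43 is only about 25 hours after issueDate (assuming the log timestamps are CST), i.e. just past the default 24-hour dfs.namenode.delegation.token.renew-interval. So if I'm reading it right, the token aged out of the NameNode's cache because nothing renewed it, rather than hitting its 7-day max lifetime.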
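My current suspicion is the unified delegation token framework introduced in 1.17 (FLIP-211, if I remember correctly), which moved token obtaining and renewal into the JobManager, so the behavior does differ from 1.12. A minimal sketch of the settings I would double-check, with key names as I recall them from the 1.17 docs (please verify against your build):

```
# Sketch only -- assumes Flink 1.17 delegation token framework key names.
# The framework must stay enabled for tokens to be re-obtained and renewed:
security.delegation.tokens.enabled: true
# New tokens are obtained at this fraction of the renewal interval (default 0.75):
security.delegation.tokens.renewal.time-ratio: 0.75
# Keytab login as in your configuration; the keytab must remain readable so the
# JobManager can re-login and fetch fresh tokens after the initial ones expire:
security.kerberos.login.keytab: /xxx/krb5.keytab
security.kerberos.login.principal: flink/xxx
security.kerberos.login.use-ticket-cache: false
security.kerberos.access.hadoopFileSystems: viewfs://AutoLfCluster;hdfs://ns1
```

It may also be worth checking the JobManager logs for periodic delegation token renewal activity; if there is none, that would narrow it down.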

Best,
Paul Lam

> On Jul 20, 2023, at 12:04, 王刚 <wanggang11...@autohome.com.cn> wrote:
> 
> Exception stack trace:
> ```
> 2023-07-20 11:43:01,627 ERROR org.apache.flink.runtime.taskexecutor.TaskManagerRunner      [] - Terminating TaskManagerRunner with exit code 1.
> org.apache.flink.util.FlinkException: Failed to start the TaskManagerRunner.
>     at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:488) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>     at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.lambda$runTaskManagerProcessSecurely$5(TaskManagerRunner.java:530) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>     at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_92]
>     at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_92]
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) ~[hadoop-common-3.2.1.jar:?]
>     at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>     at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerProcessSecurely(TaskManagerRunner.java:530) [flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>     at org.apache.flink.yarn.YarnTaskExecutorRunner.runTaskManagerSecurely(YarnTaskExecutorRunner.java:94) [flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>     at org.apache.flink.yarn.YarnTaskExecutorRunner.main(YarnTaskExecutorRunner.java:68) [flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
> Caused by: org.apache.hadoop.ipc.RemoteException: token (token for flink: HDFS_DELEGATION_TOKEN owner=flink/lf-client-flink-28-243-196.hadoop.local@HADOOP.LOCAL, renewer=, realUser=, issueDate=1689734389821, maxDate=1690339189821, sequenceNumber=266208479, masterKeyId=1131) can't be found in cache
>     at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1557) ~[hadoop-common-3.2.1.jar:?]
>     at org.apache.hadoop.ipc.Client.call(Client.java:1494) ~[hadoop-common-3.2.1.jar:?]
>     at org.apache.hadoop.ipc.Client.call(Client.java:1391) ~[hadoop-common-3.2.1.jar:?]
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233) ~[hadoop-common-3.2.1.jar:?]
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) ~[hadoop-common-3.2.1.jar:?]
>     at com.sun.proxy.$Proxy26.mkdirs(Unknown Source) ~[?:?]
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:660) ~[hadoop-hdfs-client-3.2.1.jar:?]
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_92]
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_92]
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_92]
>     at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_92]
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) ~[hadoop-common-3.2.1.jar:?]
>     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) ~[hadoop-common-3.2.1.jar:?]
>     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) ~[hadoop-common-3.2.1.jar:?]
>     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) ~[hadoop-common-3.2.1.jar:?]
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) ~[hadoop-common-3.2.1.jar:?]
>     at com.sun.proxy.$Proxy27.mkdirs(Unknown Source) ~[?:?]
>     at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2425) ~[hadoop-hdfs-client-3.2.1.jar:?]
>     at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2401) ~[hadoop-hdfs-client-3.2.1.jar:?]
>     at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1318) ~[hadoop-hdfs-client-3.2.1.jar:?]
>     at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1315) ~[hadoop-hdfs-client-3.2.1.jar:?]
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-3.2.1.jar:?]
>     at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1332) ~[hadoop-hdfs-client-3.2.1.jar:?]
>     at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1307) ~[hadoop-hdfs-client-3.2.1.jar:?]
>     at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2275) ~[hadoop-common-3.2.1.jar:?]
>     at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.mkdirs(HadoopFileSystem.java:183) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>     at org.apache.flink.runtime.blob.FileSystemBlobStore.<init>(FileSystemBlobStore.java:64) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>     at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:108) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>     at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:86) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>     at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createZooKeeperHaServices(HighAvailabilityServicesUtils.java:89) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>     at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:137) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>     at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManagerRunnerServices(TaskManagerRunner.java:195) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>     at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.start(TaskManagerRunner.java:293) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>     at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:486) ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>     ... 8 more
> ```
> 
> 
> Best regards,
> 
> ----------------------------------
> 
> 汽车之家 Autohome Inc.
> 
> 王刚
> 
> Intelligent Data Center / Data Technology Team / Real-time Computing Platform Group
> 
> 9/F, Tower B, China Electronics Building, 3 Danling Street, Haidian District, Beijing 100080
> 
> www.autohome.com.cn
> 
> Tel: 15510811690
> 
> Fax: 15510811690
> 
> Mobile: 15510811690
> 
> ----------------------------------
> 
> ________________________________
> From: 王刚
> Sent: Jul 20, 2023 12:00:01
> To: user-zh@flink.apache.org
> Subject: Flink 1.17.1 YARN token expiration issue
> 
> 
> A Flink 1.17.1 on YARN streaming job ran for a few days and then hit a YARN token expiration problem, which did not occur on 1.12. What exactly has changed in this area, and do I need to configure any additional parameters?
> 
> Configuration:
> 
> ```
> security.kerberos.access.hadoopFileSystems: viewfs://AutoLfCluster;hdfs://ns1
> security.kerberos.login.keytab: /xxx/krb5.keytab
> security.kerberos.login.principal: flink/xxx
> security.kerberos.login.use-ticket-cache: false
> 
> ```
> 
> 
> 
> 
> Best regards,
> 
> 王刚
> 
