Hi, all,

I wrote a test program, and after running for just under three hours the Flink cluster went down: all nodes exited with the following error:

2019-03-12 20:45:14,623 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph        - Job Tbox from 
Kafka Sink To Kafka And Print (21949294d4750b869b341c5d2942d499) switched from 
state RUNNING to FAILING.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
 The directory item limit of /tmp/ha is exceeded: limit=1048576 items=1048576


Output of hdfs count:

2097151            4          124334563 hdfs://banma/tmp/ha
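To see what is actually filling the directory before asking why, it may help to sample its contents. A hedged sketch (paths taken from the error above; the exact file-name prefixes that show up depend on the Flink version):

```shell
# Look at a sample of entries under the HA storage dir to see what kinds
# of files accumulate (typically completedCheckpoint* metadata files,
# submitted job graphs, and blob data).
hdfs dfs -ls hdfs://banma/tmp/ha | head -n 20

# Count entries by name prefix (digits/hex suffix stripped) to see which
# file type dominates the 1M entries.
hdfs dfs -ls hdfs://banma/tmp/ha \
  | awk '{print $NF}' \
  | sed 's#.*/##; s/[0-9a-fA-F_-]*$//' \
  | sort | uniq -c | sort -rn
```

If one prefix accounts for nearly all entries, that points directly at which HA artifact is not being cleaned up.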


Here is the flink-conf.yaml configuration:

[hdfs@qa-hdpdn06 flink-1.7.2]$ cat conf/flink-conf.yaml |grep ^[^#]
jobmanager.rpc.address: 10.4.11.252
jobmanager.rpc.port: 6123
jobmanager.heap.size: 1024m
taskmanager.heap.size: 1024m
taskmanager.numberOfTaskSlots: 10
parallelism.default: 1
 high-availability: zookeeper
 high-availability.storageDir: hdfs://banma/tmp/ha
 high-availability.zookeeper.quorum: qa-hdpdn05.ebanma.com:2181
rest.port: 8081

Flink version: the latest official release, Flink 1.7.2

Why does the high-availability.storageDir directory accumulate so many entries? What is stored in there? Under what circumstances are these writes triggered, and how can this problem be avoided?
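For the "how to avoid it" part: the exception itself comes from HDFS, not Flink. The NameNode enforces a per-directory entry cap via dfs.namenode.fs-limits.max-directory-items, whose default is exactly the 1048576 shown in the error. A hedged sketch for confirming the limit and isolating the HA directory (the subdirectory layout below is an illustration, not a recommendation from the Flink docs):

```shell
# Confirm the NameNode's per-directory entry limit (default 1048576).
hdfs getconf -confKey dfs.namenode.fs-limits.max-directory-items

# A possible mitigation: give each cluster its own HA subdirectory so a
# single directory does not collect every HA artifact, e.g. in
# flink-conf.yaml (path layout is a hypothetical example):
#   high-availability.storageDir: hdfs://banma/flink/ha/my-cluster
```

Raising the NameNode limit only postpones the problem; finding out why entries are not cleaned up (e.g. leftover metadata from failed or frequently-restarting jobs) is the more durable fix.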

Thanks!
