This is a known design flaw in Hadoop 2.x.

In Hadoop 2.x, container requests carry no unique identifier, and the resources
of an allocated container may differ from those requested. To map an allocated
container back to its original request, Flink has to reproduce YARN's
normalization computation. If the configuration on the YARN cluster's RM
machine and on the AM machine are inconsistent, this is a potential risk.
Hadoop 3.x improves on this: every container request has an id that can be used
to match allocated containers to their original requests. For compatibility
with older Hadoop versions, however, Flink still matches containers by
computing resources.

Flink 1.9 did not run into this problem because, by default, all containers had
the same specification, so the matching step was skipped. The Flink community
is currently working on support for requesting and scheduling containers of
different specifications, which is why the container-resource verification
logic was added in 1.11.
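The normalization YARN applies, which Flink has to reproduce on the AM side in
order to match allocated containers back to requests, boils down to rounding
each request up to the next multiple of yarn.scheduler.minimum-allocation-mb.
A minimal sketch (the helper name and pure-Python form are illustrative, not
Flink's actual implementation):

```python
def normalize_memory_mb(requested_mb: int, min_allocation_mb: int) -> int:
    """Round a container memory request up to the next integer multiple of
    yarn.scheduler.minimum-allocation-mb, as YARN's scheduler does."""
    # Ceiling division without floats: -(-a // b) == ceil(a / b)
    return -(-requested_mb // min_allocation_mb) * min_allocation_mb

# With minimum-allocation-mb = 1024m, a 1728m request is granted 2048m.
print(normalize_memory_mb(1728, 1024))  # 2048
```

If the AM reads a different minimum-allocation-mb than the RM actually uses,
the two sides compute different expected sizes for the same request, and the
allocated container fails Flink's resource check.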

Thank you~

Xintong Song



On Tue, Jul 28, 2020 at 2:46 PM 酷酷的浑蛋 <apach...@163.com> wrote:

> Thank you. I changed taskmanager.memory.process.size in flink-conf.yaml from 1728m to 2048m, and the problem is solved.
> Still, I would expect this problem to be fairly common, so why did it hit only me? Is this behavior in Flink by design?
>
>
>
>
> On 2020-07-28 10:26, Xintong Song<tonysong...@gmail.com> wrote:
> I suggest verifying that the YARN configuration "yarn.scheduler.minimum-allocation-mb"
> is identical on the YARN RM machine and the Flink JM machine.
>
> YARN normalizes container requests. For example, if the requested TM
> container is 1728m (taskmanager.memory.process.size) and
> minimum-allocation-mb is 1024m, the actual container size must be an integer
> multiple of minimum-allocation-mb, i.e. 2048m. Flink reads the YARN
> configuration, computes how large the container allocated for a request
> should be, and checks each allocated container against that. Judging from
> the JM log, the allocated container did not pass this check, so Flink
> considered its specification a mismatch. The most likely cause is that the
> minimum-allocation-mb value Flink obtained differs from the one the YARN RM
> actually uses.
>
> Thank you~
>
> Xintong Song
>
>
>
> On Mon, Jul 27, 2020 at 7:42 PM 酷酷的浑蛋 <apach...@163.com> wrote:
>
>
> First of all, submitting to the YARN cluster with Flink 1.9 works fine; submitting Flink 1.11.1 with the same configuration to the same cluster produces the error below:
> 2020-07-27 17:08:14,661 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -
>
> --------------------------------------------------------------------------------
> 2020-07-27 17:08:14,665 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -
> Starting YarnJobClusterEntrypoint (Version: 1.11.1, Scala: 2.11,
> Rev:7eb514a, Date:2020-07-15T07:02:09+02:00)
> 2020-07-27 17:08:14,665 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -  OS
> current user: hadoop
> 2020-07-27 17:08:15,417 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -  Current
> Hadoop/Kerberos user: wangty
> 2020-07-27 17:08:15,418 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -  JVM:
> Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.191-b12
> 2020-07-27 17:08:15,418 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -  Maximum
> heap size: 429 MiBytes
> 2020-07-27 17:08:15,418 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -
> JAVA_HOME: /usr/local/jdk/
> 2020-07-27 17:08:15,419 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -  Hadoop
> version: 2.7.7
> 2020-07-27 17:08:15,419 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -  JVM
> Options:
> 2020-07-27 17:08:15,419 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -
> -Xmx469762048
> 2020-07-27 17:08:15,419 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -
> -Xms469762048
> 2020-07-27 17:08:15,419 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -
> -XX:MaxMetaspaceSize=268435456
> 2020-07-27 17:08:15,419 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -
>
> -Dlog.file=/data/emr/yarn/logs/application_1568724479991_18850539/container_e25_1568724479991_18850539_01_000001/jobmanager.log
> 2020-07-27 17:08:15,419 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -
> -Dlog4j.configuration=file:log4j.properties
> 2020-07-27 17:08:15,419 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -
> -Dlog4j.configurationFile=file:log4j.properties
> 2020-07-27 17:08:15,419 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -  Program
> Arguments: (none)
> 2020-07-27 17:08:15,419 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -
> Classpath:
>
> :lib/flink-csv-1.11.1.jar:lib/flink-json-1.11.1.jar:lib/flink-shaded-zookeeper-3.4.14.jar:lib/flink-table-blink_2.11-1.11.1.jar:lib/flink-table_2.11-1.11.1.jar:lib/log4j-1.2-api-2.12.1.jar:lib/log4j-api-2.12.1.jar:lib/log4j-core-2.12.1.jar:lib/log4j-slf4j-impl-2.12.1.jar:test.jar:flink-dist_2.11-1.11.1.jar:job.graph:flink-conf.yaml::/usr/local/service/hadoop/etc/hadoop:/usr/local/service/hadoop/share/hadoop/common/hadoop-nfs-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/common/hadoop-common-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/service/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/service/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/service/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/service/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/service/hadoop/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/service/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/service/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/service/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/service/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/service/hadoop/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/service/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/service/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/service/hadoop/share/hadoop/common/lib/xmlenc-
0.52.jar:/usr/local/service/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/service/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/service/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/service/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/service/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/service/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/service/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/service/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/service/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/service/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jackson-core-2.2.3.jar:/usr/local/service/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jackson-databind-2.2.3.jar:/usr/local/service/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/service/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/service/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jackson-annotations-2.2.3.jar:/usr/local/service/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/service/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/service/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/service/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/service/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/service/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/service/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/us
r/local/service/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/service/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/service/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/service/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/service/hadoop/share/hadoop/common/lib/hadoop-temrfs-1.0.6.jar:/usr/local/service/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/service/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/service/hadoop/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/usr/local/service/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/service/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/service/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/service/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/service/hadoop/share/hadoop/common/lib/joda-time-2.9.7.jar:/usr/local/service/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/service/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/service/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/usr/local/service/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/us
r/local/service/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/service/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/service/hadoop/share/hadoop/yarn/spark-2.0.2-yarn-shuffle.jar:/usr/local/service/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/yarn/hadoop-yarn-s
erver-resourcemanager-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.j
ar:/usr/local/service/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/hadoop-lzo-0.4.20.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/service/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar
> 2020-07-27 17:08:15,420 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -
>
> --------------------------------------------------------------------------------
> 2020-07-27 17:08:15,421 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -
> Registered UNIX signal handlers for [TERM, HUP, INT]
> 2020-07-27 17:08:15,424 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] - YARN
> daemon is running as: wangty Yarn client user obtainer: wangty
> 2020-07-27 17:08:15,427 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: taskmanager.memory.process.size, 1728m
> 2020-07-27 17:08:15,427 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: internal.jobgraph-path, job.graph
> 2020-07-27 17:08:15,427 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: jobmanager.execution.failover-strategy, region
> 2020-07-27 17:08:15,427 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: high-availability.cluster-id,
> application_1568724479991_18850539
> 2020-07-27 17:08:15,428 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: jobmanager.rpc.address, localhost
> 2020-07-27 17:08:15,428 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: execution.target, yarn-per-job
> 2020-07-27 17:08:15,428 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: jobmanager.memory.process.size, 1 gb
> 2020-07-27 17:08:15,428 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: jobmanager.rpc.port, 6123
> 2020-07-27 17:08:15,428 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: execution.savepoint.ignore-unclaimed-state, false
> 2020-07-27 17:08:15,428 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: execution.attached, true
> 2020-07-27 17:08:15,428 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: internal.cluster.execution-mode, NORMAL
> 2020-07-27 17:08:15,428 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: execution.shutdown-on-attached-exit, false
> 2020-07-27 17:08:15,428 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: pipeline.jars,
> file:/data/rt/jar_version/sql/test.jar
> 2020-07-27 17:08:15,428 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: parallelism.default, 3
> 2020-07-27 17:08:15,428 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: taskmanager.numberOfTaskSlots, 1
> 2020-07-27 17:08:15,428 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: pipeline.classpaths,
> http://x.x.32.138:38088/rt/udf/download?udfname=test
> 2020-07-27 17:08:15,428 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: yarn.application.name, RTC_TEST
> 2020-07-27 17:08:15,428 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: yarn.application.queue, root.dp.dp_online
> 2020-07-27 17:08:15,429 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: $internal.deployment.config-dir,
> /data/server/flink-1.11.1/conf
> 2020-07-27 17:08:15,429 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: $internal.yarn.log-config-file,
> /data/server/flink-1.11.1/conf/log4j.properties
> 2020-07-27 17:08:15,455 WARN  org.apache.flink.configuration.Configuration
> [] - Config uses deprecated configuration key 'web.port'
> instead of proper key 'rest.bind-port'
> 2020-07-27 17:08:15,465 INFO
> org.apache.flink.runtime.clusterframework.BootstrapTools     [] - Setting
> directories for temporary files to:
>
> /data1/emr/yarn/local/usercache/wangty/appcache/application_1568724479991_18850539,/data2/emr/yarn/local/usercache/wangty/appcache/application_1568724479991_18850539,/data3/emr/yarn/local/usercache/wangty/appcache/application_1568724479991_18850539
> 2020-07-27 17:08:15,471 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] - Starting
> YarnJobClusterEntrypoint.
> 2020-07-27 17:08:15,993 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] - Install
> default filesystem.
> 2020-07-27 17:08:16,235 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] - Install
> security context.
> 2020-07-27 17:08:16,715 INFO
> org.apache.flink.runtime.security.modules.HadoopModule       [] - Hadoop
> user set to wangty (auth:SIMPLE)
> 2020-07-27 17:08:16,722 INFO
> org.apache.flink.runtime.security.modules.JaasModule         [] - Jaas
> file will be created as
>
> /data1/emr/yarn/local/usercache/wangty/appcache/application_1568724479991_18850539/jaas-8303363038541870345.conf.
> 2020-07-27 17:08:16,729 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -
> Initializing cluster services.
> 2020-07-27 17:08:16,741 INFO
> org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils        [] - Trying
> to start actor system, external address x.x.5.60:0, bind address 0.0.0.0:0
> .
> 2020-07-27 17:08:18,830 INFO  akka.event.slf4j.Slf4jLogger
> [] - Slf4jLogger started
> 2020-07-27 17:08:19,781 INFO  akka.remote.Remoting
> [] - Starting remoting
> 2020-07-27 17:08:19,936 INFO  akka.remote.Remoting
> [] - Remoting started; listening on addresses
> :[akka.tcp://flink@x.x.x.60:36696]
> 2020-07-27 17:08:20,021 INFO
> org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils        [] - Actor
> system started at akka.tcp://flink@x.x.x.60:36696
> 2020-07-27 17:08:20,042 WARN  org.apache.flink.configuration.Configuration
> [] - Config uses deprecated configuration key 'web.port'
> instead of proper key 'rest.port'
> 2020-07-27 17:08:20,049 INFO  org.apache.flink.runtime.blob.BlobServer
> [] - Created BLOB server storage directory
>
> /data3/emr/yarn/local/usercache/wangty/appcache/application_1568724479991_18850539/blobStore-86aff9db-0f30-4688-9e68-b8e5866a93c7
> 2020-07-27 17:08:20,054 INFO  org.apache.flink.runtime.blob.BlobServer
> [] - Started BLOB server at 0.0.0.0:56782 - max
> concurrent requests: 50 - max backlog: 1000
> 2020-07-27 17:08:20,063 INFO
> org.apache.flink.runtime.metrics.MetricRegistryImpl          [] - No
> metrics reporter configured, no metrics will be exposed/reported.
> 2020-07-27 17:08:20,066 INFO
> org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils        [] - Trying
> to start actor system, external address x.x.5.60:0, bind address 0.0.0.0:0
> .
> 2020-07-27 17:08:20,082 INFO  akka.event.slf4j.Slf4jLogger
> [] - Slf4jLogger started
> 2020-07-27 17:08:20,086 INFO  akka.remote.Remoting
> [] - Starting remoting
> 2020-07-27 17:08:20,093 INFO  akka.remote.Remoting
> [] - Remoting started; listening on addresses
> :[akka.tcp://flink-metrics@x.x.5.60:60801]
> 2020-07-27 17:08:20,794 INFO
> org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils        [] - Actor
> system started at akka.tcp://flink-metrics@x.x.5.60:60801
> 2020-07-27 17:08:20,810 INFO
> org.apache.flink.runtime.rpc.akka.AkkaRpcService             [] - Starting
> RPC endpoint for org.apache.flink.runtime.metrics.dump.MetricQueryService
> at akka://flink-metrics/user/rpc/MetricQueryService .
> 2020-07-27 17:08:20,856 WARN  org.apache.flink.configuration.Configuration
> [] - Config uses deprecated configuration key 'web.port'
> instead of proper key 'rest.bind-port'
> 2020-07-27 17:08:20,858 INFO
> org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint [] - Upload
> directory
> /tmp/flink-web-f3b225c5-e01d-4dfb-9091-aca7bb8e6192/flink-web-upload does
> not exist.
> 2020-07-27 17:08:20,859 INFO
> org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint [] - Created
> directory
> /tmp/flink-web-f3b225c5-e01d-4dfb-9091-aca7bb8e6192/flink-web-upload for
> file uploads.
> 2020-07-27 17:08:20,874 INFO
> org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint [] -
> Starting rest endpoint.
> 2020-07-27 17:08:21,103 INFO
> org.apache.flink.runtime.webmonitor.WebMonitorUtils          [] -
> Determined location of main cluster component log file:
>
> /data/emr/yarn/logs/application_1568724479991_18850539/container_e25_1568724479991_18850539_01_000001/jobmanager.log
> 2020-07-27 17:08:21,103 INFO
> org.apache.flink.runtime.webmonitor.WebMonitorUtils          [] -
> Determined location of main cluster component stdout file:
>
> /data/emr/yarn/logs/application_1568724479991_18850539/container_e25_1568724479991_18850539_01_000001/jobmanager.out
> 2020-07-27 17:08:21,241 INFO
> org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint [] - Rest
> endpoint listening at x.x.5.60:46723
> 2020-07-27 17:08:21,242 INFO
> org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint [] -
> http://x.x.5.60:46723 was granted leadership with
> leaderSessionID=00000000-0000-0000-0000-000000000000
> 2020-07-27 17:08:21,243 INFO
> org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint [] - Web
> frontend listening at http://x.x.5.60:46723.
> 2020-07-27 17:08:21,256 INFO
> org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils [] - The
> derived from fraction jvm overhead memory (172.800mb (181193935 bytes)) is
> less than its min value 192.000mb (201326592 bytes), min value will be used
> instead
> 2020-07-27 17:08:21,304 INFO
> org.apache.flink.runtime.rpc.akka.AkkaRpcService             [] - Starting
> RPC endpoint for org.apache.flink.yarn.YarnResourceManager at
> akka://flink/user/rpc/resourcemanager_0 .
> 2020-07-27 17:08:21,314 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: taskmanager.memory.process.size, 1728m
> 2020-07-27 17:08:21,314 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: internal.jobgraph-path, job.graph
> 2020-07-27 17:08:21,314 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: jobmanager.execution.failover-strategy, region
> 2020-07-27 17:08:21,314 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: high-availability.cluster-id,
> application_1568724479991_18850539
> 2020-07-27 17:08:21,314 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: jobmanager.rpc.address, localhost
> 2020-07-27 17:08:21,314 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: execution.target, yarn-per-job
> 2020-07-27 17:08:21,314 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: jobmanager.memory.process.size, 1 gb
> 2020-07-27 17:08:21,314 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: jobmanager.rpc.port, 6123
> 2020-07-27 17:08:21,314 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: execution.savepoint.ignore-unclaimed-state, false
> 2020-07-27 17:08:21,314 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: execution.attached, true
> 2020-07-27 17:08:21,314 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: internal.cluster.execution-mode, NORMAL
> 2020-07-27 17:08:21,315 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: execution.shutdown-on-attached-exit, false
> 2020-07-27 17:08:21,315 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: pipeline.jars,
> file:/data/rt/jar_version/sql/test.jar
> 2020-07-27 17:08:21,315 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: parallelism.default, 3
> 2020-07-27 17:08:21,315 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: taskmanager.numberOfTaskSlots, 1
> 2020-07-27 17:08:21,315 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: pipeline.classpaths,
> http://x.x.32.138:38088/rt/udf/download?udfname=test
> 2020-07-27 17:08:21,315 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: yarn.application.name, RTC_TEST
> 2020-07-27 17:08:21,315 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: yarn.application.queue, root.dp.dp_online
> 2020-07-27 17:08:21,315 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: $internal.deployment.config-dir,
> /data/server/flink-1.11.1/conf
> 2020-07-27 17:08:21,315 INFO
> org.apache.flink.configuration.GlobalConfiguration           [] - Loading
> configuration property: $internal.yarn.log-config-file,
> /data/server/flink-1.11.1/conf/log4j.properties
> 2020-07-27 17:08:21,333 INFO
> org.apache.flink.runtime.externalresource.ExternalResourceUtils [] -
> Enabled external resources: []
> 2020-07-27 17:08:21,334 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Cannot get scheduler resource types: This YARN
> version does not support 'getSchedulerResourceTypes'
> 2020-07-27 17:08:21,375 INFO
> org.apache.flink.runtime.dispatcher.runner.JobDispatcherLeaderProcess [] -
> Start JobDispatcherLeaderProcess.
> 2020-07-27 17:08:21,379 INFO
> org.apache.flink.runtime.rpc.akka.AkkaRpcService             [] - Starting
> RPC endpoint for org.apache.flink.runtime.dispatcher.MiniDispatcher at
> akka://flink/user/rpc/dispatcher_1 .
> 2020-07-27 17:08:21,408 INFO
> org.apache.flink.runtime.rpc.akka.AkkaRpcService             [] - Starting
> RPC endpoint for org.apache.flink.runtime.jobmaster.JobMaster at
> akka://flink/user/rpc/jobmanager_2 .
> 2020-07-27 17:08:21,414 INFO  org.apache.flink.runtime.jobmaster.JobMaster
> [] - Initializing job RTC_TEST
> (9f074e66a0f70274c7a7af42e71525fb).
> 2020-07-27 17:08:21,437 INFO  org.apache.flink.runtime.jobmaster.JobMaster
> [] - Using restart back off time strategy
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=5,
> backoffTimeMS=10000) for RTC_TEST (9f074e66a0f70274c7a7af42e71525fb).
> 2020-07-27 17:08:21,472 INFO  org.apache.flink.runtime.jobmaster.JobMaster
> [] - Running initialization on master for job RTC_TEST
> (9f074e66a0f70274c7a7af42e71525fb).
> 2020-07-27 17:08:21,472 INFO  org.apache.flink.runtime.jobmaster.JobMaster
> [] - Successfully ran initialization on master in 0 ms.
> 2020-07-27 17:08:21,488 INFO
> org.apache.flink.runtime.scheduler.adapter.DefaultExecutionTopology [] -
> Built 3 pipelined regions in 1 ms
> 2020-07-27 17:08:21,542 INFO  org.apache.flink.runtime.jobmaster.JobMaster
> [] - Using application-defined state backend:
> RocksDBStateBackend{checkpointStreamBackend=File State Backend
> (checkpoints: 'hdfs://HDFS00000/data/checkpoint-data/wangty/RTC_TEST',
> savepoints: 'null', asynchronous: UNDEFINED, fileStateThreshold: -1),
> localRocksDbDirectories=null, enableIncrementalCheckpointing=FALSE,
> numberOfTransferThreads=-1, writeBatchSize=-1}
> 2020-07-27 17:08:21,543 INFO  org.apache.flink.runtime.jobmaster.JobMaster
> [] - Configuring application-defined state backend with
> job/cluster config
> 2020-07-27 17:08:21,568 INFO
> org.apache.flink.contrib.streaming.state.RocksDBStateBackend [] - Using
> predefined options: DEFAULT.
> 2020-07-27 17:08:21,569 INFO
> org.apache.flink.contrib.streaming.state.RocksDBStateBackend [] - Using
> default options factory:
> DefaultConfigurableOptionsFactory{configuredOptions={}}.
> 2020-07-27 17:08:21,714 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Recovered 0 containers from previous attempts ([]).
> 2020-07-27 17:08:21,716 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Register application master response does not contain
> scheduler resource types, use
> '$internal.yarn.resourcemanager.enable-vcore-matching'.
> 2020-07-27 17:08:21,716 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Container matching strategy: IGNORE_VCORE.
> 2020-07-27 17:08:21,719 INFO
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl [] - Upper
> bound of the thread pool size is 500
> 2020-07-27 17:08:21,720 INFO
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy []
> - yarn.client.max-nodemanagers-proxies : 500
> 2020-07-27 17:08:21,723 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - ResourceManager akka.tcp://flink@x.x.5.60
> :36696/user/rpc/resourcemanager_0
> was granted leadership with fencing token 00000000000000000000000000000000
> 2020-07-27 17:08:21,727 INFO
> org.apache.flink.runtime.resourcemanager.slotmanager.SlotManagerImpl [] -
> Starting the SlotManager.
> 2020-07-27 17:08:22,126 INFO  org.apache.flink.runtime.jobmaster.JobMaster
> [] - Using failover strategy
>
> org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy@c1dab34
> for RTC_TEST (9f074e66a0f70274c7a7af42e71525fb).
> 2020-07-27 17:08:22,130 INFO
> org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl      [] -
> JobManager runner for job RTC_TEST (9f074e66a0f70274c7a7af42e71525fb) was
> granted leadership with session id 00000000-0000-0000-0000-000000000000 at
> akka.tcp://flink@x.x.5.60:36696/user/rpc/jobmanager_2.
> 2020-07-27 17:08:22,133 INFO  org.apache.flink.runtime.jobmaster.JobMaster
> [] - Starting execution of job RTC_TEST
> (9f074e66a0f70274c7a7af42e71525fb) under job master id
> 00000000000000000000000000000000.
> 2020-07-27 17:08:22,135 INFO  org.apache.flink.runtime.jobmaster.JobMaster
> [] - Starting scheduling with scheduling strategy
> [org.apache.flink.runtime.scheduler.strategy.EagerSchedulingStrategy]
> 2020-07-27 17:08:22,135 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Job
> RTC_TEST (9f074e66a0f70274c7a7af42e71525fb) switched from state CREATED to
> RUNNING.
> 2020-07-27 17:08:22,145 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source:
> rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (1/3)
> (8bb9f7b4bcc93895851ec47123d2213a) switched from CREATED to SCHEDULED.
> 2020-07-27 17:08:22,145 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source:
> rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (2/3)
> (647da02fb921931e1a35ba4265d95c04) switched from CREATED to SCHEDULED.
> 2020-07-27 17:08:22,145 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source:
> rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (3/3)
> (78211c4a866e216b6c821b743b2bf52d) switched from CREATED to SCHEDULED.
> 2020-07-27 17:08:22,158 INFO
> org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl     [] - Cannot
> serve slot request, no ResourceManager connected. Adding as pending request
> [SlotRequestId{de40e772bd7366814b7ed234f5cdfc53}]
> 2020-07-27 17:08:22,162 INFO
> org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl     [] - Cannot
> serve slot request, no ResourceManager connected. Adding as pending request
> [SlotRequestId{3ca5a64e992d87a27f207e6020eea047}]
> 2020-07-27 17:08:22,162 INFO
> org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl     [] - Cannot
> serve slot request, no ResourceManager connected. Adding as pending request
> [SlotRequestId{4945a0b0f9dcfe7547cfefab3ee59be7}]
> 2020-07-27 17:08:22,166 INFO  org.apache.flink.runtime.jobmaster.JobMaster
> [] - Connecting to ResourceManager akka.tcp://flink@x.x.5.60
> :36696/user/rpc/resourcemanager_*(00000000000000000000000000000000)
> 2020-07-27 17:08:22,170 INFO  org.apache.flink.runtime.jobmaster.JobMaster
> [] - Resolved ResourceManager address, beginning
> registration
> 2020-07-27 17:08:22,173 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Registering job manager
> 00000000000000000000000000000...@akka.tcp://flink@x.x.5.60
> :36696/user/rpc/jobmanager_2
> for job 9f074e66a0f70274c7a7af42e71525fb.
> 2020-07-27 17:08:22,177 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Registered job manager
> 00000000000000000000000000000...@akka.tcp://flink@x.x.5.60
> :36696/user/rpc/jobmanager_2
> for job 9f074e66a0f70274c7a7af42e71525fb.
> 2020-07-27 17:08:22,180 INFO  org.apache.flink.runtime.jobmaster.JobMaster
> [] - JobManager successfully registered at ResourceManager,
> leader id: 00000000000000000000000000000000.
> 2020-07-27 17:08:22,180 INFO
> org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl     [] -
> Requesting new slot [SlotRequestId{de40e772bd7366814b7ed234f5cdfc53}] and
> profile ResourceProfile{UNKNOWN} from resource manager.
> 2020-07-27 17:08:22,181 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Request slot with profile ResourceProfile{UNKNOWN}
> for job 9f074e66a0f70274c7a7af42e71525fb with allocation id
> 4f255670332ee6a2bc934336cc0cee4c.
> 2020-07-27 17:08:22,181 INFO
> org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl     [] -
> Requesting new slot [SlotRequestId{3ca5a64e992d87a27f207e6020eea047}] and
> profile ResourceProfile{UNKNOWN} from resource manager.
> 2020-07-27 17:08:22,182 INFO
> org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl     [] -
> Requesting new slot [SlotRequestId{4945a0b0f9dcfe7547cfefab3ee59be7}] and
> profile ResourceProfile{UNKNOWN} from resource manager.
> 2020-07-27 17:08:22,190 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Requesting new TaskExecutor container with resource
> WorkerResourceSpec {cpuCores=1.0, taskHeapSize=384.000mb (402653174 bytes),
> taskOffHeapSize=0 bytes, networkMemSize=128.000mb (134217730 bytes),
> managedMemSize=512.000mb (536870920 bytes)}. Number pending workers of this
> resource is 1.
> 2020-07-27 17:08:22,192 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Request slot with profile ResourceProfile{UNKNOWN}
> for job 9f074e66a0f70274c7a7af42e71525fb with allocation id
> f29dfd0f0ba7639992b6fe90ca0f3524.
> 2020-07-27 17:08:22,192 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Requesting new TaskExecutor container with resource
> WorkerResourceSpec {cpuCores=1.0, taskHeapSize=384.000mb (402653174 bytes),
> taskOffHeapSize=0 bytes, networkMemSize=128.000mb (134217730 bytes),
> managedMemSize=512.000mb (536870920 bytes)}. Number pending workers of this
> resource is 2.
> 2020-07-27 17:08:22,193 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Request slot with profile ResourceProfile{UNKNOWN}
> for job 9f074e66a0f70274c7a7af42e71525fb with allocation id
> 8a9e009c987a09bf4e36843f7a6eab62.
> 2020-07-27 17:08:22,193 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Requesting new TaskExecutor container with resource
> WorkerResourceSpec {cpuCores=1.0, taskHeapSize=384.000mb (402653174 bytes),
> taskOffHeapSize=0 bytes, networkMemSize=128.000mb (134217730 bytes),
> managedMemSize=512.000mb (536870920 bytes)}. Number pending workers of this
> resource is 3.
> 2020-07-27 17:08:27,253 INFO
> org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl        [] - Received
> new token for : x.x.6.54:5006
> 2020-07-27 17:08:27,254 INFO
> org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl        [] - Received
> new token for : x.x.4.230:5006
> 2020-07-27 17:08:27,254 INFO
> org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl        [] - Received
> new token for : x.x.5.229:5006
> 2020-07-27 17:08:27,256 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Received 3 containers.
> 2020-07-27 17:08:27,259 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Received 3 containers with resource <memory:2048,
> vCores:1>, 0 pending container requests.
> 2020-07-27 17:08:27,261 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Returning excess container
> container_1568724479991_18850539_01_000002.
> 2020-07-27 17:08:27,262 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Returning excess container
> container_1568724479991_18850539_01_000003.
> 2020-07-27 17:08:27,262 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Returning excess container
> container_1568724479991_18850539_01_000004.
> 2020-07-27 17:08:27,262 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Accepted 0 requested containers, returned 3 excess
> containers, 0 pending container requests of resource <memory:2048,
> vCores:1>.
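
The "0 pending container requests … returned 3 excess containers" lines are the core of the failure: YARN rounds every container request up to the next multiple of `yarn.scheduler.minimum-allocation-mb`, and Flink 1.11 recomputes that normalized size from its own copy of the YARN configuration to match allocated containers back to requests. If the JM reads a different `minimum-allocation-mb` than the RM actually uses, the expected and allocated sizes diverge and every container is treated as excess. A minimal sketch of that normalization (the 1024/3072 values are hypothetical, chosen only to illustrate a mismatch):

```python
import math

def normalize_mb(requested_mb: int, min_alloc_mb: int) -> int:
    """Round a container request up to the next multiple of
    yarn.scheduler.minimum-allocation-mb, as the YARN RM does."""
    return math.ceil(requested_mb / min_alloc_mb) * min_alloc_mb

# A 1728 MB request (taskmanager.memory.process.size) with
# minimum-allocation-mb = 1024 normalizes to 2048 MB:
print(normalize_mb(1728, 1024))  # 2048

# If the Flink JM reads a different minimum-allocation-mb than the RM
# actually applies, expected and allocated sizes diverge, and Flink
# returns every allocated container as "excess":
expected = normalize_mb(1728, 1024)   # JM-side computation: 2048
allocated = normalize_mb(1728, 3072)  # hypothetical RM with min-alloc 3072: 3072
print(expected == allocated)  # False
```

This is why the advice upthread is to compare `yarn.scheduler.minimum-allocation-mb` on the YARN RM and on the Flink JM machine: the matching is only as reliable as the two configs being identical.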
> 2020-07-27 17:08:45,630 INFO
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] -
> Checkpoint triggering task Source: rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (1/3) of job
> 9f074e66a0f70274c7a7af42e71525fb is not in state RUNNING but SCHEDULED
> instead. Aborting checkpoint.
> [the same "Aborting checkpoint" entry repeats every 30 seconds, at
> 17:09:15 through 17:13:15; identical repeats omitted]
> 2020-07-27 17:13:22,167 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source:
> rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (1/3)
> (8bb9f7b4bcc93895851ec47123d2213a) switched from SCHEDULED to FAILED on not
> deployed.
> org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException:
> Could not allocate the required slot within slot request timeout. Please
> make sure that the cluster has enough resources.
> at
>
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeWrapWithNoResourceAvailableException(DefaultScheduler.java:441)
> ~[test.jar:?]
> at
>
> org.apache.flink.runtime.scheduler.DefaultScheduler.lambda$assignResourceOrHandleError$6(DefaultScheduler.java:422)
> ~[test.jar:?]
> at
>
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
> ~[?:1.8.0_191]
> at
>
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
> ~[?:1.8.0_191]
> at
>
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> ~[?:1.8.0_191]
> at
>
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> ~[?:1.8.0_191]
> at
>
> org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl.lambda$internalAllocateSlot$0(SchedulerImpl.java:168)
> ~[test.jar:?]
> at
>
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
> ~[?:1.8.0_191]
> at
>
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
> ~[?:1.8.0_191]
> at
>
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> ~[?:1.8.0_191]
> at
>
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> ~[?:1.8.0_191]
> at
>
> org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$SingleTaskSlot.release(SlotSharingManager.java:726)
> ~[test.jar:?]
> at
>
> org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$MultiTaskSlot.release(SlotSharingManager.java:537)
> ~[test.jar:?]
> at
>
> org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$MultiTaskSlot.lambda$new$0(SlotSharingManager.java:432)
> ~[test.jar:?]
> at
>
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
> ~[?:1.8.0_191]
> at
>
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
> ~[?:1.8.0_191]
> at
>
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> ~[?:1.8.0_191]
> at
>
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> ~[?:1.8.0_191]
> at
>
> org.apache.flink.runtime.concurrent.FutureUtils.lambda$forwardTo$21(FutureUtils.java:1120)
> ~[test.jar:?]
> at
>
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
> ~[?:1.8.0_191]
> at
>
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
> ~[?:1.8.0_191]
> at
>
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> ~[?:1.8.0_191]
> at
>
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> ~[?:1.8.0_191]
> at
>
> org.apache.flink.runtime.concurrent.FutureUtils$Timeout.run(FutureUtils.java:1036)
> ~[test.jar:?]
> at
>
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:402)
> ~[test.jar:?]
> at
>
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:195)
> ~[test.jar:?]
> at
>
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
> ~[test.jar:?]
> at
>
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
> ~[test.jar:?]
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
> [test.jar:?]
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
> [test.jar:?]
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> [test.jar:?]
> at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> [test.jar:?]
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
> [test.jar:?]
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> [test.jar:?]
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> [test.jar:?]
> at akka.actor.Actor$class.aroundReceive(Actor.scala:517) [test.jar:?]
> at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
> [test.jar:?]
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) [test.jar:?]
> at akka.actor.ActorCell.invoke(ActorCell.scala:561) [test.jar:?]
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) [test.jar:?]
> at akka.dispatch.Mailbox.run(Mailbox.scala:225) [test.jar:?]
> at akka.dispatch.Mailbox.exec(Mailbox.scala:235) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> [test.jar:?]
> at
>
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> [test.jar:?]
> at
>
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> [test.jar:?]
> Caused by: java.util.concurrent.CompletionException:
> java.util.concurrent.TimeoutException
> at
>
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
> ~[?:1.8.0_191]
> at
>
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
> ~[?:1.8.0_191]
> at
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
> ~[?:1.8.0_191]
> at
>
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> ~[?:1.8.0_191]
> ... 25 more
> Caused by: java.util.concurrent.TimeoutException
> ... 23 more
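
The resolution reported later in this thread was to make the requested TaskManager size already a multiple of the cluster's minimum allocation, so YARN's normalization does not change it. A minimal `flink-conf.yaml` sketch of that workaround (the 2048m value assumes `yarn.scheduler.minimum-allocation-mb` is 1024 on both the RM and the JM machine; adjust to your cluster):

```yaml
# flink-conf.yaml: request a TM container whose size is already a
# multiple of yarn.scheduler.minimum-allocation-mb, so the size YARN
# hands back equals the size Flink expects.
taskmanager.memory.process.size: 2048m
```

This sidesteps the symptom; the root-cause fix is still to keep the YARN configuration consistent between the RM and the Flink JM host.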
> 2020-07-27 17:13:22,177 INFO
>
> org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy
> [] - Calculating tasks to restart to recover the failed task
> 66774974224095ed80fefb1f583d9fb9_0.
> 2020-07-27 17:13:22,178 INFO
>
> org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy
> [] - 1 tasks should be restarted to recover the failed task
> 66774974224095ed80fefb1f583d9fb9_0.
> 2020-07-27 17:13:22,179 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Job
> RTC_TEST (9f074e66a0f70274c7a7af42e71525fb) switched from state RUNNING to
> RESTARTING.
> 2020-07-27 17:13:22,181 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] -
> Discarding the results produced by task execution
> 8bb9f7b4bcc93895851ec47123d2213a.
> 2020-07-27 17:13:22,183 INFO
> org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl     [] - Pending
> slot request [SlotRequestId{de40e772bd7366814b7ed234f5cdfc53}] timed out.
> 2020-07-27 17:13:22,185 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source:
> rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (2/3)
> (647da02fb921931e1a35ba4265d95c04) switched from SCHEDULED to FAILED on not
> deployed.
> org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException:
> Could not allocate the required slot within slot request timeout. Please
> make sure that the cluster has enough resources.
> [identical stack trace to the one above; omitted]
> 2020-07-27 17:13:22,188 INFO
>
> org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy
> [] - Calculating tasks to restart to recover the failed task
> 66774974224095ed80fefb1f583d9fb9_1.
> 2020-07-27 17:13:22,188 INFO
>
> org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy
> [] - 1 tasks should be restarted to recover the failed task
> 66774974224095ed80fefb1f583d9fb9_1.
> 2020-07-27 17:13:22,188 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] -
> Discarding the results produced by task execution
> 647da02fb921931e1a35ba4265d95c04.
> 2020-07-27 17:13:22,189 INFO
> org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl     [] - Pending
> slot request [SlotRequestId{3ca5a64e992d87a27f207e6020eea047}] timed out.
> 2020-07-27 17:13:22,190 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source:
> rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (3/3)
> (78211c4a866e216b6c821b743b2bf52d) switched from SCHEDULED to FAILED on not
> deployed.
> org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException:
> Could not allocate the required slot within slot request timeout. Please
> make sure that the cluster has enough resources.
> [identical stack trace to the one above; omitted]
> [test.jar:?]
> Caused by: java.util.concurrent.CompletionException:
> java.util.concurrent.TimeoutException
> at
>
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
> ~[?:1.8.0_191]
> at
>
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
> ~[?:1.8.0_191]
> at
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
> ~[?:1.8.0_191]
> at
>
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> ~[?:1.8.0_191]
> ... 25 more
> Caused by: java.util.concurrent.TimeoutException
> ... 23 more
> 2020-07-27 17:13:22,192 INFO org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy [] - Calculating tasks to restart to recover the failed task 66774974224095ed80fefb1f583d9fb9_2.
> 2020-07-27 17:13:22,192 INFO org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy [] - 1 tasks should be restarted to recover the failed task 66774974224095ed80fefb1f583d9fb9_2.
> 2020-07-27 17:13:22,192 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Discarding the results produced by task execution 78211c4a866e216b6c821b743b2bf52d.
> 2020-07-27 17:13:22,193 INFO org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl [] - Pending slot request [SlotRequestId{4945a0b0f9dcfe7547cfefab3ee59be7}] timed out.
> 2020-07-27 17:13:32,184 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source:
> rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (1/3)
> (de96a2863fbb58f21cf987db0cbe380d) switched from CREATED to SCHEDULED.
> 2020-07-27 17:13:32,185 INFO
> org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl     [] -
> Requesting new slot [SlotRequestId{f8186c1e488ec15065a2f844d97c895c}] and
> profile ResourceProfile{UNKNOWN} from resource manager.
> 2020-07-27 17:13:32,185 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Request slot with profile ResourceProfile{UNKNOWN}
> for job 9f074e66a0f70274c7a7af42e71525fb with allocation id
> 19be4edc7eadca2c14dd9ad6d1a87dcd.
> 2020-07-27 17:13:32,189 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source:
> rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (2/3)
> (c8afff8ed430d1e5caf02d3d8383723c) switched from CREATED to SCHEDULED.
> 2020-07-27 17:13:32,189 INFO
> org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl     [] -
> Requesting new slot [SlotRequestId{15c4647c462886ed5cfd9e146f9a0562}] and
> profile ResourceProfile{UNKNOWN} from resource manager.
> 2020-07-27 17:13:32,189 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Request slot with profile ResourceProfile{UNKNOWN}
> for job 9f074e66a0f70274c7a7af42e71525fb with allocation id
> 60ef8787a8c67ed7cbc198d71838ba22.
> 2020-07-27 17:13:32,192 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Job
> RTC_TEST (9f074e66a0f70274c7a7af42e71525fb) switched from state RESTARTING
> to RUNNING.
> 2020-07-27 17:13:32,193 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source:
> rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (3/3)
> (08889c02dcc65774fe7ea42522f89ca9) switched from CREATED to SCHEDULED.
> 2020-07-27 17:13:32,193 INFO
> org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl     [] -
> Requesting new slot [SlotRequestId{1ee4efd530c32169b68de993ea4b4460}] and
> profile ResourceProfile{UNKNOWN} from resource manager.
> 2020-07-27 17:13:32,194 INFO  org.apache.flink.yarn.YarnResourceManager
> [] - Request slot with profile ResourceProfile{UNKNOWN}
> for job 9f074e66a0f70274c7a7af42e71525fb with allocation id
> 3122df3dc780c5924fbd479a886d435b.
> 2020-07-27 17:13:57,064 INFO
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] -
> Checkpoint triggering task Source: rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (1/3) of job
> 9f074e66a0f70274c7a7af42e71525fb is not in state RUNNING but SCHEDULED
> instead. Aborting checkpoint.
> 2020-07-27 17:14:27,023 INFO
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] -
> Checkpoint triggering task Source: rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (1/3) of job
> 9f074e66a0f70274c7a7af42e71525fb is not in state RUNNING but SCHEDULED
> instead. Aborting checkpoint.
> 2020-07-27 17:14:57,023 INFO
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] -
> Checkpoint triggering task Source: rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (1/3) of job
> 9f074e66a0f70274c7a7af42e71525fb is not in state RUNNING but SCHEDULED
> instead. Aborting checkpoint.
> 2020-07-27 17:15:27,023 INFO
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] -
> Checkpoint triggering task Source: rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (1/3) of job
> 9f074e66a0f70274c7a7af42e71525fb is not in state RUNNING but SCHEDULED
> instead. Aborting checkpoint.
> 2020-07-27 17:15:57,023 INFO
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] -
> Checkpoint triggering task Source: rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (1/3) of job
> 9f074e66a0f70274c7a7af42e71525fb is not in state RUNNING but SCHEDULED
> instead. Aborting checkpoint.
> 2020-07-27 17:16:27,023 INFO
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] -
> Checkpoint triggering task Source: rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (1/3) of job
> 9f074e66a0f70274c7a7af42e71525fb is not in state RUNNING but SCHEDULED
> instead. Aborting checkpoint.
> 2020-07-27 17:16:57,023 INFO
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] -
> Checkpoint triggering task Source: rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (1/3) of job
> 9f074e66a0f70274c7a7af42e71525fb is not in state RUNNING but SCHEDULED
> instead. Aborting checkpoint.
> 2020-07-27 17:17:27,023 INFO
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] -
> Checkpoint triggering task Source: rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (1/3) of job
> 9f074e66a0f70274c7a7af42e71525fb is not in state RUNNING but SCHEDULED
> instead. Aborting checkpoint.
> 2020-07-27 17:17:57,023 INFO
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] -
> Checkpoint triggering task Source: rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (1/3) of job
> 9f074e66a0f70274c7a7af42e71525fb is not in state RUNNING but SCHEDULED
> instead. Aborting checkpoint.
> 2020-07-27 17:18:27,023 INFO
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] -
> Checkpoint triggering task Source: rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (1/3) of job
> 9f074e66a0f70274c7a7af42e71525fb is not in state RUNNING but SCHEDULED
> instead. Aborting checkpoint.
> 2020-07-27 17:18:32,187 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source:
> rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (1/3)
> (de96a2863fbb58f21cf987db0cbe380d) switched from SCHEDULED to FAILED on not
> deployed.
> org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Could not allocate the required slot within slot request timeout. Please make sure that the cluster has enough resources.
> at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeWrapWithNoResourceAvailableException(DefaultScheduler.java:441) ~[test.jar:?]
> at org.apache.flink.runtime.scheduler.DefaultScheduler.lambda$assignResourceOrHandleError$6(DefaultScheduler.java:422) ~[test.jar:?]
> at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_191]
> at org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl.lambda$internalAllocateSlot$0(SchedulerImpl.java:168) ~[test.jar:?]
> at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_191]
> at org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$SingleTaskSlot.release(SlotSharingManager.java:726) ~[test.jar:?]
> at org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$MultiTaskSlot.release(SlotSharingManager.java:537) ~[test.jar:?]
> at org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$MultiTaskSlot.lambda$new$0(SlotSharingManager.java:432) ~[test.jar:?]
> at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_191]
> at org.apache.flink.runtime.concurrent.FutureUtils.lambda$forwardTo$21(FutureUtils.java:1120) ~[test.jar:?]
> at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_191]
> at org.apache.flink.runtime.concurrent.FutureUtils$Timeout.run(FutureUtils.java:1036) ~[test.jar:?]
> at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:402) ~[test.jar:?]
> at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:195) ~[test.jar:?]
> at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74) ~[test.jar:?]
> at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152) ~[test.jar:?]
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) [test.jar:?]
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) [test.jar:?]
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) [test.jar:?]
> at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) [test.jar:?]
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) [test.jar:?]
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [test.jar:?]
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [test.jar:?]
> at akka.actor.Actor$class.aroundReceive(Actor.scala:517) [test.jar:?]
> at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) [test.jar:?]
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) [test.jar:?]
> at akka.actor.ActorCell.invoke(ActorCell.scala:561) [test.jar:?]
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) [test.jar:?]
> at akka.dispatch.Mailbox.run(Mailbox.scala:225) [test.jar:?]
> at akka.dispatch.Mailbox.exec(Mailbox.scala:235) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [test.jar:?]
> Caused by: java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException
> at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577) ~[?:1.8.0_191]
> ... 25 more
> Caused by: java.util.concurrent.TimeoutException
> ... 23 more
> 2020-07-27 17:18:32,189 INFO org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy [] - Calculating tasks to restart to recover the failed task 66774974224095ed80fefb1f583d9fb9_0.
> 2020-07-27 17:18:32,189 INFO org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy [] - 1 tasks should be restarted to recover the failed task 66774974224095ed80fefb1f583d9fb9_0.
> 2020-07-27 17:18:32,189 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Job
> RTC_TEST (9f074e66a0f70274c7a7af42e71525fb) switched from state RUNNING to
> RESTARTING.
> 2020-07-27 17:18:32,191 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] -
> Discarding the results produced by task execution
> de96a2863fbb58f21cf987db0cbe380d.
> 2020-07-27 17:18:32,192 INFO
> org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl     [] - Pending
> slot request [SlotRequestId{f8186c1e488ec15065a2f844d97c895c}] timed out.
> 2020-07-27 17:18:32,193 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source:
> rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (2/3)
> (c8afff8ed430d1e5caf02d3d8383723c) switched from SCHEDULED to FAILED on not
> deployed.
> org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Could not allocate the required slot within slot request timeout. Please make sure that the cluster has enough resources.
> at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeWrapWithNoResourceAvailableException(DefaultScheduler.java:441) ~[test.jar:?]
> at org.apache.flink.runtime.scheduler.DefaultScheduler.lambda$assignResourceOrHandleError$6(DefaultScheduler.java:422) ~[test.jar:?]
> at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_191]
> at org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl.lambda$internalAllocateSlot$0(SchedulerImpl.java:168) ~[test.jar:?]
> at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_191]
> at org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$SingleTaskSlot.release(SlotSharingManager.java:726) ~[test.jar:?]
> at org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$MultiTaskSlot.release(SlotSharingManager.java:537) ~[test.jar:?]
> at org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$MultiTaskSlot.lambda$new$0(SlotSharingManager.java:432) ~[test.jar:?]
> at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_191]
> at org.apache.flink.runtime.concurrent.FutureUtils.lambda$forwardTo$21(FutureUtils.java:1120) ~[test.jar:?]
> at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_191]
> at org.apache.flink.runtime.concurrent.FutureUtils$Timeout.run(FutureUtils.java:1036) ~[test.jar:?]
> at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:402) ~[test.jar:?]
> at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:195) ~[test.jar:?]
> at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74) ~[test.jar:?]
> at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152) ~[test.jar:?]
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) [test.jar:?]
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) [test.jar:?]
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) [test.jar:?]
> at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) [test.jar:?]
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) [test.jar:?]
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [test.jar:?]
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [test.jar:?]
> at akka.actor.Actor$class.aroundReceive(Actor.scala:517) [test.jar:?]
> at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) [test.jar:?]
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) [test.jar:?]
> at akka.actor.ActorCell.invoke(ActorCell.scala:561) [test.jar:?]
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) [test.jar:?]
> at akka.dispatch.Mailbox.run(Mailbox.scala:225) [test.jar:?]
> at akka.dispatch.Mailbox.exec(Mailbox.scala:235) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [test.jar:?]
> Caused by: java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException
> at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577) ~[?:1.8.0_191]
> ... 25 more
> Caused by: java.util.concurrent.TimeoutException
> ... 23 more
> 2020-07-27 17:18:32,194 INFO org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy [] - Calculating tasks to restart to recover the failed task 66774974224095ed80fefb1f583d9fb9_1.
> 2020-07-27 17:18:32,194 INFO org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy [] - 1 tasks should be restarted to recover the failed task 66774974224095ed80fefb1f583d9fb9_1.
> 2020-07-27 17:18:32,199 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] -
> Discarding the results produced by task execution
> c8afff8ed430d1e5caf02d3d8383723c.
> 2020-07-27 17:18:32,199 INFO
> org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl     [] - Pending
> slot request [SlotRequestId{15c4647c462886ed5cfd9e146f9a0562}] timed out.
> 2020-07-27 17:18:32,200 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source:
> rtsc_test -> Filter -> Map ->
> SourceConversion(table=[default_catalog.default_database.test], fields=[a,
> b, record_timestamp, proctime]) -> Calc(select=[a, b, record_timestamp,
> PROCTIME_MATERIALIZE(proctime) AS proctime]) -> SinkConversionToTuple2 ->
> Filter -> Sink: sink kafka topic: rtsc_test2 (3/3)
> (08889c02dcc65774fe7ea42522f89ca9) switched from SCHEDULED to FAILED on not
> deployed.
> org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Could not allocate the required slot within slot request timeout. Please make sure that the cluster has enough resources.
> at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeWrapWithNoResourceAvailableException(DefaultScheduler.java:441) ~[test.jar:?]
> at org.apache.flink.runtime.scheduler.DefaultScheduler.lambda$assignResourceOrHandleError$6(DefaultScheduler.java:422) ~[test.jar:?]
> at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_191]
> at org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl.lambda$internalAllocateSlot$0(SchedulerImpl.java:168) ~[test.jar:?]
> at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_191]
> at org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$SingleTaskSlot.release(SlotSharingManager.java:726) ~[test.jar:?]
> at org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$MultiTaskSlot.release(SlotSharingManager.java:537) ~[test.jar:?]
> at org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$MultiTaskSlot.lambda$new$0(SlotSharingManager.java:432) ~[test.jar:?]
> at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_191]
> at org.apache.flink.runtime.concurrent.FutureUtils.lambda$forwardTo$21(FutureUtils.java:1120) ~[test.jar:?]
> at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_191]
> at org.apache.flink.runtime.concurrent.FutureUtils$Timeout.run(FutureUtils.java:1036) ~[test.jar:?]
> at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:402) ~[test.jar:?]
> at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:195) ~[test.jar:?]
> at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74) ~[test.jar:?]
> at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152) ~[test.jar:?]
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) [test.jar:?]
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) [test.jar:?]
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) [test.jar:?]
> at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) [test.jar:?]
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) [test.jar:?]
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [test.jar:?]
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [test.jar:?]
> at akka.actor.Actor$class.aroundReceive(Actor.scala:517) [test.jar:?]
> at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) [test.jar:?]
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) [test.jar:?]
> at akka.actor.ActorCell.invoke(ActorCell.scala:561) [test.jar:?]
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) [test.jar:?]
> at akka.dispatch.Mailbox.run(Mailbox.scala:225) [test.jar:?]
> at akka.dispatch.Mailbox.exec(Mailbox.scala:235) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [test.jar:?]
> Caused by: java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException
> at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577) ~[?:1.8.0_191]
> ... 25 more
> Caused by: java.util.concurrent.TimeoutException
> ... 23 more
> 2020-07-27 17:18:32,203 INFO  org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy [] - Calculating tasks to restart to recover the failed task 66774974224095ed80fefb1f583d9fb9_2.
> 2020-07-27 17:18:32,204 INFO  org.apache.flink.runtime.executiongraph.failover.flip1.RestartPipelinedRegionFailoverStrategy [] - 1 tasks should be restarted to recover the failed task 66774974224095ed80fefb1f583d9fb9_2.
> 2020-07-27 17:18:32,204 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Job RTC_TEST (9f074e66a0f70274c7a7af42e71525fb) switched from state RESTARTING to FAILING.
> org.apache.flink.runtime.JobException: Recovery is suppressed by FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=5, backoffTimeMS=10000)
> at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116) ~[test.jar:?]
> at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78) ~[test.jar:?]
> at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192) ~[test.jar:?]
> at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:185) ~[test.jar:?]
> at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:179) ~[test.jar:?]
> at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:503) ~[test.jar:?]
> at org.apache.flink.runtime.scheduler.UpdateSchedulerNgOnInternalFailuresListener.notifyTaskFailure(UpdateSchedulerNgOnInternalFailuresListener.java:49) ~[test.jar:?]
> at org.apache.flink.runtime.executiongraph.ExecutionGraph.notifySchedulerNgAboutInternalTaskFailure(ExecutionGraph.java:1710) ~[test.jar:?]
> at org.apache.flink.runtime.executiongraph.Execution.processFail(Execution.java:1287) ~[test.jar:?]
> at org.apache.flink.runtime.executiongraph.Execution.processFail(Execution.java:1255) ~[test.jar:?]
> at org.apache.flink.runtime.executiongraph.Execution.markFailed(Execution.java:1086) ~[test.jar:?]
> at org.apache.flink.runtime.executiongraph.ExecutionVertex.markFailed(ExecutionVertex.java:748) ~[test.jar:?]
> at org.apache.flink.runtime.scheduler.DefaultExecutionVertexOperations.markFailed(DefaultExecutionVertexOperations.java:41) ~[test.jar:?]
> at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskDeploymentFailure(DefaultScheduler.java:435) ~[test.jar:?]
> at org.apache.flink.runtime.scheduler.DefaultScheduler.lambda$assignResourceOrHandleError$6(DefaultScheduler.java:422) ~[test.jar:?]
> at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_191]
> at org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl.lambda$internalAllocateSlot$0(SchedulerImpl.java:168) ~[test.jar:?]
> at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_191]
> at org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$SingleTaskSlot.release(SlotSharingManager.java:726) ~[test.jar:?]
> at org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$MultiTaskSlot.release(SlotSharingManager.java:537) ~[test.jar:?]
> at org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$MultiTaskSlot.lambda$new$0(SlotSharingManager.java:432) ~[test.jar:?]
> at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_191]
> at org.apache.flink.runtime.concurrent.FutureUtils.lambda$forwardTo$21(FutureUtils.java:1120) ~[test.jar:?]
> at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_191]
> at org.apache.flink.runtime.concurrent.FutureUtils$Timeout.run(FutureUtils.java:1036) ~[test.jar:?]
> at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:402) ~[test.jar:?]
> at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:195) ~[test.jar:?]
> at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74) ~[test.jar:?]
> at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152) ~[test.jar:?]
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) [test.jar:?]
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) [test.jar:?]
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) [test.jar:?]
> at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) [test.jar:?]
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) [test.jar:?]
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [test.jar:?]
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [test.jar:?]
> at akka.actor.Actor$class.aroundReceive(Actor.scala:517) [test.jar:?]
> at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) [test.jar:?]
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) [test.jar:?]
> at akka.actor.ActorCell.invoke(ActorCell.scala:561) [test.jar:?]
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) [test.jar:?]
> at akka.dispatch.Mailbox.run(Mailbox.scala:225) [test.jar:?]
> at akka.dispatch.Mailbox.exec(Mailbox.scala:235) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [test.jar:?]
> at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [test.jar:?]
> Caused by: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Could not allocate the required slot within slot request timeout. Please make sure that the cluster has enough resources.
> at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeWrapWithNoResourceAvailableException(DefaultScheduler.java:441) ~[test.jar:?]
> ... 45 more
> Caused by: java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException
> at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593) ~[?:1.8.0_191]
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577) ~[?:1.8.0_191]
> ... 25 more
> Caused by: java.util.concurrent.TimeoutException
> ... 23 more
> 2020-07-27 17:18:32,208 INFO
> org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Job
> RTC_TEST (9f074e66a0f70274c7a7af42e71525fb) switched from state FAILING to
> FAILED.
> org.apache.flink.runtime.JobException: Recovery is suppressed by FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=5, backoffTimeMS=10000)
> (stack trace identical to the one above, elided)
> 2020-07-27 17:18:32,210 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator    [] - Stopping checkpoint coordinator for job 9f074e66a0f70274c7a7af42e71525fb.
> 2020-07-27 17:18:32,210 INFO  org.apache.flink.runtime.checkpoint.StandaloneCompletedCheckpointStore [] - Shutting down
> 2020-07-27 17:18:32,210 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Discarding the results produced by task execution 08889c02dcc65774fe7ea42522f89ca9.
> 2020-07-27 17:18:32,211 INFO  org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl     [] - Pending slot request [SlotRequestId{1ee4efd530c32169b68de993ea4b4460}] timed out.
> 2020-07-27 17:18:32,216 INFO  org.apache.flink.runtime.dispatcher.MiniDispatcher           [] - Job 9f074e66a0f70274c7a7af42e71525fb reached globally terminal state FAILED.
> 2020-07-27 17:18:32,217 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 [] - Stopping the JobMaster for job RTC_TEST(9f074e66a0f70274c7a7af42e71525fb).
> 2020-07-27 17:18:32,224 INFO  org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl     [] - Suspending SlotPool.
> 2020-07-27 17:18:32,225 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 [] - Close ResourceManager connection 4ee5ce2ea08809b1cca2745fa12ee663: JobManager is shutting down..
> 2020-07-27 17:18:32,225 INFO  org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl     [] - Stopping SlotPool.
> 2020-07-27 17:18:32,225 INFO  org.apache.flink.yarn.YarnResourceManager [] - Disconnect job manager 00000000000000000000000000000...@akka.tcp://flink@x.x.5.60:36696/user/rpc/jobmanager_2 for job 9f074e66a0f70274c7a7af42e71525fb from the resource manager.
>
>
>
>
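
For reference, the container-size normalization discussed earlier in this thread can be sketched as follows. This is only a minimal illustration of the round-up-to-a-multiple behavior driven by `yarn.scheduler.minimum-allocation-mb`, not Flink's or YARN's actual code; the function name is made up for this example:

```python
import math

def normalize_container_mb(requested_mb: int, minimum_allocation_mb: int) -> int:
    """Round a container memory request up to the next multiple of the
    scheduler's minimum allocation, as YARN's normalization does."""
    return math.ceil(requested_mb / minimum_allocation_mb) * minimum_allocation_mb

# The 1728m TaskManager request from this thread, with a 1024m minimum,
# is granted a 2048m container:
print(normalize_container_mb(1728, 1024))  # 2048
```

If the JobManager reads a different `minimum-allocation-mb` than the one the YARN ResourceManager actually uses, this computed size will not match the granted container, which is exactly the mismatch check that fails above.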
