Re: flink1.9 on yarn
Question 1: Every submission via ./bin/flink run -m yarn-cluster creates a new APP-ID, e.g. application_1567067657620_0254. After running yarn application -kill application_1567067657620_0254 and then submitting again with ./bin/flink run -m yarn-cluster, how can I keep the app id from incrementing?

Question 2: After submitting a job with ./bin/flink run -m yarn-cluster and then cancelling the job, how can I resubmit it to the same app id?
Re: flink1.9 on yarn
Hi guanyq,

The way you are submitting is Flink on YARN per-job mode. That is how it works: every time a new job is submitted, a new AppID is created.

Best!
zhisheng

Yangze Guo wrote on Sun, Jun 28, 2020 at 9:59 AM:
> As I understand it, you want session mode, i.e. ./bin/yarn-session.sh [1]
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/yarn_setup.html#flink-yarn-session
>
> Best,
> Yangze Guo
>
> On Sun, Jun 28, 2020 at 9:10 AM guanyq wrote:
> >
> > Question 1
> >
> > Every submission via ./bin/flink run -m yarn-cluster creates a new APP-ID, e.g. application_1567067657620_0254.
> >
> > After yarn application -kill application_1567067657620_0254,
> >
> > how can I resubmit with ./bin/flink run -m yarn-cluster without the app id incrementing?
> >
> > Question 2
> >
> > After submitting with ./bin/flink run -m yarn-cluster and cancelling the job, how can I resubmit to the same app id?
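The per-job behaviour described above can be sketched as follows. The jar path is a placeholder; the application ids are illustrative examples, since YARN (not the Flink client) assigns them:

```shell
# Per-job mode: each submission starts a brand-new YARN application.
./bin/flink run -m yarn-cluster ./examples/streaming/WordCount.jar
# -> e.g. application_1567067657620_0254

# Submitting again (even after killing the previous application)
# gets the next id from YARN's monotonically increasing counter:
./bin/flink run -m yarn-cluster ./examples/streaming/WordCount.jar
# -> e.g. application_1567067657620_0255
```

The incrementing id is therefore not something a Flink client option can suppress; reusing one application requires a long-lived session instead of per-job submission.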
Re: flink1.9 on yarn
As I understand it, you want session mode, i.e. ./bin/yarn-session.sh [1]

[1]
https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/yarn_setup.html#flink-yarn-session

Best,
Yangze Guo

On Sun, Jun 28, 2020 at 9:10 AM guanyq wrote:
>
> Question 1
>
> Every submission via ./bin/flink run -m yarn-cluster creates a new APP-ID, e.g. application_1567067657620_0254.
>
> After yarn application -kill application_1567067657620_0254,
>
> how can I resubmit with ./bin/flink run -m yarn-cluster without the app id incrementing?
>
> Question 2
>
> After submitting with ./bin/flink run -m yarn-cluster and cancelling the job, how can I resubmit to the same app id?
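The session-mode workflow suggested here can be sketched as follows. Memory sizes and example jars are placeholders; the flags are the Flink 1.9 CLI options:

```shell
# Start one long-lived session: a single YARN application that keeps
# its id until the session itself is stopped (-d runs it detached).
./bin/yarn-session.sh -jm 1024m -tm 4096m -d

# Subsequent plain "flink run" submissions (no -m yarn-cluster) attach
# to that session, so cancelled jobs can be resubmitted to the same
# application id. The client locates the session via the
# /tmp/.yarn-properties-<user> file written by yarn-session.sh.
./bin/flink run ./examples/streaming/WordCount.jar
./bin/flink run ./examples/streaming/TopSpeedWindowing.jar
```

Killing the session application with yarn application -kill still ends that id for good; the session only keeps the id stable across job submissions and cancellations within its lifetime.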
Re: flink1.9 on yarn
Hi guanyq,

Why do you want the app id to stay unchanged?

Best,
LakeShen

guanyq wrote on Sun, Jun 28, 2020 at 9:10 AM:
> Question 1
>
> Every submission via ./bin/flink run -m yarn-cluster creates a new APP-ID, e.g. application_1567067657620_0254.
>
> After yarn application -kill application_1567067657620_0254,
>
> how can I resubmit with ./bin/flink run -m yarn-cluster without the app id incrementing?
>
> Question 2
>
> After submitting with ./bin/flink run -m yarn-cluster and cancelling the job, how can I resubmit to the same app id?
flink1.9 on yarn
Question 1: Every submission via ./bin/flink run -m yarn-cluster creates a new APP-ID, e.g. application_1567067657620_0254. After running yarn application -kill application_1567067657620_0254 and then submitting again with ./bin/flink run -m yarn-cluster, how can I keep the app id from incrementing?

Question 2: After submitting a job with ./bin/flink run -m yarn-cluster and then cancelling the job, how can I resubmit it to the same app id?
Re: flink1.9 on yarn fails after running for over two months
Hi guanyq,

From the log, I can see that local storage on the TaskManager's machine is almost full. Check whether the failure is caused by insufficient storage on the TaskManager host.

Best,
LakeShen

xueaohui_...@163.com wrote on Sat, Jun 20, 2020 at 9:57 AM:
> I am not sure whether there are detailed logs on the YARN side.
>
> Could HDFS permissions be the problem?
>
> xueaohui_...@163.com
>
> From: guanyq
> Sent: 2020-06-20 08:48
> To: user-zh
> Subject: flink1.9 on yarn fails after running for over two months
> The error log is attached. Could someone help analyze it?
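To confirm the disk hypothesis, one might check the YARN local and log directories on the affected TaskManager host. The /data09 path below is taken from the attached log; adjust it to the actual mount points on that machine:

```shell
# How full is the disk hosting the YARN directories?
df -h /data09

# Which applications' logs are eating the space? A long-running job
# can accumulate months of container logs under the NodeManager log dir.
du -sh /data09/hadoop/yarn/log/application_* 2>/dev/null | sort -h | tail
```

If the container log directory dominates, enabling log rotation for the TaskManager logger (or YARN log aggregation with retention) is the usual remedy for jobs that run for months.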
Re: flink1.9 on yarn fails after running for over two months
I am not sure whether there are detailed logs on the YARN side.

Could HDFS permissions be the problem?

xueaohui_...@163.com

From: guanyq
Sent: 2020-06-20 08:48
To: user-zh
Subject: flink1.9 on yarn fails after running for over two months
The error log is attached. Could someone help analyze it?
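The YARN-side detail asked about here can be pulled with the yarn CLI. The application id is taken from this thread's attached TaskManager log; the HDFS path is a placeholder to be replaced with the job's actual checkpoint/HA directories:

```shell
# Fetch the aggregated container logs for the failed application
# (works once the application has finished and log aggregation is on):
yarn logs -applicationId application_1567067657620_0251 > app_0251.log

# If HDFS permissions are suspected, verify the job's directories are
# readable/writable by the submitting user (path is a placeholder):
hdfs dfs -ls /path/to/flink/checkpoints
```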
flink1.9 on yarn fails after running for over two months
The error log is attached. Could someone help analyze it?

2020-06-20 08:39:47,829 INFO org.apache.flink.yarn.YarnTaskExecutorRunner -
2020-06-20 08:39:47,830 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - Starting YARN TaskExecutor runner (Version: 1.9.2, Rev:c9d2c90, Date:24.01.2020 @ 08:44:30 CST)
2020-06-20 08:39:47,830 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - OS current user: ocdc
2020-06-20 08:39:48,235 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - Current Hadoop/Kerberos user: ocdp
2020-06-20 08:39:48,235 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.152-b16
2020-06-20 08:39:48,235 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - Maximum heap size: 5300 MiBytes
2020-06-20 08:39:48,235 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - JAVA_HOME: /usr/local/jdk1.8.0_152
2020-06-20 08:39:48,236 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - Hadoop version: 2.7.3.2.6.0.3-8
2020-06-20 08:39:48,236 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - JVM Options:
2020-06-20 08:39:48,236 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - -Xms5529m
2020-06-20 08:39:48,236 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - -Xmx5529m
2020-06-20 08:39:48,236 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - -XX:MaxDirectMemorySize=2663m
2020-06-20 08:39:48,236 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - -Dfile.encoding=UTF-8
2020-06-20 08:39:48,236 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - -Dlog.file=/data09/hadoop/yarn/log/application_1567067657620_0251/container_e07_1567067657620_0251_01_05/taskmanager.log
2020-06-20 08:39:48,236 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - -Dlogback.configurationFile=file:./logback.xml
2020-06-20 08:39:48,237 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - -Dlog4j.configuration=file:./log4j.properties
2020-06-20 08:39:48,237 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - Program Arguments:
2020-06-20 08:39:48,237 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - --configDir
2020-06-20 08:39:48,237 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - .
2020-06-20 08:39:48,237 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - -Dweb.port=0
2020-06-20 08:39:48,237 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - -Djobmanager.rpc.address=audit-dp04
2020-06-20 08:39:48,237 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - -Dtaskmanager.memory.size=4058744064b
2020-06-20 08:39:48,237 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - -Dweb.tmpdir=/tmp/flink-web-3ead1dd7-b12c-4a61-9a5d-793743c58302
2020-06-20 08:39:48,237 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - -Djobmanager.rpc.port=57053
2020-06-20 08:39:48,237 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - -Drest.address=audit-dp04
2020-06-20 08:39:48,237 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - Classpath:
flink1.9 on yarn: garbled Chinese characters when consuming Kafka data
Kafka version 0.11.

First of all, the data in the Kafka source topic is fine: consuming with the Kafka console client shows no garbled Chinese.

1. Running locally in IDEA debug mode: no garbling.
2. Running on the server in Standalone mode: no garbling.
3. Submitting to the server on YARN: Chinese characters come out garbled.

The Flink API used to consume Kafka is:
new FlinkKafkaConsumer<>(topicList, new SimpleStringSchema(), props);

Based on points 1, 2 and 3, the problem seems related to YARN. Could someone advise what else to investigate to resolve this?
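One possible explanation, stated here as an assumption since only the YARN deployment misbehaves: the YARN container JVMs inherit a non-UTF-8 default locale/charset from the NodeManager hosts, while the IDEA and standalone JVMs run with UTF-8. A quick local sketch of how such a charset mismatch mangles UTF-8 bytes, plus a hedged configuration fix:

```shell
# The topic bytes are UTF-8. If any step downstream decodes or re-encodes
# them with a single-byte default charset (e.g. Latin-1), Chinese text
# turns into mojibake. Simulate the mis-decode locally with iconv:
printf '中文' | iconv -f ISO-8859-1 -t UTF-8   # garbled output, not 中文

# A common fix to try (an assumption, not verified for this cluster):
# pin the container JVM encoding in conf/flink-conf.yaml:
#   env.java.opts: -Dfile.encoding=UTF-8
# and check the locale on the NodeManager hosts:
locale
```

SimpleStringSchema itself decodes as UTF-8, so the suspect is usually whatever runs with the JVM default charset afterwards (logging to stdout, String.getBytes() without an explicit charset, a sink's encoder).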