Re: How to sink invalid data from flatmap

2020-08-24 Thread
Thanks a lot Jake for the quick response. From: Jake [mailto:ft20...@qq.com] Sent: Tuesday, August 25, 2020 11:31 To: 范超 Cc: user Subject: Re: How to sink invalid data from flatmap Hi fanchao Yes. I suggest that. Jake On Aug 25, 2020, at 11:20 AM, 范超 <fanc...@mgtv.com> wrote: Thanks Jake. B

Re: How to sink invalid data from flatmap

2020-08-24 Thread
: Tuesday, August 25, 2020 11:06 To: 范超 Cc: user Subject: Re: How to sink invalid data from flatmap Hi fanchao use side output, see [1] [1] https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/stream/side_output.html Jake On Aug 25, 2020, at 10:54 AM, 范超 <fanc...@mgtv.com> wrote: H

Re: How to sink invalid data from flatmap

2020-08-24 Thread
Thanks. Using ctx.output() inside the process method solved my problem, but does my custom flatmap function have to be retired? From: Yun Tang [mailto:myas...@live.com] Sent: Tuesday, August 25, 2020 10:58 To: 范超; user Subject: Re: How to sink invalid data from flatmap Hi Chao I think side output [1] might

How to sink invalid data from flatmap

2020-08-24 Thread
Hi, I’m using a custom flatmap function to validate the Kafka JSON string messages: if a message can be transformed into a POJO (using GSON), it goes on to the next sink step. If it cannot be parsed as a POJO, GSON throws a “com.google.gson.JsonSyntaxException”, and in my
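The side-output approach suggested in the replies above routes records the flatmap cannot parse to a secondary stream instead of letting the exception fail the job. A full example needs a running Flink pipeline, so the sketch below shows only the routing idea in plain Java: `Event`, `parsePojo`, and the failure rule are stand-ins for the real POJO and the GSON call; in a Flink `ProcessFunction`, the `invalid` list would be a side output written via `ctx.output(outputTag, msg)`.

```java
import java.util.ArrayList;
import java.util.List;

public class SideOutputSketch {
    // Stand-in for the POJO the real job builds with GSON.
    static final class Event {
        final String payload;
        Event(String payload) { this.payload = payload; }
    }

    // Holds the two streams a Flink side output would produce.
    static final class Routed {
        final List<Event> valid = new ArrayList<>();
        final List<String> invalid = new ArrayList<>();
    }

    // Stand-in for gson.fromJson(...): reject anything not shaped like JSON.
    static Event parsePojo(String raw) {
        if (raw == null || !raw.trim().startsWith("{")) {
            throw new IllegalArgumentException("not JSON: " + raw);
        }
        return new Event(raw);
    }

    // Mirrors the ProcessFunction body: valid records go to the main
    // output, parse failures go to the side output instead of throwing.
    static Routed route(List<String> messages) {
        Routed out = new Routed();
        for (String msg : messages) {
            try {
                out.valid.add(parsePojo(msg));
            } catch (IllegalArgumentException e) {
                out.invalid.add(msg); // ctx.output(invalidTag, msg) in Flink
            }
        }
        return out;
    }
}
```

The invalid stream can then be sent to its own sink (e.g. a dead-letter Kafka topic) without disturbing the main pipeline.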

Re: How to specify the number of TaskManagers in Yarn Cluster using Per-Job Mode

2020-08-17 Thread
Thanks Yangze for providing these links, I'll try it! -----Original Message----- From: Yangze Guo [mailto:karma...@gmail.com] Sent: Tuesday, August 18, 2020 12:57 To: 范超 Cc: user (user@flink.apache.org) Subject: Re: How to specify the number of TaskManagers in Yarn Cluster using Per-Job Mode The number of TM m

Re: How to specify the number of TaskManagers in Yarn Cluster using Per-Job Mode

2020-08-17 Thread
: Yangze Guo [mailto:karma...@gmail.com] Sent: Tuesday, August 18, 2020 11:31 To: 范超 Cc: user (user@flink.apache.org) Subject: Re: How to specify the number of TaskManagers in Yarn Cluster using Per-Job Mode Hi, Flink can control how many TMs to start, but where to start the TMs depends on Yarn. Do you

Re: How to specify the number of TaskManagers in Yarn Cluster using Per-Job Mode

2020-08-17 Thread
some best practices for deploying on yarn please? I read [1] and am still not very clear. [1] https://www.ververica.com/blog/how-to-size-your-apache-flink-cluster-general-guidelines -----Original Message----- From: Yangze Guo [mailto:karma...@gmail.com] Sent: Tuesday, August 18, 2020 10:50 To: 范超 Cc:

Re: How to specify the number of TaskManagers in Yarn Cluster using Per-Job Mode

2020-08-17 Thread
Thanks Yangze. The NodeManager is started on all 3 machines. I just don't know why the three machines aren't each running a Flink TaskManager, and how to achieve this. -----Original Message----- From: Yangze Guo [mailto:karma...@gmail.com] Sent: Tuesday, August 18, 2020 10:10 To: 范超 Cc: user (user@flink.apache.org) Subject: Re

How to specify the number of TaskManagers in Yarn Cluster using Per-Job Mode

2020-08-17 Thread
Hi, Dev and Users. I have 3 machines, each with 8 cores and 16GB memory. The following is my Resource Manager screenshot; the cluster has 36GB total. I set the parallelism to 3 or even up to 12, but the TaskManagers always run on two nodes, not all three machines; the third node does not start
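The replies in this thread reduce to one piece of arithmetic: in per-job mode Flink requests roughly ceil(parallelism / taskmanager.numberOfTaskSlots) TaskManagers, and YARN alone decides which NodeManagers host them, so nothing guarantees one TM per machine. The thread does not state the slots-per-TM value this job used, so the concrete numbers below are assumptions chosen to illustrate the rule.

```java
public class TmCount {
    // TaskManagers Flink requests in per-job mode:
    // ceil(parallelism / slotsPerTaskManager).
    // Where those TMs land is decided by the YARN scheduler, not Flink.
    static int taskManagers(int parallelism, int slotsPerTm) {
        return (parallelism + slotsPerTm - 1) / slotsPerTm;
    }
}
```

For example, with 2 slots per TM (an assumed value), parallelism 3 needs only 2 TaskManagers, which would explain seeing two busy nodes and one idle one.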

Re: Flink Session TM Logs

2020-07-27 Thread
Hi Richard Maybe you can try using the CLI "yarn logs -applicationId yourYarnAppId" to check your logs, or just find your app logs in the yarn webui. From: Richard Moorhead [mailto:richard.moorh...@gmail.com] Sent: Friday, July 24, 2020 23:55 To: user Subject: Flink Session TM Logs When running a flink session
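The command mentioned above, written out as a CLI fragment (the application id here is a placeholder, not one from this thread; the real id appears in the YARN web UI or in `yarn application -list`):

```shell
# Fetch aggregated logs for a YARN application once log aggregation is enabled.
# Replace the id with your own application's id.
yarn logs -applicationId application_1595000000000_0001
```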

Re: How to get CLI parameters when deploy on yarn cluster

2020-07-27 Thread
Thanks Jake, I’ll try it out. It worked! From: Jake [mailto:ft20...@qq.com] Sent: Monday, July 27, 2020 18:33 To: 范超 Cc: user (user@flink.apache.org) Subject: Re: How to get CLI parameters when deploy on yarn cluster Hi fanchao You can use params after the jar file. /usr/local/flink/bin/flink run -m yarn

How to get CLI parameters when deploy on yarn cluster

2020-07-27 Thread
Hi, Flink community. I’m a starter at Flink and don’t know how to pass parameters to my jar file, where I want to start the job in detached mode on the yarn cluster. Here is my shell code: /usr/local/flink/bin/flink run -m yarn-cluster -d -p 3 ~/project/test/app/test.jar -runat=test 2>&1 In m
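As Jake's reply confirms, arguments placed after the jar path (here `-runat=test`) are delivered to the job's `main(String[] args)`. Flink ships a `ParameterTool` utility for parsing them, but since that needs the Flink dependency on the classpath, the sketch below does an equivalent `-key=value` split in plain Java; the `-runat` parameter name comes from the command above, everything else is illustrative.

```java
import java.util.HashMap;
import java.util.Map;

public class ArgsDemo {
    // Parse "-key=value" style arguments, as passed after the jar in:
    //   flink run -m yarn-cluster -d -p 3 test.jar -runat=test
    static Map<String, String> parse(String[] args) {
        Map<String, String> params = new HashMap<>();
        for (String arg : args) {
            if (arg.startsWith("-") && arg.contains("=")) {
                int eq = arg.indexOf('=');
                params.put(arg.substring(1, eq), arg.substring(eq + 1));
            }
        }
        return params;
    }
}
```

Inside the job, `parse(args).get("runat")` would then yield `"test"` for the command shown above.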