Hi jieluo,
I think you need to check the network connectivity between the Flink
client (local machine) and the
JobManager REST endpoint (YARN cluster). Usually you can use "telnet"
to test this. Moreover,
it will be easier for others to help with debugging if you can provide
the JobManager logs.
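If telnet is not available, the same reachability check can be scripted. This is a minimal sketch in plain Java; the host name and port below are placeholders for your actual JobManager REST endpoint (the REST port defaults to 8081):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class RestEndpointCheck {
    /** Returns true if a TCP connection to host:port succeeds within timeoutMs. */
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Placeholder values: replace with your JobManager host and REST port.
        String host = "jobmanager.example.com";
        int port = 8081;
        System.out.println(host + ":" + port + " reachable: "
                + isReachable(host, port, 3000));
    }
}
```

This only verifies TCP connectivity; a firewall that allows the connection but drops HTTP traffic would still need the JobManager logs to diagnose.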
BTW,
thanks, this looks good. Nice job!
Best,
Jingsong Lee
On Fri, Apr 10, 2020 at 5:56 PM wangl...@geekplus.com.cn <
wangl...@geekplus.com.cn> wrote:
>
> https://issues.apache.org/jira/browse/FLINK-17086
>
> It is my first time creating a Flink JIRA issue.
> Please point it out and correct me if I wrote
2020-04-13 06:20:31.379 ERROR 1 --- [ent-IO-thread-3]
org.apache.flink.runtime.rest.RestClient.parseResponse:393 : Received response
was neither of the
Hi Mitch,
Have you configured 'state.backend.rocksdb.memory.managed'? The default
should be 'true', and if you have set it to 'false', the RocksDB memory
footprint might grow beyond the configured task manager memory size.
Besides, do your UDFs use any native memory by any chance? E.g., launch
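For reference, the switch in question lives in flink-conf.yaml. This is a sketch of the relevant lines only, shown with their default values:

```yaml
# Keep RocksDB within Flink's managed memory budget (default: true).
# Setting this to false lets RocksDB allocate native memory on its own,
# which can exceed the configured TaskManager memory size.
state.backend.rocksdb.memory.managed: true

# Fraction of TaskManager memory reserved as managed memory (default: 0.4).
# When the flag above is true, RocksDB is capped by this budget.
taskmanager.memory.managed.fraction: 0.4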
Hi Junzhong ,
The image is not showing; could you re-upload it?
Best,
LakeShen
Junzhong Qin wrote on Sat, Apr 11, 2020 at 10:38 AM:
> While running a Flink job, we hit an operator backpressure problem. The job
> execution graph is as follows: Source (reads from Kafka), KeyBy (extracts the
> data field used for the keyBy operation), Parser (business logic), Sink
> (writes to Kafka). Except for the KeyBy->Parser edge, which uses a hash
> connection (the keyBy operation), all edges use RESCALE connections. (The
> parallelism shown is for reference only; it is the parallelism after the
> problem was solved. The original parallelism was
>
Hi Anuj,
It seems that you are using Hadoop version 2.4.1. I think "L" is not
supported in
this version. Could you upgrade your Hadoop version to 2.8 and try again?
If your
YARN cluster version is 2.8+, then you can simply remove the
flink-shaded-hadoop jar
from your lib directory.
This is a stateful stream join application using the RocksDB state backend with
incremental checkpoints enabled.
- JVM heap usage is pretty similar. The main difference is in non-heap usage,
  probably related to RocksDB state.
- Also observed the cgroup memory failure count showing up in the
Hey Jark, thank you so much for confirming!
Out of curiosity: even though I agree that having too many config classes is
confusing, not knowing when the config values are used during pipeline setup is
also pretty confusing. For example, the name 'TableConfig' makes me feel
it's global to
Hi,
I have a quick question about the "EventTimeTrigger". I notice it's based
on TimeWindow instead of Window. Is there any reason why this cannot apply
to GlobalWindow?
Thanks,
Jiawei
Thank you for the quick response.
Your answer relates to the checkpoint folder that contains the _metadata file,
e.g. chk-1829.
What about the "shared" folder? How do I know which files in that folder are
still relevant and which are left over from a failed checkpoint? They are not
directly