Re: Proper way to modify log4j config file for kubernetes-session

2024-05-14 Thread Biao Geng
Hi Vararu, Does this document meet your requirements? https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#logging Best, Biao Geng Vararu, Vadim wrote on Tue, 14 May 2024 at 01:39: > Hi, > > Trying to configure loggers in the *log4j-console.properti

Re: Proper way to modify log4j config file for kubernetes-session

2024-05-14 Thread Vararu, Vadim
Yes, the dynamic log level modification worked great for me. Thanks a lot, Vadim From: Biao Geng Date: Tuesday, 14 May 2024 at 10:07 To: Vararu, Vadim Cc: user@flink.apache.org Subject: Re: Proper way to modify log4j config file for kubernetes-session Hi Vararu, Does this document meet your r
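
For illustration only, Log4j 2 (which Flink uses) also exposes a programmatic API for changing log levels at runtime; this is not the ConfigMap-based approach from the linked docs, and the logger name below is an arbitrary example:

    import org.apache.logging.log4j.Level;
    import org.apache.logging.log4j.core.config.Configurator;

    public class LogLevelTweak {
        public static void main(String[] args) {
            // Raise one package to DEBUG while the process keeps running.
            Configurator.setLevel("org.apache.flink.runtime.checkpoint", Level.DEBUG);
            // Or adjust the root logger level.
            Configurator.setRootLevel(Level.INFO);
        }
    }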

Flink JDK17 Prod Ready Support

2024-05-14 Thread Tripathi, Rateesh via user
Hi, Does anyone know the timeline for the rollout of a Flink release with production-ready JDK17 support? -- Thanks Rateesh Tripathi

Checkpointing while loading causing issues

2024-05-14 Thread Lars Skjærven
Hello, When restarting jobs (e.g. after an upgrade) with "large" state, a task can take some time to "initialize" (depending on the state size). During this time I noticed that Flink attempts to checkpoint. In many cases checkpointing will fail repeatedly and cause the job to hit the tolerable-failed

how to reduce read times when many jobs read the same kafka topic?

2024-05-14 Thread longfeng Xu
Hi, in this scenario many Flink jobs read the same Kafka topic, so CPU is wasted on repeated serialization/deserialization and the network load is too heavy. Can you recommend a solution to avoid this situation? E.g. would it be more effective to use one large streaming job with multiple branches?
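
A rough sketch of the consolidation idea from the question, assuming the flink-connector-kafka KafkaSource API: the topic is read and deserialized once, and each former job becomes a branch over the same stream. Broker address, topic, group id, and filter logic are placeholders:

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class SharedTopicFanOut {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // The topic is read and deserialized once, instead of once per job.
            KafkaSource<String> source = KafkaSource.<String>builder()
                    .setBootstrapServers("kafka:9092")      // placeholder broker
                    .setTopics("shared-topic")              // placeholder topic
                    .setGroupId("consolidated-job")         // placeholder group id
                    .setStartingOffsets(OffsetsInitializer.earliest())
                    .setValueOnlyDeserializer(new SimpleStringSchema())
                    .build();

            DataStream<String> events =
                    env.fromSource(source, WatermarkStrategy.noWatermarks(), "shared-kafka-source");

            // Each former standalone job becomes one branch over the same stream.
            events.filter(e -> e.contains("orders")).name("branch-a").print();
            events.filter(e -> e.contains("users")).name("branch-b").print();

            env.execute("consolidated-multi-branch-job");
        }
    }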

Job is Failing every 2hrs - Out of Memory Exception

2024-05-14 Thread Madan D via user
Hello Team, Good morning! We have been running a Flink job with Kafka where it gets restarted every 2 hours with an Out of Memory Exception. We tried increasing task manager memory, reducing parallelism, and adding a rate limit to reduce the consumption rate, but regardless, it restarts every

Re: how to reduce read times when many jobs read the same kafka topic?

2024-05-14 Thread Sachin Mittal
Each separate job would have its own consumer group, hence they will read independently from the same topic, and when checkpointing they will commit their own offsets. So if any job fails, it will not affect the progress of other jobs when reading from Kafka. I am not sure of the impact of network l
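
A minimal sketch of the per-job consumer group setup described above, assuming the KafkaSource builder API; broker, topic, and group id are placeholders, and commit.offsets.on.checkpoint is the connector option that commits the group's offsets when a checkpoint completes (it defaults to true):

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;

    public class PerJobConsumerGroup {
        // Job A reads the shared topic under its own consumer group; Job B would do the
        // same with setGroupId("job-b"), so neither affects the other's committed offsets.
        static KafkaSource<String> buildSource() {
            return KafkaSource.<String>builder()
                    .setBootstrapServers("kafka:9092")                   // placeholder broker
                    .setTopics("shared-topic")                           // placeholder topic
                    .setGroupId("job-a")                                 // group id unique to this job
                    .setValueOnlyDeserializer(new SimpleStringSchema())
                    .setProperty("commit.offsets.on.checkpoint", "true") // commit offsets when a checkpoint completes
                    .build();
        }
    }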

Re: Checkpointing while loading causing issues

2024-05-14 Thread gongzhongqiang
Hi Lars, Currently, Flink has no configuration to trigger a checkpoint immediately after the job starts. But we can address this issue from multiple perspectives using the insights provided in this document [1]. [1] https://nightlies.apache.org/flink/flink-docs-release-1.19/
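
One way to express the tuning side of this in code, assuming the issue is checkpoints failing while tasks are still initializing: raise the tolerable failure count and the checkpoint timeout via CheckpointConfig. The numbers below are arbitrary examples, not recommendations:

    import org.apache.flink.streaming.api.environment.CheckpointConfig;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class TolerantCheckpointing {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(60_000);  // checkpoint every 60 s

            CheckpointConfig cc = env.getCheckpointConfig();
            cc.setTolerableCheckpointFailureNumber(10);  // survive the first failed checkpoints after a restart
            cc.setCheckpointTimeout(30 * 60 * 1000L);    // allow slow checkpoints while state is still being restored
        }
    }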

Re: how to reduce read times when many jobs read the same kafka topic?

2024-05-14 Thread longfeng Xu
Thanks for your explanation. I'll give it a try. :) Sachin Mittal wrote on Wed, 15 May 2024 at 10:39: > Each separate job would have its own consumer group hence they will read > independently from the same topic and when checkpointing they will commit > their own offsets. > So if any job fails, it will not a

Re: Job is Failing every 2hrs - Out of Memory Exception

2024-05-14 Thread Biao Geng
Hi Madan, The error shows that the JVM cannot create new threads. One common reason is that the physical machine is not configured with a large enough thread limit (check this SO fo
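
As a diagnostic aid (not from the reply itself), the JVM's ThreadMXBean can show whether the TaskManager's thread count keeps growing until it hits the OS limit:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class ThreadCountProbe {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            System.out.printf("live=%d peak=%d totalStarted=%d%n",
                    threads.getThreadCount(),
                    threads.getPeakThreadCount(),
                    threads.getTotalStartedThreadCount());
        }
    }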