Hi Madan,
Perhaps you can filter out inactive topics in the client first and then
pass the filtered list of topics to KafkaConsumer.
Best,
Feng
On Tue, Nov 7, 2023 at 10:42 AM Madan D via user
wrote:
> Hello Hang/Lee,
> Thanks!
> In my use case we listen to multiple topics, but in a few cases
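Feng's suggestion above can be sketched in plain Java (the topic names are made up, and the lookup of live topics, e.g. via Kafka's AdminClient, is left out):

```java
import java.util.*;
import java.util.stream.*;

public class TopicFilter {
    // Keep only the configured topics that are still live on the broker,
    // so inactive topics never reach the KafkaConsumer/KafkaSource.
    static List<String> activeTopics(List<String> configured, Set<String> live) {
        return configured.stream()
                .filter(live::contains)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> configured = Arrays.asList("orders", "payments", "audit");
        Set<String> live = new HashSet<>(Arrays.asList("orders", "payments"));
        System.out.println(activeTopics(configured, live)); // prints [orders, payments]
    }
}
```

The filtered list would then be passed to the consumer in place of the full configured list.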
Hello Hang/Lee, thanks! In my use case we listen to multiple topics, but in a few cases one of the topics may become inactive if the producer decides to shut it down, while the other topics are still receiving data. What we observe is that if one of the topics is becoming inactive
Hi, Madan.
This error suggests there is a problem when the consumer tries
to read the topic metadata. If you use the same source for these topics,
the Kafka connector cannot skip one of them. As you say, you would need to
modify the connector's default behavior.
Maybe you should read the
Hi Madan,
Do you mean you want to restart only the failed tasks, rather than
the entire pipeline region? As far as I know, Flink currently does not
support task-level restarts; it restarts the whole pipelined region.
Best,
Junrui
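For reference, the failover scope and the restart attempts are configured in flink-conf.yaml along these lines (the values below are illustrative, not recommendations):

```
jobmanager.execution.failover-strategy: region
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 10 s
```

With region failover, only the tasks in the affected pipelined region are restarted, which is the finest restart granularity currently available.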
Madan D via user wrote on Wed, Oct 11, 2023 at 12:37:
> Hello
Hello Team, We are running a Flink pipeline that consumes data from multiple
topics, but we recently found that if one topic has issues with its
partitions, etc., the whole Flink pipeline fails, which affects the other
topics. Is there a way we can make the Flink pipeline keep
Hi, Kenan.
Maybe you should set the `client.id.prefix` to avoid the conflict.
Best,
Hang
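Setting that prefix on the KafkaSource builder might look roughly like this (a sketch, not taken from the original thread; the broker address, topic, group, and prefix value are all made up):

```java
// Requires the Flink Kafka connector on the classpath.
KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("broker:9092")       // assumed address
        .setTopics("usersignals")
        .setGroupId("shared-group")
        // A prefix unique to this job keeps client.id values from
        // colliding across jobs that consume with the same group_id.
        .setClientIdPrefix("job-a")
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();
```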
liu ron wrote on Mon, Jul 31, 2023 at 19:36:
> Hi, Kenan
>
> After studying the source code and searching Google for related
> information, I think this is caused by a duplicate client_id [1]; you
> can check if
Hi, Kenan
After studying the source code and searching Google for related
information, I think this is caused by a duplicate client_id [1]; you can
check whether there are other jobs using the same group_id to consume this
topic. group_id is used in Flink to assemble the client_id [2]; if there are
Any help with the exception below is appreciated.
My KafkaSource code is also below; the parallelism for this task is 16.
KafkaSource<String> sourceStationsPeriodic = KafkaSource.<String>builder()
        .setBootstrapServers(parameter.get(
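For context, a complete builder chain of this shape usually looks something like the following (a sketch only; the topic, group, and server names are assumptions, not recovered from the truncated snippet above):

```java
// Requires flink-connector-kafka on the classpath.
KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("broker:9092")                  // assumed
        .setTopics("stations-periodic")                      // assumed
        .setGroupId("stations-consumer")                     // assumed
        .setStartingOffsets(OffsetsInitializer.earliest())
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();

env.fromSource(source, WatermarkStrategy.noWatermarks(), "stations")
   .setParallelism(16); // the parallelism mentioned above
```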
Flink experts, hello. When our Flink job sinks to Kafka we occasionally hit
the exception below; it does not affect the running job or the results, but
is thrown from time to time. I'd like to ask you for some ideas.
2019-02-20 10:08:46.889 +0800
[Source: rn -> Flat Map -> async wait operator -> async wait operator -> Sink:
Unnamed (17/20)] ERROR [org.apache.flink.streaming.runtime.tasks.StreamTask]
[StreamTask.java:481] - Error during
Hi Shannon,
Some questions:
Which Flink version are you using?
Can you provide me with some more logs, in particular the log entries
before this event from the Kafka connector?
Also, is it possible that the Kafka broker was in an erroneous state?
Did the error happen after weeks of data
Does anyone have a guess what might cause this exception?
java.lang.RuntimeException: Unable to find a leader for partitions:
[FetchPartition {topic=usersignals, partition=1, offset=2825838}]
at