Hi Ben,

Flink's Kafka consumers track their progress independently of any worker.
Each consumer keeps track of its reading offset itself (committing progress
back to Kafka is optional and only needed if you want to monitor progress
through Kafka's metrics).
As soon as a consumer reads and forwards an event, that event is considered
read. This means the progress of the downstream workers does not influence
the progress tracking at all.

In case of a topic with a single partition, you can use a consumer with
parallelism 1 and connect a worker task with a higher parallelism to it.
The single consumer task will send the read events round-robin to the
worker tasks.
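For illustration, here is a minimal job sketch of that setup, assuming the
DataStream API with the FlinkKafkaConsumer connector; the topic name,
bootstrap servers, group id, and the REST-call stub are placeholders, not
part of your actual setup:

```java
import java.util.Properties;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class SinglePartitionFanOut {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "my-group");                // placeholder

        env.addSource(new FlinkKafkaConsumer<>(
                    "single-partition-topic",    // placeholder topic
                    new SimpleStringSchema(), props))
            .setParallelism(1)  // one source task reads the single partition
            .rebalance()        // distribute events round-robin downstream
            .map(new MapFunction<String, String>() {
                @Override
                public String map(String value) {
                    return callRestService(value);
                }
            })
            .setParallelism(4); // several worker tasks process in parallel

        env.execute("single-partition fan-out");
    }

    // Stand-in for the (possibly slow) external call.
    private static String callRestService(String value) {
        return value;
    }
}
```

The explicit rebalance() makes the round-robin distribution visible; when
parallelism changes between two operators, Flink falls back to the same
round-robin redistribution by default.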

Best, Fabian

On Fri, Jun 21, 2019 at 05:48, wang xuchen <ben....@gmail.com> wrote:

>
> Dear Flink experts,
>
> I am experimenting with Flink for a use case with tight latency
> requirements.
>
> A Stack Overflow article suggests that I can use setParallelism(n) to
> process a Kafka partition in a multi-threaded way. My understanding is
> that there is still one Kafka consumer per partition, but by using
> setParallelism, I can spin up multiple worker threads to process the
> messages read from the consumer.
>
> And according to Fabian`s comments in this link:
>
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Does-Flink-Kafka-connector-has-max-pending-offsets-concept-td28119.html
> Flink is able to manage the offset correctly (commit in the right order).
>
> Here is my question: let's say there is a Kafka topic with only one
> partition, and I set up a consumer with setParallelism(2). Hypothetically,
> the worker threads call out to a REST service which may get slow or stuck
> periodically. I want to make sure that the consumer overall keeps making
> progress even in the face of a 'slow worker'. In other words, I'd like the
> fast worker to accumulate multiple pending but uncommitted offsets even
> while the other worker is stuck. Is there such a knob to tune in Flink?
>
> From my own experiment, I used the Kafka consumer group tool to monitor
> the offset lag. As soon as one worker thread gets stuck, the other cannot
> make any progress either. I really want the fast worker to still make
> progress to a certain extent. For this use case, exactly-once processing
> is not required.
>
> Thanks for helping.
> Ben
>
>
>
