Thanks for the KIP!

I personally think, that it might be sufficient to just report offsets
of assigned tasks. Similar to metrics what are also reported only
locally, users can roll-up/aggregate the information across instances
manually.

What I also don't understand is, what "idling" means?


-Matthias

On 2/22/21 3:11 PM, Boyang Chen wrote:
> Thanks Walker for the proposed KIP! This should definitely empower KStream
> users with better visibility.
> 
> Meanwhile I got a couple of questions/suggestions:
> 
> 
> 1. typo "repost/report" in the motivation section.
> 
> 2. What offsets do we report when the task is under restoration or
> rebalancing?
> 
> 3. IIUC, we should clearly state that our reported metrics are based off
> locally assigned tasks for each instance.
> 
> 4. In the meantime, what’s our strategy to report tasks that are not local
> to the instance? Users would normally try to monitor all the possible
> tasks, and it’s unfortunate we couldn’t determine whether we have lost
> tasks. My brainstorming was whether it makes sense for the leader instance
> to report the task progress as -1 for all “supposed to be running” tasks,
> so that on the metrics collector side it could catch any missing tasks.
> 
> 5. It seems not clear how users should use `isTaskIdling`. Why not report a
> map/set for idling tasks just as what we did for committed offsets?
> 
> 6. Why do we use TopicPartition instead of TaskId as the key in the
> returned map?
> 7. Could we include some details in where we got the commit offsets for
> each task? Is it through consumer offset fetch, or the stream processing
> progress based on the records fetched?
> 
> 
> On Mon, Feb 22, 2021 at 3:00 PM Walker Carlson <wcarl...@confluent.io>
> wrote:
> 
>> Hello all,
>>
>> I would like to start discussion on KIP-715. This kip aims to make it
>> easier to monitor Kafka Streams progress by exposing the committed offset
>> in a similar way as the consumer client does.
>>
>> Here is the KIP: https://cwiki.apache.org/confluence/x/aRRRCg
>>
>> Best,
>> Walker
>>
> 

Reply via email to