[ https://issues.apache.org/jira/browse/KAFKA-3811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15324755#comment-15324755 ]
aarti gupta commented on KAFKA-3811:
------------------------------------

[~gfodor] Can you please share and outline the profiling and analysis you did around the metrics overhead?

I just ran the 'WordProcessorDemo' with actual data (continuously being published) on the input stream and profiled the streams example using both Java Mission Control Flight Recorder and the YourKit profiler (evaluation version), but I see only a 5% CPU overhead for the entire process. How are you isolating the time taken to stamp metrics?

> Introduce Kafka Streams metrics recording levels
> ------------------------------------------------
>
>                 Key: KAFKA-3811
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3811
>             Project: Kafka
>          Issue Type: Improvement
>          Components: streams
>            Reporter: Greg Fodor
>            Assignee: aarti gupta
>
> Follow-up from the discussions here:
> https://github.com/apache/kafka/pull/1447
> https://issues.apache.org/jira/browse/KAFKA-3769
> The proposal is to introduce configuration to control the granularity/volume of metrics emitted by Kafka Streams jobs, since the per-record metrics introduce non-trivial overhead and are possibly less useful once a job has been optimized.
> Proposal from guozhangwang:
> level0 (stream thread global): per-record process / punctuate latency, commit latency, poll latency, etc.
> level1 (per processor node, and per state store): IO latency, per-record .. latency, forward throughput, etc.
> And by default we only turn on level0.
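As a side note on the proposal quoted above, here is a minimal sketch of how a recording-level gate around per-record sensors could look. The names (MetricsLevel, LevelGatedSensor, the LEVEL0/LEVEL1 ordering) are illustrative assumptions for this sketch, not existing Kafka APIs:

{code:java}
// Hypothetical sketch of a metrics recording-level gate, illustrating the
// level0/level1 idea above. Names are illustrative, not actual Kafka classes.
import java.util.concurrent.atomic.LongAdder;

public class LevelGatedSensor {

    /** Granularity levels from the proposal: LEVEL0 is stream-thread global,
     *  LEVEL1 adds per-processor-node and per-state-store metrics. */
    public enum MetricsLevel {
        LEVEL0, LEVEL1;

        /** A sensor defined at 'sensorLevel' records only if the configured level includes it. */
        public boolean includes(MetricsLevel sensorLevel) {
            return this.ordinal() >= sensorLevel.ordinal();
        }
    }

    private final MetricsLevel configuredLevel;
    private final MetricsLevel sensorLevel;
    private final LongAdder totalLatencyNs = new LongAdder();
    private final LongAdder count = new LongAdder();

    public LevelGatedSensor(MetricsLevel configuredLevel, MetricsLevel sensorLevel) {
        this.configuredLevel = configuredLevel;
        this.sensorLevel = sensorLevel;
    }

    /** Record a latency sample only when the configured level enables this sensor,
     *  so per-record LEVEL1 sensors become a cheap no-op when only LEVEL0 is on. */
    public void record(long latencyNs) {
        if (!configuredLevel.includes(sensorLevel)) {
            return;
        }
        totalLatencyNs.add(latencyNs);
        count.increment();
    }

    public double avgLatencyNs() {
        long n = count.sum();
        return n == 0 ? 0.0 : (double) totalLatencyNs.sum() / n;
    }

    public static void main(String[] args) {
        // With only LEVEL0 configured (the proposed default), a per-processor-node
        // LEVEL1 sensor skips the recording work entirely.
        LevelGatedSensor processNodeLatency =
            new LevelGatedSensor(MetricsLevel.LEVEL0, MetricsLevel.LEVEL1);
        processNodeLatency.record(12_345L);
        System.out.println("avg ns = " + processNodeLatency.avgLatencyNs()); // 0.0, gated off
    }
}
{code}

With a gate like this, the only per-record cost of a disabled sensor is a single branch, which is the kind of overhead reduction the proposal is aiming for.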