[ https://issues.apache.org/jira/browse/KAFKA-3811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15324794#comment-15324794 ]
Jay Kreps commented on KAFKA-3811:
----------------------------------
[~aartigupta] Two ways: (1) with hprof, which is not very reliable, and (2) with
some before-and-after testing in which I just deleted the per-message metrics. I
do think hprof may overcount the impact of the metrics due to the way it
measures. If we have some simple benchmark code for Streams, a good way to
measure is to just delete those metrics and see how throughput changes.
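For reference, a minimal sketch of that before/after measurement using the
client-side org.apache.kafka.common.metrics API. This is not the Streams
benchmark Jay mentions; the class name, iteration count, and the synthetic
per-record work are stand-ins chosen for illustration:

    import org.apache.kafka.common.metrics.Metrics;
    import org.apache.kafka.common.metrics.Sensor;
    import org.apache.kafka.common.metrics.stats.Avg;
    import org.apache.kafka.common.metrics.stats.Max;

    public class PerRecordMetricBench {
        private static final int ITERATIONS = 10_000_000;

        public static void main(String[] args) {
            Metrics metrics = new Metrics();
            Sensor latency = metrics.sensor("process-latency");
            latency.add(metrics.metricName("process-latency-avg", "bench"), new Avg());
            latency.add(metrics.metricName("process-latency-max", "bench"), new Max());

            run(latency, true); // warm-up pass, result discarded
            long withMetrics = run(latency, true);
            long withoutMetrics = run(latency, false);
            System.out.printf("with per-record metrics:    %d ms%n", withMetrics);
            System.out.printf("without per-record metrics: %d ms%n", withoutMetrics);
            metrics.close();
        }

        // Times a tight loop of synthetic per-record work, optionally
        // recording a sensor value per record (the cost under discussion).
        private static long run(Sensor sensor, boolean recordMetrics) {
            long start = System.currentTimeMillis();
            long blackhole = 0;
            for (int i = 0; i < ITERATIONS; i++) {
                blackhole += i ^ (i >>> 3); // stand-in for real record processing
                if (recordMetrics)
                    sensor.record(1.0);
            }
            if (blackhole == 42)
                System.out.println(); // keep the loop from being optimized away
            return System.currentTimeMillis() - start;
        }
    }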
> Introduce Kafka Streams metrics recording levels
> ------------------------------------------------
>
> Key: KAFKA-3811
> URL: https://issues.apache.org/jira/browse/KAFKA-3811
> Project: Kafka
> Issue Type: Improvement
> Components: streams
> Reporter: Greg Fodor
> Assignee: aarti gupta
>
> Follow-up from the discussions here:
> https://github.com/apache/kafka/pull/1447
> https://issues.apache.org/jira/browse/KAFKA-3769
> The proposal is to introduce configuration to control the granularity/volume
> of metrics emitted by Kafka Streams jobs, since the per-record metrics
> introduce non-trivial overhead and are possibly less useful once a job has
> been optimized.
> Proposal from guozhangwang:
> level0 (stream-thread global): per-record process / punctuate latency, commit
> latency, poll latency, etc.
> level1 (per processor node, and per state store): IO latency, per-record ..
> latency, forward throughput, etc.
> And by default we only turn on level0.
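To make the shape of that knob concrete, a purely illustrative sketch of how an
application might opt in. The config key "metrics.recording.level" and the
level values are assumptions based on this proposal, not a committed API:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class RecordingLevelExample {
        public static Properties streamsProps() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "metrics-levels-demo");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Hypothetical key/value pending this proposal: level0 keeps only
            // the stream-thread-global latencies; level1 would add the
            // per-processor-node and per-state-store metrics and their
            // per-record overhead.
            props.put("metrics.recording.level", "level0");
            return props;
        }
    }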