[ https://issues.apache.org/jira/browse/KAFKA-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14369623#comment-14369623 ]

Jun Rao commented on KAFKA-1930:
--------------------------------

Another thing that we need to think through is the histogram support in the new 
metrics package. So far, we haven't been using histograms in the clients because 
(1) the histogram needs to know the range of the values up front and (2) it 
needs a reasonable BinScheme to be picked. For example, on the broker side, we 
use a histogram to measure the request time. The range of the time can be large 
since it depends on the request timeout set by different clients. Typically, 
those values will be small, say just a few milliseconds. However, sometimes they 
can be really large, say tens of seconds. So, it's not clear whether the current 
histogram support in the new metrics package can measure this effectively.
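
As a rough sketch (assuming the Percentiles stat in 
org.apache.kafka.common.metrics.stats; exact constructor signatures may differ 
across versions), this is roughly what configuring such a histogram looks like 
today. Both the value range and the bucket sizing have to be fixed when the 
stat is created:

    import org.apache.kafka.common.MetricName;
    import org.apache.kafka.common.metrics.Metrics;
    import org.apache.kafka.common.metrics.Sensor;
    import org.apache.kafka.common.metrics.stats.Percentile;
    import org.apache.kafka.common.metrics.stats.Percentiles;
    import org.apache.kafka.common.metrics.stats.Percentiles.BucketSizing;

    public class RequestTimeHistogramSketch {
        public static void main(String[] args) {
            Metrics metrics = new Metrics();
            Sensor sensor = metrics.sensor("request-time");

            // Both the expected value range and the bin scheme (constant vs.
            // linear bucket widths) must be chosen when the stat is created.
            sensor.add(new Percentiles(
                4000,                 // memory budget in bytes for the buckets
                0.0,                  // min expected value (ms)
                30000.0,              // max expected value (ms), guessed up front
                BucketSizing.LINEAR,  // bin scheme choice
                new Percentile(new MetricName("request-time-p99", "requests"), 99.0),
                new Percentile(new MetricName("request-time-p999", "requests"), 99.9)));

            sensor.record(3.0);       // a typical few-millisecond request
            sensor.record(25000.0);   // an occasional very slow request
        }
    }

The fixed memory budget is the appeal of this approach, but picking the max and 
the bin scheme well for values that span milliseconds to tens of seconds is 
exactly the difficulty described above.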

The histogram in Coda Hale metrics, by contrast, uses a variant of reservoir 
sampling. It works reasonably well with data of different ranges and 
distributions. The downside is that it can potentially use more memory.
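
For comparison, a minimal sketch of the reservoir-based approach (written 
against the 3.x com.codahale.metrics API for illustration; the broker today is 
on the older com.yammer.metrics 2.x package, but the idea is the same). Nothing 
about the value range or bucketing needs to be declared up front:

    import com.codahale.metrics.ExponentiallyDecayingReservoir;
    import com.codahale.metrics.Histogram;
    import com.codahale.metrics.Snapshot;

    public class ReservoirHistogramSketch {
        public static void main(String[] args) {
            // No value range or bin scheme is supplied; the reservoir keeps a
            // bounded (1028 entries by default), recency-biased sample of the
            // raw values, which is where the extra memory goes.
            Histogram requestTime = new Histogram(new ExponentiallyDecayingReservoir());

            requestTime.update(3);      // typical few-millisecond request
            requestTime.update(25000);  // occasional tens-of-seconds outlier

            Snapshot snapshot = requestTime.getSnapshot();
            System.out.println("p99 = " + snapshot.get99thPercentile());
        }
    }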

> Move server over to new metrics library
> ---------------------------------------
>
>                 Key: KAFKA-1930
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1930
>             Project: Kafka
>          Issue Type: Improvement
>            Reporter: Jay Kreps
>            Assignee: Aditya Auradkar
>
> We are using org.apache.kafka.common.metrics on the clients, but using Coda 
> Hale metrics on the server. We should move the server over to the new metrics 
> package as well. This will help to make all our metrics self-documenting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
