[ 
https://issues.apache.org/jira/browse/KAFKA-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566770#comment-14566770
 ] 

Tim Brooks commented on KAFKA-2101:
-----------------------------------

As I mentioned earlier, this overlaps somewhat with the changes in KAFKA-2102. 
However, if that patch is not accepted, the patch I submitted here will resolve 
this ticket.

A second long is introduced to track the last successful update. It is used in 
the metrics, and it is used when deciding when the next update is due based on 
metadataExpireMs. Right now, after a failed update the backoff lasts for the 
entire metadataExpireMs rather than just the refreshBackoffMs time period. 
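To illustrate the idea, here is a minimal sketch (not the actual Kafka code; the class and field names are hypothetical) of keeping two timestamps: one for any refresh attempt, which drives the backoff, and one for successful refreshes only, which drives the metadata-age metric and the metadataExpireMs expiry:

```java
// Hypothetical sketch of the two-timestamp approach described above.
// A failed update only delays the next attempt by refreshBackoffMs, while
// the metadata-age metric keeps measuring from the last *successful* update.
public class MetadataTimestamps {
    private final long refreshBackoffMs;
    private final long metadataExpireMs;
    private long lastRefreshMs = 0;            // any attempt, success or failure
    private long lastSuccessfulRefreshMs = 0;  // successful updates only

    public MetadataTimestamps(long refreshBackoffMs, long metadataExpireMs) {
        this.refreshBackoffMs = refreshBackoffMs;
        this.metadataExpireMs = metadataExpireMs;
    }

    public void recordUpdate(long nowMs, boolean success) {
        lastRefreshMs = nowMs;                 // always drives the backoff
        if (success)
            lastSuccessfulRefreshMs = nowMs;   // only success drives the metric
    }

    // Value the metadata-age metric would report: age of the current metadata,
    // unaffected by failed updates.
    public long metadataAgeMs(long nowMs) {
        return nowMs - lastSuccessfulRefreshMs;
    }

    // Time until the next update attempt: expiry counts from the last
    // successful update, backoff from the last attempt of any kind.
    public long timeToNextUpdate(long nowMs) {
        long timeToExpire = Math.max(lastSuccessfulRefreshMs + metadataExpireMs - nowMs, 0);
        long timeToBackoff = Math.max(lastRefreshMs + refreshBackoffMs - nowMs, 0);
        return Math.max(timeToExpire, timeToBackoff);
    }
}
```

With this split, a failed refresh at time 500 does not push the expiry-based refresh out by a full metadataExpireMs; it only imposes the short refreshBackoffMs before the next attempt.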

> Metric metadata-age is reset on a failed update
> -----------------------------------------------
>
>                 Key: KAFKA-2101
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2101
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Tim Brooks
>            Assignee: Tim Brooks
>         Attachments: KAFKA-2101.patch
>
>
> In org.apache.kafka.clients.Metadata there is a lastUpdate() method that 
> returns the time the metadata was last updated. This is only called by the 
> metadata-age metric. 
> However, lastRefreshMs is updated on a failed update (when the 
> MetadataResponse has no valid nodes). This is confusing since the metric's 
> name suggests that it is a true reflection of the age of the current 
> metadata, yet the age might be reset by a failed update. 
> Additionally, lastRefreshMs is not reset on a failed update due to no node 
> being available. This seems slightly inconsistent, since one failure 
> condition resets the metric but the other does not, especially since both 
> failure conditions trigger the backoff (for the next attempt).
> I have not implemented a patch yet, because I am unsure what expected 
> behavior is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
