Hi Ion,

Which version of Hadoop are you using? The problem you reported, where
safeModeTime and fsImageLoadTime keep growing, was fixed in 0.18 (and in trunk).

Thanks,
Lohit

----- Original Message ----
From: Ion Badita <[EMAIL PROTECTED]>
To: core-user@hadoop.apache.org
Sent: Friday, May 30, 2008 8:10:52 AM
Subject: Re: About Metrics update

Hi,

I found (because of the Metrics behavior reported in my previous
e-mail) some errors in the metrics reported by NameNodeMetrics:
safeModeTime and fsImageLoadTime keep growing, even though they should
stay constant over time. Both metrics use MetricsIntValue for their
values. In MetricsIntValue.pushMetric(), if the "changed" field is set
to true the value is "published" into the MetricsRecord; otherwise the
method does nothing:

public synchronized void pushMetric(final MetricsRecord mr) {
    if (changed)
      mr.incrMetric(name, value);
    changed = false;
}
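
For reference, the setters on MetricsIntValue just store the new value
and flip "changed" to true; roughly the following (quoted from memory,
so the exact code may differ slightly):

public synchronized void set(final int newValue) {
    value = newValue;   // store the new reading
    changed = true;     // mark it for the next push
}

So each set() should translate into exactly one incrMetric() on the
next push, after which "changed" is reset to false.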

The problem is in the AbstractMetricsContext.update() method: the
metricUpdates map is not cleared after being merged into the record's
internal data.

Ion



Ion Badita wrote:
> Hi,
>
> I looked over the class
> org.apache.hadoop.metrics.spi.AbstractMetricsContext and I have a
> question: why is the metricUpdates Map in update(MetricsRecordImpl
> record) not cleared after the updates are merged into metricMap?
> Because of this, on every update() the "old" increments are merged
> into metricMap again. Is this the right behavior?
> With the current implementation it is not possible to increment only
> one metric in the record without also affecting other metrics that
> are incremented rarely.
>
>
> Thanks
> Ion
