Carlo said:
> Martin said:
>> There is a fix in linux/metrics.c r1029 that fixes bz#180. This should
>> go into 3.0.8. The fix is relatively nice in trunk, but uglier in 3.0.x
>> due to the code duplication in the networking metrics. I see two
>> possible ways:
>>
>> a) we backport only r1029, even if it is ugly
>
> probably better to minimize the risk; there are other changes as well
> which could be cherry-picked into stable independently, like r860, or
> some that might need more testing, like r1008.

I like r860, and r1008 is #ifdef'd off by default.  It looks reasonable
for my hosts though, so I am tempted to enable it and go ahead and
get rid of my petabyte/second spikes in 3.0.

>> b) we take the current state of linux/metrics.c minus r1010 (float
>> conversion for memory metrics)

This is the option I am currently testing on a couple hosts and am
hoping to deploy in production.  Seems to be working well, but I haven't
had it running long or on many hosts.

> this will also pull in the fix from r860, but also other potentially
> destabilizing patches which are not critical bugfixes, like r986,
> r1013, r1146, r1161 and r1162.

It is hard to say whether they are needed in stable.  But I am tempted
to give it all a try.

I like the networking rewrite; thanks Martin!  I agree with your FIXME
comments that similar rewrites are needed where other files are read
multiple times.  I occasionally see 100+% spikes in my CPU graphs, and I
think if we did the same for /proc/stat, we could fix those too, as
documented in bz#131.

-twitham


_______________________________________________
Ganglia-developers mailing list
Ganglia-developers@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ganglia-developers