Hi,

We recently deployed Ganglia on a large cluster (2500+ nodes) and found that we 
hit a limit in the memory summing: as soon as the total passed 4 TB, the counter 
wrapped around to 0 and started summing again.
This is not critical, since we now read the value modulo 4 TB, but how about 
storing that kind of info in MB in future releases (I think it is currently 
stored in KB)? That would move the limit from 4 TB to roughly 4 PB.
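
For illustration, here is a minimal C sketch (not Ganglia's code; the node count 
and the 2 GB-per-node figure are made up) of why a 32-bit total kept in KB wraps 
at 4 TB, while the same total kept in MB stays well within range:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    const uint64_t nodes = 2500;                          /* hypothetical cluster size */
    const uint64_t mem_kb_per_node = 2ULL * 1024 * 1024;  /* 2 GB per node, in KB */

    uint32_t sum_kb = 0;  /* 32-bit total in KB: wraps at 2^32 KB = 4 TB */
    uint64_t sum_mb = 0;  /* total kept in MB; even a 32-bit MB total
                             would only wrap at 2^32 MB = 4 PB */

    for (uint64_t i = 0; i < nodes; i++) {
        sum_kb += (uint32_t)mem_kb_per_node;  /* silently wraps past 4 TB */
        sum_mb += mem_kb_per_node / 1024;     /* coarser unit, no wrap here */
    }

    printf("32-bit sum in KB: %" PRIu32 " KB (wrapped)\n", sum_kb);
    printf("sum in MB       : %" PRIu64 " MB (= %" PRIu64 " GB)\n",
           sum_mb, sum_mb / 1024);
    return 0;
}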

Just my 2 cents...

Regards,
Laurent
