Hi,

       I am quite new to Hadoop and HBase, and I am having a hard time
figuring out some issues with my cluster; I am pretty sure many of you have
run into the same problems I am facing right now. I need some help
identifying the bottlenecks in my system. I have set up regular Ganglia on
the cluster (plain Ganglia only; I am not able to track Hadoop/HBase metrics
yet, but that's another issue). Which stats matter the most, and how do I go
about drawing inferences from these reports? I know that swapping is a very
important parameter to monitor. What are the other important parameters,
what is their significance, and roughly what should their ideal values be?
I am mainly thinking of cached memory, CPU load, buffered memory, total
memory, network usage, etc., plus any other parameter you have found useful
in situations like this. I think this would be very helpful for many people
trying to track down similar issues. Thanks a ton,
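
For reference, this is roughly the Ganglia sink configuration I understand
is needed to get the Hadoop/HBase metrics into Ganglia (the gmetad host and
port below are placeholders, and I have not verified this on my cluster
yet):

  # hadoop-metrics2.properties (Hadoop daemons, metrics2 framework)
  *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
  *.sink.ganglia.period=10
  namenode.sink.ganglia.servers=gmetad-host:8649
  datanode.sink.ganglia.servers=gmetad-host:8649

  # hadoop-metrics.properties (HBase, older metrics1 framework)
  hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
  hbase.period=10
  hbase.servers=gmetad-host:8649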

hari
