The problem I've run into more often than memory is the system CPU
time getting out of control. My guess is that the threshold for what
counts as overloaded will depend on your system setup, what you're
running on it, and what your jobs are bound by.
On Tue, Jan 17, 2012 at 22:06,
Hi,
Memory loading in most Linux distros is not readily available from
top or the usual suspects; in fact, looking at top is rather
misleading. Linux can run just fine with committed memory greater than
100%. What you want to look at is the percentage of committed memory
relative to the total memory.
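
If it helps, here's a minimal sketch (assuming a Linux box with
/proc/meminfo and Python on hand) of pulling that number directly
instead of eyeballing top. Committed_AS and MemTotal are standard
/proc/meminfo fields; the script just takes their ratio:

#!/usr/bin/env python
# Sketch: report committed memory as a percentage of physical RAM.
# Values in /proc/meminfo are reported in kB.
def meminfo():
    info = {}
    with open('/proc/meminfo') as f:
        for line in f:
            key, value = line.split(':', 1)
            info[key] = int(value.strip().split()[0])
    return info

m = meminfo()
pct = 100.0 * m['Committed_AS'] / m['MemTotal']
print('Committed_AS: %d kB of %d kB total (%.1f%% committed)' % (
    m['Committed_AS'], m['MemTotal'], pct))

CommitLimit is in the same file if you'd rather compare against the
kernel's overcommit limit instead of raw RAM.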
Hi,
The significant factor in cluster loading is memory, not CPU. Hadoop views the
cluster only with respect to memory and cares not about CPU utilization or disk
saturation. If you run too many TaskTrackers, you risk memory overcommit where
the Linux OOM killer will come out of the closet and start killing processes.
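
A back-of-the-envelope check I'd do is to add up the worst-case child
JVM footprint per slave. This is only a sketch: the slot counts and heap
come from the Hadoop 1.x-era properties mapred.tasktracker.map.tasks.maximum,
mapred.tasktracker.reduce.tasks.maximum, and the -Xmx in
mapred.child.java.opts, and all the numbers below are placeholder guesses,
not recommendations:

# Worst-case memory estimate for one slave node (rough sketch, not exact RSS).
map_slots = 8          # mapred.tasktracker.map.tasks.maximum
reduce_slots = 4       # mapred.tasktracker.reduce.tasks.maximum
child_heap_mb = 512    # -Xmx from mapred.child.java.opts
tasktracker_mb = 1000  # TaskTracker daemon heap (HADOOP_HEAPSIZE)
datanode_mb = 1000     # DataNode daemon heap (HADOOP_HEAPSIZE)
os_and_cache_mb = 2000 # headroom for the OS and page cache

worst_case = ((map_slots + reduce_slots) * child_heap_mb
              + tasktracker_mb + datanode_mb + os_and_cache_mb)
print('Worst-case footprint: %d MB' % worst_case)

If that total comes out above physical RAM, the OOM scenario above is
exactly what you're lining up for.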
Guys!
So can I say that if memory usage is more than, say, 90%, the node is
overloaded?
If so, what should that threshold percentage be, or how can we find it?
Arun
Arun,
I don't think you'll hear a fixed number. Having said that, I have seen CPU
pegged at 95% during jobs and the cluster working perfectly fine. On
the slaves, if you have nothing else going on, Hadoop only has TaskTrackers
and DataNodes. Those two daemons are relatively lightweight in terms of
resource consumption.
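
If you want to verify that on your own slaves, here's a quick sketch
(assuming Linux /proc and that the daemons are identifiable by class
name on their command lines) that reads VmRSS for the TaskTracker and
DataNode JVMs:

# Sketch: print resident memory (VmRSS) of the TaskTracker and DataNode JVMs
# by scanning /proc/<pid>/cmdline for their class names.
import os

targets = ('TaskTracker', 'DataNode')
for pid in filter(str.isdigit, os.listdir('/proc')):
    try:
        with open('/proc/%s/cmdline' % pid) as f:
            cmd = f.read().replace('\0', ' ')
        if any(t in cmd for t in targets):
            with open('/proc/%s/status' % pid) as f:
                for line in f:
                    if line.startswith('VmRSS'):
                        print('%s %s' % (pid, line.strip()))
    except IOError:
        pass  # process went away or is not readable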