Arun,

I don't think you'll hear a fixed number. That said, I have seen CPU pegged
at 95% during jobs with the cluster working perfectly fine. On the slaves,
if nothing else is running, Hadoop only has the TaskTracker and DataNode
daemons, and those two are relatively lightweight in terms of CPU for the
most part. So you can afford to let your tasks take up a high percentage.
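
If you want something that flags a node automatically, a minimal sketch
along these lines could work. It assumes a Sun/Oracle-style JDK that
exposes com.sun.management.OperatingSystemMXBean, and the 90% threshold is
only an illustrative value, not a recommendation:

import com.sun.management.OperatingSystemMXBean;
import java.lang.management.ManagementFactory;

public class CpuOverloadCheck {
    // Illustrative threshold only -- as noted above, there is no fixed
    // number that universally means "overloaded".
    private static final double OVERLOAD_THRESHOLD = 0.90;

    public static void main(String[] args) throws InterruptedException {
        OperatingSystemMXBean os = (OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();

        // Sample a few times; getSystemCpuLoad() returns a value in
        // [0.0, 1.0], or a negative value if no sample is available yet.
        for (int i = 0; i < 5; i++) {
            double load = os.getSystemCpuLoad();
            if (load >= 0) {
                String status = load >= OVERLOAD_THRESHOLD
                        ? "Overloaded" : "Normal";
                System.out.printf("CPU utilization: %.0f%% -> %s%n",
                        load * 100, status);
            }
            Thread.sleep(1000);
        }
    }
}

In practice you'd probably want to average over a window rather than act
on a single sample, since short spikes during a job are normal.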

Hope that helps.

-Amandeep

On Tue, Jan 17, 2012 at 2:16 PM, ArunKumar <arunk...@gmail.com> wrote:

> Hi Guys!
>
> When we get the CPU utilization value of a node in a Hadoop cluster, what
> percentage can be considered overloaded?
> For example:
>
>        CPU utilization        Node Status
>        85%                    Overloaded
>        20%                    Normal
>
>
> Arun
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/How-to-find-out-whether-a-node-is-Overloaded-from-Cpu-utilization-tp3665289p3665289.html
> Sent from the Hadoop lucene-users mailing list archive at Nabble.com.
>
