1. Yes, the already-running job would fail.
2. Yes, any newly started job would fail until local disk space is made
   available.
3. If there are too many task failures on a particular node, that node
   would be blacklisted after a few failures.
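As a related safeguard (not covered above), HDFS can be told to keep some local disk space in reserve so a DataNode stops accepting new blocks before the disk fills completely. A minimal sketch for hdfs-site.xml, assuming the 0.20-era property name dfs.datanode.du.reserved; the 10 GB value is illustrative:

```xml
<!-- hdfs-site.xml: reserve space per volume for non-HDFS use.
     Property name as in Hadoop 0.20.x; value is in bytes (10 GB here). -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value>
</property>
```

This does not protect the MapReduce local directories (mapred.local.dir), which also need free space for intermediate task output, so some headroom there is still required.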

Is that slave node being more utilized due to a particular job, or is it a
general phenomenon?

Take a look at the Rebalancer section of the HDFS user guide:
http://hadoop.apache.org/common/docs/r0.20.2/hdfs_user_guide.html#Rebalancer
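For reference, the rebalancer described on that page is started from the command line on the cluster. A minimal sketch, assuming the 0.20-era helper scripts; the threshold value is illustrative:

```shell
# Redistribute blocks until every DataNode's disk usage is within 10
# percentage points of the cluster-wide average utilization.
bin/start-balancer.sh -threshold 10

# The balancer can be stopped safely at any time; blocks already
# moved stay where they are.
bin/stop-balancer.sh
```

The bandwidth the balancer may use for block moves is capped by the dfs.balance.bandwidthPerSec property in hdfs-site.xml, so rebalancing a heavily used node can take a while.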

Thanks,

Prashant


On Sun, Feb 12, 2012 at 9:36 PM, jagaran das <jagaran_...@yahoo.co.in>wrote:

>
>
>
> ----- Forwarded Message -----
> From: jagaran das <jagaran_...@yahoo.co.in>
> To: "common-u...@hadoop.apache.org" <common-u...@hadoop.apache.org>
> Sent: Sunday, 12 February 2012 9:33 PM
> Subject: Hadoop Cluster Question
>
>
> Hi,
> A. If one of the slave nodes' local disk space is full in a cluster:
>
> 1. Would an already-running Pig job fail?
> 2. Would any newly started Pig job fail?
> 3. How would the Hadoop cluster behave? Would that be a dead node?
>
> B. In our production cluster we are seeing that one of the slave nodes is
> more utilized than the others.
> By utilization I mean the %DFS is always higher on it. How can we balance
> it?
>
> Thanks,
> Jagaran
