Is this a MapReduce application?

MR has a concept of blacklisting nodes on which a lot of tasks fail. The configs 
that control it are (a short sketch of setting them per job follows the list):
 - yarn.app.mapreduce.am.job.node-blacklisting.enable: true by default
 - mapreduce.job.maxtaskfailures.per.tracker: default is 3, meaning a node is 
blacklisted once 3 tasks fail on it
 - yarn.app.mapreduce.am.job.node-blacklisting.ignore-threshold-node-percent: 
33% by default, meaning the blacklist is ignored if 33% of the cluster is 
already blacklisted

+Vinod

On Dec 10, 2014, at 12:59 AM, scwf <wangf...@huawei.com> wrote:

> It seems there is a blacklist in YARN: when all containers on one NM are lost, it 
> will add this NM to the blacklist? Then when will the NM come off the blacklist?
> 
> On 2014/12/10 13:39, scwf wrote:
>> Hi, all
>>   Here is my question: is there a mechanism by which, when one container exits 
>> abnormally, YARN will prefer to dispatch the replacement container on another NM?
>> 
>> We have a cluster with 3 NMs (each NM has 135g of memory) and 1 RM, and we are 
>> running a job which starts 13 containers (1 AM + 12 executor containers).
>> 
>> Each NM has 4 executor containers, and the memory configured for each executor 
>> container is 30g. Here is an interesting test: when we killed 4 containers on NM1, 
>> only 2 containers restarted on NM1; the other 2 containers were reserved on NM2 
>> and NM3.
>> 
>>   Any idea?
>> 
>> Fei.

