Containers of required size are not being allocated.

2014-11-03 Thread Smita Deshpande
Hi All, I have a single YARN application running on a cluster of 1 master node (ResourceManager) and 3 slave nodes (NodeManagers) - total memory = 24GB, total vcores = 12. I am using Hadoop 2.4.1 and the scheduler is the CapacityScheduler with DominantResourceCalculator. This application submits 10
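For context, switching the CapacityScheduler from its default (memory-only) calculator to DominantResourceCalculator, as the poster describes, is typically done in capacity-scheduler.xml. A minimal sketch of that setting (property name as of Hadoop 2.x):

```xml
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
  <!-- With this calculator the scheduler compares both memory and vcores
       when deciding allocations, rather than memory alone. -->
</property>
```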

RE: How to set job-priority on a hadoop job

2014-11-03 Thread Rohith Sharma K S
Hi Sunil, In MRv2 there is no job priority. There is an open JIRA for ApplicationPriority that is still in progress: https://issues.apache.org/jira/browse/YARN-1963 https://issues.apache.org/jira/browse/MAPREDUCE-5870 You need to wait until this feature comes up!! Thanks Regards Rohith Sharma K

Questions about Capacity scheduler behavior

2014-11-03 Thread Fabio
Hi guys, I'm posting this in the user mailing list since I got no reply on yarn-dev. I need to model the capacity scheduler's behavior as accurately as possible, and I have some questions; I hope someone can help me with this. In the following I will consider all containers to be equal for

Understanding mapreduce.admin.user.env

2014-11-03 Thread Steven Willis
I want to make sure that the native libraries installed on the NodeManagers get used by all YARN containers. I first found the mapreduce.admin.{map,reduce}.child.java.opts config property and set it to: '-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN
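For the native-library case specifically, mapreduce.admin.user.env is the property that injects environment variables into task containers. A minimal mapred-site.xml sketch, assuming the native libraries live under $HADOOP_COMMON_HOME/lib/native (the conventional Linux location; adjust the path for your install):

```xml
<property>
  <name>mapreduce.admin.user.env</name>
  <!-- Prepended to every task's environment so the JVM can dlopen
       the native Hadoop libraries (compression codecs, etc.). -->
  <value>LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native</value>
</property>
```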

Re: Containers of required size are not being allocated.

2014-11-03 Thread Vinod Kumar Vavilapalli
I bet you are not setting different Resource-Request priorities on your requests. It is a current limitation (https://issues.apache.org/jira/browse/YARN-314) that resources of different sizes cannot be requested under a single priority. +Vinod On Nov 3, 2014, at 1:23 AM, Smita
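The workaround Vinod implies is to give each distinct container size its own priority level. The sketch below shows only that bookkeeping in plain Java; the actual YARN calls (e.g. building an AMRMClient ContainerRequest per size with its assigned priority) are left as comments, since they depend on your application master code and the Hadoop client libraries:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PriorityPerSize {
    // Map each distinct container size (MB) to its own priority value.
    // In YARN, a lower integer means a higher priority; here we simply
    // hand out 0, 1, 2, ... in order of first appearance.
    static Map<Integer, Integer> assignPriorities(int[] sizesMb) {
        Map<Integer, Integer> prio = new LinkedHashMap<>();
        int next = 0;
        for (int mb : sizesMb) {
            if (!prio.containsKey(mb)) {
                prio.put(mb, next++);
            }
        }
        return prio;
    }

    public static void main(String[] args) {
        // Three distinct sizes -> three distinct priorities.
        Map<Integer, Integer> p = assignPriorities(new int[]{1024, 2048, 1024, 4096});
        System.out.println(p);
        // In a real AM you would then issue, per size, something like:
        //   amrmClient.addContainerRequest(new AMRMClient.ContainerRequest(
        //       Resource.newInstance(sizeMb, vcores), null, null,
        //       Priority.newInstance(p.get(sizeMb))));
        // so that no two sizes share a priority (the YARN-314 limitation).
    }
}
```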