It should be yarn-site.xml, not yarn.xml.
yarn.xml will not be added to $CLASSPATH.

Thanks,
Wangda

On Wed, May 28, 2014 at 8:56 AM, hari <harib...@gmail.com> wrote:

> The issue was not related to the container configuration. Due to a
> misconfiguration, the ApplicationMaster was not able to contact the
> ResourceManager,
> causing the 1-container problem.
>
> However, the total number of containers allocated is still not as
> expected. The configuration settings
> should have resulted in 16 containers per node, but 64 containers
> per node are being allocated.
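A minimal sketch of the arithmetic behind the 16-vs-64 discrepancy, under one assumption not stated in the thread: the cluster runs the CapacityScheduler, whose default DefaultResourceCalculator schedules on memory alone and ignores vcores, while DominantResourceCalculator honors both dimensions.

```python
# Per-node container count under the two resource calculators,
# using the exact numbers from this thread. Assumes the
# CapacityScheduler; the scheduler in use is not stated above.
node_mem_mb = 65536   # yarn.nodemanager.resource.memory-mb
node_vcores = 16      # yarn.nodemanager.resource.cpu-vcores
task_mem_mb = 1024    # mapreduce.map.memory.mb
task_vcores = 1       # mapreduce.map.cpu.vcores

# DefaultResourceCalculator: memory is the only constraint.
by_memory_only = node_mem_mb // task_mem_mb
print(by_memory_only)  # 64 -- matches the observed allocation

# DominantResourceCalculator: the tighter of the two limits wins.
by_both = min(node_mem_mb // task_mem_mb, node_vcores // task_vcores)
print(by_both)  # 16 -- matches the expected allocation
```

If the observed 64 comes from memory-only scheduling, the vcores settings would simply not be enforced.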
>
> Reiterating the config parameters here again:
>
> mapred-site.xml
> mapreduce.map.cpu.vcores = 1
> mapreduce.reduce.cpu.vcores = 1
> mapreduce.map.memory.mb = 1024
> mapreduce.reduce.memory.mb = 1024
> mapreduce.map.java.opts = -Xmx1024m
> mapreduce.reduce.java.opts = -Xmx1024m
>
> yarn.xml
> yarn.nodemanager.resource.memory-mb = 65536
> yarn.nodemanager.resource.cpu-vcores = 16
> yarn.scheduler.minimum-allocation-mb = 1024
> yarn.scheduler.maximum-allocation-mb = 2048
> yarn.scheduler.minimum-allocation-vcores = 1
> yarn.scheduler.maximum-allocation-vcores = 1
>
> Is there anything else that might be causing this problem ?
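One possible answer to the question above, hedged because the scheduler in use is not stated: with the CapacityScheduler, vcores are only enforced when the DominantResourceCalculator is configured; otherwise 65536 MB / 1024 MB yields the observed 64 containers per node. A sketch of the relevant capacity-scheduler.xml property:

```
<!-- capacity-scheduler.xml: make the CapacityScheduler count vcores
     as well as memory (only relevant if that scheduler is in use) -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```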
>
> thanks,
> hari
>
> On Tue, May 27, 2014 at 3:31 AM, hari <harib...@gmail.com> wrote:
>
>> Hi,
>>
>> When using YARN 2.2.0, only 1 container is created
>> per application across the entire cluster.
>> The single container is created on an arbitrary node
>> on every run. This happens when running any application from
>> the examples jar (e.g., wordcount). Currently only one application is
>> run at a time. The input data size is > 200 GB.
>>
>> I am setting custom values that affect concurrent container count.
>> These config parameters were mostly taken from:
>>
>> http://blog.cloudera.com/blog/2014/04/apache-hadoop-yarn-avoiding-6-time-consuming-gotchas/
>> There wasn't much description elsewhere on how the container count is
>> decided.
>>
>> The settings are:
>>
>> mapred-site.xml
>> mapreduce.map.cpu.vcores = 1
>> mapreduce.reduce.cpu.vcores = 1
>> mapreduce.map.memory.mb = 1024
>> mapreduce.reduce.memory.mb = 1024
>> mapreduce.map.java.opts = -Xmx1024m
>> mapreduce.reduce.java.opts = -Xmx1024m
>>
>> yarn.xml
>> yarn.nodemanager.resource.memory-mb = 65536
>> yarn.nodemanager.resource.cpu-vcores = 16
>>
>> From these settings, each node should be running 16 containers.
>>
>> Let me know if there might be something else affecting the container
>> count.
>>
>> thanks,
>> hari
>>
>
