Can you please copy-paste the cluster metrics from the RM dashboard?
They're under http://rmhost:port/cluster/cluster

On this page, compare Memory Total vs. Memory Used and VCores Total vs.
VCores Used.
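
If it is easier, the same numbers are available from the command line. A
minimal sketch, assuming the yarn binary is on your path (<node-id> is a
placeholder taken from the -list output):

  # List the NodeManagers registered with the RM
  yarn node -list

  # Show one node's capacity and current usage (memory and vcores)
  yarn node -status <node-id>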

On 6 November 2015 at 18:21, sandeep das <yarnhad...@gmail.com> wrote:

> HI Laxman,
>
> Thanks for your response. I had already configured a very high value for
> yarn.nodemanager.resource.cpu-vcores (e.g. 40), but it still does not run
> more tasks in parallel; if this value is reduced, though, it runs fewer
> tasks in parallel.
>
> As of now, yarn.nodemanager.resource.memory-mb is configured to 16 GB and
> yarn.nodemanager.resource.cpu-vcores is configured to 40.
>
> Still, it does not spawn more than 31 tasks.
>
> Let me know if more information is required to debug this. I believe there
> is an upper limit after which YARN stops spawning tasks, but I may be wrong
> here.
>
>
> Regards,
> Sandeep
>
> On Fri, Nov 6, 2015 at 6:15 PM, Laxman Ch <laxman....@gmail.com> wrote:
>
>> Hi Sandeep,
>>
>> Please configure the following items to the cores and memory per node you
>> want to allocate for YARN containers.
>> Their defaults are 8 cores and 8 GB, which is why you were stuck at 31
>> (4 nodes * 8 cores - 1 ApplicationMaster = 31).
>>
>>
>> http://hadoop.apache.org/docs/r2.6.0/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
>> yarn.nodemanager.resource.cpu-vcores
>> yarn.nodemanager.resource.memory-mb
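>>
>> As an illustration only (the values below are assumptions for a 24-core,
>> 16 GB node, not a recommendation), the settings go into yarn-site.xml on
>> each node, followed by a NodeManager restart:
>>
>>   <property>
>>     <name>yarn.nodemanager.resource.memory-mb</name>
>>     <value>16384</value>
>>   </property>
>>   <property>
>>     <name>yarn.nodemanager.resource.cpu-vcores</name>
>>     <value>24</value>
>>   </property>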
>>
>>
>> On 6 November 2015 at 17:59, sandeep das <yarnhad...@gmail.com> wrote:
>>
>>> Maybe too naive to ask, but how do I check that?
>>> Sometimes there are almost 200 map tasks pending, but only 31 run at a
>>> time.
>>>
>>> On Fri, Nov 6, 2015 at 5:57 PM, Chris Mawata <chris.maw...@gmail.com>
>>> wrote:
>>>
>>>> Also check that you have more than 31 blocks to process.
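>>>>
>>>> One quick way to check, assuming the job input lives at /path/to/input
>>>> (adjust the path for your job):
>>>>
>>>>   hdfs fsck /path/to/input -files -blocks
>>>>
>>>> The "Total blocks (validated)" line in the summary gives the block
>>>> count; with the usual one map task per block/split, fewer than 31
>>>> blocks would also cap the parallelism.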
>>>> On Nov 6, 2015 6:54 AM, "sandeep das" <yarnhad...@gmail.com> wrote:
>>>>
>>>>> Hi Varun,
>>>>>
>>>>> I tried to increase this parameter, but it did not increase the number
>>>>> of parallel tasks; if it is decreased, though, YARN runs fewer parallel
>>>>> tasks. I'm a bit puzzled why it does not go beyond 31 tasks even after
>>>>> the value is increased.
>>>>>
>>>>> Is there any other configuration that controls the maximum number of
>>>>> tasks that can execute in parallel?
>>>>>
>>>>> Regards,
>>>>> Sandeep
>>>>>
>>>>> On Tue, Nov 3, 2015 at 7:29 PM, Varun Vasudev <vvasu...@apache.org>
>>>>> wrote:
>>>>>
>>>>>> The number of parallel tasks that run depends on the amount of memory
>>>>>> and vcores on your machines and the amount of memory and vcores
>>>>>> required by your mappers and reducers. The amount of memory can be set
>>>>>> via yarn.nodemanager.resource.memory-mb (the default is 8 GB). The
>>>>>> number of vcores can be set via yarn.nodemanager.resource.cpu-vcores
>>>>>> (the default is 8 vcores).
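>>>>>>
>>>>>> As a rough worked example (assuming the default map-task request of
>>>>>> 1024 MB / 1 vcore, set via mapreduce.map.memory.mb and
>>>>>> mapreduce.map.cpu.vcores): with the 8 GB / 8 vcore node defaults, each
>>>>>> node fits min(8192/1024, 8/1) = 8 containers, so a 4-node cluster fits
>>>>>> 32 containers, i.e. 1 ApplicationMaster plus 31 map tasks.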
>>>>>>
>>>>>> -Varun
>>>>>>
>>>>>> From: sandeep das <yarnhad...@gmail.com>
>>>>>> Reply-To: <user@hadoop.apache.org>
>>>>>> Date: Monday, November 2, 2015 at 3:56 PM
>>>>>> To: <user@hadoop.apache.org>
>>>>>> Subject: Max Parallel task executors
>>>>>>
>>>>>> Hi Team,
>>>>>>
>>>>>> I have a Cloudera cluster of 4 nodes. Whenever I submit a job, only
>>>>>> 31 parallel tasks are executed, even though my machines have more CPU
>>>>>> available; YARN/AM does not create more tasks.
>>>>>>
>>>>>> Is there any configuration I can change to run more map/reduce tasks
>>>>>> in parallel?
>>>>>>
>>>>>> Each machine in my cluster has 24 CPUs.
>>>>>>
>>>>>> Regards,
>>>>>> Sandeep
>>>>>>
>>>>>
>>>>>
>>>
>>
>>
>> --
>> Thanks,
>> Laxman
>>
>
>


-- 
Thanks,
Laxman
