I'm glad to hear it helped.
Thanks & Regards
Brahma Reddy Battula
From: sandeep das [yarnhad...@gmail.com]
Sent: Monday, November 09, 2015 11:54 AM
To: user@hadoop.apache.org
Subject: Re: Max Parallel task executors
>>> ... per node. Then the total should
>>> be 16*4 = 64 (63 task containers + 1 AM).
>>>
>>> I am thinking two NodeManagers are unhealthy *(or)* you might have
>>> configured mapreduce.map/reduce.memory.mb = 2GB (or 5 cores).
>>>
>>> As Laxman pointed out, you can post the RM UI metrics, or you can cross-check like above.
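The capacity arithmetic in the quoted reply can be sketched as follows. This is only an illustration: the per-task memory and vcore values are assumptions, not settings taken from this cluster.

```python
# Sketch of YARN container-capacity arithmetic (illustrative values only).
# Per node the scheduler can fit at most:
#   min(node_memory_mb // task_memory_mb, node_vcores // task_vcores)
def max_parallel_tasks(nodes, node_mem_mb, node_vcores,
                       task_mem_mb=1024, task_vcores=1):
    per_node = min(node_mem_mb // task_mem_mb, node_vcores // task_vcores)
    # One container is taken by the MapReduce ApplicationMaster.
    return nodes * per_node - 1

# 4 nodes x 16 vcores with ample memory -> 16*4 = 64 containers (63 tasks + 1 AM)
print(max_parallel_tasks(4, 65536, 16))                     # 63
# With 2GB map containers, memory can become the limiting dimension instead.
print(max_parallel_tasks(4, 16384, 16, task_mem_mb=2048))   # 31
```

Whichever of the two dimensions (memory or vcores) runs out first caps the parallel task count, which is why raising only one of them may change nothing.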
>>
>> Hope this helps.
>>
>>
>>
>> Thanks & Regards
>>
>> Brahma Reddy Battula
>>
>>
>>
>>
From: Laxman Ch [laxman@gmail.com]
Sent: Friday, November 06, 2015 6:31 PM
To: user@hadoop.apache.org
Subject: Re: Max Parallel task executors
Can you please copy-paste the cluster metrics from the RM dashboard?
It's under http://rmhost:port/cluster/cluster
On this page, check Memory Total vs. Memory Used and VCores Total vs.
VCores Used.
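The same totals are also exposed as JSON by the ResourceManager REST endpoint /ws/v1/cluster/metrics. Here is a sketch of reading them; the payload below is fabricated for illustration, though the field names match the REST API:

```python
import json

# Example payload shaped like the RM's /ws/v1/cluster/metrics response
# (numbers are made up; field names match the REST API).
payload = '''{"clusterMetrics": {
  "totalMB": 32768, "allocatedMB": 31744,
  "totalVirtualCores": 32, "allocatedVirtualCores": 31}}'''

m = json.loads(payload)["clusterMetrics"]
print("memory used:", m["allocatedMB"], "/", m["totalMB"], "MB")
print("vcores used:", m["allocatedVirtualCores"], "/", m["totalVirtualCores"])
```

If allocated is pinned at (or very near) total on either dimension, that dimension is the bottleneck.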
On 6 November 2015 at 18:21, sandeep das wrote:
Hi Laxman,
Thanks for your response. I had already configured a very high value
for yarn.nodemanager.resource.cpu-vcores (e.g. 40), but it still does not run
more tasks in parallel; if this value is reduced, then it runs fewer parallel
tasks.
As of now yarn.nodemanage
Hi Sandeep,
Please configure the following items to the cores and memory per node you
want to allocate for YARN containers.
Their defaults are 8 cores and 8GB. That's the reason you were stuck at
31 (4 nodes * 8 cores - 1 AppMaster).
http://hadoop.apache.org/docs/r2.6.0/hadoop-yarn/hadoop-yarn-
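The items themselves are not listed in the archived copy, but based on the property names mentioned elsewhere in this thread, the yarn-site.xml entries on each NodeManager would look roughly like this (the values are examples, not recommendations):

```xml
<!-- yarn-site.xml: per-node resources advertised to the RM (example values) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>32768</value> <!-- default is 8192 -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>16</value>    <!-- default is 8 -->
</property>
```

A NodeManager restart is needed for the new values to be advertised to the ResourceManager.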
It may be too naive to ask, but how do I check that?
Sometimes there are almost 200 map tasks pending, but only
31 run at a time.
On Fri, Nov 6, 2015 at 5:57 PM, Chris Mawata wrote:
Also check that you have more than 31 blocks to process.
On Nov 6, 2015 6:54 AM, "sandeep das" wrote:
Hi Varun,
I tried to increase this parameter, but it did not increase the number of
parallel tasks; if it is decreased, then YARN runs fewer parallel
tasks. I'm a bit puzzled why it's not running more than 31 tasks even after
its value is increased.
Is there any other configuration as well which controls this?
The number of parallel tasks that run depends on the amount of memory and
vcores on your machines and the amount of memory and vcores required by your
mappers and reducers. The amount of memory can be set via
yarn.nodemanager.resource.memory-mb (the default is 8G). The amount of vcores
can be set via yarn.nodemanager.resource.cpu-vcores (the default is 8).
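That rule can be written down directly. One detail is assumed here: the RM rounds each container request up to a multiple of yarn.scheduler.minimum-allocation-mb (default 1024 MB), so an odd-sized request consumes more than its nominal memory.

```python
import math

# How many containers fit on one NodeManager, per the explanation above.
# min_alloc_mb models yarn.scheduler.minimum-allocation-mb (assumed default 1024).
def containers_per_node(node_mem_mb, node_vcores, task_mem_mb, task_vcores,
                        min_alloc_mb=1024):
    # The RM normalizes each request up to a multiple of the minimum allocation.
    granted_mb = math.ceil(task_mem_mb / min_alloc_mb) * min_alloc_mb
    return min(node_mem_mb // granted_mb, node_vcores // task_vcores)

# Node defaults (8192 MB, 8 vcores) with 1 GB / 1 vcore map tasks:
print(containers_per_node(8192, 8, 1024, 1))   # 8 per node -> 4*8 - 1 AM = 31
# A 1500 MB request is rounded up to 2048 MB, halving the memory-based count:
print(containers_per_node(8192, 8, 1500, 1))   # 4 per node
```

With the 8192 MB / 8 vcore defaults on four nodes, this yields exactly the 31 parallel tasks (32 containers minus one AM) seen in this thread.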