Re: Max Parallel task executors

2015-11-08 Thread sandeep das
After increasing yarn.nodemanager.resource.memory-mb to 24 GB, more parallel
map tasks are being spawned. It's resolved now.
Thanks a lot for your input.

Regards,
Sandeep
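
For reference, here is a minimal Python sketch of the per-node concurrency
formula quoted below in this thread, applied to the values Sandeep reports
(2 GB per map task, 80 vcores, and 16 GB vs. the new 24 GB of NodeManager
memory). The helper function is illustrative only, not part of Hadoop:

    # Per-node concurrency, per the formula quoted in this thread:
    # min(yarn.nodemanager.resource.memory-mb / mapreduce.map.memory.mb,
    #     yarn.nodemanager.resource.cpu-vcores / mapreduce.map.cpu.vcores)
    def map_slots_per_node(nm_memory_mb, map_memory_mb, nm_vcores, map_vcores=1):
        """Concurrent map containers a single NodeManager can run."""
        return min(nm_memory_mb // map_memory_mb, nm_vcores // map_vcores)

    # Before: 16 GB of NodeManager memory, 2 GB per map task, 80 vcores.
    print(map_slots_per_node(16 * 1024, 2 * 1024, 80))   # -> 8 per node

    # After raising yarn.nodemanager.resource.memory-mb to 24 GB.
    print(map_slots_per_node(24 * 1024, 2 * 1024, 80))   # -> 12 per node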

On Mon, Nov 9, 2015 at 9:49 AM, sandeep das  wrote:

> BTW Laxman, according to the formula that you provided, it turns out that
> only 8 tasks per node will be initiated, which matches what I'm seeing on
> my setup.
>
> min(yarn.nodemanager.resource.memory-mb / mapreduce.[map|reduce].memory.mb,
>     yarn.nodemanager.resource.cpu-vcores / mapreduce.[map|reduce].cpu.vcores)
>
> yarn.nodemanager.resource.memory-mb: 16 GB
> mapreduce.map.memory.mb: 2 GB
> yarn.nodemanager.resource.cpu-vcores: 80
> mapreduce.map.cpu.vcores: 1
>
> So if I apply the formula: min(16/2, 80/1) -> min(8, 80) -> 8
>
> Should I reduce the memory per map task or increase the NodeManager memory
> (yarn.nodemanager.resource.memory-mb)?
>
> On Mon, Nov 9, 2015 at 9:43 AM, sandeep das  wrote:
>
>> Thanks Brahma and Laxman for your valuable input.
>>
>> The following are the statistics available in the YARN RM GUI:
>>
>> Memory Used: 0 GB
>> Memory Total: 64 GB (16*4 = 64 GB)
>> VCores Used: 0
>> VCores Total: 320 (Earlier I had mentioned that I'd configured 40 vcores,
>> but I recently increased it to 80; that's why it shows 80*4 = 320.)
>>
>> Note: These statistics were captured when no job was running in the
>> background.
>>
>> Let me know whether this is sufficient to nail down the issue. If more
>> information is required, please let me know.
>>
>> Regards,
>> Sandeep
>>
>>
>> On Fri, Nov 6, 2015 at 7:04 PM, Brahma Reddy Battula <
>> brahmareddy.batt...@huawei.com> wrote:
>>
>>>
>>> The formula for determining the number of concurrently running tasks per
>>> node is:
>>>
>>> min(yarn.nodemanager.resource.memory-mb / mapreduce.[map|reduce].memory.mb,
>>>     yarn.nodemanager.resource.cpu-vcores / mapreduce.[map|reduce].cpu.vcores)
>>>
>>> For your scenario:
>>>
>>> As you said, yarn.nodemanager.resource.memory-mb is configured to 16 GB
>>> and yarn.nodemanager.resource.cpu-vcores is configured to 40, and I am
>>> assuming mapreduce.map/reduce.memory.mb and mapreduce.map/reduce.cpu.vcores
>>> are at their default values.
>>>
>>> min(16 GB / 1 GB, 40 cores / 1 core) = 16 tasks per node. Then the total
>>> should be 16*4 = 64 (63 + 1 AM).
>>>
>>> I am thinking either two NodeManagers are unhealthy, or you might have
>>> configured mapreduce.map/reduce.memory.mb = 2 GB (or 5 cores).
>>>
>>> As Laxman pointed out, you can post the RM UI metrics, or you can
>>> cross-check as above.
>>>
>>> Hope this helps.
>>>
>>>
>>>
>>> Thanks & Regards
>>>
>>>  Brahma Reddy Battula
>>>
>>>
>>>
>>>
>>> --
>>> From: Laxman Ch [laxman@gmail.com]
>>> Sent: Friday, November 06, 2015 6:31 PM
>>> To: user@hadoop.apache.org
>>> Subject: Re: Max Parallel task executors
>>>
>>> Can you please copy-paste the cluster metrics from the RM dashboard?
>>> They are under http://rmhost:port/cluster/cluster
>>>
>>> On that page, check Memory Total vs Memory Used and VCores Total vs
>>> VCores Used.
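
The same cluster metrics are also exposed by the ResourceManager REST API,
which can be handier than copy-pasting from the dashboard. A small sketch,
assuming a Hadoop 2.x RM; "rmhost:8088" is a placeholder for the RM web
address, and the field names follow the documented clusterMetrics response:

    # Fetch YARN cluster metrics from the RM REST API (Hadoop 2.x).
    import json
    from urllib.request import urlopen

    with urlopen("http://rmhost:8088/ws/v1/cluster/metrics") as resp:
        metrics = json.load(resp)["clusterMetrics"]

    print(metrics["allocatedMB"], "/", metrics["totalMB"], "MB used")
    print(metrics["allocatedVirtualCores"], "/", metrics["totalVirtualCores"], "vcores used")
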
>>>
>>> On 6 November 2015 at 18:21, sandeep das  wrote:
>>>
 Hi Laxman,

 Thanks for your response. I had already configured a very high value
 for yarn.nodemanager.resource.cpu-vcores (e.g. 40), but it still does
 not increase the number of parallel tasks; if this value is reduced,
 it runs fewer parallel tasks.

 As of now, yarn.nodemanager.resource.memory-mb is configured to 16 GB
 and yarn.nodemanager.resource.cpu-vcores is configured to 40.

 It is still not spawning more than 31 tasks.

 Let me know if more information is required to debug it. I believe
 there is an upper limit after which YARN stops spawning tasks, but I
 may be wrong here.


 Regards,
 Sandeep

 On Fri, Nov 6, 2015 at 6:15 PM, Laxman Ch  wrote:

> Hi Sandeep,
>
> Please configure the following items to the cores and memory per node
> that you want to allocate for YARN containers.
> Their defaults are 8 cores and 8 GB, so that's the reason you were
> stuck at 31 (4 nodes * 8 cores - 1 AppMaster):
>
>
> http://hadoop.apache.org/docs/r2.6.0/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
> yarn.nodemanager.resource.cpu-vcores
> yarn.nodemanager.resource.memory-mb
>
>
> On 6 November 2015 at 17:59, sandeep das  wrote:
>
>> Maybe too naive to ask, but how do I check that?
>> Sometimes there are almost 200 map tasks pending to run, but only 31
>> run at a time.
>>
>> On Fri, Nov 6, 2015 at 5:57 PM, Chris Mawata 
>> wrote:
>>
>>> Also check that you have more than 31 blocks to process.
>>> On Nov 6, 2015 6:54 AM, "sandeep das"  wrote:
>>>
 Hi Varun,

 I tried to increase this parameter but it did not increase the number
 of parallel tasks, whereas if it is decreased then YARN reduces the
 number of parallel tasks. I'm a bit puzzled why it does not go beyond
 31 tasks even after the value is increased.

 Is there any other configuration that controls how many tasks can
 execute in parallel at most?

 Regards,
 Sandeep

 On Tue, Nov 3, 2015 at 7:29 PM, Varun Vasudev  wrote:

 The number of parallel tasks that are run depends on the amount of
 memory and vcores on your machines and the amount of memory and vcores
 required by your mappers and reducers. The amount of memory can be set
 via yarn.nodemanager.resource.memory-mb (the default is 8 GB). The
 amount of vcores can be set via yarn.nodemanager.resource.cpu-vcores
 (the default is 8 vcores).

 -Varun

 From: sandeep das 
 Reply-To: 
 Date: Monday, November 2, 2015 at 3:56 PM
 To: 
 Subject: Max Parallel task executors

 Hi Team,

 I've a Cloudera cluster of 4 nodes. Whenever I submit a job, only 31
 parallel tasks are executed, even though my machines have more CPU
 available; YARN/AM does not create more tasks.
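
Tying the thread together: with the defaults Laxman and Varun mention above
(8 GB and 8 vcores per NodeManager, and the default 1 GB / 1 vcore per map
task), the observed ceiling of 31 parallel tasks on a 4-node cluster falls
straight out of the formula. A short sketch; the numbers come from the
thread, the script is only illustrative:

    # Why the cluster topped out at 31 parallel tasks with default settings:
    # 8 GB / 8 vcores per NodeManager, 1 GB / 1 vcore per map task, 4 nodes,
    # and one container occupied by the MapReduce ApplicationMaster.
    nodes = 4
    slots_per_node = min(8192 // 1024, 8 // 1)   # -> 8 containers per node
    print(nodes * slots_per_node - 1)            # -> 31 concurrent tasks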

Re: Unsubscribe footer for user@h.a.o messages

2015-11-08 Thread Ted Yu
The INFRA JIRA was closed 2 days ago.

But the following post from today still doesn't carry the footer:
http://search-hadoop.com/m/uOzYthBKLf2YvP0O1

FYI

On Thu, Nov 5, 2015 at 7:33 PM, Arpit Agarwal 
wrote:

> Created https://issues.apache.org/jira/browse/INFRA-10725
>
>
> From: Vinayakumar B 
> Reply-To: "user@hadoop.apache.org" 
> Date: Thursday, November 5, 2015 at 5:15 PM
> To: "user@hadoop.apache.org" 
> Subject: RE: Unsubscribe footer for user@h.a.o messages
>
> +1,
>
>
>
> Thanks Arpit
>
>
>
> -Vinay
>
>
>
> From: Brahma Reddy Battula [mailto:brahmareddy.batt...@hotmail.com]
> Sent: Friday, November 06, 2015 8:27 AM
> To: user@hadoop.apache.org
> Subject: RE: Unsubscribe footer for user@h.a.o messages
>
>
>
> +1 (non-binding)
>
>
>
> Nice thought, Arpit.
>
>
>
>
>
> Thanks And Regards
>
> Brahma Reddy Battula
>
>
> --
>
> Subject: Re: Unsubscribe footer for user@h.a.o messages
> From: m...@hortonworks.com
> To: user@hadoop.apache.org
> Date: Thu, 5 Nov 2015 21:23:41 +
>
> +1 (non-binding)
>
>
>
> On Nov 5, 2015, at 12:50 PM, Arpit Agarwal 
> wrote:
>
>
>
> Apache project mailing lists can add unsubscribe footers to messages, e.g.
> from spark-user:
>
>
> https://mail-archives.apache.org/mod_mbox/spark-user/201511.mbox/%3C5637830F.3070702%40uib.no%3E
> 
>
>
>
> If no one objects I will file an INFRA ticket to add the footer to
> user@h.a.o. Unsubscribe requests are less frequent on the dev mailing
> lists so we can leave those alone.
>
>
>


want to unsubscribe

2015-11-08 Thread Aditya Vyas
unsubscribe