Re: Container is running beyond physical memory limits

2015-10-13 Thread Gopal Vijayaraghavan


> is running beyond physical memory limits. Current usage: 2.0 GB of 2 GB
>physical memory used; 6.6 GB of 8 GB virtual memory used. Killing
>container.

You need to change the yarn.nodemanager.vmem-check-enabled=false on
*every* machine on your cluster & restart all NodeManagers.
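In yarn-site.xml terms, that is the same property shown later in the thread (shown here only for completeness; this is the setting itself, not a new one):

```xml
<!-- yarn-site.xml, on every NodeManager; requires a NodeManager restart -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```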

The VMEM check made a lot of sense in the 32 bit days when the CPU forced
a maximum of 4Gb of VMEM per process (even with PAE).

Similarly it was a way to punish processes which swap out to disk, since
the pmem only tracks the actual RSS.

In the large RAM 64bit world, vmem is not a significant issue yet - I
think the addressing limit is 128 TB per process.

> <property>
> <name>mapreduce.reduce.memory.mb</name>
> <value>4096</value>
> </property>

...

> <property>
> <name>mapreduce.reduce.java.opts</name>
> <value>-Xmx6144m</value>
> </property>

That's the next failure point. 4Gb container with 6Gb limits. To produce
an immediate failure when checking configs, add

-XX:+AlwaysPreTouch -XX:+UseNUMA

to the java.opts.
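A consistent pairing would look something like the sketch below (the -Xmx value of 3276m is illustrative, chosen as ~80% of the 4096 MB container; adjust to your cluster):

```xml
<!-- mapred-site.xml: keep -Xmx comfortably below memory.mb -->
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx3276m -XX:+AlwaysPreTouch -XX:+UseNUMA</value>
</property>
```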

Cheers,
Gopal
 




Re: Container is running beyond physical memory limits

2015-10-13 Thread hadoop hive
http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/

On Wed, Oct 14, 2015 at 1:42 AM, Mich Talebzadeh <m...@peridale.co.uk>
wrote:

> Thank you all.
>
>
>
> Hi Gopal,
>
>
>
> My understanding is that the parameter below specifies the max size of 4GB
> for each container. That seems to work for me.
>
>
>
> <property>
> <name>mapreduce.map.memory.mb</name>
> <value>4096</value>
> </property>
>
>
>
> Now I am rather confused about the following parameters (for example
> mapreduce.reduce versus mapreduce.map) and their correlation to each other
>
>
>
>
>
> <property>
> <name>mapreduce.reduce.memory.mb</name>
> <value>8192</value>
> </property>
>
> <property>
> <name>mapreduce.map.java.opts</name>
> <value>-Xmx3072m</value>
> </property>
>
> <property>
> <name>mapreduce.reduce.java.opts</name>
> <value>-Xmx6144m</value>
> </property>
>
>
>
> Can you please verify if these settings are correct and how they relate to
> each other?
>
>
>
> Thanks
>
>
>
>
>
> Mich Talebzadeh
>
>
>
> Sybase ASE 15 Gold Medal Award 2008
>
> A Winning Strategy: Running the most Critical Financial Data on ASE 15
>
>
> http://login.sybase.com/files/Product_Overviews/ASE-Winning-Strategy-091908.pdf
>
> Author of the books "A Practitioner’s Guide to Upgrading to Sybase ASE
> 15", ISBN 978-0-9563693-0-7.
>
> co-author "Sybase Transact SQL Guidelines Best Practices", ISBN
> 978-0-9759693-0-4
>
> Publications due shortly:
>
> Complex Event Processing in Heterogeneous Environments, ISBN:
> 978-0-9563693-3-8
>
> Oracle and Sybase, Concepts and Contrasts, ISBN: 978-0-9563693-1-4, volume
> one out shortly
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> NOTE: The information in this email is proprietary and confidential. This
> message is for the designated recipient only, if you are not the intended
> recipient, you should destroy it immediately. Any information in this
> message shall not be understood as given or endorsed by Peridale Technology
> Ltd, its subsidiaries or their employees, unless expressly so stated. It is
> the responsibility of the recipient to ensure that this email is virus
> free, therefore neither Peridale Ltd, its subsidiaries nor their employees
> accept any responsibility.


Re: Container is running beyond physical memory limits

2015-10-13 Thread Gopal Vijayaraghavan

 
> Now I am rather confused about the following parameters (for example
> mapreduce.reduce versus
> mapreduce.map) and their correlation to each other

They have no relationship with each other. They are meant for two
different task types in MapReduce.

In general you run fewer reducers than mappers, so reducers are given more
memory per-task than mappers - most commonly ~2x of the other - but they
are not related in any way.

The ideal numbers to use for both are exact multiples of
yarn.scheduler.minimum-allocation-mb (since YARN rounds up to that
quantum).

For example, with a 1536 min-alloc, you're better off allocating 4608 &
getting -Xmx3686, since the 4096 ask will anyway pad up to 4608, losing
500Mb in the process.

This is very annoying & complex, so with Tez there's exactly 1 config &
you can just skip the -Xmx param for hive.tez.java.opts. Tez will inject
an Xmx after a container alloc returns (so that re-adjustment is
automatic).
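To make the rounding concrete, here is a small sketch of the quantum math described above (illustrative helper, not a YARN API):

```python
import math

def yarn_allocation(request_mb, min_alloc_mb=1536):
    """YARN rounds each container request up to the next multiple of
    yarn.scheduler.minimum-allocation-mb; return (granted, wasted) MB."""
    granted = math.ceil(request_mb / min_alloc_mb) * min_alloc_mb
    return granted, granted - request_mb

# A 4096 MB ask with a 1536 MB quantum is padded up to 4608 MB,
# so asking for 4608 directly wastes nothing.
assert yarn_allocation(4096) == (4608, 512)
assert yarn_allocation(4608) == (4608, 0)
```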

> <property>
> <name>mapreduce.map.memory.mb</name>
> <value>4096</value>
> </property>
>
> <property>
> <name>mapreduce.reduce.memory.mb</name>
> <value>8192</value>
> </property>
>
> <property>
> <name>mapreduce.map.java.opts</name>
> <value>-Xmx3072m</value>
> </property>
>
> <property>
> <name>mapreduce.reduce.java.opts</name>
> <value>-Xmx6144m</value>
> </property>

Those configs are correct: the GC heap is approximately 80% of the
allocated container (the JVM uses non-GC buffers for operations like Zlib
decompression).
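As a quick sanity check of that rule of thumb (a sketch; 80% is a guideline, not a hard limit), each -Xmx can be verified against its container:

```python
def heap_ok(xmx_mb, container_mb, max_fraction=0.8):
    """True if the JVM heap leaves headroom inside the container for
    non-GC memory (native buffers, metaspace, thread stacks)."""
    return xmx_mb <= container_mb * max_fraction

# The configs quoted above: 3072/4096 and 6144/8192, i.e. 75% each.
assert heap_ok(3072, 4096)
assert heap_ok(6144, 8192)
assert not heap_ok(6144, 4096)  # the original failing combination
```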


Cheers,
Gopal




Re: Container is running beyond physical memory limits

2015-10-13 Thread Muni Chada
Reduce yarn.nodemanager.vmem-pmem-ratio to 2.1 or lower.
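For context, the virtual-memory ceiling YARN enforces is simply the container's physical allocation times this ratio (a sketch of the arithmetic, not NodeManager code). Note that lowering the ratio makes the vmem check stricter, not looser:

```python
def vmem_limit_gb(container_mb, vmem_pmem_ratio):
    """The virtual-memory ceiling the YARN vmem check enforces per container."""
    return container_mb * vmem_pmem_ratio / 1024

# The failing container: 2 GB physical with the poster's ratio of 4
# gives the "8 GB virtual memory" limit seen in the error message.
assert vmem_limit_gb(2048, 4) == 8.0
assert abs(vmem_limit_gb(2048, 2.1) - 4.2) < 1e-9
```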

On Tue, Oct 13, 2015 at 2:32 PM, hadoop hive  wrote:

> 
>
> mapreduce.reduce.memory.mb
>
> 4096
> 
>
>
> change this to 8 G
>
>
> On Wed, Oct 14, 2015 at 12:52 AM, Ranjana Rajendran <
> ranjana.rajend...@gmail.com> wrote:
>
>> Here is Altiscale's documentation about the topic. Do let me know if you
>> have any more questions.
>>
>> http://documentation.altiscale.com/heapsize-for-mappers-and-reducers
>>
>> On Tue, Oct 13, 2015 at 9:31 AM, Mich Talebzadeh 
>> wrote:
>>
>>> Hi,
>>>
>>>
>>>
>>> I have been having some issues with loading data into hive from one
>>> table to another for 1,767,886 rows. I was getting the following error
>>>
>>>
>>>
>>> Task with the most failures(4):
>>>
>>> -
>>>
>>> Task ID:
>>>
>>>   task_1444731612741_0001_r_00
>>>
>>>
>>>
>>> URL:
>>>
>>>
>>> http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1444731612741_0001=task_1444731612741_0001_r_00
>>>
>>> -
>>>
>>> Diagnostic Messages for this Task:
>>>
>>> Container
>>> [pid=16238,containerID=container_1444731612741_0001_01_19] is
>>> running beyond physical memory limits. Current usage: 2.0 GB of 2 GB
>>> physical memory used; 6.6 GB of 8 GB virtual memory used. Killing
>>> container.
>>>
>>>
>>>
>>>
>>>
>>> Changed parameters in yarn-site.xml  and mapred-site.xml files few times
>>> but no joy.
>>>
>>>
>>>
>>> Finally the following changes in mapred-site.xml worked for me
>>>
>>>
>>>
>>> <property>
>>> <name>mapreduce.job.tracker.reserved.physicalmemory.mb</name>
>>> <value>1024</value>
>>> </property>
>>>
>>> <property>
>>> <name>mapreduce.map.memory.mb</name>
>>> <value>4096</value>
>>> </property>
>>>
>>> <property>
>>> <name>mapreduce.reduce.memory.mb</name>
>>> <value>4096</value>
>>> </property>
>>>
>>> <property>
>>> <name>mapreduce.map.java.opts</name>
>>> <value>-Xmx3072m</value>
>>> </property>
>>>
>>> <property>
>>> <name>mapreduce.reduce.java.opts</name>
>>> <value>-Xmx6144m</value>
>>> </property>
>>>
>>>
>>>
>>> And the following changes to yarn-site.xml
>>>
>>>
>>>
>>> <property>
>>>   <name>yarn.nodemanager.vmem-check-enabled</name>
>>>   <value>false</value>
>>> </property>
>>>
>>> <property>
>>>   <name>yarn.nodemanager.resource.memory-mb</name>
>>>   <value>8192</value>
>>>   <description>Amount of physical memory, in MB, that can be allocated
>>> for containers.</description>
>>> </property>
>>>
>>> <property>
>>>   <name>yarn.nodemanager.vmem-pmem-ratio</name>
>>>   <value>4</value>
>>>   <description>Ratio between virtual memory to physical memory when
>>> setting memory limits for containers</description>
>>> </property>
>>>
>>>
>>>
>>> I did a lot of web search but most resolutions to this issue seem to be
>>> cryptic or anecdotal. If anyone has a better explanation, I would be interested.
>>>
>>>
>>>


RE: Container is running beyond physical memory limits

2015-10-13 Thread Mich Talebzadeh
Thank you all.

 

Hi Gopal,

 

My understanding is that the parameter below specifies the max size of 4GB
for each container. That seems to work for me.

 



<property>
<name>mapreduce.map.memory.mb</name>
<value>4096</value>
</property>


 

Now I am rather confused about the following parameters (for example
mapreduce.reduce versus mapreduce.map) and their correlation to each other

 

 



<property>
<name>mapreduce.reduce.memory.mb</name>
<value>8192</value>
</property>

<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx3072m</value>
</property>

<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx6144m</value>
</property>



 

Can you please verify if these settings are correct and how they relate to
each other?

 

Thanks

 

 


 



Re: Container is running beyond physical memory limits

2015-10-13 Thread Ranjana Rajendran
Here is Altiscale's documentation about the topic. Do let me know if you
have any more questions.

http://documentation.altiscale.com/heapsize-for-mappers-and-reducers



Re: Container is running beyond physical memory limits

2015-10-13 Thread hadoop hive


<property>
<name>mapreduce.reduce.memory.mb</name>
<value>4096</value>
</property>



Change this to 8 GB (8192).




RE: Container is running beyond physical memory limits

2015-10-13 Thread Mich Talebzadeh
Thank you. Very helpful

 



RE: Container is running beyond physical memory limits

2015-10-13 Thread Mich Talebzadeh
Many thanks Gopal.
