[ https://issues.apache.org/jira/browse/WHIRR-146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tom White updated WHIRR-146:
----------------------------

    Status: Patch Available  (was: Open)

> Changing the mapred.child.java.opts value does not change the heap size from 
> the default.
> -------------------------------------------------------------------------------------------
>
>                 Key: WHIRR-146
>                 URL: https://issues.apache.org/jira/browse/WHIRR-146
>             Project: Whirr
>          Issue Type: Bug
>         Environment: Amazon EC2, Amazon Linux images.
>            Reporter: Tibor Kiss
>            Assignee: Tibor Kiss
>         Attachments: whirr-146.patch
>
>
> Even if I change the value of mapred.child.java.opts, tasks are still started 
> with -Xmx200m.
> Since mapred.child.java.opts and mapred.child.ulimit have been deprecated, we 
> need to set mapred.map.child.java.opts and mapred.reduce.child.java.opts, as 
> well as mapred.map.child.ulimit and mapred.reduce.child.ulimit, for the 
> settings to have any effect.
> Unfortunately, the /scripts/cdh/install and /scripts/apache/install scripts, 
> which generate /etc/hadoop/conf.dist/hadoop-site.xml, have not been updated 
> for this deprecation, so we cannot run mappers or reducers that do not fit in 
> a 200 MB heap (see the property sketch after the reproduction steps).
> How to reproduce: 
> 1. Start a cluster on large instances, which use a 64-bit JVM, and run a 
> simple distcp; the child JVMs will crash.
> 2. Or run a job whose mappers or reducers do not fit in a 200 MB heap; the 
> child processes will fail with OutOfMemoryError.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
