> but those new settings have not yet been
> added to mapred-default.xml.
>
It's intentionally left out.
If these were set in mapred-default.xml, the user's mapred.child.java.opts would be
ignored, since mapred.{map,reduce}.child.java.opts would always win.
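As a sketch of that precedence (key names from MAPREDUCE-478; the per-phase keys only exist on 0.21+, and the -Xmx values here are purely illustrative):

```xml
<!-- Generic key: applies to both map and reduce tasks when no
     per-phase key is set (illustrative value). -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>

<!-- Per-phase key (0.21+): overrides the generic key for map tasks.
     Shipping a default for it in mapred-default.xml would silently
     defeat a user's mapred.child.java.opts, hence it is left out. -->
<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
```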

Koji

On 1/11/12 9:34 PM, "George Datskos" <george.dats...@jp.fujitsu.com> wrote:

> Koji, Harsh
> 
> mapred-478 seems to be in v1, but those new settings have not yet been
> added to mapred-default.xml.  (for backwards compatibility?)
> 
> 
> George
> 
> On 2012/01/12 13:50, Koji Noguchi wrote:
>> Hi Harsh,
>> 
>> Wasn't MAPREDUCE-478 in 1.0? Maybe the JIRA is not up to date.
>> 
>> Koji
>> 
>> 
>> On 1/11/12 8:44 PM, "Harsh J"<ha...@cloudera.com>  wrote:
>> 
>>> These properties are not available on Apache Hadoop 1.0 (Formerly
>>> known as 0.20.x). This was a feature introduced in 0.21
>>> (https://issues.apache.org/jira/browse/MAPREDUCE-478), and is
>>> available today on the 0.22 and 0.23 release lines.
>>> 
>>> For 1.0/0.20, use "mapred.child.java.opts", which applies commonly to
>>> both map and reduce tasks.
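On 1.0/0.20 that single generic key would look like this in mapred-site.xml (the -Xmx value is illustrative):

```xml
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>
```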
>>> 
>>> It would also be helpful if you could tell us what doc guided you to use
>>> these property names instead of the proper one, so we can fix it.
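The same generic key can also be passed at submit time with -D, assuming the job's main class goes through ToolRunner/GenericOptionsParser so -D options are picked up; the jar, class, and paths below are hypothetical:

```shell
hadoop jar myjob.jar com.example.MyJob \
  -Dmapred.child.java.opts=-Xmx2048m \
  input/ output/
```

Note the -D options must come before the job's own arguments, or they are treated as plain arguments and ignored.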
>>> 
>>> On Thu, Jan 12, 2012 at 8:44 AM, T Vinod Gupta<tvi...@readypulse.com>
>>> wrote:
>>>> Hi,
>>>> Can someone help me ASAP? When I run my mapred job, it fails with this
>>>> error -
>>>> 12/01/12 02:58:36 INFO mapred.JobClient: Task Id :
>>>> attempt_201112151554_0050_m_000071_0, Status : FAILED
>>>> Error: Java heap space
>>>> attempt_201112151554_0050_m_000071_0: log4j:ERROR Failed to flush writer,
>>>> attempt_201112151554_0050_m_000071_0: java.io.IOException: Stream closed
>>>> attempt_201112151554_0050_m_000071_0:   at
>>>> sun.nio.cs.StreamEncoder.ensureOpen(StreamEncoder.java:44)
>>>> attempt_201112151554_0050_m_000071_0:   at
>>>> sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:139)
>>>> attempt_201112151554_0050_m_000071_0:   at
>>>> java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
>>>> attempt_201112151554_0050_m_000071_0:   at
>>>> org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
>>>> attempt_201112151554_0050_m_000071_0:   at
>>>> org.apache.hadoop.mapred.TaskLogAppender.flush(TaskLogAppender.java:94)
>>>> attempt_201112151554_0050_m_000071_0:   at
>>>> org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:260)
>>>> attempt_201112151554_0050_m_000071_0:   at
>>>> org.apache.hadoop.mapred.Child$2.run(Child.java:142)
>>>> 
>>>> 
>>>> So I updated my mapred-site.xml with these settings -
>>>> 
>>>>   <property>
>>>>     <name>mapred.map.child.java.opts</name>
>>>>     <value>-Xmx2048M</value>
>>>>   </property>
>>>> 
>>>>   <property>
>>>>     <name>mapred.reduce.child.java.opts</name>
>>>>     <value>-Xmx2048M</value>
>>>>   </property>
>>>> 
>>>> Also, when I run my jar, I provide
>>>> "-Dmapred.map.child.java.opts="-Xmx4000m" at the end.
>>>> In spite of this, the task is not getting the max heap size I'm setting.
>>>> 
>>>> Where did I go wrong?
>>>> 
>>>> After changing mapred-site.xml, I restarted the jobtracker and tasktracker...
>>>> is that not good enough?
>>>> 
>>>> thanks
>> 
> 
> 
