[ https://issues.apache.org/jira/browse/HADOOP-5881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853717#action_12853717 ]

Vinod K V commented on HADOOP-5881:
-----------------------------------

Are you using *all* the configuration items as documented 
[here|https://issues.apache.org/jira/browse/HADOOP-5881?focusedCommentId=12712919&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12712919]? 
You should be setting each of them to an appropriate value.

The first error you mention seems to be happening because you configured only 
the max limits, but not the cluster-wide slot sizes and the per-job values. A rough 
sketch of a complete setup is below.
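For reference, a complete setup would look roughly like this. The values are purely 
illustrative, and the property names are the ones documented in the comment linked 
above, so please double-check them there before copying:

{code:xml}
<!-- Cluster-wide slot size (mapred-site.xml on the JobTracker and TaskTrackers) -->
<property>
  <name>mapred.cluster.map.memory.mb</name>
  <value>1024</value>   <!-- illustrative: memory, in MB, of one map slot -->
</property>
<property>
  <name>mapred.cluster.reduce.memory.mb</name>
  <value>1024</value>   <!-- illustrative: memory, in MB, of one reduce slot -->
</property>

<!-- Cluster-wide upper limits on what a single task may request -->
<property>
  <name>mapred.cluster.max.map.memory.mb</name>
  <value>4096</value>   <!-- illustrative -->
</property>
<property>
  <name>mapred.cluster.max.reduce.memory.mb</name>
  <value>4096</value>   <!-- illustrative -->
</property>

<!-- Per-job requests (job configuration); must not exceed the max limits above -->
<property>
  <name>mapred.job.map.memory.mb</name>
  <value>2048</value>   <!-- illustrative -->
</property>
<property>
  <name>mapred.job.reduce.memory.mb</name>
  <value>2048</value>   <!-- illustrative -->
</property>
{code}

Note that all of these are in MB, not bytes.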

I have no idea about the second problem; if you share more details, we can look into it.

We have been using this feature with the configuration items mentioned above without 
any issues so far, so I don't think there is any problem with bytes vs. MB at all.

> Simplify configuration related to task-memory-monitoring and memory-based 
> scheduling
> ------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5881
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5881
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Vinod K V
>            Assignee: Vinod K V
>             Fix For: 0.20.1
>
>         Attachments: HADOOP-5881-20090526-branch-20.1.txt, 
> HADOOP-5881-20090526.1.txt
>
>
> The configuration we have now is pretty complicated. Besides everything else, 
> the mechanism of not specifying parameters separately for maps and reduces 
> leads to problems like HADOOP-5811. This JIRA should address simplifying 
> things and at the same time solving these problems.
