[ https://issues.apache.org/jira/browse/HADOOP-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12634113#action_12634113 ]

Amar Kamat commented on HADOOP-4018:
------------------------------------

A few comments:

_Main:_

1. There is no need to do the cleanup in initTasks(). Simply throwing an 
exception from initTasks() should do the cleanup and kill/fail the job. There 
seems to be a bug in the framework here, and I have opened HADOOP-4261 to 
address it (see the sketch after this list).
2. mapred.max.tasks.per.job should be set at the jobtracker level and not at 
the job level. It should be an expert-level, final parameter so that 
individual jobs cannot override it.
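
As a concrete illustration of point 1, here is a minimal sketch of the 
intended check, not the actual patch: the helper name checkTaskLimit is 
hypothetical, the limit is assumed to be read from the jobtracker-level 
Configuration rather than the JobConf, and only the property name 
mapred.max.tasks.per.job comes from this issue.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;

// Sketch only: checkTaskLimit is a hypothetical helper that
// JobInProgress.initTasks() could call before creating any tasks.
public class TaskLimitCheck {
  public static void checkTaskLimit(Configuration jtConf,
                                    int numMapTasks,
                                    int numReduceTasks)
      throws IOException {
    // Read the limit from the jobtracker's configuration;
    // -1 (the assumed default) means "no limit".
    int maxTasks = jtConf.getInt("mapred.max.tasks.per.job", -1);
    int requested = numMapTasks + numReduceTasks;
    if (maxTasks > 0 && requested > maxTasks) {
      // Throwing from initTasks() should be enough: once HADOOP-4261
      // is fixed, the framework fails the job and runs the cleanup.
      throw new IOException("Job requests " + requested
          + " tasks, which exceeds mapred.max.tasks.per.job = "
          + maxTasks);
    }
  }
}
{code}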

_Testcase:_
1. Remove the commented-out assert statement.
2. Most of the imports are not needed; remove the unused ones.
3. Rewrite the test to reflect the above changes.

> limit memory usage in jobtracker
> --------------------------------
>
>                 Key: HADOOP-4018
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4018
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: maxSplits.patch, maxSplits2.patch, maxSplits3.patch, 
> maxSplits4.patch, maxSplits5.patch, maxSplits6.patch, maxSplits7.patch, 
> maxSplits8.patch
>
>
> We have seen instances when a user submitted a job with many thousands of 
> mappers. The JobTracker was running with a 3GB heap, but that was still not 
> enough to prevent memory thrashing from garbage collection; effectively the 
> JobTracker was not able to serve jobs and had to be restarted.
> One simple proposal would be to limit the maximum number of tasks per job. 
> This can be a configurable parameter. Are there other things that eat huge 
> globs of memory in the JobTracker?
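
To make the quoted proposal concrete: one way to expose such a limit is a 
jobtracker-side configuration entry marked final, so individual jobs cannot 
override it. The snippet below is a hypothetical hadoop-site.xml entry; the 
value 100000 is an arbitrary example, and only the property name comes from 
this issue.

{code}
<!-- Hypothetical entry for the JobTracker's hadoop-site.xml.
     Marking the property final keeps per-job configurations
     from overriding it. The value is only an example. -->
<property>
  <name>mapred.max.tasks.per.job</name>
  <value>100000</value>
  <final>true</final>
</property>
{code}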

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.