[
https://issues.apache.org/jira/browse/HADOOP-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12633570#action_12633570
]
dhruba borthakur commented on HADOOP-4018:
------------------------------------------
Hi Amar, do you agree with Owen's view (and mine too) that the global limits
are too complex to be worthwhile? Also, it might not even be feasible for an
admin to determine what the global limit should be. If you agree, then the
attached maxSplits2.patch is the code we need.
> limit memory usage in jobtracker
> --------------------------------
>
> Key: HADOOP-4018
> URL: https://issues.apache.org/jira/browse/HADOOP-4018
> Project: Hadoop Core
> Issue Type: Bug
> Components: mapred
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
> Attachments: maxSplits.patch, maxSplits2.patch, maxSplits3.patch,
> maxSplits4.patch, maxSplits5.patch, maxSplits6.patch, maxSplits7.patch
>
>
> We have seen instances where a user submitted a job with many thousands of
> mappers. The JobTracker was running with a 3GB heap, but that was still not
> enough to prevent memory thrashing from garbage collection; effectively, the
> JobTracker was unable to serve jobs and had to be restarted.
> One simple proposal would be to limit the maximum number of tasks per job;
> this could be a configurable parameter (see the sketch below the quoted
> description). Are there other things that eat huge globs of memory in the
> JobTracker?