[
https://issues.apache.org/jira/browse/HADOOP-1636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12514076
]
Michael Bieniosek edited comment on HADOOP-1636 at 7/19/07 7:02 PM:
--------------------------------------------------------------------
This patch introduces a new configuration property,
mapred.jobtracker.completeuserjobs.maximum, which defaults to 100 (the current
hard-coded value).
When a user has this many completed jobs (failed or succeeded), Hadoop deletes
the finished jobs from the JobTracker's memory, making them accessible only
through the information-poor job history page. This limit is supposedly per
user, but I submit all jobs as the same user. (This is the current behavior,
which is unchanged by my patch.)
I have tested this patch, and it seems to work.
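Roughly, the change boils down to reading the new property from the configuration instead of using the hard-coded constant. A minimal standalone sketch of that pattern follows; the class and variable names here are just for illustration, not the actual patch code:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch only: reads the new property from the Hadoop configuration,
// falling back to the previous hard-coded default of 100.
public class CompletedJobsLimitExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    int maxCompleteUserJobsInMemory =
        conf.getInt("mapred.jobtracker.completeuserjobs.maximum", 100);
    System.out.println("Completed jobs kept in memory per user: "
        + maxCompleteUserJobsInMemory);
  }
}
{code}

On a live cluster the value would normally be set in conf/hadoop-site.xml rather than in code, since the JobTracker picks up its configuration when it starts.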
> constant should be user-configurable: MAX_COMPLETE_USER_JOBS_IN_MEMORY
> ----------------------------------------------------------------------
>
> Key: HADOOP-1636
> URL: https://issues.apache.org/jira/browse/HADOOP-1636
> Project: Hadoop
> Issue Type: Bug
> Components: mapred
> Affects Versions: 0.13.0
> Reporter: Michael Bieniosek
> Attachments: configure-max-completed-jobs.patch
>
>
> In JobTracker.java: static final int MAX_COMPLETE_USER_JOBS_IN_MEMORY = 100;
> This should be configurable.