[ http://issues.apache.org/jira/browse/HADOOP-239?page=comments#action_12426324 ]
eric baldeschwieler commented on HADOOP-239:
--------------------------------------------

+1 to the gist of this (Sanjay's latest suggestions and Yoram's point about 
startup).

Putting the log in HDFS is interesting, but perhaps a distraction in the short term.

I think it would be worth trying to use the actual log infrastructure to store 
this information.  Rolling, compression, removal after a fixed time, no lost 
state when the server fails...  all of this sounds like logging.
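For illustration only, a minimal log4j sketch of what that could look like
(the class, logger, and file names below are assumptions, not existing Hadoop
code):

// Sketch: keep job completion records in a dedicated rolling log so they
// survive JobTracker restarts and are pruned by size, not by a 24-hour window.
// Names ("jobhistory", logs/job-history.log) are assumptions for illustration.
import java.io.IOException;

import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.RollingFileAppender;

public class JobHistoryLog {
    private static final Logger HISTORY = Logger.getLogger("jobhistory");

    static {
        try {
            // Roll at 10 MB and retain 20 old files; history is bounded
            // but no longer disappears after an idle day.
            RollingFileAppender appender = new RollingFileAppender(
                new PatternLayout("%d{ISO8601} %m%n"), "logs/job-history.log");
            appender.setMaxFileSize("10MB");
            appender.setMaxBackupIndex(20);
            HISTORY.addAppender(appender);
            HISTORY.setAdditivity(false); // keep history out of the main log
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Append one record per finished job; the WI could re-read these files
    // on startup to repopulate its succeeded/failed lists.
    public static void logCompletion(String jobId, boolean succeeded) {
        HISTORY.info(jobId + " " + (succeeded ? "SUCCEEDED" : "FAILED"));
    }
}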


> job tracker WI drops jobs after 24 hours
> ----------------------------------------
>
>                 Key: HADOOP-239
>                 URL: http://issues.apache.org/jira/browse/HADOOP-239
>             Project: Hadoop
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Yoram Arnon
>         Assigned To: Sanjay Dahiya
>            Priority: Minor
>
> The jobtracker's WI keeps track of jobs executed in the past 24 hours.
> If the cluster was idle for a day (say Sunday), it drops all its history.
> Monday morning, the page is empty.
> Better would be to store a fixed number of jobs (say 10 each of succeeded and 
> failed jobs).
> Also, if the job tracker is restarted, it loses all its history.
> The history should be persistent, withstanding restarts and upgrades.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
