[ 
http://issues.apache.org/jira/browse/HADOOP-489?page=comments#action_12441230 ] 
            
Owen O'Malley commented on HADOOP-489:
--------------------------------------

I'm confused by your proposal in that a and b sound like alternatives. I think 
we should do:

1. write the user logs to a file until it reaches 25% of the limit and then 
roll.
2. maintain the last 4 spills
3. when the task completes, its log is concatenated onto the local job log on 
local disk. the job log should have an index from each task to the offset where 
its log starts
4. the jetty servlet on the task tracker serves up the task logs regardless of 
whether the task is still running or complete.
5. the task tracker cleans up the storage N hours after the job completes.
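For concreteness, steps 1 and 2 could look roughly like the sketch below. This is 
a hypothetical illustration, not actual Hadoop code; the class name, file naming 
scheme, and limits are all made up:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

// Hypothetical sketch: roll the user log file once it reaches 25% of the
// per-task log limit, and keep only the last 4 rolled files (spills).
class RollingUserLogWriter {
    private final Path dir;
    private final long spillLimit;   // 25% of the total per-task log limit
    private int spillIndex = 0;
    private long written = 0;        // bytes written to the current spill

    RollingUserLogWriter(Path dir, long totalLimitBytes) throws IOException {
        this.dir = dir;
        this.spillLimit = totalLimitBytes / 4;
        Files.createDirectories(dir);
    }

    void write(String line) throws IOException {
        byte[] bytes = (line + "\n").getBytes(StandardCharsets.UTF_8);
        if (written + bytes.length > spillLimit) {
            roll();
        }
        Files.write(current(), bytes,
            StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        written += bytes.length;
    }

    Path current() {
        return dir.resolve("userlog." + spillIndex);
    }

    private void roll() throws IOException {
        spillIndex++;
        written = 0;
        // keep only the last 4 spills; older ones are discarded
        Files.deleteIfExists(dir.resolve("userlog." + (spillIndex - 4)));
    }
}
```

The net effect is that a runaway task can never use more than the configured 
limit of local disk, and the most recent output (usually what you want when 
debugging) survives.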

I'd leave saving to DFS for a second pass, and it can only be done once the 
job is complete. I'd still leave the job log fragmented by the task tracker 
the tasks ran on.
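The index from step 3 could be as simple as a map from task id to the byte 
offset (and length) of that task's section in the concatenated job log, which 
lets the servlet in step 4 seek straight to one task's output. A hypothetical 
sketch (names and layout are made up, not the actual Hadoop implementation):

```java
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: append each finished task's log onto one per-job file
// and record where each task's section starts, so a reader can seek to it.
class JobLog {
    private final Path logFile;
    // taskId -> {offset, length} within the concatenated job log
    private final Map<String, long[]> index = new LinkedHashMap<>();

    JobLog(Path logFile) { this.logFile = logFile; }

    void appendTaskLog(String taskId, byte[] taskLog) throws Exception {
        try (RandomAccessFile f = new RandomAccessFile(logFile.toFile(), "rw")) {
            long offset = f.length();
            f.seek(offset);
            f.write(taskLog);
            index.put(taskId, new long[]{offset, taskLog.length});
        }
    }

    String readTaskLog(String taskId) throws Exception {
        long[] entry = index.get(taskId);
        try (RandomAccessFile f = new RandomAccessFile(logFile.toFile(), "r")) {
            byte[] buf = new byte[(int) entry[1]];
            f.seek(entry[0]);
            f.readFully(buf);
            return new String(buf, StandardCharsets.UTF_8);
        }
    }
}
```

One file per job plus a small index avoids the many-small-files problem that 
solution 2 below runs into.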

> Separating user logs from system logs in map reduce
> ---------------------------------------------------
>
>                 Key: HADOOP-489
>                 URL: http://issues.apache.org/jira/browse/HADOOP-489
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Mahadev konar
>         Assigned To: Arun C Murthy
>            Priority: Minor
>
> Currently the user logs are part of the system logs in mapreduce. Anything 
> logged by the user is logged into the tasktracker log files. This creates 
> two issues:
> 1) The system log files get cluttered with user output. If the user outputs 
> a large amount of logs, the system logs need to be cleaned up pretty often.
> 2) For the user, it is difficult to get to each of the machines and look for 
> the logs his/her job might have generated.
> I am proposing three solutions to the problem. All of them have issues:
> Solution 1.
> Output the user logs on the user screen as part of the job submission 
> process. 
> Merits - 
> This will prevent users from printing large amounts of logs, and the user 
> can get runtime feedback on what is wrong with his/her job.
> Issues - 
> This proposal will use framework bandwidth while running jobs for the user. 
> The user logs would need to pass from the tasks to the tasktrackers, from 
> the tasktrackers to the jobtracker, and then from the jobtracker to the 
> jobclient, consuming a lot of framework bandwidth if the user prints out 
> too much data.
> Solution 2.
> Output the user logs into a dfs directory and then concatenate these files. 
> Each task can create a file for its output in the log directory for a given 
> user and jobid.
> Issues -
> This will create a huge number of small files in DFS, which later have to be 
> concatenated into a single file. There is also the question of who would 
> concatenate these files into a single file. This could be done by the 
> framework (jobtracker) as part of the cleanup for the jobs - which might 
> stress the jobtracker.
>  
> Solution 3.
> Put the user logs into a separate user log file in the log directory on each 
> tasktracker. We can provide some tools to query these local log files. We 
> could have commands like "for jobid j and taskid t, get me the user log 
> output". These tools could run as a separate map reduce program, with each 
> map grepping the user log files and a single reduce aggregating these logs 
> into a single dfs file.
> Issues-
> This does sound like more work for the user. Also, the output might not be 
> complete, since a tasktracker might have gone down after it ran the job. 
> Any thoughts?

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
