[ https://issues.apache.org/jira/browse/HADOOP-953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12470657 ]

Doug Cutting commented on HADOOP-953:
-------------------------------------

> We really should make it a default rather than hard coding it in the script.

Rather, I think we should convert many INFO-level messages to DEBUG.

The contract with developers is that INFO messages are archived in log files.  
FATAL propagates to the user promptly, WARN may be summarized to the user 
(e.g., a count of warnings, w/ option to view), and INFO is available in log 
files.  DEBUG and TRACE are not normally viewed.  So, e.g., important state 
transitions should be logged at INFO level, so that folks can see when they 
occurred.  We don't want to silence all INFO-level messages.  We may however 
wish to reduce their number dramatically.
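In practice, this kind of pruning can also be done per-category in conf/log4j.properties without touching the root level, so important state transitions stay at INFO while chatty subsystems are quieted.  A minimal sketch, assuming the stock log4j setup; the logger names below are illustrative, and the right categories depend on which classes dominate your logs:

```properties
# Keep the root logger at INFO so important state transitions are still archived.
log4j.rootLogger=INFO,DRFA

# Demote especially chatty subsystems to WARN.  These category names are
# examples only -- substitute the classes actually producing the bulk of
# your log volume.
log4j.logger.org.apache.hadoop.mapred.TaskTracker=WARN
log4j.logger.org.apache.hadoop.dfs.DataNode=WARN
```

This keeps the developer contract intact (INFO remains visible where it matters) while cutting volume where a subsystem is known to be noisy.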

> huge log files
> --------------
>
>                 Key: HADOOP-953
>                 URL: https://issues.apache.org/jira/browse/HADOOP-953
>             Project: Hadoop
>          Issue Type: Improvement
>    Affects Versions: 0.10.1
>         Environment: N/A
>            Reporter: Andrew McNabb
>
> On our system, it's not uncommon to get 20 MB of logs with each MapReduce 
> job.  It would be very helpful if it were possible to configure Hadoop 
> daemons to write logs only when major things happen, but the only conf 
> options I could find are for increasing the amount of output.  The disk is 
> really a bottleneck for us, and I believe that short jobs would run much more 
> quickly with less disk usage.  We also believe that the high disk usage might 
> be triggering a kernel bug on some of our machines, causing them to crash.  
> If the 20 MB of logs went down to 20 KB, we would probably still have all of 
> the information we needed.
> Thanks!

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
