Re: How to change logging from DRFA to RFA? Is it a good idea?

2010-09-29 Thread Leo Alekseyev
For the benefit of the list archives: the log4j properties are being set inside the hadoop daemon shell script. Here is the relevant line, as pointed out to me by Boris, in bin/hadoop-daemon.sh: export HADOOP_ROOT_LOGGER="INFO,DRFA"
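A minimal sketch of the resulting fix, assuming the stock daemon script of that era, which honors the HADOOP_ROOT_LOGGER environment variable; the suggestion to put the override in conf/hadoop-env.sh is an assumption about your layout, and editing bin/hadoop-daemon.sh directly works too:

```shell
# Override the hard-coded DRFA default before the daemon script runs.
# With this set, log4j resolves the root logger to the RFA appender
# instead of looking for the (commented-out) DRFA definition.
export HADOOP_ROOT_LOGGER="INFO,RFA"
```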

Re: How to change logging from DRFA to RFA? Is it a good idea?

2010-09-29 Thread Steve Loughran
On 29/09/10 00:12, Alex Kozlov wrote: > Hi Leo, What distribution are you using? Sometimes log4j.properties is packed inside a .jar file, which is picked up first, so you need to explicitly pass the Java option '-Dlog4j.configuration=' in the corresponding daemon flags. You find the JAR which has

Re: How to change logging from DRFA to RFA? Is it a good idea?

2010-09-28 Thread Alex Kozlov
Hi Leo, What distribution are you using? Sometimes log4j.properties is packed inside a .jar file, which is picked up first, so you need to explicitly pass the Java option '-Dlog4j.configuration=' in the corresponding daemon flags. Alex K
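A hedged sketch of passing that option through the daemon flags. HADOOP_OPTS is picked up by the stock hadoop scripts; the file:// URL form is how log4j 1.x accepts an explicit config location, and the path shown is only an example:

```shell
# Point log4j at an explicit config file so a copy bundled inside a jar
# cannot shadow it. Replace the path with your actual conf directory.
export HADOOP_OPTS="${HADOOP_OPTS:-} -Dlog4j.configuration=file:///etc/hadoop/conf/log4j.properties"
```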

Re: How to change logging from DRFA to RFA? Is it a good idea?

2010-09-28 Thread Leo Alekseyev
I have all of the above in my log4j.properties; every line that mentions DRFA is commented out. And yet, I still get the following errors:
log4j:ERROR Could not find value for key log4j.appender.DRFA
log4j:ERROR Could not instantiate appender named "DRFA".
Is there another config file?.. Is DR

Re: How to change logging from DRFA to RFA? Is it a good idea?

2010-09-27 Thread Boris Shkolnik
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=1MB
log4j.appender.RFA.MaxBackupIndex=30
hadoop.root.logger=INFO,RFA
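The stanza above omits the layout lines an appender needs; a minimal sketch of a complete RFA definition, where the PatternLayout and conversion pattern are assumptions modeled on the stock Hadoop log4j.properties:

```properties
# Roll by size instead of daily. At most MaxBackupIndex rolled copies
# are kept, so total disk use is bounded by roughly
# MaxFileSize * (MaxBackupIndex + 1).
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=1MB
log4j.appender.RFA.MaxBackupIndex=30
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```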

How to change logging from DRFA to RFA? Is it a good idea?

2010-09-27 Thread Leo Alekseyev
We are looking for ways to prevent Hadoop daemon logs from piling up (over time they can reach several tens of GB and become a nuisance). Unfortunately, the log4j DRFA class doesn't seem to provide an easy way to limit the number of files it creates. I would like to try switching to RFA with set M
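To make the size bound concrete: unlike DRFA, RFA caps worst-case disk use at MaxFileSize times (MaxBackupIndex + 1), i.e. the live log plus the rolled copies. A quick check with the values Boris suggested (1MB files, 30 backups):

```shell
# Worst-case footprint: the live log plus MaxBackupIndex rolled copies.
max_file_size_mb=1
max_backup_index=30
echo "$(( max_file_size_mb * (max_backup_index + 1) )) MB max per daemon log"
```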