For the benefit of the list archives: the log4j properties are being
set inside the Hadoop daemon shell script (here is the relevant line,
as pointed out to me by Boris):
bin/hadoop-daemon.sh:export HADOOP_ROOT_LOGGER="INFO,DRFA"
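Because the daemon script exports HADOOP_ROOT_LOGGER itself, that value ends up overriding the hadoop.root.logger line in conf/log4j.properties, so the switch from DRFA to RFA has to happen at the script level. A minimal sketch of the change, assuming a 0.20-style script where the value is hard-coded:

# bin/hadoop-daemon.sh -- change the hard-coded logger so the daemons attach
# to the RFA appender defined in conf/log4j.properties instead of DRFA:
export HADOOP_ROOT_LOGGER="INFO,RFA"

If your version of the script sets the variable with a ${HADOOP_ROOT_LOGGER:-...} style default instead, exporting it from conf/hadoop-env.sh before the daemons start should have the same effect without editing the script.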
On 29/09/10 00:12, Alex Kozlov wrote:
> Hi Leo,
> What distribution are you using? Sometimes the log4j.properties is packed
> inside a .jar file, which is picked up first, so you need to explicitly give
> the java option '-Dlog4j.configuration=' in the
> corresponding daemon flags.

You find the JAR which has
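One rough way to check whether a copy packed inside a jar is shadowing conf/log4j.properties (assuming a tarball layout under $HADOOP_HOME and that unzip is on the PATH):

# List every jar under the install that bundles its own log4j.properties;
# any hit here is a candidate for being picked up ahead of conf/log4j.properties.
for jar in $(find "$HADOOP_HOME" -name '*.jar'); do
  unzip -l "$jar" 2>/dev/null | grep -q 'log4j.properties' && echo "$jar"
done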
Hi Leo,
What distribution are you using? Sometimes the log4j.properties is packed
inside a .jar file, which is picked up first, so you need to explicitly give
the java option '-Dlog4j.configuration=' in the
corresponding daemon flags.

Alex K
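For the '-Dlog4j.configuration=' route, a sketch of what the daemon flags could look like -- the path is a placeholder and the variable name is the stock hadoop-env.sh one, neither taken from this thread:

# conf/hadoop-env.sh -- point log4j at an explicit file so a copy bundled in a
# jar is not picked up first. log4j treats the value as a URL, so use the
# file: form; a bare path would be resolved as a classpath resource instead.
# /path/to/conf/log4j.properties is a placeholder; repeat for the other
# daemons' *_OPTS variables as needed.
export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS -Dlog4j.configuration=file:///path/to/conf/log4j.properties"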
On Tue, Sep 28, 2010 at 2:13 PM, Leo Alekseyev wrote:
I have all of the above in my log4j.properties; every line that
mentions DRFA is commented out. And yet, I still get the following
errors:
log4j:ERROR Could not find value for key log4j.appender.DRFA
log4j:ERROR Could not instantiate appender named "DRFA".
Is there another config file?.. Is DR
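One way to chase the "is there another config file" question is to grep the whole install rather than just conf/ -- a sketch, assuming a tarball layout under $HADOOP_HOME:

# Find every reference to the DRFA appender or the root-logger override,
# including ones buried in the shell scripts rather than log4j.properties:
grep -rnE 'DRFA|HADOOP_ROOT_LOGGER' "$HADOOP_HOME"/bin "$HADOOP_HOME"/conf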
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=1MB
log4j.appender.RFA.MaxBackupIndex=30
# log4j 1.x appenders need a layout, or the appender refuses to write:
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
hadoop.root.logger=INFO,RFA
On 9/27/10 4:12 PM, "Leo Alekseyev" wrote:
We are looking for ways to prevent Hadoop daemon logs from piling up
(over time they can reach several tens of GB and become a nuisance).
Unfortunately, the log4j DRFA class doesn't seem to provide an easy
way to limit the number of files it creates. I would like to try
switching to RFA with set M