Yes, but the problem with DailyRollingFileAppender is that it doesn't
roll based on size as well as date.  I seem to have to choose between
the two when I want both.

-----Original Message-----
From: Samir Shaikh [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 29, 2004 9:45 AM
To: Log4J Users List
Subject: RE: An odd configuration need...

Hi Alan & all,

I have a similar situation with one of my products. I am considering
logging to a local database.
BTW, a DailyRollingFileAppender is already in there as well.

For example, I have the following configuration for one of my loggers:

log4j.logger.site=DEBUG, S2

log4j.appender.S2=org.apache.log4j.DailyRollingFileAppender
log4j.appender.S2.File=/usr/home/samirs/logs/site.log
log4j.appender.S2.DatePattern='.'yyyyMMdd
log4j.appender.S2.Append=true
log4j.appender.S2.Threshold=DEBUG
log4j.appender.S2.layout=org.apache.log4j.PatternLayout
log4j.appender.S2.layout.ConversionPattern=%d [%t] [%-5p] %c.%m%n

hth.

best regards,

-Samir
WorldRes, Inc.

PlacesToStay.com
"Online Hotel Reservations Worldwide".

-----Original Message-----
From: Alan Brown [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 29, 2004 9:36 AM
To: Log4J Users List
Subject: RE: An odd configuration need...


Our problem is that our log files grow horribly (and unavoidably)
quickly.  About a gig a day.  So I'd like to put the logs in a more
readable structure.

I don't want to zip the logs at the end of each day because, well,
because they aren't archives yet and they'll still get potentially
plenty of reading from multiple sources. 

My concern with having a second process do the pseudo-archiving (i.e.,
copying older files to a directory that categorizes them, probably by
day) is that the two processes might try to manipulate the files at the
same time and throw errors in production (even the older files could
still be in immediate use by the production process, if it's currently
rolling over the log file and therefore renaming all the old files).

So I'm still leaning toward writing my own subclass of FileAppender that
merges the features of Daily and Rolling, with the Daily part of the
functionality creating new Directories for the 'rolling' functionality
to 'roll' within.

In an ideal world there would be hooks inside FileAppender to enable me
to define my own rolling point and my own archiving method.  Perhaps it
could take a 'strategy' class as an argument that would enable this
functionality, but I can see that would not be easy to design within the
current framework.
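To make the idea concrete, here is a minimal sketch of the path scheme such a merged appender could use, a directory per day with size-based roll indices inside it. The class and method names are hypothetical, not part of log4j; the actual subclass would wire this into FileAppender's roll-over logic.

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Hypothetical path scheme for a Daily + Rolling hybrid: the "daily"
// half names a directory per day, and the "rolling" half numbers the
// files within that directory.
public class DailyDirRollingPaths {

    private final SimpleDateFormat dayFormat = new SimpleDateFormat("yyyyMMdd");

    // Directory the "daily" half of the appender would create.
    public String dirFor(Date date, String baseDir) {
        return baseDir + "/" + dayFormat.format(date);
    }

    // File the "rolling" half would write within that directory;
    // index 0 is the active file, higher indices are rolled copies.
    public String fileFor(Date date, String baseDir, String name, int rollIndex) {
        String base = dirFor(date, baseDir) + "/" + name;
        return rollIndex == 0 ? base : base + "." + rollIndex;
    }
}
```

The appeal of this layout is that the daily roll never renames old files, so the concurrent-access worry above mostly disappears: yesterday's directory is simply never written to again.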

alan

-----Original Message-----
From: Robert Pepersack [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 29, 2004 5:11 AM
To: Log4J Users List
Subject: Re: An odd configuration need...

You don't have to manage your disk space from within log4j.  To
conserve hard-drive space, you can use the classes in the java.util.zip
package to zip up your log files.  You can schedule a nightly batch job
that kicks off a Java class that uses java.util.zip.  The O'Reilly book
"Java I/O" gives excellent coverage of the subject (along with the
classes in package java.io).  The Java class can also look at the dates
of your zip files and delete files more than x days old.
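A rough sketch of that nightly job, using only java.util.zip and java.nio.file; the class and method names are illustrative, not from any library:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Illustrative nightly job: zip a log file, then prune zips older
// than a cutoff.
public class LogArchiver {

    // Compress 'log' into 'log.zip' alongside it and return the zip path.
    public static Path zipLog(Path log) throws IOException {
        Path zip = Paths.get(log.toString() + ".zip");
        try (ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(zip))) {
            out.putNextEntry(new ZipEntry(log.getFileName().toString()));
            Files.copy(log, out);
            out.closeEntry();
        }
        return zip;
    }

    // Delete .zip files in 'dir' last modified before 'cutoffMillis';
    // returns how many were deleted.
    public static int deleteOldZips(Path dir, long cutoffMillis) throws IOException {
        int deleted = 0;
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir, "*.zip")) {
            for (Path p : files) {
                if (Files.getLastModifiedTime(p).toMillis() < cutoffMillis) {
                    Files.delete(p);
                    deleted++;
                }
            }
        }
        return deleted;
    }
}
```

To run it nightly, point cron (or the Windows task scheduler) at a small main method that calls zipLog on yesterday's files and then deleteOldZips with a cutoff of x days ago.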

I hope this helps.

Bob


At 03:56 PM 04/28/2004 -0700, you wrote:
>I have to have per-client logfiles for my environment.  We have about
>100 concurrent users, so I am going to have a RollingFileAppender for
>each logger.  My problem is going to be disk space.  Some of the
>clients will do a lot of logging; others will be very sparse.  My fear
>is that I won't be able to use the maxFiles and fileSize values to make
>sure the logs don't use too much space.  Assuming all clients will log
>heavily and capping maxFiles and fileSize accordingly would be
>prohibitively sub-optimal.  Any other usage of those two values to
>limit log proliferation is inherently dangerous.
>
>I was wondering whether anyone else has had this problem and how they
>fixed it.  My current plan is to subclass RollingFileAppender and have
>it delete the least-recently-touched file when the total log space used
>gets above a certain size.  After a while, this will entail the
>overhead of finding the oldest log file (in order to delete it) every
>time a new file is created.
>
>Does anyone have a better idea?
>
>alan
>
>---------------------------------------------------------------------
>To unsubscribe, e-mail: [EMAIL PROTECTED]
>For additional commands, e-mail: [EMAIL PROTECTED]
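The core of the quoted plan, enforcing a total-size cap by deleting the least-recently-modified files first, can be sketched without log4j; the subclass would just call something like this after each roll-over. The class and method names are hypothetical:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative space cap: when the files in 'dir' total more than
// 'maxBytes', delete the oldest (least-recently-modified) files until
// the total fits again.
public class LogSpaceCapper {

    public static int enforceCap(Path dir, long maxBytes) throws IOException {
        List<Path> files = new ArrayList<>();
        long total = 0;
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
            for (Path p : ds) {
                if (Files.isRegularFile(p)) {
                    files.add(p);
                    total += Files.size(p);
                }
            }
        }
        // Oldest first, so the least-recently-touched file goes first.
        files.sort(Comparator.comparingLong(p -> {
            try { return Files.getLastModifiedTime(p).toMillis(); }
            catch (IOException e) { return Long.MAX_VALUE; }
        }));
        int deleted = 0;
        for (Path p : files) {
            if (total <= maxBytes) break;
            total -= Files.size(p);
            Files.delete(p);
            deleted++;
        }
        return deleted;
    }
}
```

Scanning the directory on every roll-over is the overhead Alan mentions; caching the file list between rolls would reduce it at the cost of trusting no other process touches the directory.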

Bob Pepersack
410-468-2054

