Remko Popma created LOG4J2-354:
----------------------------------

             Summary: log4j2 + FastFileAppender + Tomcat logrotate problem
                 Key: LOG4J2-354
                 URL: https://issues.apache.org/jira/browse/LOG4J2-354
             Project: Log4j 2
          Issue Type: Bug
    Affects Versions: 2.0-beta8
         Environment: log4j2-beta8
Disruptor 3.1.1
Tomcat 7 with Java 1.6
We use the SLF4J library as well.
            Reporter: Remko Popma


(from log4j-user mailing list)

Kamil Mroczek wrote:

Hello,

We decided to try out log4j2-beta8 to see if we could improve our logging 
performance. We were testing with the Disruptor 3.1.1 library to make all our 
appenders async.

{{-DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector}}

We are running Tomcat 7 with Java 1.6. We use the SLF4J library as well.

The appender we were using in this case was the FastFile appender, with a 
definition like:

{code}
<FastFile name="RequestLog" fileName="requests.log"
          immediateFlush="false" append="true">
  <PatternLayout>
    <pattern>%m%n</pattern>
  </PatternLayout>
</FastFile>
{code}

And the logger was:

{code}
<logger name="a.namespace" level="info" additivity="false">
  <appender-ref ref="RequestLog"/>
</logger>
{code}
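
For context, a minimal sketch of how these two snippets might fit into a complete 
log4j2.xml; the surrounding Configuration/Appenders/Loggers elements, the Console 
appender, and the Root logger here are illustrative additions, not part of the 
snippets above. Async logging is enabled via the -DLog4jContextSelector system 
property mentioned earlier, not in this file.

{code}
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sketch: the FastFile appender wired to the a.namespace logger.
     The Console appender and Root logger are illustrative placeholders. -->
<Configuration status="warn">
  <Appenders>
    <FastFile name="RequestLog" fileName="requests.log"
              immediateFlush="false" append="true">
      <PatternLayout>
        <pattern>%m%n</pattern>
      </PatternLayout>
    </FastFile>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout>
        <pattern>%m%n</pattern>
      </PatternLayout>
    </Console>
  </Appenders>
  <Loggers>
    <logger name="a.namespace" level="info" additivity="false">
      <appender-ref ref="RequestLog"/>
    </logger>
    <root level="error">
      <appender-ref ref="Console"/>
    </root>
  </Loggers>
</Configuration>
{code}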

So the system was designed to let log4j do the logging and then have logrotate 
rotate the log files off the host to an external destination.

We rotate the logs every 5 minutes with these logrotate parameters (LZO compression):

{code}
compress
compresscmd /bin/lzop
compressoptions "-U"
compressext .lzo
{code}

What we were seeing was that, after a log rotation, the new file would begin 
with a massive chunk of binary data, often on the order of 100-200 MB. This 
would grow the logs from roughly 50-100 MB to 200-350 MB.

My guess was that it had something to do with the byte buffer flushing 
mid-rotate, since these chunks always appear at the start of the file. But I 
also saw LOG4J2-295 (Fast(Rolling)FileAppender now correctly handles messages 
exceeding the buffer size), which was fixed in beta8 and which may discredit 
that idea.

We were able to fix the issue by using the regular FileAppender like this:

{code}
<File name="RequestLog" fileName="requests.log" immediateFlush="true"
      append="true" bufferedIO="false">
  <PatternLayout>
    <pattern>%m%n</pattern>
  </PatternLayout>
</File>
{code}

I can't remember for certain, but I am pretty sure that everything worked okay 
even when we had bufferedIO="true" on the FileAppender.
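
For reference, a sketch of what that buffered variant might have looked like, 
assuming the same file name and layout; the exact attribute values (immediateFlush 
in particular) are an assumption, since they are not recalled above.

{code}
<!-- Hypothetical buffered variant of the File appender workaround;
     immediateFlush="false" is assumed so the buffer is actually used. -->
<File name="RequestLog" fileName="requests.log" immediateFlush="false"
      append="true" bufferedIO="true">
  <PatternLayout>
    <pattern>%m%n</pattern>
  </PatternLayout>
</File>
{code}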

We could reproduce it pretty consistently.  I wanted to post to the group to 
see if anyone has seen anything like this before.  Any ideas on what the issue 
could be?
