[jira] [Commented] (LOG4J2-414) Async all loggers cause OutOfMemory error in log4j-2.0-beta9

2013-10-12 Thread Remko Popma (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13793572#comment-13793572
 ] 

Remko Popma commented on LOG4J2-414:


Yiru, any update? Anything I can do to help?

 Async all loggers cause OutOfMemory error in log4j-2.0-beta9
 

 Key: LOG4J2-414
 URL: https://issues.apache.org/jira/browse/LOG4J2-414
 Project: Log4j 2
  Issue Type: Bug
  Components: API, Core, log4j 1.2 emulation, SLF4J Bridge
Affects Versions: 2.0-beta9
 Environment: linux core-4.0, java version: 1.7.0_17, memory: 8G,
 CPU: 2 cores, Intel(R) Xeon(R), startup options: -Xms64m -Xmx2048m
 -XX:MaxPermSize=256m
Reporter: Yiru Li

 1. Problem description:
 The main function of my company's system is to read a file and then perform
 calculations. The system has been using log4j 1.2, and we intend to switch to
 log4j 2. We found this problem while evaluating log4j 2.

 Using the log4j2.xml described below with all loggers synchronous, a 30k-row
 file runs through the system without any problem.
 With all loggers asynchronous, a small file (10 rows) also runs through without
 any problem, but an OutOfMemory error (heap space) occurs when the system runs
 a somewhat larger file (3k rows). The error message is:
 SEVERE: Exception processing: 781134
 org.apache.logging.log4j.core.async.RingBufferLogEvent@4f6f399c
 java.lang.OutOfMemoryError: Java heap space
 Then I increased Xmx to 4048m, and the error message is slightly different:
 SEVERE: Exception processing: 775221
 org.apache.logging.log4j.core.async.RingBufferLogEvent@1c6b80a9
 java.lang.OutOfMemoryError: GC overhead limit exceeded
 The same issue occurs whenever the system runs a 3k-row file. I don't know how
 you can reproduce this issue at your site.
  
 2. start-up options
  -Xms64m -Xmx4048m -XX:MaxPermSize=256m -server $PARAM 
 -Djava.security.egd=file:/dev/./urandom 
 -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
  -Dlog4j.debug 
 3. The relevant jars deployed to our system:
 slf4j-api-1.6.6.jar
 log4j-slf4j-impl-2.0-beta9.jar
 log4j-1.2-api-2.0-beta9.jar
 log4j-api-2.0-beta9.jar
 log4j-core-2.0-beta9.jar
 disruptor-3.2.0.jar
 4.  the content of log4j2.xml is copied below:
 {code}
 <?xml version="1.0" encoding="UTF-8"?>
 <Configuration>
   <Appenders>
     <!-- Appender R -->
     <RollingFile name="R"
                  fileName="/logs4j2/tpaeventsystem/log4j/STACBatch-Common.log"
                  filePattern="/logs-4j2/tpaeventsystem/log4j/$${date:-MM}/STACBatch-Common-%d{MM-dd-}-%i.log.gz"
                  append="true" bufferedIO="true" immediateFlush="true">
       <PatternLayout pattern="(%d), %X{FILEID}, %X{HostName}, STACBatch-Common, %m %t %n"/>
       <Policies>
         <SizeBasedTriggeringPolicy size="10 MB"/>
       </Policies>
       <DefaultRolloverStrategy max="50"/>
     </RollingFile>

     <!-- Appender FileProcessor -->
     <RollingFile name="FileProcessor"
                  fileName="/logs-4j2/tpaeventsystem/log4j/STACBatch-FileProcessor.log"
                  filePattern="/logs-4j2/tpaeventsystem/log4j/$${date:-MM}/STACBatch-FileProcessor-%d{MM-dd-}-%i.log.gz"
                  append="true" bufferedIO="true" immediateFlush="true">
       <PatternLayout pattern="(%d), %X{FILEID}, %X{HostName}, STACBatch-FileProcessor, %m %t %n"/>
       <Policies>
         <SizeBasedTriggeringPolicy size="10 MB"/>
       </Policies>
       <DefaultRolloverStrategy max="50"/>
     </RollingFile>

     <!-- Appender FTPProcessor -->
     <RollingFile name="FTPProcessor"
                  fileName="/logs-4j2/tpaeventsystem/log4j/STACatch-FTPProcessor.log"
                  filePattern="/logs-4j2/tpaeventsystem/log4j/$${date:-MM}/STACBatch-FTPProcessor-%d{MM-dd-}-%i.log.gz"
                  append="true" bufferedIO="true" immediateFlush="true">
       <PatternLayout pattern="(%d), %X{HostName}, STACBatch-FTPProcessor, %m %t %n"/>
       <Policies>
         <SizeBasedTriggeringPolicy size="10 MB"/>
       </Policies>
       <DefaultRolloverStrategy max="50"/>
     </RollingFile>

     <!-- Appender Email -->
     <RollingFile name="Email"
                  fileName="/logs-4j2/tpaeventsystem/log4j/STACBatch-Email.log"
                  filePattern="/logs-4j2/tpaeventsystem/log4j/$${date:-MM}/STACBatch-Email-%d{MM-dd-}-%i.log.gz"
                  append="true"

[jira] [Commented] (LOG4J2-414) Async all loggers cause OutOfMemory error in log4j-2.0-beta9

2013-10-05 Thread Remko Popma (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13787439#comment-13787439
 ] 

Remko Popma commented on LOG4J2-414:


Yiru, did you have any luck finding out where the memory is going?
In addition to Noel's suggestion, Java 7u40 comes with Mission Control, which 
has an (experimental) plugin called JOverflow that does heap dump analysis 
(http://hirt.se/blog/?p=343). This may be useful.


[jira] [Commented] (LOG4J2-414) Async all loggers cause OutOfMemory error in log4j-2.0-beta9

2013-10-03 Thread Noel Grandin (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13785051#comment-13785051
 ] 

Noel Grandin commented on LOG4J2-414:
-

Just a suggestion, but are we sure we know where the memory is going?

Perhaps it might be an idea to try adding the -XX:+HeapDumpOnOutOfMemoryError 
option, triggering a heap dump and then running the heap dump through an 
analysis tool like Eclipse MAT.

It might be that the leak is somewhere non-obvious.


[jira] [Commented] (LOG4J2-414) Async all loggers cause OutOfMemory error in log4j-2.0-beta9

2013-10-03 Thread Remko Popma (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13785231#comment-13785231
 ] 

Remko Popma commented on LOG4J2-414:


Good point.

I've done some back-of-the-napkin calculations, and assuming that every log 
message is 50 characters long and is parameterized with two integers, each 
ParameterizedMessage object in the RingBufferLogEvent would take up about 419 
bytes.
The FQN, logger name and thread name (attributes of RingBufferLogEvent) are 
all cached (references to the same single String object), so I think we can 
discount the memory they occupy.

For the default ring buffer size of 256*1024 slots, I calculate 30MB for the 
(empty) RingBufferLogEvents, and an additional 110MB if the ring buffer is 
fully filled with these 50-char + 2 integers log messages. This assumes:
* none of the log events have any exceptions (Exceptions and the stack trace 
they contain will take up a lot of memory)
* the application does *not* use the ThreadContext map or stack (otherwise 
every RingBufferLogEvent will have a copy of the map/stack)
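
For reference, the arithmetic behind these figures can be reproduced with a tiny 
stand-alone sketch. The ~120 bytes of fixed per-event overhead is simply 
back-calculated from the ~30 MB figure; both it and the 419-byte message size are 
estimates, not measured values.
{code}
// Back-of-the-napkin reproduction of the estimate above (all figures approximate).
public class RingBufferMemoryEstimate {
    public static void main(String[] args) {
        final int slots = 256 * 1024;       // default Async Logger ring buffer size
        final int emptyEventBytes = 120;    // rough per-event overhead implied by the ~30 MB figure
        final int messageBytes = 419;       // assumed 50-char message with two int parameters

        long emptyBuffer = (long) slots * emptyEventBytes;
        long messagesOnTop = (long) slots * messageBytes;

        // Prints roughly 31 and 109, i.e. the ~30 MB and ~110 MB mentioned above.
        System.out.println("empty ring buffer, MB: " + emptyBuffer / 1_000_000);
        System.out.println("additional for messages, MB: " + messagesOnTop / 1_000_000);
    }
}
{code}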

Yiru, can you confirm that your log files have (almost) no exceptions and that 
your application does not use the ThreadContext map/stack?

What puzzles me is the error message:   SEVERE: Exception processing: 781134 / 
775221 org.apache.logging.log4j.core.async.RingBufferLogEvent@xx
java.lang.OutOfMemoryError: Java heap space / GC overhead limit exceeded

This sounds like RingBufferLogEvents are being allocated and garbage collected. 
This should not happen!
There is only one static ring buffer, and it is fully populated with 
RingBufferLogEvent objects when Log4J initializes. After initialization new 
RingBufferLogEvents are never created and the existing RingBufferLogEvents are 
never released. So I don't understand why the error message mentions 
RingBufferLogEvent...
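
For readers unfamiliar with the Disruptor, below is a minimal, purely illustrative 
sketch of that pre-allocation pattern using the disruptor-3.2.0 API listed above. 
The Slot class and all names are made up; this is not the actual Log4j code.
{code}
import com.lmax.disruptor.EventFactory;
import com.lmax.disruptor.RingBuffer;

// All slot objects are created up front by the EventFactory when the ring buffer
// is constructed; publishing afterwards only reuses and mutates existing slots.
public class PreallocationSketch {

    static final class Slot {
        String message;   // stand-in for the fields a RingBufferLogEvent carries
    }

    public static void main(String[] args) {
        EventFactory<Slot> factory = new EventFactory<Slot>() {
            @Override
            public Slot newInstance() {
                return new Slot();   // called once per slot while the buffer is built
            }
        };
        RingBuffer<Slot> ring = RingBuffer.createMultiProducer(factory, 8 * 1024);

        long seq = ring.next();                  // claim a pre-allocated slot
        try {
            ring.get(seq).message = "hello";     // reuse the existing object, no new allocation
        } finally {
            ring.publish(seq);                   // make it visible to the consumer side
        }
    }
}
{code}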


[jira] [Commented] (LOG4J2-414) Async all loggers cause OutOfMemory error in log4j-2.0-beta9

2013-10-02 Thread Yiru Li (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783878#comment-13783878
 ] 

Yiru Li commented on LOG4J2-414:


With -DAsyncLogger.RingBufferSize=8192, a large file (30k rows) can run
through our app without an OutOfMemory error. The problem is that the
performance is worse than log4j 2 with all loggers synchronous, and worse
than log4j 1.2 with all loggers synchronous. The data is below:
log4j-2.0 all loggers async: 110 mins
log4j-2.0 all loggers sync:   95 mins
log4j-1.2 all loggers sync:   58 mins

I will increase AsyncLogger.RingBufferSize to 24*1024 to see if performance
improves.
thanks







[jira] [Commented] (LOG4J2-414) Async all loggers cause OutOfMemory error in log4j-2.0-beta9

2013-10-02 Thread Remko Popma (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783890#comment-13783890
 ] 

Remko Popma commented on LOG4J2-414:


Ok. I would recommend the RandomAccessFile appender and setting 
immediateFlush=false. 


[jira] [Commented] (LOG4J2-414) Async all loggers cause OutOfMemory error in log4j-2.0-beta9

2013-10-02 Thread Remko Popma (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13784042#comment-13784042
 ] 

Remko Popma commented on LOG4J2-414:


If the ring buffer is very small, and the application logs events faster than 
the underlying appender can keep up with, the ring buffer will fill up, and 
your application will slow down to the throughput speed of the appender plus 
some overhead for the buffering. If  you are logging a lot, it is important to 
use the fastest appender possible. This is why I recommend using 
RandomAccessFile appender and not flushing to disk on every event.

The Asynchronous Loggers give the most benefit in situations where you have 
bursts of activity. The ring buffer should be large enough to hold all log 
events generated by such a burst. The exact calculation depends on how fast the 
appender is that takes events out of the buffer and how fast events are added 
to the buffer during such a burst.
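
As a purely illustrative example of that sizing calculation (all rates below are 
hypothetical numbers, not measurements from this application):
{code}
// Illustrative ring buffer sizing for a burst (all numbers are hypothetical).
public class BurstSizing {
    public static void main(String[] args) {
        long producedPerSec = 200_000;   // hypothetical: events logged per second during a burst
        long drainedPerSec  =  50_000;   // hypothetical: events the appender can write per second
        long burstSeconds   = 3;

        // Events that pile up in the ring buffer while the burst lasts.
        long backlog = (producedPerSec - drainedPerSec) * burstSeconds;

        // Round up to the next power of two, since the ring buffer size must be a power of two.
        long size = Long.highestOneBit(backlog - 1) << 1;

        System.out.println("backlog during burst: " + backlog);     // 450000
        System.out.println("suggested ring buffer size: " + size);  // 524288 (= 512 * 1024)
    }
}
{code}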

If your application does not have bursts, but instead has a very high sustained 
logging rate, especially if the logging rate is faster than the appender can 
keep up with, any queue that sits between the application and the appender will 
fill up and asynchronous logging may not give you much benefit.

How big are your log files? (Based on the times you posted it sounds like you 
are logging a lot...)
Is your application multi-threaded or single-threaded?

If your machine has enough memory, you could try -Xms3g -Xmx6g and the 
default ring buffer size of 256*1024.
If that does not give good results, I would try synchronous logging with all 
RandomAccessFile appenders and immediateFlush=false.
When your application is finished, this line will ensure that any data 
remaining in the RandomAccessFile buffers is flushed to disk:
{{((LifeCycle) LogManager.getContext()).stop();}}
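
Spelled out with imports, a minimal sketch of that shutdown call (LifeCycle here 
is org.apache.logging.log4j.core.LifeCycle; the cast works because the Log4j core 
LoggerContext implements it):
{code}
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LifeCycle;

public class ShutdownFlush {
    public static void main(String[] args) {
        // ... application work and logging ...

        // Stop the logger context when the application is finished so that any
        // data still sitting in the RandomAccessFile buffers is flushed to disk.
        ((LifeCycle) LogManager.getContext()).stop();
    }
}
{code}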


[jira] [Commented] (LOG4J2-414) Async all loggers cause OutOfMemory error in log4j-2.0-beta9

2013-10-02 Thread Remko Popma (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13784084#comment-13784084
 ] 

Remko Popma commented on LOG4J2-414:


I just thought of something else: it could be that the rollover 
SizeBasedTriggeringPolicy of 10MB is too small. If you are logging so much that 
the logging slows down your app, it must be quite a lot, and 10 MB is not 
much... Perhaps frequent rollovers are slowing down your application.

I keep feeling I'm missing something... It would be good if you could tell us a 
bit more about your app (number of threads, CPU/memory usage, how big the log 
files are, any other I/O the app does?).
Did you ever try running your app with logging OFF, and how long did that take?


[jira] [Commented] (LOG4J2-414) Async all loggers cause OutOfMemory error in log4j-2.0-beta9

2013-10-02 Thread Yiru Li (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13784334#comment-13784334
 ] 

Yiru Li commented on LOG4J2-414:


Thank you so much for your help.
I did a test with -DAsyncLogger.RingBufferSize=24576 and -Xmx4048m, using
RandomAccessRollingFile. Again, it caused OutOfMemory errors.
I have just started another test with -DAsyncLogger.RingBufferSize=12000
and a SizeBasedTriggeringPolicy of 100 MB.


About our app:
The number of threads varies while a file is being processed. At the moment
the OutOfMemory error occurred, there were at least 15 threads.
The environment I am running tests on is: Linux, 8G memory, CPU 2900MHz, 2
cores.
Logging rate: during the busiest processing period, at least 3M per minute.
(The number could be much larger than this. I will give an accurate number
later.)
I have not tried running our app with logging off yet. I will let you know
later.

thanks again.








[jira] [Commented] (LOG4J2-414) Async all loggers cause OutOfMemory error in log4j-2.0-beta9

2013-10-01 Thread Remko Popma (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783053#comment-13783053
 ] 

Remko Popma commented on LOG4J2-414:


Thanks for the detailed report. I'm not sure what is going on; you may have 
found a memory leak (although I thought that was fixed in beta9). 

Two things to try: the initial heap size of 64MB seems too small, and the heap 
will need to be resized multiple times while your app is running. Can you 
increase the starting heap size to half the max heap size (or equal to the max 
heap size)?

Also, the default ring buffer size for Async Loggers is quite large: 256*1024 
slots, where each slot is a reference to a RingBufferLogEvent, which itself 
contains a bunch of fields. So the ring buffer alone will take up a significant 
amount of memory. You can try reducing this memory by specifying a smaller ring 
buffer size, perhaps something like 8 * 1024?
-DAsyncLogger.RingBufferSize=8192

Can you give these a try?
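
If editing the start-up script is inconvenient, the same property can in principle 
be set from code, as long as it happens before Log4j initializes (the class below 
is a made-up example; the AsyncLoggerContextSelector still has to be selected via 
-DLog4jContextSelector as shown in the issue description):
{code}
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class SmallRingBufferExample {
    public static void main(String[] args) {
        // Must be set before the first Logger is requested, otherwise the
        // default ring buffer size of 256 * 1024 slots is used.
        System.setProperty("AsyncLogger.RingBufferSize", "8192");

        Logger logger = LogManager.getLogger(SmallRingBufferExample.class);
        logger.info("ring buffer configured with 8 * 1024 slots");
    }
}
{code}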



[jira] [Commented] (LOG4J2-414) Async all loggers cause OutOfMemory error in log4j-2.0-beta9

2013-10-01 Thread Remko Popma (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783068#comment-13783068
 ] 

Remko Popma commented on LOG4J2-414:


Just FYI, with the default size I calculate that the empty ring buffer alone 
will take up about 30 MB of memory. 


[jira] [Commented] (LOG4J2-414) Async all loggers cause OutOfMemory error in log4j-2.0-beta9

2013-10-01 Thread Remko Popma (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783084#comment-13783084
 ] 

Remko Popma commented on LOG4J2-414:


One more question: does the problem happen at startup or after running for some 
time?


[jira] [Commented] (LOG4J2-414) Async all loggers cause OutOfMemory error in log4j-2.0-beta9

2013-10-01 Thread Yiru Li (JIRA)

[ 
https://issues.apache.org/jira/browse/LOG4J2-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783145#comment-13783145
 ] 

Yiru Li commented on LOG4J2-414:


It happens after running for some time. More specifically, it happens some
time after a specific event handler starts processing. (The app is composed
of a number of event handlers.) That specific event handler is
multi-threaded and the most computationally intensive.

I will try a smaller ring buffer size later today, and I will let you know
the results.

thanks a lot
Yiru




