[jira] [Commented] (LOG4J2-1318) LoggerContext#getLogger causes heavy GC overhead
[ https://issues.apache.org/jira/browse/LOG4J2-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15213482#comment-15213482 ]

Remko Popma commented on LOG4J2-1318:
--------------------------------------

Fixed in master: I've refactored the LoggerContext to use a different data structure that does not need temporary objects to be created during common operations. I did not make functional changes: the question regarding SLF4JLoggerContext (see the comments below) still remains, so I'm leaving this ticket open.

> LoggerContext#getLogger causes heavy GC overhead
>
> Key: LOG4J2-1318
> URL: https://issues.apache.org/jira/browse/LOG4J2-1318
> Project: Log4j 2
> Issue Type: Bug
> Components: Core
> Affects Versions: 2.5
> Reporter: Rodrigo Merino
> Assignee: Remko Popma
> Labels: performance
>
> With the changes from LOG4J2-1180, when running performance testing
> scenarios, we are experiencing throughput degradation due to an
> increase in GC stall time.
> With the changes from LOG4J2-1180, GC stall time went from ~6% to ~11%,
> reducing the application's operation throughput by ~32%. The memory
> allocation rate before and after the change is 4 GB/s and 12 GB/s,
> respectively.
> In both cases, the relevant JVM configuration params were:
> {code}
> -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDetails -XX:+PrintGCDateStamps
> -XX:+PrintTenuringDistribution -XX:ErrorFile=%MHOME%/logs/err.log
> -Xloggc:%HOME%/logs/gc.log -XX:+HeapDumpOnOutOfMemoryError
> -XX:+AlwaysPreTouch -Xms2048m -XX:PermSize=256m -XX:MaxPermSize=256m
> -XX:+UseParNewGC -XX:NewSize=1024m -XX:+UseNUMA
> {code}
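For context on the comment above, the kind of per-lookup allocation the refactoring removes looks roughly like this. This is an illustration with assumed names (CompositeKeyLoggerLookup is hypothetical), not the actual Log4j source:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.MessageFactory;

// Illustration only: building a composite String key on every getLogger()
// call. Applications that look up loggers on hot paths then allocate a new
// String (plus the StringBuilder garbage behind the concatenation) per call,
// which is the GC pressure reported in this ticket.
class CompositeKeyLoggerLookup {

    private final Map<String, Logger> loggers = new ConcurrentHashMap<>();

    Logger getLogger(final String name, final MessageFactory factory) {
        final String key = factory == null
                ? name
                : name + '#' + factory.getClass().getName(); // fresh String every call
        return loggers.get(key);
    }
}
{code}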
[jira] [Commented] (LOG4J2-1318) LoggerContext#getLogger causes heavy GC overhead
[ https://issues.apache.org/jira/browse/LOG4J2-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15213472#comment-15213472 ]

Remko Popma commented on LOG4J2-1318:
--------------------------------------

Looking further, SLF4JLoggerContext is implemented a bit differently from the other LoggerContext implementations (SimpleLoggerContext, the log4j-core LoggerContext and Log4jTaglibLoggerContext): the others all check whether the previously registered logger matches the MessageFactory by calling {{AbstractLogger.checkMessageFactory(logger, messageFactory)}}, but SLF4JLoggerContext does not. Same question: is this a bug, or is SLF4JLoggerContext special for some reason and this difference by design?
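For reference, the pattern the other implementations follow looks roughly like this. It is a sketch with illustrative class and field names, not a verbatim copy of any one implementation; only {{AbstractLogger.checkMessageFactory}} is the real log4j-api call:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.logging.log4j.message.MessageFactory;
import org.apache.logging.log4j.spi.AbstractLogger;
import org.apache.logging.log4j.spi.ExtendedLogger;

// Sketch of the mismatch check; the class and field names are illustrative.
public abstract class CheckingLoggerContext {

    private final Map<String, ExtendedLogger> loggers = new ConcurrentHashMap<>();

    public ExtendedLogger getLogger(final String name, final MessageFactory messageFactory) {
        final ExtendedLogger cached = loggers.get(name);
        if (cached != null) {
            // Emits a status-logger warning if the cached logger was created
            // with a different MessageFactory than the one requested now.
            AbstractLogger.checkMessageFactory(cached, messageFactory);
            return cached;
        }
        final ExtendedLogger created = newInstance(name, messageFactory);
        final ExtendedLogger existing = loggers.putIfAbsent(name, created);
        return existing != null ? existing : created;
    }

    // How a concrete LoggerContext builds its logger; details vary per binding.
    protected abstract ExtendedLogger newInstance(String name, MessageFactory messageFactory);
}
{code}

SLF4JLoggerContext skips the {{checkMessageFactory}} step, which is the difference being questioned here.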
[jira] [Commented] (LOG4J2-1318) LoggerContext#getLogger causes heavy GC overhead
[ https://issues.apache.org/jira/browse/LOG4J2-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15213443#comment-15213443 ]

Remko Popma commented on LOG4J2-1318:
--------------------------------------

[~garydgregory], [~ralph.go...@dslextreme.com] can you take a look at SLF4JLoggerContext? It looks like the {{getLogger(String, MessageFactory)}} method currently ignores the MessageFactory, and when I try to fix that in the spirit of LOG4J2-1180, two tests in log4j-to-slf4j's org.a.l.slf4j.LoggerTest break ({{getLogger_String_MessageFactoryMismatchNull}} and {{getLogger_String_MessageFactoryMismatch}}). Is there something special about SLF4J Loggers that they must be unique by name only, not by name/MessageFactory? Or are the current implementation and the test simply wrong, and LOG4J2-1180 was not addressed correctly here?
[jira] [Commented] (LOG4J2-1318) LoggerContext#getLogger causes heavy GC overhead
[ https://issues.apache.org/jira/browse/LOG4J2-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15198229#comment-15198229 ]

Gary Gregory commented on LOG4J2-1318:
---------------------------------------

What is the measurement for? 2.5 vs. 2.4.1? Or 2.4.1 plus a patch for [LOG4J2-1180]?
[jira] [Commented] (LOG4J2-1318) LoggerContext#getLogger causes heavy GC overhead
[ https://issues.apache.org/jira/browse/LOG4J2-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15199460#comment-15199460 ]

Rodrigo Merino commented on LOG4J2-1318:
-----------------------------------------

The measurements were done for 2.5 vs. 2.5 without the changes from LOG4J2-1180.
[jira] [Commented] (LOG4J2-1318) LoggerContext#getLogger causes heavy GC overhead
[ https://issues.apache.org/jira/browse/LOG4J2-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15198307#comment-15198307 ]

Remko Popma commented on LOG4J2-1318:
--------------------------------------

GC-free steady-state logging is a big theme of the upcoming 2.6 release. You can track its progress in the LOG4J2-1270 epic and its associated tickets. Feedback welcome!

However, the LOG4J2-1270 work focuses on steady-state logging, while it sounds like you are experiencing issues during initialization. I don't know exactly what changes were made for LOG4J2-1180. Where are the temporary objects being created?
[jira] [Commented] (LOG4J2-1318) LoggerContext#getLogger causes heavy GC overhead
[ https://issues.apache.org/jira/browse/LOG4J2-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15199491#comment-15199491 ]

Mikael Ståldal commented on LOG4J2-1318:
-----------------------------------------

Can we use just the logger name as key if {{MessageFactory}} is {{null}}?
[jira] [Commented] (LOG4J2-1318) LoggerContext#getLogger causes heavy GC overhead
[ https://issues.apache.org/jira/browse/LOG4J2-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15199465#comment-15199465 ]

Rodrigo Merino commented on LOG4J2-1318:
-----------------------------------------

I think that avoiding the composite key creation would do the trick. As long as the nested maps you mentioned are optimized for reading, there should be no heavy impact.
[jira] [Commented] (LOG4J2-1318) LoggerContext#getLogger causes heavy GC overhead
[ https://issues.apache.org/jira/browse/LOG4J2-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15198474#comment-15198474 ]

Remko Popma commented on LOG4J2-1318:
--------------------------------------

I see now: each call to LoggerContext.getLogger() creates a String key to look up the logger. I guess our expectation was that Logger instances would be cached in fields, so this lookup would not happen very often, while your usage is to look up the logger frequently.

We can change the implementation to use a nested map, so composite keys are no longer needed: something like {{Map<MessageFactory, Map<String, Logger>>}}. Getting things out and putting things in can be a bit fiddly, so perhaps we should wrap that logic in a custom {{LoggerCache}} container class, so it can be reused by the various LoggerContext implementations.
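A minimal sketch of such a container, under the nested-map layout proposed above. The class name {{LoggerCache}} and its methods are hypothetical here; the committed code may differ:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.MessageFactory;

// Hypothetical LoggerCache: a nested map keyed first by MessageFactory, then
// by logger name, so a lookup walks two maps instead of allocating a
// composite key. Assumes MessageFactory implementations have sensible
// equals()/hashCode() (or are shared instances).
public class LoggerCache {

    private final Map<MessageFactory, Map<String, Logger>> loggers = new ConcurrentHashMap<>();

    public Logger get(final MessageFactory factory, final String name) {
        // ConcurrentHashMap rejects null keys, so callers would need to
        // substitute a default MessageFactory for null (as discussed above).
        final Map<String, Logger> byName = loggers.get(factory);
        return byName == null ? null : byName.get(name); // no allocation on the read path
    }

    public Logger putIfAbsent(final MessageFactory factory, final String name, final Logger logger) {
        // Returns the previously registered logger, or null if this one won.
        return loggers
                .computeIfAbsent(factory, f -> new ConcurrentHashMap<>())
                .putIfAbsent(name, logger);
    }
}
{code}

Writes pay for the extra {{computeIfAbsent}} level, but the hot read path allocates nothing, which matches the usage pattern reported in this ticket.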