[ https://issues.apache.org/jira/browse/LOG4J2-1179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Remko Popma updated LOG4J2-1179:
--------------------------------
    Description: 
Reorganize and extend performance data on the site.

*Async Loggers Manual Page*
Should be more focused. Proposed changes:
* Move _"Location, location, location..."_ section on Location Info to general 
Performance page (keep anchors and link to the relevant Performance page 
section to avoid breaking existing links)
* Similarly, move _"Throughput of Logging With Location 
(includeLocation="true")"_ table with throughput results to general Performance 
page
* Move _"FileAppender vs. RandomAccessFileAppender"_ section to general 
Performance page. (Again, keep anchors and link to new section on Performance 
page to avoid breaking links.)
* Rewrite opening paragraph of Async Logger manual page to remove reference to 
RandomAccessFile appender
* Rewrite section on _Latency_
** The histogram shows service time; response time (service time + wait time) 
is more useful for users.
** The bar chart on "average latency" is nonsense: latency is not normally 
distributed, so terms like "average latency" don't make sense. Remove this. (A 
histogram showing the full range of percentiles _does_ make sense.)
** The bar chart capped at the 99.99th percentile of observations is better 
than an average, but still has large drawbacks: it shows service time (omitting 
the crucial wait time), and how high are the peaks in the 0.01% we did not 
report? Better to remove this too and instead show a histogram covering the 
full range of percentiles (see the sketch below).
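
To make this concrete, here is a minimal sketch of how such a measurement could 
be taken, assuming HdrHistogram as the recording library; the target rate, 
histogram bounds, and message are placeholder assumptions. 
{{recordValueWithExpectedInterval}} back-fills the wait time a steady-rate 
caller would have experienced (coordinated-omission correction), which is what 
turns a bare service-time measurement into an approximate response-time 
measurement:

{code:java}
import java.util.concurrent.TimeUnit;
import org.HdrHistogram.Histogram;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class ResponseTimeSketch {
    private static final Logger LOGGER = LogManager.getLogger(ResponseTimeSketch.class);

    public static void main(String[] args) {
        // Track latencies up to 10 seconds with 3 significant digits.
        Histogram histogram = new Histogram(TimeUnit.SECONDS.toNanos(10), 3);
        long expectedIntervalNanos = 10_000; // assumed rate: 1 event / 10 microseconds

        for (int i = 0; i < 1_000_000; i++) {
            long start = System.nanoTime();
            LOGGER.info("sample message {}", i);
            // Records the observed value plus the back-filled wait time a
            // steady-rate caller would have seen (coordinated omission fix).
            histogram.recordValueWithExpectedInterval(
                    System.nanoTime() - start, expectedIntervalNanos);
        }
        // Print the full percentile distribution, scaled from ns to microseconds.
        histogram.outputPercentileDistribution(System.out, 1000.0);
    }
}
{code}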

*Performance Page*
# Briefly explain the various aspects of "performance": peak measured 
throughput (what kind of bursts can we deal with?), sustained throughput, and 
response time (service time + wait time).
# Then show how Log4j 2 compares to the alternatives (Logback, Log4j-1.2, and 
JUL) on all three of these performance dimensions.
# Finally, document some performance trade-offs for Log4j 2 functionality.

*2. Comparison to alternative logging libraries*
Link to the Async Loggers page for bursty logging. 
Clarify that the Async Appender exists to minimize dependencies but should be 
avoided if performance is a concern. The Async Appender is NOT the default and 
should NOT be used for benchmarking. (I found [this Loggly 
article|https://www.loggly.com/blog/benchmarking-java-logging-frameworks/] very 
frustrating in that respect.) We should probably also clarify that any 
benchmark that tests with only one thread doing logging is of limited use (see 
the JMH sketch below).
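
For illustration, a minimal JMH sketch of what a more representative 
multi-threaded benchmark could look like; the thread count and message are 
arbitrary assumptions, and a real benchmark would also tune warmup, forks, and 
measurement time:

{code:java}
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Threads;

@State(Scope.Benchmark)
public class MultiThreadedLoggingBenchmark {

    private Logger logger;

    @Setup
    public void setUp() {
        logger = LogManager.getLogger(MultiThreadedLoggingBenchmark.class);
    }

    // Measure with 8 threads logging concurrently; a single-threaded run
    // hides exactly the contention effects we care about.
    @Benchmark
    @Threads(8)
    public void logSimpleMessage() {
        logger.info("Simple benchmark message");
    }
}
{code}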

For various appenders, compare Log4j 2 to the alternatives with regard to max 
sustained throughput (and, separately, response time).
* [File Appender max sustained 
throughput|https://issues.apache.org/jira/browse/LOG4J2-1297?focusedCommentId=15256490&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15256490]
* Socket appender (TCP/UDP)
* Syslog appender (TCP/UDP)

*3. Log4j 2 functionality performance trade-offs*
* Including location information
* Cost of various layouts (Gelf, HTML, XML, CSV, Pattern)
* Cost of various Pattern Layout options
* Cost of various appenders (File, RandomAccessFile, MemoryMappedFile, 
Console, Rewrite, others?). Use the same layout for all of them so the 
comparison is fair; perhaps the PatternLayout with the {{%d [%t] %p %c - 
%m%n}} pattern (see the sketch after this list).
* Cost of various APIs/wrappers (SLF4J, Log4j1, JUL, Commons Logging)
* JDBC appenders? Different JDBC drivers and target databases may have very 
different performance, so this may become a big project. We could do a quick 
comparison of the JDBC appender writing to the JDK Derby DB against the 
FileAppender, just to get an idea of max sustained throughput?
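
One possible shape for the appender comparison, again as a JMH sketch. The 
configuration file names below are hypothetical placeholders; each is assumed 
to define a single appender using the shared {{%d [%t] %p %c - %m%n}} 
PatternLayout, so that only the appender implementation differs between runs:

{code:java}
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class AppenderCostBenchmark {

    // Hypothetical configs, one per appender under test, all assumed to
    // share the same PatternLayout so only the appender differs.
    @Param({"log4j2-file.xml", "log4j2-randomaccessfile.xml", "log4j2-memorymappedfile.xml"})
    public String configFile;

    private Logger logger;

    @Setup
    public void setUp() {
        // JMH forks a fresh JVM per @Param value by default, so setting the
        // property before the first getLogger call selects the configuration.
        System.setProperty("log4j.configurationFile", configFile);
        logger = LogManager.getLogger(AppenderCostBenchmark.class);
    }

    @Benchmark
    public void logMessage() {
        logger.info("Benchmark message");
    }
}
{code}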

-------------------
Of the existing Performance page sections:

* Briefly mention that disabled logging has no measurable cost, but 
de-emphasize this section by moving it down the page. 
* Parameterized messages: use these JMH [benchmark 
results|https://issues.apache.org/jira/browse/LOG4J2-1278?focusedCommentId=15216236&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15216236]? 
(Looks like parameterized messages are currently quite expensive; see the 
example after this list.)
* I like the part about the filters because it a) compares Log4j 2 to Logback 
and b) considers multithreaded applications. I'll turn this into a JMH test and 
show the result as a bar chart.
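
For reference, the pattern the parameterized-message benchmarks exercise 
(logger name and argument values are illustrative): a parameterized call 
defers formatting until the level check has passed, while concatenation always 
builds the String:

{code:java}
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class ParameterizedExample {
    private static final Logger LOGGER = LogManager.getLogger(ParameterizedExample.class);

    public static void main(String[] args) {
        String user = "alice"; // illustrative values
        long elapsedMillis = 42;

        // Parameterized: the message is only formatted if DEBUG is enabled,
        // so a disabled call costs little more than the level check.
        LOGGER.debug("User {} logged in after {} ms", user, elapsedMillis);

        // Concatenation: the String is always built, even when DEBUG is off.
        LOGGER.debug("User " + user + " logged in after " + elapsedMillis + " ms");
    }
}
{code}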


> Log4j performance documentation
> -------------------------------
>
>                 Key: LOG4J2-1179
>                 URL: https://issues.apache.org/jira/browse/LOG4J2-1179
>             Project: Log4j 2
>          Issue Type: Documentation
>          Components: Documentation, Performance Benchmarks
>    Affects Versions: 2.4.1
>            Reporter: Remko Popma
>            Assignee: Remko Popma
>             Fix For: 2.6
>


