[ 
https://issues.apache.org/jira/browse/LOG4J2-467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remko Popma closed LOG4J2-467.
------------------------------


Interesting to see that last year's consumer hardware (Scott's MBP) is much 
faster than 5-year-old enterprise hardware...

Below are the results of running the perf tests on Solaris 10 (64-bit) with 
JDK 1.7.0_06 on dual 4-core Xeon X5570 CPUs @ 2.93 GHz with hyperthreading 
enabled (16 virtual cores):

*CACHED*
{code}
(1 thread) : throughput: 2,993,579 ops/sec. (avg of 5 runs)
(2 threads): throughput: 1,643,640 ops/sec. (avg of 5 runs)
(4 threads): throughput: 1,509,098 ops/sec. (avg of 3 runs)

(1 thread) : throughput: 2,905,708 ops/sec. (     "       )
(2 threads): throughput: 1,887,124 ops/sec. (     "       )
(4 threads): throughput: 1,403,921 ops/sec. (     "       )

(1 thread) : throughput: 3,031,088 ops/sec. (     "       )
(2 threads): throughput: 1,828,200 ops/sec. (     "       )
(4 threads): throughput: 1,517,553 ops/sec. (     "       )

(1 thread) : throughput: 2,936,223 ops/sec. (     "       )
(2 threads): throughput: 1,505,578 ops/sec. (     "       )
(4 threads): throughput: 1,184,906 ops/sec. (     "       )
{code}

*UNCACHED*
{code}
(1 thread) : throughput: 2,368,698 ops/sec. (     "       )
(2 threads): throughput: 1,360,309 ops/sec. (     "       )
(4 threads): throughput:   998,752 ops/sec. (     "       )

(1 thread) : throughput: 2,396,895 ops/sec. (     "       )
(2 threads): throughput: 1,347,167 ops/sec. (     "       )
(4 threads): throughput: 1,179,600 ops/sec. (     "       )

(1 thread) : throughput: 2,354,092 ops/sec. (     "       )
(2 threads): throughput: 1,444,437 ops/sec. (     "       )
(4 threads): throughput: 1,089,047 ops/sec. (     "       )

(1 thread) : throughput: 2,465,949 ops/sec. (     "       )
(2 threads): throughput: 1,231,593 ops/sec. (     "       )
(4 threads): throughput: 1,144,265 ops/sec. (     "       )
{code}

The one thing that the enterprise hardware has going for it is that there is 
very little variance between runs.

Summary (in million ops/sec):

|| ||Cached||Uncached||
|1 thread|3.0|2.4|
|2 threads|1.7|1.3|
|4 threads|1.4|1.1|

On the Windows and Unix platforms I tested, caching the thread name seems to 
boost performance. I can't explain Scott's results. However, I think the 
results as a whole justify the slight added complexity of caching the thread 
name.
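The caching approach under discussion can be illustrated with a minimal sketch. This is not Log4j's actual AsyncLogger code; the class and method names here are invented for illustration. The idea is that each thread resolves its own name once, stores it in a ThreadLocal, and reuses the cached copy on every subsequent log call instead of calling Thread.currentThread().getName() per event:

```java
// Illustrative sketch of per-thread name caching (NOT Log4j's real code).
public class CachedThreadNameDemo {

    // Per-thread cache; the name is resolved lazily on first access
    // and then reused, which is the source of the speedup.
    private static final ThreadLocal<String> CACHED_NAME =
            ThreadLocal.withInitial(() -> Thread.currentThread().getName());

    static String cachedThreadName() {
        return CACHED_NAME.get();
    }

    static String uncachedThreadName() {
        return Thread.currentThread().getName();
    }

    public static void main(String[] args) {
        System.out.println(cachedThreadName());   // resolves and caches the name
        Thread.currentThread().setName("renamed");
        // The cache is now stale: cached and live names diverge.
        System.out.println(cachedThreadName());   // still the original name
        System.out.println(uncachedThreadName()); // the new name, "renamed"
    }
}
```

The same sketch also shows the trade-off this issue is about: once the name is cached, a later Thread.setName() is invisible to the cached path.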


> Thread name caching in async logger incompatible with use of Thread.setName()
> -----------------------------------------------------------------------------
>
>                 Key: LOG4J2-467
>                 URL: https://issues.apache.org/jira/browse/LOG4J2-467
>             Project: Log4j 2
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 2.0-beta9
>         Environment: Debian Squeeze amd64
> OpenJDK 7u25
>            Reporter: Anthony Baldocchi
>            Assignee: Remko Popma
>             Fix For: 2.0-rc1, 2.0
>
>         Attachments: PerfTestDriver.java, PerfTestDriver.java
>
>
> AsyncLogger caches a thread's name in a thread-local info variable.  I make 
> use of a thread pool where the submitted Runnables call Thread.setName() at 
> the beginning of their task and the thread name is included in the log 
> message.  For an example of this behavior, see 
> org.jboss.netty.util.ThreadRenamingRunnable in Netty 3.x.  With the cached 
> thread name, the log messages will contain whatever name the thread had when 
> it logged for the first time and so long as the thread doesn't terminate 
> (such as in a core pool thread), all log messages involving this thread will 
> be erroneous.  If Thread.getName() has a significant performance impact for 
> async logging, I would be satisfied if this behavior were configurable, 
> perhaps on a per-logger basis, so that the penalty is only paid by users 
> who make use of Thread.setName().
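The staleness the reporter describes can be reproduced with a minimal sketch. The class and helper names below are invented for illustration, and the "logger" is just a list that records the cached thread name, standing in for an async logger. A single pooled worker renames itself per task, Netty-3.x-style, but the ThreadLocal cache keeps returning the name from the first task:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Reproduces the reported staleness (illustrative code, not Log4j's).
public class StaleNameDemo {

    private static final ThreadLocal<String> CACHED_NAME =
            ThreadLocal.withInitial(() -> Thread.currentThread().getName());

    static final List<String> logged = new CopyOnWriteArrayList<>();

    // Stands in for an async logger that records the cached thread name.
    static void log(String msg) {
        logged.add(CACHED_NAME.get() + ": " + msg);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1); // one core thread
        for (String task : new String[] {"task-A", "task-B"}) {
            pool.submit(() -> {
                Thread.currentThread().setName(task); // rename per task
                log("running");
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // Both entries carry "task-A": the cache was filled during the
        // first task and never refreshed when the thread was renamed.
        logged.forEach(System.out::println);
    }
}
```

Because the pool thread never terminates between tasks, its ThreadLocal entry survives, so every log message after the first rename reports the wrong thread name.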



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
