Re: Running disruptor async performance tests

2014-03-30 Thread Rebecca Ahlvarsson
Thanks again, Remko. After your detailed explanation everything makes sense
now.

I am using an i7 machine with Ubuntu.

-Becky


On Sat, Mar 29, 2014 at 8:49 PM, Remko Popma remko.po...@gmail.com wrote:

 When you run PerfTest with the options below, you get the same as the
 first test run by PerfTestDriver, "Log4j2: Loggers all async (single
 thread)":
 -Dlog4j.configurationFile=perf3PlainNoLoc.xml
 -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector

 PerfTestDriver runs each test 5 times and averages the result in the last
 report line.

 (When running PerfTestDriver you don't need to specify the above system
 properties; it will start a new process for each test and specify the
 correct system properties for each child process.)

 How to read the report line results:
 PerfTest format:
 avg=1250 99%=2048 99.99%=16384 sampleCount=500
 2587352 operations/second

 The first line is the latency test results. It measured 5,000,000 calls to
 logger.info, with a 1 microsec busy-spin pause in between each
 measurement.
 All these measurements (in nanosecs) were added to a Histogram that was set
 up with buckets that are powers of two (2, 4, 8, 16, ... up to 2^31). The
 Histogram then reported that the average time taken by each call was 1250
 nanos. 99% of the measurements took less than 2048 nanos, and 99.99% of the
 measurements took less than 16,384 nanos. This gives some idea of the
 outliers (which is what you are most interested in when measuring latency).

 Ideally you want to print the whole Histogram, so you know exactly how many
 measurements each bucket had. This would give you a diagram similar to the
 "Async Logging Latency Histogram (at xx threads)" diagram on the Async
 Logger page (http://logging.apache.org/log4j/2.x/manual/async.html). If
 you want to do that you will need to modify the source code of
 PerfTest#createSamplingReport to loop over the buckets in Histogram.
 Histogram#getUpperBoundAt(index) gives the max number that goes into that
 bucket (the power of 2), and Histogram#getCountAt(index) gives the number
 of measurements for that bucket.

 The PerfTestDriver final report line averages the results of 5 test runs.
 For throughput this is okay, but for latency it takes an average of
 averages, which is not a good idea (I should change that...).
 Still, PerfTestDriver also reports the individual PerfTest results, so
 you get 5 latency reports, which should give you some indication.

 About the difference in the resulting numbers:
 I've noticed that there can be large variation between runs. This can be
 caused by many things.
 I recommend running each test at least 3-5 times (this is why
 PerfTestDriver runs PerfTest 5 times, so I can see if there is much
 variation between runs). Also, try not to have any other processes running
 at the same time as you are running the test. If unavoidable, try to have a
 consistent load...

 I would run the PerfTest a few more times and see if you get similar
 results each time or whether the results become more similar to what we see
 when this test is kicked off by the PerfTestDriver.

 Just out of interest, what hardware/OS are you on?



 On Sat, Mar 29, 2014 at 10:46 PM, Rebecca Ahlvarsson
 rahlvars...@gmail.com wrote:

  Thank you very much, Remko. So I ran both tests on my machine and got the
  numbers below. I am interested in the latency numbers and throughput when
  we have one (i.e. threadCount 1 ?) asynchronous logger thread using
  disruptor. Can you point me to these numbers in my results below? Not
  sure what I am getting when I run RunLog4j2 because the average is much
  higher than when I run PerfTestDriver. Also what is the difference
 between
  the five runs on PerfTestDriver? Are they the same thing executed 5
 times?
 
  java -Dlog4j.configurationFile=perf3PlainNoLoc.xml
 
 
 -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
  -cp
 
 
 target/classes:target/test-classes:lib/disruptor-3.2.1.jar:../log4j-api/target/classes
  org.apache.logging.log4j.core.async.perftest.PerfTest
  org.apache.logging.log4j.core.async.perftest.RunLog4j2 blah blah.log 1
  -verbose
  Starting PerfTest blah (1)...
  Warming up the JVM...
  Warmup complete in 24.1 seconds
  Waiting 10 seconds for buffers to drain warmup data...
  Starting the main test...
  avg=1250 99%=2048 99.99%=16384 sampleCount=500
  2587352 operations/second
 
 
  java -Dlog4j.configurationFile=perf3PlainNoLoc.xml
 
 
 -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
  -cp
 
 
 target/classes:target/test-classes:lib/disruptor-3.2.1.jar:../log4j-api/target/classes
  org.apache.logging.log4j.core.async.perftest.PerfTestDriver
  Log4j2: Loggers all async (single thread) (1/5)... (2/5)... (3/5)...
  (4/5)... (5/5)... took 319.3 seconds
  avg=102 99%=256 99.99%=1024 sampleCount=500
  4080781 operations/second
  avg=168 99%=256 99.99%=1024 

Re: Running disruptor async performance tests

2014-03-29 Thread Remko Popma
Hi Becky,

The performance test code is a bit messy. Apologies for that.

To answer your questions: I used the longer message for throughput tests,
and the shorter message for latency tests.
PerfTest parameters:
[0] = runner class (like
org.apache.logging.log4j.core.async.perftest.RunLog4j2)
[1] = test name ("blah" in your command; something like "Loggers all
async"), printed to the console when the test starts.
[2] = file name to write results to
[3] = thread count (default=3)
followed by optional parameters -verbose and -throughput.
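
For example, the arguments in the command you ran map onto these
parameters as follows (annotations added for illustration):

  org.apache.logging.log4j.core.async.perftest.PerfTest
      org.apache.logging.log4j.core.async.perftest.RunLog4j2   [0] runner class
      blah                                                     [1] test name
      blah.log                                                 [2] results file
      1                                                        [3] thread count
      -verbose                                                 optional flag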

The current implementation of PerfTest runs one throughput and one latency
test, regardless of whether -throughput is specified.

The PerfTestDriver class is the starting point for all tests. It sets up
the classpath and the system properties that control the behavior of each
test. If you use the PerfTest class directly you need to set these
properties manually:
-Dlog4j.configurationFile=perf3PlainNoLoc.xml
-DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector

The perf3PlainNoLoc.xml config determines the log file.
PerfTestDriver automatically deletes this file after each run; when running
PerfTest directly you will probably want to delete it manually.
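
Regarding your question about logs not reaching a file: the config sends
everything to a single log file. From memory it is shaped roughly like
this (a sketch, not the actual contents of perf3PlainNoLoc.xml; the
fileName and pattern here are illustrative):

  <?xml version="1.0" encoding="UTF-8"?>
  <Configuration status="OFF">
    <Appenders>
      <!-- Illustrative appender; the real file may differ. -->
      <RandomAccessFile name="RandomAccessFile" fileName="perftest.log"
                        immediateFlush="false" append="false">
        <PatternLayout pattern="%d %5p [%t] %c{1} %m%n"/>
      </RandomAccessFile>
    </Appenders>
    <Loggers>
      <Root level="info" includeLocation="false">
        <AppenderRef ref="RandomAccessFile"/>
      </Root>
    </Loggers>
  </Configuration>

If no log file appears, check the working directory you launch java from,
and that -Dlog4j.configurationFile actually resolves to this file.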

Hope this helps,
Remko



 On Sat, Mar 29, 2014 at 12:03 PM, Rebecca Ahlvarsson
 rahlvars...@gmail.com wrote:

 I am trying to run the async performance tests described at the link below
 on my machine.


 http://logging.apache.org/log4j/2.x/manual/async.html#Asynchronous_Logging_Performance

 I am not an expert with log4j, so here is how far I got after building
 log4j with maven:

 java -cp

 target/classes:target/test-classes:lib/disruptor-3.2.1.jar:../log4j-api/target/classes
 org.apache.logging.log4j.core.async.perftest.PerfTest
 org.apache.logging.log4j.core.async.perftest.RunLog4j2 blah blah.log 1
 -verbose

 Then I get this in the output:

 avg=17 99%=32 99.99%=64 sampleCount=500
 9962247 operations/second

 The questions I have are:

 1. It looks like the source code IPerfTestRunner uses a much shorter
 message ("Short Msg") instead of the 500-character message stated in the
 link above. Is that intentional or is it a bug? Do we want to test the
 latency with the 500-character message or just a short message?

 2. I notice that my logs are NOT going to any file. I am probably
 misconfiguring something with log4j. How do I generate a file with the
 messages from the performance test?

 3. I just want to test with one asynchronous logging thread, so I am
 passing threadCount 1 above. What does the second parameter 'blah' mean?

 4. Not sure why I get operations/second if I am not passing -throughput on
 the command line. I just want to get the latency numbers for now. After
 that I will worry about throughput.

 So basically I just want to run the same test you run to see those great
 numbers on my production machine.

 Thanks for the help!

 -Becky



Re: Running disruptor async performance tests

2014-03-29 Thread Rebecca Ahlvarsson
Thank you very much, Remko. So I ran both tests on my machine and got the
numbers below. I am interested in the latency numbers and throughput when
we have one (i.e. threadCount 1 ?) asynchronous logger thread using
disruptor. Can you point me to these numbers in my results below? Not
sure what I am getting when I run RunLog4j2 because the average is much
higher than when I run PerfTestDriver. Also what is the difference between
the five runs on PerfTestDriver? Are they the same thing executed 5 times?

java -Dlog4j.configurationFile=perf3PlainNoLoc.xml
-DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
-cp
target/classes:target/test-classes:lib/disruptor-3.2.1.jar:../log4j-api/target/classes
org.apache.logging.log4j.core.async.perftest.PerfTest
org.apache.logging.log4j.core.async.perftest.RunLog4j2 blah blah.log 1
-verbose
Starting PerfTest blah (1)...
Warming up the JVM...
Warmup complete in 24.1 seconds
Waiting 10 seconds for buffers to drain warmup data...
Starting the main test...
avg=1250 99%=2048 99.99%=16384 sampleCount=500
2587352 operations/second


java -Dlog4j.configurationFile=perf3PlainNoLoc.xml
-DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
-cp
target/classes:target/test-classes:lib/disruptor-3.2.1.jar:../log4j-api/target/classes
org.apache.logging.log4j.core.async.perftest.PerfTestDriver
Log4j2: Loggers all async (single thread) (1/5)... (2/5)... (3/5)...
(4/5)... (5/5)... took 319.3 seconds
avg=102 99%=256 99.99%=1024 sampleCount=500
4080781 operations/second
avg=168 99%=256 99.99%=1024 sampleCount=500
3376301 operations/second
avg=103 99%=256 99.99%=2048 sampleCount=500
8383943 operations/second
avg=102 99%=256 99.99%=2048 sampleCount=500
3500849 operations/second
avg=65 99%=256 99.99%=2048 sampleCount=500
3143970 operations/second
throughput: 4,497,168 ops/sec. latency(ns): avg=108.0 99% < 256.0 99.99% <
1638.4 (2500 samples)
-
Log4j2: Loggers mixed sync/async (single thread) (1/5)... (2/5)... (3/5)...
(4/5)... (5/5)... took 322.6 seconds
avg=214 99%=512 99.99%=2048 sampleCount=500
4257812 operations/second
avg=217 99%=256 99.99%=2048 sampleCount=500
3027177 operations/second
avg=271 99%=512 99.99%=2048 sampleCount=500
4304127 operations/second
avg=217 99%=512 99.99%=2048 sampleCount=500
3008435 operations/second
avg=223 99%=512 99.99%=2048 sampleCount=500
543 operations/second
throughput: 4,005,510 ops/sec. latency(ns): avg=228.4 99% < 460.8 99.99% <
2048.0 (2500 samples)
-



On Sat, Mar 29, 2014 at 3:46 AM, Remko Popma remko.po...@gmail.com wrote:

 Hi Becky,

 The performance test code is a bit messy. Apologies for that.

 To answer your questions: I used the longer message for throughput tests,
 and the shorter message for latency tests.
 PerfTest parameters:
 [0] = runner class (like
 org.apache.logging.log4j.core.async.perftest.RunLog4j2)
 [1] = test name ("blah" in your command; something like "Loggers all
 async"), printed to the console when the test starts.
 [2] = file name to write results to
 [3] = thread count (default=3)
 followed by optional parameters -verbose and -throughput.

 The current implementation of PerfTest runs one throughput and one latency
 test, regardless of whether -throughput is specified.

 The PerfTestDriver class is the starting point for all tests. It sets up
 the classpath and the system properties that control the behavior of each
 test. If you use the PerfTest class directly you need to set these
 properties manually:
 -Dlog4j.configurationFile=perf3PlainNoLoc.xml
 -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector

 The perf3PlainNoLoc.xml config determines the log file.
 PerfTestDriver automatically deletes this file after each run; when running
 PerfTest directly you will probably want to delete it manually.

 Hope this helps,
 Remko



 On Sat, Mar 29, 2014 at 12:03 PM, Rebecca Ahlvarsson
 rahlvars...@gmail.com wrote:

  I am trying to run the async performance tests described at the link below
  on my machine.
 
 
 
 http://logging.apache.org/log4j/2.x/manual/async.html#Asynchronous_Logging_Performance
 
  I am not an expert with log4j, so here is how far I got after building
  log4j with maven:
 
  java -cp
 
 
 target/classes:target/test-classes:lib/disruptor-3.2.1.jar:../log4j-api/target/classes
  org.apache.logging.log4j.core.async.perftest.PerfTest
  org.apache.logging.log4j.core.async.perftest.RunLog4j2 blah blah.log 1
  -verbose
 
  Then I get this in the output:
 
  avg=17 99%=32 99.99%=64 sampleCount=500
  9962247 operations/second
 
  The questions I have are:
 
  1. It looks like the source code IPerfTestRunner uses a much shorter
  message ("Short Msg") instead of the 500-character message stated in the
  link above. Is that intentional or is it a bug? Do we want to test the
  latency with the 500-character message or just a short message?
 
  2. I notice that my logs are NOT going to 

Re: Running disruptor async performance tests

2014-03-29 Thread Remko Popma
When you run PerfTest with the options below, you get the same as the
first test run by PerfTestDriver, "Log4j2: Loggers all async (single
thread)":
-Dlog4j.configurationFile=perf3PlainNoLoc.xml
-DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector

PerfTestDriver runs each test 5 times and averages the result in the last
report line.

(When running PerfTestDriver you don't need to specify the above system
properties; it will start a new process for each test and specify the
correct system properties for each child process.)

How to read the report line results:
PerfTest format:
avg=1250 99%=2048 99.99%=16384 sampleCount=500
2587352 operations/second

The first line is the latency test results. It measured 5,000,000 calls to
logger.info, with a 1 microsec busy-spin pause in between each measurement.
All these measurements (in nanosecs) were added to a Histogram that was set
up with buckets that are powers of two (2, 4, 8, 16, ... up to 2^31). The
Histogram then reported that the average time taken by each call was 1250
nanos. 99% of the measurements took less than 2048 nanos, and 99.99% of the
measurements took less than 16,384 nanos. This gives some idea of the
outliers (which is what you are most interested in when measuring latency).
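
In code, that measurement loop is conceptually something like this (a
simplified sketch, not the actual PerfTest source; Histogram here is
com.lmax.disruptor.collections.Histogram from the disruptor-3.2.1.jar on
your classpath, and logger is an org.apache.logging.log4j.Logger):

  // Buckets at powers of two: 2, 4, 8, ... up to 2^31.
  long[] bounds = new long[31];
  for (int i = 0; i < bounds.length; i++) {
      bounds[i] = 1L << (i + 1);
  }
  Histogram histogram = new Histogram(bounds);
  for (int i = 0; i < 5000000; i++) {
      long start = System.nanoTime();
      logger.info("Short Msg");
      histogram.addObservation(System.nanoTime() - start); // nanos
      long spinUntil = System.nanoTime() + 1000; // ~1 microsec pause
      while (System.nanoTime() < spinUntil) { /* busy-spin */ }
  }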

Ideally you want to print the whole Histogram, so you know exactly how many
measurements each bucket had. This would give you a diagram similar to the
"Async Logging Latency Histogram (at xx threads)" diagram on the Async
Logger page (http://logging.apache.org/log4j/2.x/manual/async.html). If
you want to do that you will need to modify the source code of
PerfTest#createSamplingReport to loop over the buckets in Histogram.
Histogram#getUpperBoundAt(index) gives the max number that goes into that
bucket (the power of 2), and Histogram#getCountAt(index) gives the number
of measurements for that bucket.
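
Something along these lines should do it (a sketch from memory of the
Disruptor 3.2.x Histogram API; double-check getSize() against your jar):

  // Print each bucket's upper bound (nanos) and how many samples fell in it.
  for (int i = 0; i < histogram.getSize(); i++) {
      System.out.printf("<= %d ns: %d measurements%n",
              histogram.getUpperBoundAt(i), histogram.getCountAt(i));
  }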

The PerfTestDriver final report line averages the results of 5 test runs.
For throughput this is okay, but for latency it takes an average of
averages, which is not a good idea (I should change that...).
Still, PerfTestDriver also reports the individual PerfTest results, so
you get 5 latency reports, which should give you some indication.
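
You can see this in your own numbers: in the "Loggers all async" run, the
summary line's 99.99% figure of 1638.4 is simply
(1024 + 1024 + 2048 + 2048 + 2048) / 5, the mean of the five per-run
99.99% bounds, and avg=108.0 is (102 + 168 + 103 + 102 + 65) / 5. A mean
of percentile bounds is not itself a percentile of the combined 2,500
samples, so treat that summary line with some caution.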

About the difference in the resulting numbers:
I've noticed that there can be large variation between runs. This can be
caused by many things.
I recommend running each test at least 3-5 times (this is why
PerfTestDriver runs PerfTest 5 times, so I can see if there is much
variation between runs). Also, try not to have any other processes running
at the same time as you are running the test. If unavoidable, try to have a
consistent load...

I would run the PerfTest a few more times and see if you get similar
results each time or whether the results become more similar to what we see
when this test is kicked off by the PerfTestDriver.

Just out of interest, what hardware/OS are you on?



On Sat, Mar 29, 2014 at 10:46 PM, Rebecca Ahlvarsson
rahlvars...@gmail.com wrote:

 Thank you very much, Remko. So I ran both tests on my machine and got the
 numbers below. I am interested in the latency numbers and throughput when
 we have one (i.e. threadCount 1 ?) asynchronous logger thread using
 disruptor. Can you point me to these numbers in my results below? Not
 sure what I am getting when I run RunLog4j2 because the average is much
 higher than when I run PerfTestDriver. Also what is the difference between
 the five runs on PerfTestDriver? Are they the same thing executed 5 times?

 java -Dlog4j.configurationFile=perf3PlainNoLoc.xml

 -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
 -cp

 target/classes:target/test-classes:lib/disruptor-3.2.1.jar:../log4j-api/target/classes
 org.apache.logging.log4j.core.async.perftest.PerfTest
 org.apache.logging.log4j.core.async.perftest.RunLog4j2 blah blah.log 1
 -verbose
 Starting PerfTest blah (1)...
 Warming up the JVM...
 Warmup complete in 24.1 seconds
 Waiting 10 seconds for buffers to drain warmup data...
 Starting the main test...
 avg=1250 99%=2048 99.99%=16384 sampleCount=500
 2587352 operations/second


 java -Dlog4j.configurationFile=perf3PlainNoLoc.xml

 -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
 -cp

 target/classes:target/test-classes:lib/disruptor-3.2.1.jar:../log4j-api/target/classes
 org.apache.logging.log4j.core.async.perftest.PerfTestDriver
 Log4j2: Loggers all async (single thread) (1/5)... (2/5)... (3/5)...
 (4/5)... (5/5)... took 319.3 seconds
 avg=102 99%=256 99.99%=1024 sampleCount=500
 4080781 operations/second
 avg=168 99%=256 99.99%=1024 sampleCount=500
 3376301 operations/second
 avg=103 99%=256 99.99%=2048 sampleCount=500
 8383943 operations/second
 avg=102 99%=256 99.99%=2048 sampleCount=500
 3500849 operations/second
 avg=65 99%=256 99.99%=2048 sampleCount=500
 3143970 operations/second
 throughput: 4,497,168 ops/sec. 

Re: Running disruptor async performance tests

2014-03-28 Thread Matt Sicker
Going to guess Remko has something useful to say on this, but since he's in
Japan, just wait a bit for the timezones to make sense.


On 28 March 2014 22:03, Rebecca Ahlvarsson rahlvars...@gmail.com wrote:

 I am trying to run the async performance tests described at the link below
 on my machine.


 http://logging.apache.org/log4j/2.x/manual/async.html#Asynchronous_Logging_Performance

 I am not an expert with log4j, so here is how far I got after building
 log4j with maven:

 java -cp

 target/classes:target/test-classes:lib/disruptor-3.2.1.jar:../log4j-api/target/classes
 org.apache.logging.log4j.core.async.perftest.PerfTest
 org.apache.logging.log4j.core.async.perftest.RunLog4j2 blah blah.log 1
 -verbose

 Then I get this in the output:

 avg=17 99%=32 99.99%=64 sampleCount=500
 9962247 operations/second

 The questions I have are:

 1. It looks like the source code IPerfTestRunner uses a much shorter
 message ("Short Msg") instead of the 500-character message stated in the
 link above. Is that intentional or is it a bug? Do we want to test the
 latency with the 500-character message or just a short message?

 2. I notice that my logs are NOT going to any file. I am probably
 misconfiguring something with log4j. How do I generate a file with the
 messages from the performance test?

 3. I just want to test with one asynchronous logging thread, so I am
 passing threadCount 1 above. What does the second parameter 'blah' mean?

 4. Not sure why I get operations/second if I am not passing -throughput on
 the command line. I just want to get the latency numbers for now. After
 that I will worry about throughput.

 So basically I just want to run the same test you run to see those great
 numbers on my production machine.

 Thanks for the help!

 -Becky




-- 
Matt Sicker boa...@gmail.com