Ok, my bet is this is down to memory layout. So, to test, you may want to load
up your favorite word processor with one or two documents and see if this
consistently gives you the slower performance numbers.

Regards,
Kirk

> On Aug 2, 2017, at 3:46 PM, Roger Alsing <rogerals...@gmail.com> wrote:
> 
> Adding to that,
> I've also tried replacing the current fork-join thread pool with a custom 
> thread/core-affine scheduler, and the behavior is exactly the same.
> 
> 
> On Wednesday, August 2, 2017 at 15:44:29 UTC+2, Roger Alsing wrote:
> This is the -Xloggc output:
> https://gist.github.com/rogeralsing/64a9e11b825e870acb20bb4dfb69cc29
> and here is the console output of the same run:
> https://gist.github.com/rogeralsing/22d78fe3ae5155f920fd659c66b124db
> 
> 
> 
> On Wednesday, August 2, 2017 at 10:55:22 UTC+2, Kirk Pepperdine wrote:
> There are a couple of very long safepoint times in there; by long I mean 6 
> or more milliseconds. However, without full GC logging it's difficult to know 
> whether the safepointing is due to GC or something else.
> 
> Other than that, the logs all show pretty normal operation. Can you run 
> this with -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Xloggc:<logfile> as 
> well as the flags you're already using? I have some analytics that I could 
> run, but I need timestamps and GC times for them to be meaningful.
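> 
> For example, something along these lines (the class name and extra flags 
> below are placeholders for whatever you're already running):
> 
>   java -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Xloggc:gc.log \
>        <your existing flags> <your main class>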
> 
> I’d run myself but I’m currently running a couple of other benchmarks.
> 
> Kind regards,
> Kirk
> 
>> On Aug 1, 2017, at 9:32 PM, Roger Alsing <roger...@gmail.com> wrote:
>> 
>> Does this tell anyone anything?
>> https://gist.github.com/rogeralsing/1e814f80321378ee132fa34aae77ef6d
>> https://gist.github.com/rogeralsing/85ce3feb409eb7710f713b184129cc0b
>> 
>> This is beyond my understanding of the JVM.
>> 
>> PS: no multi-socket or NUMA.
>> 
>> Regards
>> Roger
>> 
>> 
>> On Tuesday, August 1, 2017 at 20:22:23 UTC+2, Georges Gomes wrote:
>> Are you benchmarking on a multi-socket/NUMA server?
>> 
>> On Tue, Aug 1, 2017, 1:48 PM Wojciech Kudla <wojciec...@gmail.com> wrote:
>> It definitely makes sense to have a look at GC activity, but I would suggest 
>> looking at safepoints from a broader perspective. Just use 
>> -XX:+PrintGCApplicationStoppedTime to see what's going on. If it's 
>> safepoints, you could get more details with safepoint statistics.
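>> For example (assuming HotSpot on JDK 8; these flags were removed in later 
>> JDKs in favor of -Xlog:safepoint):
>> 
>>   java -XX:+PrintGCApplicationStoppedTime \
>>        -XX:+PrintSafepointStatistics -XX:PrintSafepointStatisticsCount=1 \
>>        <your main class>
>> 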
>> Also, benchmark runs in Java may appear nondeterministic simply because 
>> compilation happens in background threads by default, and some runs may 
>> exhibit a different runtime profile since the compilation threads receive 
>> their time slices at different moments throughout the benchmark. 
>> Are the results also jittery when run entirely in interpreted mode? It may 
>> be worth experimenting with various compilation settings (e.g. disabling 
>> tiered compilation, employing different warmup strategies, playing around 
>> with compiler control).
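>> As a starting point, some HotSpot flags to experiment with (a sketch, not a 
>> recipe):
>> 
>>   -Xint                    # run fully interpreted, no JIT at all
>>   -XX:-TieredCompilation   # C2 only, skips the tiered warmup path
>>   -XX:+PrintCompilation    # log JIT activity to correlate with throughput shifts
>> 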
>> Are you employing any sort of thread-to-CPU affinity?
>> Are you running on a multi-socket setup?
>> 
>> 
>> On Tue, 1 Aug 2017, 19:27 Roger Alsing, <roger...@gmail.com> wrote:
>> Some context: I'm building an actor framework, similar to Akka but 
>> polyglot/cross-platform.
>> For each platform we have the same benchmarks, one of which is an 
>> in-process ping-pong benchmark.
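>> 
>> The shape of the benchmark is roughly this (a minimal sketch using plain 
>> JDK threads and blocking queues rather than the actual framework; the real 
>> benchmark is linked further down):
>> 
>>   // Illustrative only: queues stand in for mailboxes, threads for actors.
>>   import java.util.concurrent.ArrayBlockingQueue
>> 
>>   fun main() {
>>       val pingBox = ArrayBlockingQueue<Int>(1024)  // "mailbox" of the ping side
>>       val pongBox = ArrayBlockingQueue<Int>(1024)  // "mailbox" of the pong side
>>       val messages = 1_000_000
>> 
>>       val pong = Thread {
>>           repeat(messages) { pingBox.put(pongBox.take()) }  // echo every message back
>>       }
>>       val ping = Thread {
>>           pongBox.put(0)  // kick off the exchange
>>           repeat(messages) {
>>               val n = pingBox.take()
>>               if (it < messages - 1) pongBox.put(n + 1)
>>           }
>>       }
>>       val start = System.nanoTime()
>>       pong.start(); ping.start()
>>       ping.join(); pong.join()
>>       val ms = (System.nanoTime() - start) / 1_000_000
>>       println("$messages round trips in $ms ms")
>>   }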
>> 
>> On .NET and Go, we can spin up pairs of ping-pong actors equal to the number 
>> of cores in the CPU, and no matter how many more pairs we spin up, the total 
>> throughput remains roughly the same.
>> But on the JVM, if we do this I can see that we max out at 100% CPU, as 
>> expected; yet if I instead spin up many more pairs, e.g. 20 * core_count, 
>> the total throughput triples.
>> 
>> I suspect this is due to the system running in a more steady-state fashion 
>> in the latter case: mailboxes are never completely drained, so actors don't 
>> have to switch between processing and idle.
>> Would this be fair to assume?
>> This is the reason I believe this is a question for this specific forum.
>> 
>> Now to the real question: roughly 60-40 of the time when the benchmark is 
>> started, it runs steadily at 250 million msg/sec; the other times it runs at 
>> 350 million msg/sec.
>> The reason I find this strange is that each run is stable over time: if I 
>> don't stop the benchmark, it continues at the same pace.
>> 
>> If anyone is bored and would like to try it out, the repo is here:
>> https://github.com/AsynkronIT/protoactor-kotlin
>> and the actual benchmark is here:
>> https://github.com/AsynkronIT/protoactor-kotlin/blob/master/examples/src/main/kotlin/actor/proto/examples/inprocessbenchmark/InProcessBenchmark.kt
>> 
>> This is also consistent with or without various VM arguments.
>> 
>> I'm very interested to hear if anyone has any theories about what could 
>> cause this behavior.
>> 
>> One factor that seems to be involved is GC, but not in the obvious way; 
>> rather the reverse.
>> In the beginning, when the framework allocated more memory, it more often 
>> ran at the high speed.
>> And the more I've managed to reduce allocations without touching the hot 
>> path, the more the benchmark has started to toggle between these two numbers.
>> 
>> Thoughts?
>> 