Re: Faster System.nanotime() ?

2019-04-30 Thread Nitsan Wakart
The code has a nice explanation of the workaround they resort to in order to ensure a monotonic time source: https://doc.rust-lang.org/src/std/time.rs.html#157 // And here we come upon a sad state of affairs. The whole point of // `Instant` is that it's monotonically increasing. We
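The same clamp-to-last-value pattern the Rust standard library uses can be sketched in Java (a hypothetical helper, not from the thread; note that HotSpot's `System.nanoTime()` is already monotonic within a single JVM): remember the last value handed out and never return anything smaller.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: clamp a possibly non-monotonic time source so
// successive calls never observe time going backwards, mirroring the
// workaround in Rust's std::time::Instant.
public class MonotonicClock {
    private final AtomicLong last = new AtomicLong(Long.MIN_VALUE);

    public long nanoTime() {
        long now = System.nanoTime();
        while (true) {
            long prev = last.get();
            if (now <= prev) {
                return prev; // source stepped back (or stood still); reuse the last value
            }
            if (last.compareAndSet(prev, now)) {
                return now;
            }
        }
    }
}
```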

Re: Faster System.nanotime() ?

2019-04-30 Thread Nitsan Wakart
Zing has the option to do just that on systems which reliably support it (-XX:+UseRdtsc IIRC). So yes it can be done, and is sometimes even the right thing to do. On Tue, Apr 30, 2019 at 7:50 AM dor laor wrote: > It might be since in the past many systems did not have a stable rdtsc and > thus i

Re: Exotic classes

2019-04-22 Thread Nitsan Wakart
The cost would be a combination of the deopt, the cost of the slowdown, and the cost of recompilation, all of which are variable based on the generated code which embeds the constant and the compiler used. > On 22 Apr 2019, at 20:25, 'Carl Mastrangelo' via mechanical-sympathy > wrote: > > These classes (e

Re: Huge, unexpected performance overhead (of static methods?)

2019-02-06 Thread Nitsan Wakart
Profile it with Solaris Studio (yes, you can!); this should give you insight at the assembly level as well. The code may have been compiled by C1 or C2, but you are correct that if you see a symbol in perf-map-agent then it is definitely compiled. Can you reproduce the issue in a minimal JMH ben

Re: How badly do JFR stack traces lie?

2017-12-03 Thread Nitsan Wakart
So, as apangin points out there's an issue where JFR cannot walk the stack safely. To add insult to injury, JFR does not report failed samples at all, which results in a systematic omission of certain methods from the profile. This is a massive reporting issue in my opinion, and has not been fix

Re: Executing thread by JVM.

2017-11-12 Thread Nitsan Wakart
Default behavior for what you describe: - An OS thread is created, and tied to a new Thread object. Your code is the "Runnable" for that Thread - When the thread is started a bunch of JVM runtime code is executed, finally calling into Thread::run, which in turn calls into your code. - Your code is
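The default behavior described above can be sketched as follows: an OS thread backs the `Thread` object, and starting it eventually calls `Thread::run`, which invokes the supplied `Runnable`.

```java
// Minimal sketch of the lifecycle described above: Thread.start()
// creates and schedules an OS thread; JVM runtime code then calls
// Thread.run(), which invokes the Runnable we supplied.
public class ThreadLifecycle {
    static volatile String ranOn = null;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> ranOn = Thread.currentThread().getName();
        Thread t = new Thread(work, "worker-1");
        t.start(); // OS thread comes into existence here
        t.join();  // wait for run() (and our Runnable) to complete
        System.out.println("Runnable ran on: " + ranOn);
    }
}
```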

Re: jvm stuck in SafepointSynchronize::begin()

2017-09-28 Thread Nitsan Wakart
Thanks Alex! For future reference to all here: AFAIK lightweight-java-profiler has never progressed much beyond the proof of concept stage and is not actively maintained. It was forked and developed into honest-profiler, which offers more features, is actively developed and is more stable. On a

Re: Aeron zero'ing buffer?

2017-05-29 Thread 'Nitsan Wakart' via mechanical-sympathy
In particular, for the Aeron LogBuffer, false sharing on length writes can only happen (on systems with a 64-byte cache line) for 0-length messages, as the data header size is 32 bytes: https://github.com/real-logic/Aeron/blob/e0bb87c4538125c577a653f20f5865a6d6d8dc95/aeron-client/src/main/java/io/aeron/prot
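The arithmetic behind that claim can be sketched as follows, assuming a 32-byte data header and 32-byte frame alignment (constants here are illustrative, not read from Aeron's code):

```java
// Illustrative arithmetic: frames are aligned to the 32-byte header
// size, so the length fields of two consecutive frames land on the
// same 64-byte cache line only when the first frame occupies exactly
// one 32-byte unit, i.e. has a 0-length payload.
public class FalseSharingMath {
    static final int CACHE_LINE = 64;
    static final int HEADER = 32;
    static final int ALIGNMENT = 32;

    static int frameLength(int payload) {
        int raw = HEADER + payload;
        return (raw + ALIGNMENT - 1) & -ALIGNMENT; // round up to alignment
    }

    // Do the headers of two consecutive frames share a cache line?
    static boolean headersShareLine(int firstPayload, int offset) {
        int next = offset + frameLength(firstPayload);
        return offset / CACHE_LINE == next / CACHE_LINE;
    }

    public static void main(String[] args) {
        System.out.println(headersShareLine(0, 0)); // true: 0-length payload
        System.out.println(headersShareLine(1, 0)); // false: frame rounds up to 64
    }
}
```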

Re: Using a SEDA-like architecture to reduce icache misses

2017-03-15 Thread 'Nitsan Wakart' via mechanical-sympathy
This, to me, is the distilled observation here, so repeating in isolation: "Mostly where we have complex code running on small work items" Which makes a lot of sense. -- You received this message because you are subscribed to the Google Groups "mechanical-sympathy" group. To unsubscribe from this

Re: HotSpot code cache and instruction prefetching

2017-02-10 Thread 'Nitsan Wakart' via mechanical-sympathy
We have no availability-based heuristics, nor does anyone else AFAIK, for dynamically enabling/disabling/tuning codegen to fit into smaller code cache pools or react to low-space indications. Running out of code cache space is pretty rare, and you can have a bigger cache if you like. Zing and Ope

Re: HotSpot code cache and instruction prefetching

2017-02-08 Thread 'Nitsan Wakart' via mechanical-sympathy
We've implemented a code cache allocation scheme to this effect in recent versions of Zing. Zing's code cache was similarly naive, and since Zing has been tiered compiling for a while now, we started at a similar point to what you describe. The hypothesis (supported by some evidence) was that in su

Re: Prefetching and false sharing

2017-02-06 Thread 'Nitsan Wakart' via mechanical-sympathy
I'm a bit late to the party. Yes, JCTools pads to 128 bytes; this was based on measurement and was visible on non-NUMA setups. See notes here: http://psy-lob-saw.blogspot.co.za/2013/10/spsc-revisited-part-iii-fastflow-sparse.html Look for the comparison between Y8 and Y83, which compare 2 identical queues e
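A common way to get that 128-byte (two cache line) padding in plain Java, before `@Contended` existed, is inheritance-based field padding. This is a sketch of the general technique, not JCTools' actual class hierarchy:

```java
// Sketch of inheritance-based padding (the general technique, not
// JCTools' actual classes). The JVM may reorder fields within a class
// but keeps superclass fields together, so padding via inheritance is
// reliable. 128 bytes of padding on each side keeps the hot counter
// two 64-byte cache lines away from neighbours, which also defeats the
// adjacent-cache-line prefetcher.
class PadBefore {
    long p00, p01, p02, p03, p04, p05, p06, p07;
    long p08, p09, p10, p11, p12, p13, p14, p15;
}

class HotField extends PadBefore {
    volatile long counter;
}

public class PaddedCounter extends HotField {
    long q00, q01, q02, q03, q04, q05, q06, q07;
    long q08, q09, q10, q11, q12, q13, q14, q15;

    public void increment() { counter++; } // intended for a single writer
    public long get() { return counter; }
}
```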

Re: Operation Reordering

2017-01-18 Thread 'Nitsan Wakart' via mechanical-sympathy
"- You can assume atomicity of values (no word tearing)" This seems to have tickled people, I apologise for my imprecise wording. Better wording would be: "You can assume atomicity of read/writes, and no word tearing, to the extent these are promised to you by the spec" - long/double plain write
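The long/double caveat can be sketched concretely. Per JLS 17.7, plain reads and writes of `long` and `double` are the one case where the spec permits word tearing; declaring the field `volatile` restores atomicity (this demo only shows the declarations, since tearing is inherently racy and hard to reproduce on demand):

```java
// Per JLS 17.7, a plain long/double write may legally be performed as
// two separate 32-bit writes, so a racing reader can see a half-written
// value. Declaring the field volatile guarantees atomic access (and
// adds ordering, which you may or may not want).
public class TearingDemo {
    long plainValue;         // under a data race, a reader MAY see a torn value
    volatile long safeValue; // volatile long/double reads and writes are atomic

    void writeBoth(long v) {
        plainValue = v; // atomicity not guaranteed across threads
        safeValue = v;  // always observed whole by concurrent readers
    }
}
```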

Re: Operation Reordering

2017-01-17 Thread 'Nitsan Wakart' via mechanical-sympathy
"what about all the encoders/decoders or any program that rely on data access patterns to pretend to be and remain "fast"?" There's no problem in reordering while maintaining observable effects, right? You should assume a compiler interprets "observable effects" to mean "order imposed by memory m

Re: Who is the first one invented lock-free MPMC circular buffer

2017-01-06 Thread 'Nitsan Wakart' via mechanical-sympathy
As far as my understanding goes: the Disruptor is not, strictly speaking, multi-consumer, and is not a queue (it allows multiple consumers to have independent views, dependencies, etc.), and does not employ the same algo. The algos are similar, and both draw from Lamport. The Disruptor algo for multi-producer has also cha

Re: US Presidents and Computing Security

2017-01-02 Thread 'Nitsan Wakart' via mechanical-sympathy
amusing, but not relevant to mailing list.

Re: Continuous performance monitoring with Java FlightRecorder (JFR)?

2016-12-07 Thread 'Nitsan Wakart' via mechanical-sympathy
For profiling information you have the option of using honest-profiler: https://github.com/RichardWarburton/honest-profiler It works on OpenJDK, Oracle (going back to before JMC days) and Zing (recent releases), though it does rely on an unofficial profiling API. It does not cover anything else th

Re: Single writer counter: how expensive is a volatile read?

2016-11-08 Thread 'Nitsan Wakart' via mechanical-sympathy
To summarize from other comments:
- No volatile read -> no HB
- Plain writes/reads -> no atomicity (concern for 32bit)
- Java 9 offers us Opaque reads/writes -> -HB, +atomicity, might be a good fit
- Aleksey points out false sharing concerns
- Vladimir highlights cache miss costs
The one addition I ha
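The Java 9 opaque option from the summary above can be sketched with `VarHandle` (class and field names here are illustrative): opaque writes keep the `long` atomic (no tearing on 32-bit) without volatile's happens-before ordering.

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

// Sketch of a single-writer counter using Java 9 opaque access mode.
// Opaque long writes are atomic (no word tearing) but establish no
// happens-before edge -- exactly the "+atomicity, -HB" trade-off above.
public class SingleWriterCounter {
    private static final VarHandle COUNT;
    static {
        try {
            COUNT = MethodHandles.lookup()
                    .findVarHandle(SingleWriterCounter.class, "count", long.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private long count; // written by exactly one thread

    public void increment() {
        // single writer: a plain read of our own last write is safe;
        // the opaque write keeps the store atomic for concurrent readers
        COUNT.setOpaque(this, count + 1);
    }

    public long get() {
        return (long) COUNT.getOpaque(this);
    }
}
```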

Re: Fwd: AIX/Linux Admin (Production Support)

2016-09-26 Thread 'Nitsan Wakart' via mechanical-sympathy
"Hi, please remove me from your mailing list" I think what you meant is: "Please block & remove this spammer from OUR mailing-list" ;-)

Re: what is available in intel vtune that cannot be done with perf/pmc today ?

2016-08-25 Thread 'Nitsan Wakart' via mechanical-sympathy
VTune gives you a nice top-level analysis and a drill-down into many counters. Perf can profile all these counters if you want to, but you have to say what you want. You can use toplev to expose the bottlenecks and then use perf to annotate/profile for those counters. On Thursday, August 25, 20

Re: Lynx Queue - a new SP/SC queue

2016-08-15 Thread 'Nitsan Wakart' via mechanical-sympathy
2c, as my name was mentioned: The technique is novel, it's a neat magic trick :-) When referring to the JCTools SPSC please note I can't take credit, it's a mashup of great ideas from Martin, FastFlow, BQueue and so on. AFAIK for Java code this approach is not attainable. Given that for most people
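The Lamport lineage mentioned above can be sketched as a minimal single-producer/single-consumer ring buffer. This is an illustrative baseline, not JCTools' actual implementation (which layers padding, lookahead, and lazySet on top of the same idea):

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative Lamport-style SPSC ring buffer. Exactly one thread may
// call offer() and exactly one may call poll(). The element store is
// published by the volatile write to tail; the consumer's volatile
// read of tail makes the element visible before it is read.
public class SpscRingBuffer<E> {
    private final Object[] buffer;
    private final int mask;
    private final AtomicLong head = new AtomicLong(); // consumer index
    private final AtomicLong tail = new AtomicLong(); // producer index

    public SpscRingBuffer(int capacityPow2) {
        buffer = new Object[capacityPow2];
        mask = capacityPow2 - 1;
    }

    public boolean offer(E e) {
        long t = tail.get();
        if (t - head.get() == buffer.length) {
            return false; // full
        }
        buffer[(int) (t & mask)] = e;
        tail.set(t + 1); // publish only after the element is in place
        return true;
    }

    @SuppressWarnings("unchecked")
    public E poll() {
        long h = head.get();
        if (h == tail.get()) {
            return null; // empty
        }
        int idx = (int) (h & mask);
        E e = (E) buffer[idx];
        buffer[idx] = null; // allow GC of consumed element
        head.set(h + 1);
        return e;
    }
}
```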