The code has a nice explanation of the workaround they have to resort to in order to
ensure a monotonic time source.
https://doc.rust-lang.org/src/std/time.rs.html#157
// And here we come upon a sad state of affairs. The whole point of
// `Instant` is that it's monotonically increasing. We
Zing has the option to do just that on systems which reliably support it
(-XX:+UseRdtsc IIRC). So yes it can be done, and is sometimes even the
right thing to do.
On Tue, Apr 30, 2019 at 7:50 AM dor laor wrote:
> It might be since in the past many systems did not have a stable rdtsc and
> thus i
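To make the monotonicity point concrete, here is a minimal sketch (my own illustration, not from the thread): `System.nanoTime()` is the JVM's monotonic source, whereas `System.currentTimeMillis()` tracks wall-clock time and can jump backwards on clock adjustment.

```java
// Sketch: elapsed-time measurement with the monotonic source.
// System.nanoTime() is specified to be monotonic within a JVM instance;
// System.currentTimeMillis() is wall-clock time and may go backwards.
public class MonotonicTiming {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(10);
        long elapsedNanos = System.nanoTime() - start;
        // With a monotonic source the elapsed time is never negative.
        System.out.println(elapsedNanos >= 0);
    }
}
```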
The cost would be a combination of the deopt, cost of slow down, and cost of
compilation all of which are variable based on the generated code which embeds
the constant and the compiler used.
> On 22 Apr 2019, at 20:25, 'Carl Mastrangelo' via mechanical-sympathy
> wrote:
>
> These classes (e
Profile it with Solaris Studio (yes, you can!); this should give you insight into
the assembly level as well.
The code may have been compiled by c1, or c2, but you are correct that if you
see a symbol in perf-map-agent then it is definitely compiled.
Can you reproduce the issue in a minimal JMH ben
So, as apangin points out there's an issue where JFR cannot walk the stack
safely. To add insult to injury, JFR does not report failed samples at all,
which results in a systematic omission of certain methods from the profile.
This is a massive reporting issue in my opinion, and has not been fix
Default behavior for what you describe:
- An OS thread is created, and tied to a new Thread object. Your code is
the "Runnable" for that Thread
- When the thread is started a bunch of JVM runtime code is executed,
finally calling into Thread::run, which in turn calls into your code.
- Your code is
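For illustration, the default behavior described above boils down to something like this (names purely illustrative):

```java
// Sketch: a Thread wraps your Runnable; the OS thread is created on
// start(), and JVM runtime code eventually calls Thread::run, which
// calls into your code.
public class ThreadLifecycle {
    public static void main(String[] args) throws InterruptedException {
        Runnable yourCode =
            () -> System.out.println("running in " + Thread.currentThread().getName());
        Thread t = new Thread(yourCode, "worker"); // no OS thread yet
        t.start(); // OS thread created here; runtime calls into yourCode
        t.join();  // wait for the worker to finish
    }
}
```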
Thanks Alex!
For future reference to all here:
AFAIK lightweight-java-profiler has never progressed much beyond the proof of
concept stage and is not actively maintained. It was forked and developed into
honest-profiler, which offers more features, is actively developed and is more
stable. On a
In particular for Aeron LogBuffer false sharing on length writes can only
happen (on systems with a 64b cache line) for 0-length messages, as the
data header size is
32b: https://github.com/real-logic/Aeron/blob/e0bb87c4538125c577a653f20f5865a6d6d8dc95/aeron-client/src/main/java/io/aeron/prot
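A back-of-envelope check of that claim (my own sketch; it assumes a 64-byte cache line, the 32-byte data header, and 32-byte frame alignment):

```java
// Sketch: the length word sits at offset 0 of each frame. Two adjacent
// frames' length words share a cache line only when the frame fits in
// half a line, i.e. only for a 0-length payload.
public class FrameLineCheck {
    static final int CACHE_LINE = 64; // assumed cache line size
    static final int HEADER = 32;     // data header size
    static final int ALIGNMENT = 32;  // assumed frame alignment

    static int align(int v, int a) { return (v + a - 1) & -a; }

    public static void main(String[] args) {
        for (int payload : new int[] {0, 1, 32}) {
            int frame = align(HEADER + payload, ALIGNMENT);
            // line index of this frame's length word (offset 0) vs the next frame's
            boolean shared = (0 / CACHE_LINE) == (frame / CACHE_LINE);
            System.out.println(payload + " -> " + shared);
        }
    }
}
```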
This to me is the distilled observation here, so repeating in isolation:
"Mostly where we have complex code running on small work items"
Which makes a lot of sense.
--
You received this message because you are subscribed to the Google Groups
"mechanical-sympathy" group.
To unsubscribe from this
We have no availability-based heuristics, nor does anyone else AFAIK, for
dynamically enabling/disabling/tuning code gen to fit into smaller code cache
pools or react to low-space indications. Running out of code cache space is
pretty rare, and you can have a bigger cache if you like. Zing and Ope
We've implemented a code cache allocation scheme to this effect in recent
versions of Zing. Zing's code cache was similarly naive and since Zing has been
tiered compiling for a while now we started at a similar point to what you
describe. The hypothesis (supported by some evidence) was that in su
I'm a bit late to the party; yes, JCTools pads 128. This was based on
measurement and was visible on non-NUMA setups. See notes
here: http://psy-lob-saw.blogspot.co.za/2013/10/spsc-revisited-part-iii-fastflow-sparse.html
Look for the comparison between Y8 and Y83 which compare 2 identical queues
e
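For illustration, the usual shape of such padding in Java (a sketch, not the actual JCTools source; JCTools generates its pad classes):

```java
// Sketch: pad a hot field by 128 bytes on each side. A class hierarchy
// is used because the JVM lays out superclass fields before subclass
// fields, whereas fields within one class may be reordered.
abstract class PrePad   { long p01, p02, p03, p04, p05, p06, p07, p08,
                               p09, p10, p11, p12, p13, p14, p15, p16; } // 128 bytes before
abstract class HotField extends PrePad { volatile long value; }
class PaddedCounter extends HotField   { long q01, q02, q03, q04, q05, q06, q07, q08,
                                              q09, q10, q11, q12, q13, q14, q15, q16; } // 128 bytes after

public class PaddingDemo {
    public static void main(String[] args) {
        PaddedCounter c = new PaddedCounter();
        c.value = 42; // the hot field behaves as a normal volatile long
        System.out.println(c.value);
    }
}
```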
"- You can assume atomicity of values (no word tearing)"
This seems to have tickled people, I apologise for my imprecise wording.
Better wording would be:
"You can assume atomicity of read/writes, and no word tearing, to the extent
these are promised to you by the spec"
- long/double plain write
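To illustrate the long/double point (JLS 17.7), a small sketch of my own:

```java
// Sketch: per JLS 17.7 a plain long/double write may be split into two
// 32-bit halves on some JVMs, so a concurrent reader can see a "torn"
// value. Declaring the field volatile restores single-access atomicity.
public class Tearing {
    static long plainValue;          // write/read MAY tear (JLS 17.7)
    static volatile long safeValue;  // volatile long/double never tears

    public static void main(String[] args) {
        // A value whose two 32-bit halves differ, so tearing would be visible.
        safeValue = 0xFFFFFFFF00000000L;
        System.out.println(Long.toHexString(safeValue));
    }
}
```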
"what about all the encoders/decoders or any program that rely on data access
patterns to pretend to be and remain "fast"?"
There's no problem in reordering while maintaining observable effects, right?
You should assume a compiler interprets "observable effects" to mean "order
imposed by memory m
As far as my understanding goes: Disruptor is not strictly speaking Multi
Consumer, and is not a queue (it allows multi consumers to have independent views
+ dependencies etc.) and does not employ the same algo. The algos are similar, and both
draw from Lamport. The Disruptor algo for multi producer has also cha
amusing, but not relevant to mailing list.
For profiling information you have the option of using
honest-profiler: https://github.com/RichardWarburton/honest-profiler
It works on OpenJDK, Oracle (going back to before JMC days) and Zing (recent
releases), though it does rely on an unofficial profiling API. It does not cover
anything else th
To summarize from other comments:
- No volatile read -> no HB
- Plain writes/reads -> no atomicity (concern for 32bit)
- Java9 offers us Opaque read/writes -> - HB, + atomicity, might be a good fit
- Aleksey points out false sharing concerns
- Vladimir highlights cache miss costs
The one addition I ha
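A minimal sketch of the Java 9 Opaque option via VarHandle (my own illustration; class and field names are made up):

```java
// Sketch: opaque access gives per-variable atomicity (no tearing for
// long/double) without the happens-before ordering of volatile.
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class OpaqueDemo {
    long value;
    static final VarHandle VALUE;
    static {
        try {
            VALUE = MethodHandles.lookup()
                    .findVarHandle(OpaqueDemo.class, "value", long.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static void main(String[] args) {
        OpaqueDemo d = new OpaqueDemo();
        VALUE.setOpaque(d, 7L);                        // atomic write, no HB edge
        System.out.println((long) VALUE.getOpaque(d)); // atomic read
    }
}
```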
"Hi, please remove me from your mailing list"
I think what you meant is: "Please block & remove this spammer from OUR
mailing-list" ;-)
VTune gives you a nice top-level analysis and drill-down into many counters. Perf
can profile all these counters if you want, but you have to say what you want.
You can use toplev to expose the bottlenecks and then use perf to
annotate/profile for those counters.
On Thursday, August 25, 20
2c, as my name was mentioned: The technique is novel, it's a neat magic trick
:-) When referring to the JCTools SPSC please note I can't take credit; it's a
mashup of great ideas from Martin, FastFlow, BQueue and so on. AFAIK for Java
code this approach is not attainable. Given that for most people