Jeremy (and all),

I’m not on the serviceability list so I won’t include the messages so far. :-) 
Also CCing the hotspot GC list, in case they have some feedback on this.

Could I suggest a (much) simpler but at least as powerful and flexible way to 
do this? (This is something we’ve been meaning to do for a while now for 
TwitterJDK, the JDK we develop and deploy here at Twitter.) You can force 
allocations to go into the slow path periodically by artificially setting the 
TLAB top to a lower value. So, imagine a TLAB is 4M. You can set top to 
(bottom+1M). When an allocation thinks the TLAB is full (in this case, the 
first 1MB is full) it will call the allocation slow path. There, you can 
intercept it, sample the allocation (and, like in your case, you’ll also have 
the correct stack trace), notice that the TLAB is not actually full, extend its 
top to, say, (bottom+2M), and you’re done.
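
To make this concrete, here’s a rough, standalone sketch of the idea (this is 
not HotSpot code; all names, like SampledTLAB and sample_interval, are made up 
for illustration; the artificially lowered limit is what I called "top" above):

  // Standalone model of the scheme: the TLAB keeps its real end plus an
  // artificially lowered "sampling end" that forces allocations into the
  // slow path every sample_interval bytes.
  #include <algorithm>
  #include <cstddef>
  #include <cstdio>
  #include <vector>

  struct SampledTLAB {
    char*  bottom;
    char*  top;               // current bump pointer
    char*  actual_end;        // real end of the TLAB
    char*  sampling_end;      // artificially lowered limit ("top" above)
    size_t sample_interval;   // e.g. 1M in the example above

    void reset(char* start, size_t size, size_t interval) {
      bottom = top = start;
      actual_end = start + size;
      sample_interval = interval;
      sampling_end = std::min(actual_end, bottom + interval);
    }

    // Fast path: a normal TLAB bump allocation, except it compares against
    // sampling_end instead of actual_end.
    void* allocate(size_t bytes) {
      if (top + bytes <= sampling_end) {
        void* obj = top;
        top += bytes;
        return obj;
      }
      return allocate_slow(bytes);
    }

    // Slow path: sample the allocation, notice the TLAB is not actually
    // full, and push sampling_end further out.
    void* allocate_slow(size_t bytes) {
      if (top + bytes <= actual_end) {
        sample_allocation(bytes);   // record size, class, stack trace, ...
        sampling_end = std::min(actual_end, top + sample_interval);
        void* obj = top;
        top += bytes;
        return obj;
      }
      return nullptr;               // genuinely full: refill / allocate outside TLAB
    }

    void sample_allocation(size_t bytes) {
      std::printf("sampled allocation of %zu bytes\n", bytes);
    }
  };

  int main() {
    std::vector<char> backing(4 * 1024 * 1024);              // pretend 4M TLAB
    SampledTLAB tlab;
    tlab.reset(backing.data(), backing.size(), 1024 * 1024); // sample every ~1M
    for (int i = 0; i < 100000; i++) tlab.allocate(64);      // expect a few samples
    return 0;
  }

In the real thing the fast path stays exactly whatever the interpreter / JIT 
already emit for TLAB allocation; you only lower the value it compares against 
and teach the slow path about it.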

Advantages of this approach:

* This is a much smaller, simpler, and self-contained change (no compiler 
changes necessary to maintain...).

* When it’s off, the overhead is only one extra test at the slow path TLAB 
allocation (i.e., negligible; we do some sampling on TLABs in TwitterJDK using 
a similar mechanism and, when it’s off, I’ve observed no performance overhead).

* (most importantly) You can turn this on and off, and adjust the sampling 
rate, dynamically. If you do the sampling based on JITed code, you’ll have to 
recompile all methods with allocation sites to turn the sampling on or off. 
(You can of course have it always on and just discard the output; it’d be nice 
not to have to do that though. IMHO, at least.)

* You can also very cheaply turn this on and off (or adjust the sampling 
frequency) per thread, if that’d be helpful in some way (just add the 
appropriate info to the thread’s TLAB; see the small sketch below).
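
To illustrate the per-thread part (again just a sketch, reusing the made-up 
SampledTLAB struct from above), the sampling state could simply live next to 
the thread’s TLAB and be re-applied on every refill and slow-path call:

  // Hypothetical per-thread sampling state; when sampling is off, sampling_end
  // is kept equal to the real end, so the only extra cost is one test in the
  // slow path.
  struct PerThreadSamplingState {
    bool   sampling_enabled;   // can be flipped dynamically, per thread
    size_t sample_interval;    // can also be adjusted dynamically
  };

  void apply_sampling_state(SampledTLAB& tlab, const PerThreadSamplingState& s) {
    tlab.sample_interval = s.sample_interval;
    tlab.sampling_end = s.sampling_enabled
        ? std::min(tlab.actual_end, tlab.top + s.sample_interval)
        : tlab.actual_end;     // sampling off: behaves like a normal TLAB
  }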

A few extra comments on the previous discussion:

* "JFR samples per new TLAB allocation. It provides really very good picture 
and I haven't seen overhead more than 2” : When TLABs get very large, I don’t 
think sampling one object per TLAB is enough to get a good sample (IMHO, at 
least). It’s probably OK for something like jbb which mostly allocates 
instances of a handful of classes and has very few allocation sites. But much 
of the code we run at Twitter is a lot more elaborate than that and, in our 
experience, sampling one object per TLAB is not enough. You can, of course, 
decrease the TLAB size to increase the number of samples. But it’d be good not 
to have to do that, given that a smaller TLAB size could increase contention 
across threads.

* "Should it *just* take a stack trace, or should the behavior be 
configurable?” : I think we’d have to separate the allocation sampling 
mechanism from the consumption of the allocation samples. Once the sampling 
mechanism is in, different JVMs can take advantage of it in different ways. I 
assume that the Oracle folks would like at least a JFR event for every such 
sample. But in your build you can add extra code to collect the information in 
the way you have now.
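
For the “separate the mechanism from the consumers” point, I’m thinking of 
something along these lines (purely illustrative; the name and signature are 
made up):

  // Hypothetical narrow interface between the sampler and whoever consumes
  // the samples: one build plugs in a JFR event, another a custom collector.
  struct AllocationSampleConsumer {
    virtual ~AllocationSampleConsumer() = default;
    virtual void sampled_allocation(size_t size_in_bytes
                                    /*, klass, stack trace, thread, ... */) = 0;
  };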

* Talking of JFR, it’s a bit unfortunate that the AllocObjectInNewTLAB event 
has both the new TLAB information and the allocation information. It would have 
been nice if that event was split into two, say NewTLAB and AllocObjectInTLAB, 
and we’d be able to fire the latter for each sample.

* "Should the interval between samples be configurable?” : Totally. In fact, 
it’d be helpful if it was configurable dynamically. Imagine if a JVM starts 
misbehaving after 2-3 weeks of running. You can dynamically increase the 
sampling rate to get a better profile if the default is not giving 
fine-grained enough information.

* "As long of these features don’t contribute to sampling bias” : If the 
sampling interval is fixed, sampling bias would be a very real concern. In the 
above example, I’d increment top by 1M (the sampling interval) + p% (a fudge 
factor).
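
Something like this (again, made-up names), so the sampling points don’t lock 
onto a periodic allocation pattern:

  #include <cstddef>
  #include <random>

  // Jittered sampling step: a fixed interval plus a random fudge of up to
  // fudge_percent of the interval.
  size_t next_sampling_step(size_t interval_bytes, double fudge_percent,
                            std::mt19937_64& rng) {
    std::uniform_real_distribution<double> fudge(0.0, fudge_percent / 100.0);
    return interval_bytes + static_cast<size_t>(interval_bytes * fudge(rng));
  }
  // e.g.: sampling_end = std::min(actual_end,
  //                               top + next_sampling_step(1 << 20, 10.0, rng));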

* "Yes, a perhaps optional callbacks would be nice too.” : Oh, no. :-) But, as 
I said, we should definitely separate the sampling mechanism from the mechanism 
that consumes the samples.

* "Another problem with our submitting things is that we can't really test on 
anything other than Linux.” : Another reason to go with as platform-independent 
a solution as possible. :-)

Regards,

Tony

-----

Tony Printezis | JVM/GC Engineer / VM Team | Twitter

@TonyPrintezis
tprinte...@twitter.com
