On Wed, 4 Nov 2020 09:27:39 GMT, Kim Barrett <kbarr...@openjdk.org> wrote:
>> I didn't say it "doesn't work for shenandoah", I said it wouldn't have
>> worked with the old shenandoah barriers without additional work, like
>> adding calls to resolve. I understand the design intent of notifying the
>> table management that its hash codes are out of date. And the num-dead
>> callback isn't the right place, since there are num-dead callback
>> invocations that aren't associated with hash code invalidation. (It's not
>> a correctness wrong, it's a "these things are unrelated and this causes
>> unnecessary work" wrong.)
>
> It used to be that jvmti tagmap processing was all-in-one (in GC weak
> reference processing, with the weak clearing, dead table entry removal, and
> rehashing all done in one pass). This change has split that up, with the
> weak clearing happening in a different place (still as part of the GC's
> weak reference processing) than the others (which I claim can now be part
> of the mutator, whether further separated or not).
>
> "Concurrent GC" has nothing to do with whether tagmaps need rehashing. Any
> copying collector needs to do so. A non-copying collector (whether
> concurrent or not) would not. (We don't have any purely non-copying
> collectors, but G1 concurrent oldgen collection is non-copying.) And weak
> reference clearing (whether concurrent or not) has nothing to do with
> whether objects have been moved and so the hashing has been invalidated.
>
> There's also a "well known" issue with address-based hashing and
> generational or similar collectors, where a simple boolean "objects have
> moved" flag can be problematic, and which tagmaps seem likely to be prone
> to. The old do_weak_oops tries to mitigate it by recognizing when the
> object didn't move.

The new rehash function doesn't move objects either; it essentially does the
same thing as the old weak_oops_do function.

-------------

PR: https://git.openjdk.java.net/jdk/pull/967