On Friday, May 18, 2018 at 2:30:08 AM UTC-5, Francesco Nigro wrote:
>
> Thanks Gil!
>
> I will fall back to my original (semi-practical) concern, using this 
> renewed knowledge :)
> Suppose that we want to perform write operations surrounding both a j.u.c. 
> Lock and a synchronized mutual exclusion block, and we want:
>
>    1. these write operations to not be moved inside the block and to 
>    maintain their relative positions with respect to it
>
Preventing your appear-prior-to-lock-acquisition writes from "moving into 
the block" is subtly different from preventing their re-ordering with 
writes and reads that are within the block. Are you sure you want the 
former and not just the latter?

It is easier to see how to prevent the latter. All you have to do is order 
the earlier writes against those in-block writes and/or reads, ignoring the 
lock. This is where, for example, a lazySet on the inside-the-block writes 
will order them against the before-the-block writes, regardless of how the 
monitor enter is dealt with, which can save you from using volatiles. If 
there are reads within the block that you need to order against the 
before-the-block writes, you'd need to use volatiles (e.g. a volatile store 
for the last before-the-block write, AND a volatile load for the first 
in-the-block read).
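
A minimal sketch of that lazySet approach (the class and field names below 
are hypothetical, and it only covers ordering the before-the-block writes 
against an in-block write, not against in-block reads):

import java.util.concurrent.atomic.AtomicLong;

class OrderedWriteSketch {
    long beforeBlockValue;                  // plain field, written before the lock is taken
    final AtomicLong inBlockValue = new AtomicLong();
    final Object lock = new Object();

    void writer(long a, long b) {
        beforeBlockValue = a;               // plain store, before the block
        synchronized (lock) {
            // lazySet is an ordered (release) store: the plain store above cannot
            // be reordered past it, regardless of how the monitor enter is compiled.
            inBlockValue.lazySet(b);
        }
    }
}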

If you actually want to prevent the former (and there are ways to observe 
whether or not reordering "into" the block occurs), you may need more 
complicated things. But do you really need that? Someone may be able to 
find some way to detect whether or not such reordering of the writes and 
the lock-enter happens [I'm actually not sure whether such reordering, 
without also reordering against writes and reads in the block, is 
detectable]. And if that detection is possible, it also means that [by 
definition] someone can build a concurrent algorithm that depends on the 
reordering behavior not occurring. But it seems [to me] like a pretty 
complicated and sensitive thing to build a dependency on.

>
>    2. the effects of the writes would be atomically readable from 
>    other threads
>    
Which sort of atomicity are you referring to here? Atomicity within a 
single store (i.e. no word tearing of a long or a double), or atomicity 
across multiple such stores?

If it's the first (no word tearing), a volatile write will ensure that (in 
a fairly expensive way), but a lazySet will also ensure the same at a 
much lower cost.
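
As a rough illustration (with hypothetical field names), the difference only 
matters for 64-bit values: a plain long store is allowed to tear into two 
32-bit halves on some JVMs, while a lazySet of the same value cannot:

import java.util.concurrent.atomic.AtomicLong;

class NoTearingSketch {
    long plainValue;                        // a plain long store may legally tear (JLS 17.7)
    final AtomicLong orderedValue = new AtomicLong();

    void write(long v) {
        plainValue = v;                     // no single-store atomicity guarantee
        orderedValue.lazySet(v);            // atomic as a single store, cheaper than set()
    }

    long read() {
        return orderedValue.get();          // always a whole value, never a torn one
    }
}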

If it's the second (atomicity across multiple such stores), you need 
something like a synchronized block or a locked code region for the writes 
that you need atomicity across to live in. Such a block can be coarsened 
(e.g. joined with a subsequent block, or hoisted out of a loop), or it may 
be optimized in various other ways (e.g. biased locking), but whatever 
valid things happen to it, the atomicity across the writes within it will 
remain when seen by other threads.
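
For example (again a hypothetical sketch), a reader that takes the same lock 
will observe either both of the writes below or neither of them, whatever 
coarsening or biasing the JIT applies to the blocks:

class MultiStoreAtomicitySketch {
    private final Object lock = new Object();
    private long x, y;                      // both guarded by lock

    void writePair(long a, long b) {
        synchronized (lock) {               // the pair becomes visible atomically...
            x = a;
            y = b;
        }
    }

    long[] readPair() {
        synchronized (lock) {               // ...to any reader that also takes the lock
            return new long[] { x, y };
        }
    }
}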

> Given that we can't assume any semantic difference between the j.u.c. Lock 
> and intrinsic ones, and there aren't any clearly listed effects on the 
> surrounding code (except in specific implementations/the Cookbook), how 
> can we implement it correctly?
> And... can we do it with just this knowledge?
> The only solution I see is using a volatile store on both (or at least 
> on the first) write operations, while ordered (aka lazySet) ones can't 
> work as expected. 
>
> Cheers,
> Franz
>
>
> On Wednesday, May 16, 2018 at 17:26:59 UTC+2, Gil Tene wrote:
>>
>> Note that Doug Lea's JMM cookbook is written for implementors of JDKs 
>> and related libraries and JITs, NOT for users of those JDKs and libraries. 
>> It says so right in the title. It describes rules that would result in a 
>> *sufficient* implementation of the JMM but is not useful for deducing 
>> the *required or expected* behavior of all JMM implementations. Most JMM 
>> implementations go beyond the cookbook rules in at least some places and 
>> apply JMM-valid transformations that are not included in it and can be 
>> viewed as "shortcuts" that bypass some of the rules in the cookbook. There 
>> are many examples of this in practice. Lock coarsening and lock biasing 
>> optimizations are two good example sets.
>>
>> This means that you need to read the cookbook very carefully, and 
>> (specifically) that you should not interpret it as a promise of what the 
>> relationships between various operations are guaranteed to be. If you use 
>> the cookbook for the latter, your code will break.
>>
>>
>> Putting aside the current under-the-hood implementations of monitor 
>> enter/exit and of ReentrantLock (which may and will change), the 
>> requirements are clear:
>>
>> from e.g. 
>> https://docs.oracle.com/javase/10/docs/api/java/util/concurrent/locks/ReentrantLock.html
>> :
>>
>> "A reentrant mutual exclusion Lock 
>> <https://docs.oracle.com/javase/10/docs/api/java/util/concurrent/locks/Lock.html>
>>  with 
>> the same basic behavior and semantics as the implicit monitor lock accessed 
>> using synchronized methods and statements, but with extended 
>> capabilities."
>>
>>
>> from e.g. 
>> https://docs.oracle.com/javase/10/docs/api/java/util/concurrent/locks/Lock.html
>> :
>>
>> "Memory Synchronization
>>
>> All Lock implementations *must* enforce the same memory synchronization 
>> semantics as provided by the built-in monitor lock, as described in Chapter 
>> 17 of The Java™ Language Specification 
>> <https://docs.oracle.com/javase/specs/jls/se8/html/jls-17.html#jls-17.4>:
>>
>>
>>    - A successful lock operation has the same memory synchronization 
>>      effects as a successful *Lock* action.
>>    - A successful unlock operation has the same memory synchronization 
>>      effects as a successful *Unlock* action.
>>
>> Unsuccessful locking and unlocking operations, and reentrant 
>> locking/unlocking operations, do not require any memory synchronization 
>> effects."
>>
>>
>> So based on the spec, I'd say that you cannot make any assumptions about 
>> semantic differences between ReentrantLock and synchronized blocks (even if 
>> you find current implementation differences).
>>
>> On Wednesday, May 16, 2018, Francesco Nigro wrote:
>>>
>>> Hi guys!
>>>
>>> probably this one should be more of a concurrency-interest question, but 
>>> I'm sure it will fit with most of the people around here as well :)
>>> I was looking at how ReentrantLock and synchronized are different from a 
>>> semantic point of view and I've found (there are no experimental proofs on 
>>> my side TBH) something interesting in 
>>> http://gee.cs.oswego.edu/dl/jmm/cookbook.html. 
>>> It seems to me that:
>>>
>>> normal store;
>>> monitorEnter;
>>> [mutual exclusion zone]
>>> monitorExit;
>>> ....
>>>
>>> is rather different from a:
>>>
>>> normal store;
>>> (loop of...)
>>> volatile load
>>> volatile store (=== CAS)
>>> ---
>>> [mutual exclusion zone]
>>> volatile store
>>>
>>> With the former representing a synchronized block and the latter a spin 
>>> lock acquisition and release, both with a normal store on top.
>>> For anyone coming from worlds other than the JVM, I suppose the volatile 
>>> load could be translated as a load acquire, while the volatile store as a 
>>> sequentially consistent store.
>>>
>>> For monitorEnter/Exit it's more difficult to find something that fits 
>>> other known memory models (C++), and that's the reason for my request :) 
>>> From the point of view of the compiler guarantees, it seems that a normal 
>>> store (i.e. any store release as well) preceding monitorEnter could be moved 
>>> inside the mutual exclusion zone (past the monitorEnter), because the 
>>> compiler isn't enforcing any memory barrier between them, while with the 
>>> current implementation of a j.u.c. lock that can't happen.
>>> That's the biggest difference I could spot between them, but I'm struggling 
>>> to find anything (besides looking at the JVM source code) that would 
>>> observe/trigger such compiler re-ordering.
>>> What do you think about this? Am I just worried about something that 
>>> actually isn't implemented/isn't happening on any known JVM implementation?
>>>
>>
>> AFAIK, most JVM implementations will absolutely do this for monitors and 
>> will commonly move stores and loads that precede a monitor 
>> enter such that they execute after the monitor enter (but before the 
>> associated monitor exit). Most forms of lock coarsening will have this 
>> effect in actual emitted code (that any cpu will then see, regardless of 
>> architecture and CPU memory model). In addition, lock biasing optimizations 
>> (on monitors) will often result in emitted code that does not enforce a 
>> storestore or loadstore order between the monitor enter and preceding loads 
>> or stores on some architectures, allowing the processor to reorder that 
>> (biased to this thread) monitor enter operation (and potentially some 
>> operations that follow) with prior loads and stores.
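
As a rough sketch of the coarsening case (hypothetical code), the plain 
store between the two blocks below may legally end up executing inside a 
single, merged lock region once the JIT coarsens them:

class CoarseningSketch {
    private final Object lock = new Object();
    private int a, b;
    int plain;                              // not guarded by the lock

    void twoBlocks() {
        synchronized (lock) { a++; }
        plain = 42;                         // may be moved inside the coarsened region
        synchronized (lock) { b++; }
        // A JIT may merge the two blocks into one synchronized region covering
        // all three statements; that is a valid transformation under the JMM.
    }
}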
>>
>> The exact same optimizations would be valid to do for ReentrantLock 
>> (since per the spec, it shares the same memory ordering semantics 
>> requirements), but I think that, currently, in practice there are fewer JIT 
>> optimizations applied to ReentrantLock than to monitor enter/exit.
>>  
>>
>>>
>>> Thanks,
>>> Franz
>>>
>>
