Of course another thread (one that isn't blocked waiting to lock on the same
object as the lock block) can see provider_, since nothing blocks threads
from accessing the Provider property the way a static constructor would.

The concern with that and the double-check lock is that the first if
statement (if (provider_ == null)) might not see the most up-to-date
write on weak memory architectures.  So, the addition of
Thread.MemoryBarrier was suggested.  Since the assignment to provider_ is
the last line before the end of the lock block, adding a memory barrier
there in hopes of flushing that write is redundant (but not unsafe),
because exiting the lock does the same thing.  Unless you're locking out
all threads with the same lock object, you have the issue that the update
to provider_ may be cached and not visible to all threads; so you'd add a
memory barrier (or change it to a Thread.VolatileWrite, or declare
provider_ volatile, which would be my suggestion) if a MemoryBarrier isn't
the next line after the assignment to provider_.  All of that is
guaranteed by the CLI spec.
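To make the pattern concrete, here is a sketch of the double-check lock in Java rather than C# (the class and member names, like ProviderHolder, are my own illustrative choices, not the code under discussion). Declaring the field volatile makes both the unlocked read and the locked write volatile operations, so no explicit memory barrier is needed:

```java
// Double-checked locking sketch; `volatile` makes the unlocked first
// check safe and publishes the write without an explicit barrier.
public final class ProviderHolder {
    private static volatile Provider provider;   // volatile: safe lock-free read
    private static final Object lock = new Object();

    public static Provider getProvider() {
        if (provider == null) {                  // first check, outside the lock
            synchronized (lock) {
                if (provider == null) {          // second check, inside the lock
                    provider = new Provider();   // volatile write publishes the instance
                }
            }
        }
        return provider;
    }
}

class Provider { }
```

The same shape applies in C# with a volatile provider_ field and a lock block; the point is that the field itself carries the volatile semantics, so every access site is covered automatically.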

Now, from the compiler-optimization point of view, I don't think it's as
clear.  I don't believe anything in the spec guarantees that a compiler
shall not optimize a member variable so that it happens to be accessed
outside the lock block in which it was originally written (and is
essentially unguarded), unless its access is considered a volatile
operation (a parameter to Thread.Volatile[Read|Write], Interlocked.*,
etc., or declared "volatile").  Vance Morrison has described the memory
model implemented in Microsoft's .NET 2.0 CLI in more detail in
"Understand the Impact of Low-Lock Techniques in Multithreaded Apps" [1],
but that's not currently part of the spec.  I'm sure someone will chime in
and contradict me; but some of the brightest haven't been able to provide
definitive proof based solely on the spec.
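The classic illustration of that optimization concern is a flag read in a loop. This is a hypothetical Java example (the Worker class and its members are mine, not from the original code): without volatile, the compiler is free to hoist the read of the field out of the loop and spin forever; marking it volatile forbids that.

```java
// Without `volatile`, the read of `stop` in the loop may be hoisted to a
// single read before the loop, so the spin might never observe the update.
class Worker {
    private volatile boolean stop;   // drop `volatile` and the loop read may be hoisted

    int spinUntilStopped() {
        int iterations = 0;
        while (!stop) {              // volatile: re-read from memory each iteration
            iterations++;
        }
        return iterations;
    }

    void requestStop() {
        stop = true;                 // volatile write: visible to the spinning thread
    }
}
```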

The safe route is to declare any member accessed by more than one thread
at a time volatile, even if you synchronize access to that member through
a single lock object.  If you don't synchronize that way (and the double-
check lock does not), then you need to make every access to that member a
volatile operation, and optimize only where that's redundant.  My
suggestion is to declare the member volatile; it's too hard to ensure that
a method with a volatile side effect will always be used to write/read the
member.
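That "declare it volatile even under a lock" suggestion can be sketched like this, again in Java with illustrative names of my own. Writers are still serialized by the lock, but because the field itself is volatile, a lock-free reader (or a compiler optimization) cannot observe a stale or hoisted value:

```java
// The field carries volatile semantics, so every access site is covered
// even if someone later reads it without taking the lock.
class SharedConfig {
    private volatile String endpoint = "http://localhost";  // every read/write is a volatile op
    private final Object lock = new Object();

    String getEndpoint() {
        return endpoint;             // safe lock-free read: volatile
    }

    void setEndpoint(String value) {
        synchronized (lock) {        // lock still serializes writers
            endpoint = value;        // volatile write publishes to all threads
        }
    }
}
```

The design point is that correctness no longer depends on every caller remembering to use the locked accessor for reads; the declaration enforces it.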

[1] http://msdn.microsoft.com/msdnmag/issues/05/10/MemoryModels/

On Tue, 10 Jul 2007 11:09:32 +0200, Fabian Schmied <[EMAIL PROTECTED]>
wrote:

>> The Monitor.Exit generated by the lock statement has an implicit memory
>> barrier that will flush the CPU write cache at the end of the lock block.
>> Adding a MemoryBarrier after the assignment to "instance" only guarantees
>> that cached writes up to that point are flushed to RAM.  The only thing
>> visible by any other threads is "provider_"; exiting the lock block
>> flushes remaining writes to RAM, in the correct order (i.e. if the write
>> to "instance" was cached AND the write to "provider_" was cached it would
>> flush the "instance" write first).
>
>Right, but my concern was whether another thread could see "provider_"
>being assigned _before_ the lock block was left. I.e. the cached value
>of "provider_" being flushed to RAM before Monitor.Exit is reached.
>Because I was of the understanding this would make the memory
>barrier/volatile flag necessary.
>
>I understand this is no problem with the x86 model, and I presume it
>is with the "weak" ECMA/IA64 memory models. What about the "strong"
>2.0 model, is it guaranteed that "provider_" won't be visible until
>Monitor.Exit? Or is it guaranteed that provider_ will only be flushed
>to RAM together with all the writes up to the assignment?

===================================
This list is hosted by DevelopMentor®  http://www.develop.com

View archives and manage your subscription(s) at http://discuss.develop.com
