The Monitor.Exit generated by the lock statement has an implicit memory
barrier that flushes the CPU write cache at the end of the lock block.
Adding a MemoryBarrier after the assignment to "instance" only guarantees
that cached writes up to that point are flushed to RAM. The only thing
visible to any other thread is "provider_"; exiting the lock block
flushes the remaining writes to RAM, in the correct order. That is, if
both the write to "instance" and the write to "provider_" were cached,
the "instance" write would be flushed first. And since this all happens
on one thread, which always sees its own cached writes, the assignment of
"instance" to "provider_" would simply use the cached value if the write
to "instance" was still in the cache.

If you had some other instructions after the assignment to "provider_",
and unsynchronized reads of "provider_" outside the block, adding a
MemoryBarrier call after the assignment might be warranted; but I think
it's safer simply to declare "provider_" volatile.
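A minimal sketch of that suggestion, with the unsynchronized outer read made safe by a volatile field; "IProvider" and "SomeProvider" are again placeholder names:

```csharp
// Sketch: the double-checked pattern with a volatile field. The volatile
// read of "provider_" outside the lock has acquire semantics and the
// volatile write inside the lock has release semantics, so a thread that
// observes a non-null "provider_" also observes the fully constructed
// instance -- without relying on an explicit Thread.MemoryBarrier call.
public interface IProvider { }

public class SomeProvider : IProvider { }

public static class ProviderHolder
{
    private static readonly object syncLock_ = new object();
    private static volatile IProvider provider_;

    public static IProvider GetProvider()
    {
        if (provider_ == null)          // unsynchronized volatile read
        {
            lock (syncLock_)
            {
                if (provider_ == null)
                    provider_ = new SomeProvider();
            }
        }
        return provider_;
    }
}
```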

On Mon, 9 Jul 2007 19:25:53 +0200, Fabian Schmied <[EMAIL PROTECTED]>
wrote:

>> ...not with MemoryBarrier within the synchronized block...
>
>Hm, can you explain this a little more? If one adds a write barrier
>after construction of the instance, but before assigning the instance
>to the field, as Stoyan's original code did [1], doesn't this have the
>desired effect of guaranteeing that the code outside the lock can only
>see the fully constructed instance?
>
>Fabian
>
>[1] Copied for reference from Stoyan's posting:
>
>if (provider_ == null)
>{
>   lock (syncLock_)
>   {
>
>       if (provider_ == null)
>       {
>           I instance = Activator....
>           Thread.MemoryBarrier(); // so that any out-of-order writes complete [1]
>           provider_ = instance;
>       }
>
>   }
>}

===================================
This list is hosted by DevelopMentor®  http://www.develop.com

View archives and manage your subscription(s) at http://discuss.develop.com