On Wed, 29 Jun 2011 13:28:20 +0200, Hans-Peter Diettrich
<drdiettri...@aol.com> wrote:

> Vinzent Höfler wrote:

>>> Question is, what makes one variable use read/write-through, while
>>> other variables can be read from the cache, with lazy-write?
>> Synchronisation. Memory barriers. That's what they are for.

> And this doesn't happen out of thin air. How else?

Ok, maybe I misunderstood your question. Normally, these days every
memory access is cached (and that is of course independent of compiler
optimizations). There are usually ways to define certain memory areas
as write-through (like the video frame buffer), but that is more a
chipset thing than anything to do with concurrent programming.

Apart from that you have to use the appropriate CPU instructions.

>>> Is this a compiler requirement, which has to enforce
>>> read/write-through for all volatile variables?
>> No. "volatile" (at least in C) does not mean that.

> Can you provide a positive answer instead?

Ada2005 RM:

|C.6(16): For a volatile object all reads and updates of the object as
|         a whole are performed directly to memory.
|C.6(20): The external effect [...] is defined to include each read and
|         update of a volatile or atomic object. The implementation shall
|         not generate any memory reads or updates of atomic or volatile
|         objects other than those specified by the program.

That's Ada's definition of "volatile". C's definition is weaker, but
should have basically the same effect.

Is that positive enough for you? ;)

> I meant "volatile" as read/write-through, not related to C. Read it as
> "synchronized", if you are more comfortable with that term ;-)

In that case, there is no such requirement for a compiler. At least not
for the ones I know of.

>>> But if so, which variables (class fields...) can ever be treated as
>>> non-volatile, when they can be used from threads other than the main
>>> thread?
>> Without explicit synchronisation? Actually, none.

> Do you understand the implication of your answer?

I hope so. :)

> When it's up to every coder to insert explicit synchronization
> whenever required, how does one determine the places where explicit
> code is required?

By careful analysis. Although tools exist that can detect potentially
unsynchronised accesses to shared variables, no tool will insert the
synchronisation code for you automatically. If there were one, the
Linux community wouldn't have had such trouble getting rid of the "BKL"
(big kernel lock), a leftover from the days before the kernel was made
preemptible.

> Aren't we then in the unpleasant situation that *every* expression can
> be evaluated using unsynchronized values, unless the compiler uses
> synchronized read instructions to obtain *every* value [except from
> stack]?

If they are accessed by only one thread, I'd assert that each core's view
on its own cache is not susceptible to memory ordering issues. So each
thread on its own can be treated like the single-CPU case (i.e. nothing
special needed). Synchronisation is only required if different threads of
execution may need access to the same, shared variable.

But in that case you should synchronise anyway, although in the
single-core case you could get away with using types which were
guaranteed to be atomic. I've done that a couple of times, but I am
well aware that this could break at any time.


Vinzent.
_______________________________________________
fpc-devel maillist  -  fpc-devel@lists.freepascal.org
http://lists.freepascal.org/mailman/listinfo/fpc-devel
