On 24 Jun 2011, at 20:08, Greg Ercolano wrote:
>> 
>> And in practice, if you have one writer thread and one reader thread,
>> you can probably just use volatile ints to signal between them, rather
>> than using a mutex - this has the advantages of being cheaper and also
>> more portable (since it doesn't require any OS specific coding.)
> 
>       Hmm, I wouldn't want to gamble on that, as I'm pretty sure
>       volatile only protects you from compiler optimization,
>       and nothing else.

Yes - the point of using volatile here (with any threading stuff, really) is 
just to warn the compiler not to assume that it knows the value of the flags at 
any point, so as to stop it (the compiler) optimising out any reads or writes.
It doesn't affect the atomicity (or otherwise) of the accesses.
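
To make that concrete, here's a minimal sketch of the sort of volatile-flag signalling being discussed (all the names here are hypothetical, just for illustration) - note the comments about what volatile does and does not buy you:

```cpp
#include <cassert>

// Hypothetical flags for a one-writer/one-reader pair. "volatile" only
// stops the compiler caching these in registers or eliding the reads and
// writes; it guarantees nothing about atomicity or memory ordering.
volatile int data_ready = 0;
volatile int value = 0;

void writer_step(int v) {
    value = v;       // store the payload first...
    data_ready = 1;  // ...then raise the flag (no ordering guarantee!)
}

int reader_poll(int* out) {
    if (!data_ready) return 0;  // nothing new yet
    *out = value;
    data_ready = 0;             // acknowledge
    return 1;
}
```

This works in practice on many platforms precisely because int accesses happen to be atomic there - but as the comment says, volatile itself promises nothing of the sort.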

>       Also, even if the operations *are* atomic, don't you end
>       up with a potential race condition?

That depends very much on how the data is actually stored to pass it between 
the threads.
For example, if you use some sort of FIFO or queue, then one thread can be 
writing whilst the other is reading, with no contention, as the reader will get 
the last (complete) record in the store, whilst the writer is in the process
of writing the next entry.
Assuming the reader polls faster (on average) than the writer writes (which is 
a Good Thing in general anyway if you are polling for updates) then the queue 
will still tend to be mostly empty, most of the time.
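
A minimal single-writer/single-reader ring buffer along these lines might look like the following sketch (names and sizes are hypothetical; head and tail are each written by only one thread, which is what makes this work without a mutex):

```cpp
#include <cassert>

enum { QSIZE = 8 };  // power of two keeps the wrap-around maths honest

struct SpscQueue {
    int buf[QSIZE];
    volatile unsigned head;  // written only by the writer
    volatile unsigned tail;  // written only by the reader
};

int spsc_push(SpscQueue* q, int v) {           // writer side
    if (q->head - q->tail == QSIZE) return 0;  // full: skip or retry
    q->buf[q->head % QSIZE] = v;
    q->head = q->head + 1;  // publish only after the slot is filled
    return 1;
}

int spsc_pop(SpscQueue* q, int* out) {         // reader side
    if (q->head == q->tail) return 0;          // empty
    *out = q->buf[q->tail % QSIZE];
    q->tail = q->tail + 1;
    return 1;
}
```

The same caveat as before applies: this leans on the platform making unsigned loads and stores atomic, which is common but not guaranteed by the language.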

With a little care, if you have more than one entry available for storing data 
between the threads, you can always arrange things so that one thread can 
safely read from one entry whilst the other thread is writing to another. Things 
only really get complicated if you are using a single data entry...
(Or if you are trying to pull multiple readers and writers together with a 
centralised store - that's just nasty though...)

>       I'm pretty sure a mutex is the only way to prevent a race..
>       what does your code look like to do mutex-like sync without
>       the OS calls?

Again, that depends on what the data is - if I needed to be sure the reader 
sees every value the writer set, I'd be using a queue or FIFO-like interface, 
so it is going to Just Work anyway.

If the reader only needed to see the latest value at any point, which is 
actually quite common if the data being read is just to maintain some context 
on the display during a long calculation (e.g. to reassure the user that the 
program is Doing Stuff), then having a really robust signalling mechanism is 
seldom really necessary - you can do really hacky things like:
(assuming a single store)
- before writing to the store the writer sets the "keep out" flag
- now the writer checks the "read in progress" flag; if it is set, clear the 
"keep out" flag and skip this write
- else the writer updates the store
- the writer clears the "keep out" flag

When the reader polls the store, it
- checks the "keep out" flag; if it is set, skips this read, but tries again soon
- sets the "read in progress" flag [1]
- reads the store
- clears the "read in progress" flag
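
Spelled out as code, that hacky protocol looks something like this (a sketch with hypothetical names; the race the footnote describes lives in the comment-marked line):

```cpp
#include <cassert>

volatile int keep_out = 0;          // set by the writer
volatile int read_in_progress = 0;  // set by the reader
int store = 0;                      // the single shared entry

int try_write(int v) {              // writer side
    keep_out = 1;
    if (read_in_progress) {         // reader got in first: back off
        keep_out = 0;
        return 0;                   // skip this write
    }
    store = v;
    keep_out = 0;
    return 1;
}

int try_read(int* out) {            // reader side (polled)
    if (keep_out) return 0;         // writer busy: try again soon
    read_in_progress = 1;           // [1] the race window is here
    *out = store;
    read_in_progress = 0;
    return 1;
}
```
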

[1] is where you get a race - depending on what h/w config you have, this might 
still result in contention, as both threads might think they have exclusive 
access at this point if the writer sets the "keep out" on the same cycle. Never 
happens on uniprocessor systems, regardless of threading, since the reader and 
writer cannot physically be running at the same time - but happens readily in 
multi-processor systems. 
So what are the consequences? If we *really* care about the data being passed 
between the threads, I'd not be using a single buffer anyway.
What we have here is a case where the reader might read a version of the store 
data that is partly old data, partly new - what will that actually look like?
In a modern system, the data will most likely be dumped into the store between 
processors in cache-line (or even multi-line) sized chunks (it's rare to write 
single words, let alone bytes, to the memory bus anymore...) and if we assume 
the memory managers between the CPUs snoop each other's caches (again, almost 
certainly true) it is unlikely that the writer could write data (into its 
cache) without the reader's MMU knowing that data had changed.
So... if the data fits entirely within one cache line, you will (*almost* 
certainly) get a complete, coherent, data set on every read, regardless of 
thread synchronisation.
If the data spans multiple cache lines (either because it is too big, or 
inconveniently aligned) then you can get a read which contains inconsistent 
data. In this case the individual values are going to be self-consistent, since 
the compiler will have aligned them in the cache lines, but the set of actual 
values you end up with may not belong together...
Depending on what you are doing with that data, and the way in which it 
changes, that may not matter; if your values are slowly varying (slowly as 
compared to your display update rate) then transiently displaying a set that 
are part new, part old, probably will not even be visible - the "wrong" old 
values will be very similar to the "true" new values for that data set, and 
will be displayed for only a very brief period before being replaced with a new 
correct set.
If the data jumps about in messy ways - well... I'd still try it, and if it 
looks like rubbish on the display, do something more rigorous then...
I'd not assume I needed to use some OS specific mechanism until we'd tried it 
and it went horribly awry!



_______________________________________________
fltk mailing list
[email protected]
http://lists.easysw.com/mailman/listinfo/fltk
