If you have multiple threads accessing it, you manage access using a
mutex, assuming locking matters to the application it's in.
(Clearing the compression state is as important as clearing the rest
of the library state: if there's a lock around clearing the library
state, a lock needs to exist around clearing the compression state
too.  If there isn't, then both are at the same level of priority.)
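
Something along these lines is what I have in mind; the struct and
function names here are made up for illustration, and this is only a
sketch of the pattern, not OpenSSL's actual API:

#include <pthread.h>
#include <string.h>

/* Hypothetical compression state owned by the library. */
struct comp_state {
    unsigned char buf[4096];
    size_t len;
};

static struct comp_state comp;
static pthread_mutex_t comp_lock = PTHREAD_MUTEX_INITIALIZER;

/* Clear the compression state under the same lock that guards
   every other access to it, just as the library-state clearing
   would be locked. */
void comp_clear(void)
{
    pthread_mutex_lock(&comp_lock);
    memset(&comp, 0, sizeof(comp));
    pthread_mutex_unlock(&comp_lock);
}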

A read of a 'volatile uint64_t', by the way, is supposed to make sure
that it reads from the original memory location, not from a cached
copy in a register or spread across multiple registers.  Even if it
can't be accessed atomically, the guarantee is that it will be
fetched from memory, so that it can be changed outside the current
program flow (signal handlers, shm, mmap, and the like).  Yes, it
needs to be locked (ideally, all possible concurrent access needs to
be locked, even though that's a performance hog).
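
Concretely, on a platform where a 64-bit load isn't atomic, the read
itself has to happen inside the lock.  A minimal sketch (the counter
and lock names are made up for illustration):

#include <pthread.h>
#include <stdint.h>

static uint64_t counter;  /* shared; may be wider than a machine word */
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

/* On a 32-bit platform the two halves of 'counter' may be loaded
   separately, so even a plain read must hold the lock to avoid
   seeing a torn value. */
uint64_t counter_read(void)
{
    uint64_t v;
    pthread_mutex_lock(&counter_lock);
    v = counter;
    pthread_mutex_unlock(&counter_lock);
    return v;
}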

I don't doubt that there are platforms where a uint32_t can't be
accessed atomically.  Or even a uint16_t.  The question becomes, "at
what point does it stop being cost-effective to support platforms
that cannot guarantee things that the platforms used by the majority
of the user base can guarantee?"

So, here's a novel idea: why don't you write a patch to clear the
compression structs that adheres to your view of appropriate design,
and submit it?

-Kyle H

On 3/29/07, David Schwartz <[EMAIL PROTECTED]> wrote:

> This is the precise optimization that 'volatile' inhibits.

For single-threaded code, you are right. But we are talking about 
multi-threaded code.

> 'volatile'
> requires that the value not be cached in "cheap-to-access" locations
> like registers, instead being re-loaded from "expensive-to-access"
> locations like the original memory -- because it may be changed from
> outside the control flow that the compiler knows about.

I had this exact same argument a long time ago about code like this:

int foo;
volatile int have_pointer = 0;
int * volatile pointer = NULL;  /* the pointer itself is volatile */

In one thread:
pointer = &foo;
have_pointer = 1;

In another:
if (have_pointer != 0)
    (*pointer)++;  /* not '*pointer++', which would advance the pointer */

All I could say was that this code breaks the standard: one thread
accesses 'have_pointer' while another thread might be modifying it.
Did I know how it could fail? No.  In fact, nobody could think of a
way it could fail, because we didn't yet realize that the ordering
could break, and not just the concurrent access/modification itself.

Did it fail? Yes. I believe we first had faults on new UltraSparc CPUs (might have been 
new Alphas, I can't remember for sure). It was a classic case of "worked on old 
CPUs, failed on new ones". The new ones had some write reordering capability.

If you look at Itanium compilers, you will see that they don't put memory 
barriers between 'volatile' accesses. So 'volatile' does not provide ordering. 
It is known not to provide atomicity. There are platforms where a 'uint64_t' 
*can't* be written atomically, so what's a read of a volatile 'uint64_t' even 
supposed to do?

So you have to be saying, "you don't get atomicity or ordering, but you do get 
..." What exactly? And what standard says so? My copy of the pthreads standard says 
that accessing a variable in one thread while another thread is or might be modifying it 
is undefined behavior.
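
The only portable fix is to put both variables under one lock, which
is exactly what the pthreads rules force you to write anyway.  Here
is a sketch of the earlier example done by the book (error checking
omitted for brevity):

#include <pthread.h>

int foo;
int have_pointer = 0;  /* no 'volatile' needed once it's locked */
int *pointer = NULL;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* In one thread: publish the pointer and the flag together. */
void publish(void)
{
    pthread_mutex_lock(&lock);
    pointer = &foo;
    have_pointer = 1;
    pthread_mutex_unlock(&lock);
}

/* In another: the lock orders these reads after the writes above
   and makes the access to 'have_pointer' well-defined. */
void consume(void)
{
    pthread_mutex_lock(&lock);
    if (have_pointer != 0)
        (*pointer)++;
    pthread_mutex_unlock(&lock);
}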

IMO, "it may break on a new CPU" is totally unacceptable for code that is going 
to be distributed and especially for libraries with expected wide distribution.

DS

PS: There are many, many more stories of "I couldn't think of any way it could 
break" code breaking when the real world thought of ways. You have to code based on 
guarantees in standards, not your own lack of imagination.


--

-Kyle H
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       [email protected]
Automated List Manager                           [EMAIL PROTECTED]
