Er, my question about "knowing" the threading model was intended to be 
rhetorical. Of course OpenSSL doesn't need to know the implementation, only 
that one is available that fits its API.

The argument that remains, then, is a philosophical one: is it OpenSSL's 
responsibility to "know" the standard threading model of every, or at least 
every "popular", operating system? Or is it the software integrator's 
responsibility to provide an implementation of their threading model that 
meets OpenSSL's generic threading API?

Given that OpenSSL is maintained by volunteers (and any commercial contributors 
would be expected to cater to their own needs first), it is improper to 
complain about something OpenSSL does "wrong", or is "missing": you don't like 
it? Fix it yourself.

That said, code forks are bad from a maintenance point of view, and avoiding 
them is desirable. Therefore, when something is "missing", it is best if a 
means to accommodate it can be provided outside the main source code base. 
Voila! The threading API that OpenSSL provides to describe its threading NEEDS. 
Now, anyone can adapt it to their particular threading model WITHOUT having to 
fork the OpenSSL code proper. I would suggest that (a) this is a "good thing", 
and (b) intentional on the part of the core OpenSSL devs, as it provides the 
necessary "hooks" to allow for externally linked implementations.
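
As a concrete illustration of how little is actually required, here is roughly 
what hooking the 0.9.8/1.0.x-era locking callbacks up to pthreads looks like 
(a sketch only; error handling and cleanup are omitted, and the setup function 
name is mine, not OpenSSL's):

    /* Sketch: wiring pthread mutexes into OpenSSL's locking hooks
       (0.9.8/1.0.x callback API; error handling omitted). */
    #include <pthread.h>
    #include <openssl/crypto.h>

    static pthread_mutex_t *lock_table;

    /* OpenSSL calls this whenever it wants lock number n taken or released. */
    static void locking_cb(int mode, int n, const char *file, int line)
    {
        if (mode & CRYPTO_LOCK)
            pthread_mutex_lock(&lock_table[n]);
        else
            pthread_mutex_unlock(&lock_table[n]);
    }

    /* OpenSSL calls this to identify the current thread. */
    static unsigned long id_cb(void)
    {
        return (unsigned long) pthread_self();
    }

    void app_setup_openssl_threading(void)   /* illustrative name */
    {
        int i, n = CRYPTO_num_locks();

        lock_table = OPENSSL_malloc(n * sizeof(pthread_mutex_t));
        for (i = 0; i < n; i++)
            pthread_mutex_init(&lock_table[i], NULL);

        CRYPTO_set_id_callback(id_cb);
        CRYPTO_set_locking_callback(locking_cb);
    }

Anything that can express "take lock n" and "who am I" can fill in those two 
callbacks, which is the whole point of the abstraction.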

In fact, this is done throughout OpenSSL. Someone in this thread pointed out 
the POSIX dependency on files and sockets. That isn't strictly correct: OpenSSL 
operates on BIOs, a wrapper around a source of data. It is only because of the 
sheer prevalence of POSIX environments that we have the convenience of file, 
socket, and memory BIO wrappers and convenience functions operating directly on 
these POSIX types. I know this because, in my environment, my data traffic 
interface is proprietary and certainly not a POSIX socket, and "BIO pairs are 
my friend".
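
For anyone curious, the core of the BIO-pair approach looks roughly like this 
(a simplified sketch; my proprietary transport calls are stubbed out, and real 
code must handle partial reads/writes and the SSL_ERROR_WANT_* conditions):

    /* Sketch: TLS over a non-socket transport via a BIO pair (simplified). */
    #include <openssl/ssl.h>
    #include <openssl/bio.h>

    BIO *attach_bio_pair(SSL *ssl)
    {
        BIO *internal_bio, *network_bio;

        BIO_new_bio_pair(&internal_bio, 0, &network_bio, 0); /* 0 = default sizes */
        SSL_set_bio(ssl, internal_bio, internal_bio);  /* SSL engine owns this end  */
        return network_bio;                            /* application owns this end */
    }

    void pump_outgoing(BIO *network_bio)
    {
        char buf[4096];
        int n;

        /* TLS records OpenSSL wants sent appear on the network-side BIO;
           hand them to whatever transport you actually have. */
        while ((n = BIO_read(network_bio, buf, sizeof(buf))) > 0)
            ;   /* proprietary_send(buf, n); -- stub for illustration */
    }

    /* Incoming bytes from the transport are pushed back the same way:
       BIO_write(network_bio, buf, n); after which SSL_read() can make progress. */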

So, the argument devolves to, "Well, the OpenSSL devs should provide 
implementations of the threading API for the 'well known' threading models." 
Perhaps, but what is to stop anyone from contributing them? Whether such an 
implementation lives in the OpenSSL build proper or in an environment-specific 
plug-in maintained separately is irrelevant to solving a particular integration 
issue. And again, telling someone who is a volunteer what they "should" do is a 
bit "pushy" IMNSHO.

About the only problem with this model of API definitions for externally 
provided components is the extra degree of indirection involved: a BIO wrapper 
around a file or socket will always be slower than using the file or socket 
directly. However, I submit the difference will be minor unless data flows 
through such layers of code in small dribs and drabs.

Does this add complexity? Of course. Flexibility almost always adds complexity. 
Embedded, non-POSIX environments can be particularly hostile to the integrator. 
Would you rather NOT have the ability to provide the needed wrapper and HAVE to 
fork OpenSSL? I think not.

Environments where one does not control the entire software stack are also 
troublesome. But here too, there is nothing stopping one from wrapping ALL 
calls made to OpenSSL so as to do the right thing on the "first" one, and 
hooking into process exit handling for the "last" one (if at all possible). It 
takes some arrogance to presume that OpenSSL should "automagically" know about 
every particular environment into which it might be integrated. Yes, this will 
slow things down. Yes, it can be optimized by swapping dispatch tables to 
replace a conditional test with an indirect call. That's the price you pay when 
you don't have control over the whole software stack. (And here too, if 
performance matters that much, you could fork OpenSSL in extremis, if you had 
to.)
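
To make that concrete, a wrapper that guards "first use" can be as small as 
this (a sketch assuming pthreads is available to the wrapper itself; 
my_ssl_connect and my_ssl_cleanup are made-up names for illustration):

    /* Sketch: hypothetical wrapper that performs global OpenSSL setup exactly
       once, regardless of which wrapped call happens to be the "first" one. */
    #include <pthread.h>
    #include <stdlib.h>
    #include <openssl/ssl.h>

    static pthread_once_t init_once = PTHREAD_ONCE_INIT;

    static void my_ssl_cleanup(void)           /* "last use", if the stack allows */
    {
        /* EVP_cleanup(); ERR_free_strings(); etc., as appropriate */
    }

    static void do_global_init(void)
    {
        SSL_library_init();
        SSL_load_error_strings();
        /* install the locking callbacks here, as sketched earlier */
        atexit(my_ssl_cleanup);                /* hook process exit handling */
    }

    int my_ssl_connect(SSL *ssl)               /* every wrapped entry point does this */
    {
        pthread_once(&init_once, do_global_init);   /* the "first call" guard */
        return SSL_connect(ssl);
    }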

There is a place for abstraction of externalities, and a place for providing 
common wrapper implementations for them. Do not criticize those who do the 
former for not doing the latter.






-----Original Message-----
From: Darryl Miles [mailto:[email protected]]
Sent: Fri 6/25/2010 3:01 AM
To: [email protected]
Cc: Rene Hollan
Subject: Re: [openssl.org #2293] OpenSSL dependence on external threading 
functions is a critical design flaw
 
Rene Hollan wrote:
> And how, pray tell, should OpenSSL know the threading model the
> application uses?
>
> Unless I'm missing something, it strikes me as perfectly reasonable for
> OpenSSL to require a "reasonable" threading API to implement SSL data
> integrity, and for the application to provide an implementation that
> wraps the threading model it uses within that API.


It doesn't need to know the "threading model" in use.

What OpenSSL needs to provide is an "atomic test-and-set" during the 
setup of the threading API calls.  For this you are going to have to 
provide CPU-architecture-dependent functions (since you cannot rely on 
pthreads!).


That is, the application program, when setting up the threading callbacks 
(and therefore the "threading model"), will have two modes of operation:

SET_IF_NOT_SET (in this mode the threading primitives, if already set up, 
would not be overwritten)

SET_ALWAYS (in this mode the threading primitives would always be 
overwritten, just like happens now; this mode is not advisable for use 
and should not be the default).


The important primitive here is the "atomic test-and-set" applied to the 
operation that sets up the threading primitives.



Now, it is possible for ALL applications out there that use OpenSSL and 
already know they are using pthreads to wrap the setting of the threading 
callbacks with pthread mutex calls.

But OpenSSL still needs to expose a variable for all these applications 
to synchronize on.  All OpenSSL needs to do is expose an exported data 
symbol the size of the natural bit-width of the architecture.

atomic_t openssl_threading_synchronization_value;  /* exported, not static */

The example in the OpenSSL documentation needs to be updated to explain 
how to use pthreads and this exposed data variable together to set up a 
threading model.

Then all the applications that use OpenSSL need to be FIXED.  This last 
point, some would say, is the real headache of the solution, and the world 
should not be this way.



I am all for someone creating a multi-architecture low-level library 
that provides a C API to low-level CPU-assisted operations.  Note 
this is not a threading library but a library that exposes 
non-privileged CPU-assisted operations that the C language does not.

Not limited to but including:
  * Atomic read/write/exchange/test-and-set
  * Bus lock operations
  * Memory barrier operations
  * Add/Subtract with/without carry (ever wanted a non-overflowing 
addition/subtraction).
  * Access to limited instructions that operate on double-natural-bitwidth.
  * Instructions that make use of FLAGS register for input/output.
  * Full register save/stores (sometimes useful for signals/exception 
handling).

In fact, I have just such a library already, which I may be persuaded to 
package up and release.  It covers IA32 (i386/x86_64), Sparc (v5/v7) and 
ARM, both for GCC and the platforms' respective native compilers.



For my liking, there are starting to be too many issues with OpenSSL in 
this modern world.  So if the list's petition for change continues to go 
unresolved, I'm all for seeing a fork created to push through a new policy 
(and relinquish control from the current incumbents, along the lines of the 
EGCS period for GCC).  Then let Darwinism take its course in the Open 
Source way.  But this isn't going to happen right now for me.



Darryl
