BTW, this is now heading down a different route from the original thread. 
That's OK, but I wanted to mention that we're no longer discussing 
BN_BLINDING leaks ...

On March 23, 2004 12:18 pm, you wrote:
> First off, I'm not all that familiar with the OpenSSL code base, so my
> comment may be inappropriate.
>
> A couple of years ago I made a post arguing that the memory management
> is not done optimally.  I think what you have discovered illustrates this.
>
> IMHO we should have a higher-level memory management layer between the
> library-level functions and the actual system malloc() etc. routines.
>
> What I am thinking is that we need to define a level where the
> underlying system malloc()s are done one or more pages at a time and
> that the pages are logically associated with a connection.  The idea
> then is that any memory needed for a specific connection gets allocated
> in the page pool associated with that connection.
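
For concreteness, the kind of per-connection page pool you're describing 
might look something like the following. This is only a rough sketch with 
made-up names (nothing like CONN_POOL exists in the tree), and it ducks 
oversized requests, locking, and everything else awkward:

#include <stdlib.h>

#define POOL_PAGE_SIZE 4096

typedef struct pool_page_st {
    struct pool_page_st *next;
    size_t used;
    unsigned char data[POOL_PAGE_SIZE];
} POOL_PAGE;

typedef struct conn_pool_st {
    POOL_PAGE *pages;   /* most recently added page first */
} CONN_POOL;

/* Hand out 'len' bytes from the connection's current page, adding a new
 * page when the current one is full. Oversized requests are not handled;
 * a real version would need a fallback path. */
void *conn_pool_alloc(CONN_POOL *pool, size_t len)
{
    POOL_PAGE *pg = pool->pages;
    void *ret;

    len = (len + 7) & ~(size_t)7;       /* keep returned pointers aligned */
    if (len > POOL_PAGE_SIZE)
        return NULL;
    if (pg == NULL || POOL_PAGE_SIZE - pg->used < len) {
        pg = malloc(sizeof(*pg));
        if (pg == NULL)
            return NULL;
        pg->next = pool->pages;
        pg->used = 0;
        pool->pages = pg;
    }
    ret = pg->data + pg->used;
    pg->used += len;
    return ret;
}

/* Release everything the connection ever allocated in one sweep. */
void conn_pool_free_all(CONN_POOL *pool)
{
    POOL_PAGE *pg, *next;

    for (pg = pool->pages; pg != NULL; pg = next) {
        next = pg->next;
        free(pg);
    }
    pool->pages = NULL;
}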

The problem with your suggestion is not that it is without merit; it is 
that you are talking about some industrial-sized cans and a frightening 
quantity of worms. One of the issues that overlaps a lot of this stuff is 
threading, e.g. the fact that it would be nice to have thread-local 
storage for things like BN_CTX. There is also the issue of locking, and 
the interface by which applications (or other libraries) can hook and 
override OpenSSL's internal default choice for memory management. E.g. we 
currently let callers provide malloc/realloc/free callbacks. W.r.t. 
threading, the smallest problem I can imagine is that this creates an 
acronym problem (TLS having its boot on quite a different foot). Beyond 
that, I can imagine far larger problems, but I didn't want to be too 
depressing.
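
For reference, the hook I mean is CRYPTO_set_mem_functions(). A minimal 
example of an application installing its own callbacks might look like 
this; note that the exact callback signatures can differ depending on the 
release, so check the crypto.h you are building against:

#include <stdio.h>
#include <stdlib.h>
#include <openssl/crypto.h>

/* Trivial pass-through hooks; an application could just as easily point
 * these at a pooled or instrumented allocator. */
static void *my_malloc(size_t num)           { return malloc(num); }
static void *my_realloc(void *p, size_t num) { return realloc(p, num); }
static void  my_free(void *p)                { free(p); }

int main(void)
{
    /* Must be done before OpenSSL makes its first allocation, otherwise
     * the call is refused. */
    if (!CRYPTO_set_mem_functions(my_malloc, my_realloc, my_free)) {
        fprintf(stderr, "could not install memory callbacks\n");
        return 1;
    }
    /* ... use the library as normal; allocations now go via the hooks ... */
    return 0;
}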

As a side-note, I have been working on BN_CTX improvements in the 
background, and one of the more frustrating issues has been the glibc 
malloc on Linux. Contrary to the comment you made about malloc conserving 
memory (which is perhaps a reasonable generalisation for the most part), 
this malloc appears quite keen to optimise for speed no matter what the 
cost. I have repeatedly made changes that reduce allocation and increase 
reuse, only to see them produce slight slow-downs in execution speed! And 
I'm not talking about expensive table-management overheads either; there 
are times when abusing malloc is almost "free" (groan) compared to using 
the most basic mechanics to cache and reuse the memory yourself. No doubt 
if I could stomach (and afford) the MS/VC++ environment, I would see 
something quite different with the malloc implementation there.
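
For what it's worth, by "the most basic mechanics" I mean something no 
smarter than the following sort of fixed-size free-list (illustrative 
names only, and deliberately ignoring thread-safety, which is exactly the 
overlap with the threading issue above):

#include <stdlib.h>

#define ITEM_SIZE   256   /* fixed-size scratch buffers */
#define CACHE_SLOTS 16

/* Not thread-safe: a real version needs locking or thread-local storage. */
static void *cache[CACHE_SLOTS];
static int cache_top = 0;

void *item_alloc(void)
{
    if (cache_top > 0)
        return cache[--cache_top];  /* reuse a previously freed buffer */
    return malloc(ITEM_SIZE);
}

void item_free(void *p)
{
    if (p == NULL)
        return;
    if (cache_top < CACHE_SLOTS)
        cache[cache_top++] = p;     /* keep it around for next time */
    else
        free(p);
}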

My point is that implementing something like memory pools and attempting 
to fit that into OpenSSL right now would be like trying to promote social 
reforms and corporate deconstruction and fit them into western electoral 
systems: a fine and noble aim, but almost certainly doomed to a logjam, 
because the vast mechanics of the existing system are geared orthogonally 
to what you're trying to do. To continue the Chomskyesque analogy, the 
problem is that a lot of redesign and rewriting is required to accompany 
this at a more fundamental level. These subjects are at the forefront of 
my thinking, and of a few others' too, but they will require a lot of 
cleanup of the current code first, so that a broader (and less urgent) 
plan can be made for a future version to tackle them. In particular, this 
requires breaking with the current API in a few ways, and that is a road 
you would not want to go down too often.

Cheers,
Geoff

-- 
Geoff Thorpe
[EMAIL PROTECTED]
http://www.geoffthorpe.net/
