At 12:53 PM 6/9/2002 -0700, Aaron Bannert wrote:
>On Sun, Jun 09, 2002 at 03:09:24AM +0300, Zeev Suraski wrote:
> > Hmm, but doesn't that mean that the largest contiguous block this heap
> > will be able to provide is 8KB, then?
>
>8K is just the minimum chunk size, there is no absolute maximum.
>
> > > * There's a two-layer structure to the heaps:
> > >     - apr_pool objects are what the application uses.  Each pool
> > >       provides a fast alloc interface, no free function, and a
> > >       "destructor" that returns all the allocated space when the
> > >       pool is destroyed.
> >
> > This is probably not very suitable for PHP.  We allocate and free *a lot*,
> > not being able to free is going to increase memory consumption
> > significantly.  If we use APR heaps, are we bound by this behavior?
>
>The expectation is that pools are cleared at normal intervals, so
>that eventually the memory allocation for an application reaches a
>"steady state". In PHP this could be accomplished by simply using
>the per-request pool that is already available when the internal PHP
>functions are called from httpd. At the end of a request this pool is
>"cleared", and then able to be reused on subsequent requests.

I haven't looked at the source for very long, but it seems to be much worse 
than HeapCreate(). If it doesn't have a free node for a certain size, it 
falls back to malloc(). So in the end you might reach a situation where 
you do lots of malloc()'s, instead of allocating another few pages yourself 
and dividing them up without taking mutexes.
Please correct me if I'm wrong; I haven't had time to study this code in 
great depth.
Andi


-- 
PHP Development Mailing List <http://www.php.net/>
To unsubscribe, visit: http://www.php.net/unsub.php
