On 09/22/2009 11:05 PM, Neil Conway wrote:
On Tue, Sep 22, 2009 at 1:21 PM, Graham Leggett wrote:
> Thanks for this, I've committed it to trunk, and backported it to 1.4
> and 1.3.

Whoops -- the patch had a boneheaded typo in the changes to
apr_hash_copy(). Attached is a fix for the bug.

Sorry, my mistake.

Neil
apr_hash_copy_bug-1.pat
Neil Conway wrote:

On Sat, Sep 19, 2009 at 10:09 AM, Graham Leggett wrote:
> Hmmm. I think it's reasonable to return NULL in the failure cases, where
> NULL means "out of memory".

Fair enough. The one place this doesn't work is when expanding the
bucket array in apr_hash_set() (since it returns NULL), but in that
case we can just skip the expansion if the apr_pool_create() fails,
and retry it next time.

Attached is a revised patch -- thanks f
Neil Conway wrote:

On Mon, Sep 14, 2009 at 7:43 PM, Bing Swen wrote:
> We'd love to see such a patch. We've been hampered by this problem for years.

Attached is a patch that implements the scheme I suggested earlier
(allocating the bucket array in a subpool). The patch is against the
1.3 branch.

Unfortunately, the APR hash table API doesn't provide any means to
report errors, so we can't easily propagate apr_pool_create() failures
back to the client program. Presuming changing the APR hash API isn't
an option for 1.3.x / 1.4.x, is there a better solution than ju
Bing Swen wrote:

Hi,

> I notice that the implementation of expand_array() in
> tables/apr_hash.c allocates a new bucket array, without making any
> attempt to release the memory allocated for the previous bucket array.
> That wastes memory: if the hash table grows exponentially, this
> strategy wastes O(n) extra memory.

We'd love to see such a patch. We've been hampered by this problem for years.

We use apr_hash_t to look up several in-memory lexicons, each of them with over
half a million words. The initial slot array size expands from 2^n - 1, n = 4.
That wastes too much as we need to put all the hash lexicon