On Thu, Feb 16, 2023 at 11:26 PM David Rowley <dgrowle...@gmail.com> wrote:

> I didn't hear it mentioned explicitly here, but I suspect it's faster
> when increasing the initial size due to the memory context caching
> code that reuses aset MemoryContexts (see context_freelists[] in
> aset.c). Since we reset the context before caching it, then it'll
> remain fast when we can reuse a context, provided we don't need to do
> a malloc for an additional block beyond the initial block that's kept
> in the cache.


This is what we were seeing. The larger initial size reduces or eliminates
the multiple smaller blocks that are malloc'd and freed during each
per-query execution.

> Maybe we should think of a more general-purpose way of doing this
> caching which just keeps a global-to-the-process dclist of blocks
> laying around.  We could see if we have any free blocks both when
> creating the context and also when we need to allocate another block.
> I see no reason why this couldn't be shared among the other context
> types rather than keeping this cache stuff specific to aset.c.  slab.c
> might need to be pickier if the size isn't exactly what it needs, but
> generation.c should be able to make use of it the same as aset.c
> could.  I'm unsure what'd we'd need in the way of size classing for
> this, but I suspect we'd need to pay attention to that rather than do
> things like hand over 16MBs of memory to some context that only wants
> a 1KB initial block.


Yeah. There's definitely a smarter and more reusable approach than the one I
was proposing. A lot of that code is fairly mature, and I figured most people
wouldn't want it altered in such ways - but I'm up for it if an approach
like this is the direction we'd want to go in.
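For concreteness, here's a minimal, self-contained sketch of the kind of
process-global block cache described above, with power-of-two size classes so
a context wanting a 1KB initial block never gets handed a 16MB one. All names
here (block_cache_get, block_cache_put, etc.) are hypothetical; real code
would live in mmgr, use dclist from lib/ilist.h, and cap how much memory the
cache retains:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/*
 * Hypothetical process-global cache of freed blocks, bucketed by
 * power-of-two size class.  Any context type (aset, generation, slab)
 * could push freed blocks here and check here before calling malloc.
 */
#define MIN_BLOCK   1024        /* smallest class: 1KB */
#define NUM_CLASSES 16          /* 1KB << 0 .. 1KB << 15 (32MB) */

typedef struct CachedBlock
{
    struct CachedBlock *next;   /* per-class singly linked free list */
} CachedBlock;

static CachedBlock *block_cache[NUM_CLASSES];

/* Map a requested size to the smallest class that can hold it. */
static int
size_class(size_t size)
{
    int     cls = 0;
    size_t  s = MIN_BLOCK;

    while (s < size && cls < NUM_CLASSES - 1)
    {
        s <<= 1;
        cls++;
    }
    return cls;
}

/*
 * Called both when creating a context and when an existing context
 * needs an additional block: reuse a cached block if one fits.
 */
void *
block_cache_get(size_t size)
{
    int     cls = size_class(size);

    if (block_cache[cls] != NULL)
    {
        CachedBlock *blk = block_cache[cls];

        block_cache[cls] = blk->next;
        return blk;             /* reused: no malloc needed */
    }
    return malloc((size_t) MIN_BLOCK << cls);
}

/* Called when a context releases a block instead of free()ing it. */
void
block_cache_put(void *block, size_t size)
{
    CachedBlock *blk = (CachedBlock *) block;
    int         cls = size_class(size);

    blk->next = block_cache[cls];
    block_cache[cls] = blk;
}
```

The size-classing is the part that would need the most attention in a real
patch: without it, the cache can hand oversized blocks to small contexts, and
with too many classes the hit rate drops.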



-- 
Jonah H. Harris
