On Tue, Feb 21, 2023 at 2:46 AM Andres Freund wrote:
> On 2023-02-21 08:33:22 +1300, David Rowley wrote:
> > I am interested in a bump allocator for tuplesort.c. There it would be
> > used in isolation and all the code which would touch pointers
> > allocated by the bump allocator would be
On Sat, 18 Feb 2023 at 06:52, Andres Freund wrote:
> And did you increase ALLOCSET_DEFAULT_INITSIZE everywhere, or just pass a
> larger block size in CreateExecutorState()? If the latter, the context
> freelist wouldn't even come into play.
I think this piece of information is critical to
Hi,
On 2023-02-21 08:33:22 +1300, David Rowley wrote:
> On Tue, 21 Feb 2023 at 07:30, Andres Freund wrote:
> > 2) We should introduce an mcxt.c API to perform allocations that the
> > caller promises not to individually free.
>
> It's not just pfree. Offhand, there's also repalloc,
>
On Tue, 21 Feb 2023 at 07:30, Andres Freund wrote:
> 2) We should introduce an mcxt.c API to perform allocations that the
> caller promises not to individually free.
It's not just pfree. Offhand, there's also repalloc,
GetMemoryChunkSpace and GetMemoryChunkContext too.
I am interested in
Hi,
On 2023-02-17 09:52:01 -0800, Andres Freund wrote:
> On 2023-02-17 17:26:20 +1300, David Rowley wrote:
> Random note:
>
> I wonder if we should have a bitmap (in an int) in front of aset's
> freelist. In a lot of cases we incur plenty of cache misses, just to find the
> freelist bucket empty.
Hi,
On 2023-02-17 17:26:20 +1300, David Rowley wrote:
> I didn't hear it mentioned explicitly here, but I suspect it's faster
> when increasing the initial size due to the memory context caching
> code that reuses aset MemoryContexts (see context_freelists[] in
> aset.c). Since we reset the
On Fri, Feb 17, 2023 at 12:03 AM David Rowley wrote:
> On Fri, 17 Feb 2023 at 17:40, Jonah H. Harris
> wrote:
> > Yeah. There’s definitely a smarter and more reusable approach than I was
> > proposing. A lot of that code is fairly mature and I figured more people
> > wouldn’t want to alter it in
On Fri, 17 Feb 2023 at 17:40, Jonah H. Harris wrote:
> Yeah. There’s definitely a smarter and more reusable approach than I was
> proposing. A lot of that code is fairly mature and I figured more people
> wouldn’t want to alter it in such ways - but I’m up for it if an approach
> like this is
On Thu, Feb 16, 2023 at 11:26 PM David Rowley wrote:
> I didn't hear it mentioned explicitly here, but I suspect it's faster
> when increasing the initial size due to the memory context caching
> code that reuses aset MemoryContexts (see context_freelists[] in
> aset.c). Since we reset the
On Fri, 17 Feb 2023 at 16:40, Andres Freund wrote:
> I'd like a workload that hits a perf issue with this, because I think there
> likely are some general performance improvements that we could make, without
> changing the initial size or the "growth rate".
I didn't hear it mentioned explicitly
Hi,
On 2023-02-16 21:34:18 -0500, Jonah H. Harris wrote:
> On Thu, Feb 16, 2023 at 7:32 PM Andres Freund wrote:
> Given that not much has changed regarding that allocation context IIRC, I’d
> think all recent versions. It was observed in 13, 14, and 15.
We did have a fair bit of changes in related code in the
On Thu, Feb 16, 2023 at 7:32 PM Andres Freund wrote:
> What PG version?
>
Hey, Andres. Thanks for the reply.
Given that not much has changed regarding that allocation context IIRC, I’d
think all recent versions. It was observed in 13, 14, and 15.
> Do you have a way to reproduce this with core code, e.g.
Hi,
On 2023-02-16 16:49:07 -0500, Jonah H. Harris wrote:
> I've been working on a federated database project that heavily relies on
> foreign data wrappers. During benchmarking, we noticed high system CPU
> usage in OLTP-related cases, which we traced back to multiple brk calls
> resulting from
Hi everyone,
I've been working on a federated database project that heavily relies on
foreign data wrappers. During benchmarking, we noticed high system CPU
usage in OLTP-related cases, which we traced back to multiple brk calls
resulting from block frees in AllocSetReset upon ExecutorEnd's