On Tue, Sep 24, 2019 at 2:47 PM Melanie Plageman wrote:
> I think it would be helpful if we could repeat the performance tests
> Robert did on that machine with the current patch (unless this version
> of the patch is exactly the same as the ones he tested previously).
I still have access to a PO
Peter Geoghegan writes:
> On Fri, Oct 4, 2019 at 7:32 AM Jeff Davis wrote:
>> The patch has been floating around for a very long time, so I don't
>> remember exactly why I chose a signed value. Sorry.
> I am reminded of the fact that int64 is used to size buffers within
> tuplesort.c, because it needs to support negative availMem …
On Fri, Oct 4, 2019 at 7:32 AM Jeff Davis wrote:
> The patch has been floating around for a very long time, so I don't
> remember exactly why I chose a signed value. Sorry.
I am reminded of the fact that int64 is used to size buffers within
tuplesort.c, because it needs to support negative availMem …
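A minimal sketch of the pattern being referred to, with invented names
(avail_mem, consume) rather than tuplesort.c's actual code; the point is
that the budget counter must be able to dip below zero, which an unsigned
Size cannot express:

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative only -- not tuplesort.c's code.  The counter tracks
     * how much of the memory budget remains and is deliberately allowed
     * to go negative: the sign is the "out of budget, spill" signal. */
    static int64_t avail_mem;

    static bool
    consume(int64_t nbytes)
    {
        avail_mem -= nbytes;    /* may legitimately end up negative */
        return avail_mem < 0;   /* true => caller should switch to disk */
    }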
I wrote:
> Just to make things even more mysterious, prairiedog finally showed
> the Assert failure on its fourth run with c477f3e449 included:
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2019-10-04%2012%3A35%3A41
> It's also now apparent that lapwing and locust were failing …
Tomas Vondra writes:
> On Fri, Oct 04, 2019 at 12:36:01AM -0400, Tom Lane wrote:
>> What I think is happening is that c477f3e449 allowed this bit in
>> AllocSetRealloc:
>> context->mem_allocated += blksize - oldblksize;
>> to be executed in situations where blksize < oldblksize, where before
> the …
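A hedged sketch of how that line could be kept safe once the field is an
unsigned Size (standalone C with a stand-in variable, not AllocSetRealloc
itself): splitting the grow and shrink cases never forms a negative
difference, so there is no wraparound to worry about.

    #include <stddef.h>

    static size_t ctx_mem_allocated;  /* stand-in for context->mem_allocated */

    /* Update the accounting when a block is resized from oldblksize to
     * blksize.  Neither branch computes a negative intermediate value. */
    static void
    account_realloc(size_t oldblksize, size_t blksize)
    {
        if (blksize >= oldblksize)
            ctx_mem_allocated += blksize - oldblksize;  /* block grew */
        else
            ctx_mem_allocated -= oldblksize - blksize;  /* block shrank */
    }

(Modular unsigned arithmetic actually makes the single += form come out
right as well, but the split form leaves room for a sanity check that the
shrink never exceeds what was previously accounted.)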
Jeff Davis writes:
> On Fri, 2019-10-04 at 10:26 +0200, Tomas Vondra wrote:
>> Yeah, I think that's an oversight. Maybe there's a reason why Jeff
>> used int64, but I can't think of any.
> I had chosen a 64-bit value to account for the situation Tom mentioned:
> that, in theory, Size might not be large enough to hold the total …
On Fri, 2019-10-04 at 10:26 +0200, Tomas Vondra wrote:
> On Fri, Oct 04, 2019 at 12:36:01AM -0400, Tom Lane wrote:
> > So ... why exactly did this patch define MemoryContextData.mem_allocated
> > as int64? That seems to me to be doubly wrong: it is not the right width
> > on 32-bit machines, and it is not the right signedness anywhere. …
On Fri, Oct 04, 2019 at 10:26:44AM +0200, Tomas Vondra wrote:
On Fri, Oct 04, 2019 at 12:36:01AM -0400, Tom Lane wrote:
I haven't chased down exactly what else would need to change.
It might be that s/int64/Size/g throughout the patch is
sufficient, but I haven't analyzed it.
I think so too, …
On Fri, Oct 04, 2019 at 12:36:01AM -0400, Tom Lane wrote:
So ... why exactly did this patch define MemoryContextData.mem_allocated
as int64? That seems to me to be doubly wrong: it is not the right width
on 32-bit machines, and it is not the right signedness anywhere. I think
that field ought to be of type Size …
So ... why exactly did this patch define MemoryContextData.mem_allocated
as int64? That seems to me to be doubly wrong: it is not the right width
on 32-bit machines, and it is not the right signedness anywhere. I think
that field ought to be of type Size (a/k/a size_t, but memnodes.h always
calls it Size) …
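As a sketch of the declaration being argued for (abridged; the real
memnodes.h struct has many more fields):

    #include <stddef.h>

    typedef size_t Size;            /* memnodes.h's spelling, via c.h */

    typedef struct MemoryContextData
    {
        /* ... other fields elided ... */
        Size        mem_allocated;  /* track memory allocated for this context */
        /* ... other fields elided ... */
    } MemoryContextData;

With an unsigned Size, the field is exactly as wide as anything a single
process can allocate on both 32- and 64-bit builds, and the signedness
question goes away.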
Results of the benchmarks I've done over the past couple
of days are in [1]. It includes both the reindex benchmark used by
Robert in 2015, and a regular read-only pgbench. In general, the
overhead of the accounting is pretty much indistinguishable from noise
(at least on those two machines).
In AllocSetDelete(), saving the keeper size is required, because it wipes
the block headers in addition to the contents.
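A simplified sketch of that ordering constraint (stand-in types, not the
committed aset.c code): the keeper block's size has to be read before the
wipe pass, because the wipe destroys the header the size is computed from.

    #include <stddef.h>

    typedef size_t Size;

    typedef struct AllocBlockData
    {
        char       *endptr;         /* end of this malloc'd block */
    }          *AllocBlock;

    typedef struct AllocSetContext
    {
        Size        mem_allocated;  /* the context header's counter */
        AllocBlock  keeper;         /* block kept across resets */
    } AllocSetContext, *AllocSet;

    static void
    alloc_set_delete(AllocSet set)
    {
        /* Capture the keeper's size while its header is still intact... */
        Size        keepersize = set->keeper->endptr - ((char *) set);

        /* ... the memory-wipe pass runs here and clobbers block
         * headers, so computing the size afterwards would read junk ... */

        set->mem_allocated -= keepersize;  /* settle the accounting last */
    }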
> Oh, and one more thing - this probably needs to add at least some basic
> explanation of the accounting to src/backend/mmgr/README.
Added.
Regards,
Jeff Davis
On Sun, Sep 29, 2019 at 12:12:49AM +0200, Tomas Vondra wrote:
On Thu, Sep 26, 2019 at 01:36:46PM -0700, Jeff Davis wrote:
On Thu, 2019-09-26 at 21:22 +0200, Tomas Vondra wrote:
It's worth mentioning that those benchmarks (I'm assuming we're talking
about the numbers Robert shared in [1]) were done …
On Thu, Sep 26, 2019 at 01:36:46PM -0700, Jeff Davis wrote:
On Thu, 2019-09-26 at 21:22 +0200, Tomas Vondra wrote:
It's worth mentioning that those benchmarks (I'm assuming we're talking
about the numbers Robert shared in [1]) were done on patches that used
the eager accounting approach (i.e. walking …
On Thu, 2019-09-26 at 21:22 +0200, Tomas Vondra wrote:
> It's worth mentioning that those benchmarks (I'm assuming we're talking
> about the numbers Robert shared in [1]) were done on patches that used
> the eager accounting approach (i.e. walking all parent contexts and
> updating the accounting for them) …
Were you able to find any regression when compared to no memory
accounting at all?
We didn't spend much time comparing performance with and without
memory accounting, as it seems like this was discussed extensively in
the previous thread.
It looks like you agree with the approach and the results. Did you find
any other issues with the patch?
On Tue, Sep 24, 2019 at 02:05:51PM +0200, Tomas Vondra wrote:
> The way I see it we can do either eager or lazy accounting. Eager might
> work better for aggregates with many contexts, but it does increase the
> overhead for the "regular" aggregates with just one or two contexts.
> Considering how …
> >                     lazy        eager        (eager/lazy)
> > 2MB       82        302.715     427.554      1.4123978
> > 3MB       3474      567.829     896.143      1.578191674
> > 7.67MB    8694      2657.972    4903.063     1.844663149
>
> Thank you for collecting data on this. Were you able to find any
> regression when compared to no memory accounting at all?
…aggregates are wrong and should be fixed just
like we fixed array_agg/string_agg (even without the memory accounting).
The way I see it we can do either eager or lazy accounting. Eager might
work better for aggregates with many contexts, but it does increase the
overhead for the "regular" aggregates with just one or two contexts. …
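To make the trade-off concrete, a hedged sketch of the two update paths
(minimal stand-in types; the patch's real code lives in the allocators):

    #include <stddef.h>

    typedef size_t Size;

    typedef struct MemoryContextData
    {
        struct MemoryContextData *parent;
        Size        mem_allocated;
    } MemoryContextData, *MemoryContext;

    /* Eager: charge every ancestor as soon as a block arrives.  Reads
     * are O(1), but each allocation pays for the context tree's depth. */
    static void
    account_block_eager(MemoryContext context, Size blksize)
    {
        for (MemoryContext c = context; c != NULL; c = c->parent)
            c->mem_allocated += blksize;
    }

    /* Lazy: charge only the owning context.  Allocation stays cheap;
     * totalling a subtree is deferred to whoever asks for it. */
    static void
    account_block_lazy(MemoryContext context, Size blksize)
    {
        context->mem_allocated += blksize;
    }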
On Wed, Jul 24, 2019 at 11:52:28PM +0200, Tomas Vondra wrote:
> I think Heikki was asking about places with a lot of sub-contexts, which is
> a completely different issue. It used to be the case that some aggregates
> created a separate context for each group - like array_agg. That would
> make Jeff's …
> 3MB       3474      567.829     896.143      1.578191674
> 7.67MB    8694      2657.972    4903.063     1.844663149
Thank you for collecting data on this. Were you able to find any
regression when compared to no memory accounting at all?
It looks like you agree with the approach and the results. Did you find
any other issues with the patch?
I wanted to address a couple of questions Jeff asked me off-list about
Greenplum's implementations of Memory Accounting.
Greenplum has two memory accounting sub-systems -- one is the
MemoryContext-based system proposed here.
The other memory accounting system tracks "logical" memory …
On Tue, Jul 23, 2019 at 06:18:26PM -0700, Melanie Plageman wrote:
On Thu, Jul 18, 2019 at 11:24 AM Jeff Davis wrote:
Previous discussion:
https://postgr.es/m/1407012053.15301.53.camel%40jeff-desktop
This patch introduces a way to ask a memory context how much memory it
currently has allocated. …
On Thu, Jul 18, 2019 at 11:24 AM Jeff Davis wrote:
> Previous discussion:
> https://postgr.es/m/1407012053.15301.53.camel%40jeff-desktop
>
> This patch introduces a way to ask a memory context how much memory it
> currently has allocated. Each time a new block (not an individual
> chunk, but a new malloc'd block) is allocated …
On Mon, 2019-07-22 at 18:16 +0200, Tomas Vondra wrote:
> > * I changed it to only update mem_allocated for the current context,
> > not recursively for all parent contexts. It's now up to the function
> > that reports memory usage to recurse or not (as needed).
>
> Is that OK for …
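The read side of that change might look roughly like this (a sketch
modeled on the MemoryContextMemAllocated() interface, simplified;
firstchild/nextchild mirror memnodes.h's sibling-list layout):

    #include <stdbool.h>
    #include <stddef.h>

    typedef size_t Size;

    typedef struct MemoryContextData
    {
        struct MemoryContextData *firstchild;
        struct MemoryContextData *nextchild;
        Size        mem_allocated;
    } MemoryContextData, *MemoryContext;

    /* Sum a context's own blocks, and optionally its whole subtree. */
    static Size
    context_mem_allocated(MemoryContext context, bool recurse)
    {
        Size        total = context->mem_allocated;

        if (recurse)
        {
            for (MemoryContext child = context->firstchild;
                 child != NULL;
                 child = child->nextchild)
                total += context_mem_allocated(child, true);
        }
        return total;
    }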
On Mon, Jul 22, 2019 at 11:30:37AM +0300, Heikki Linnakangas wrote:
On 18/07/2019 21:24, Jeff Davis wrote:
Previous discussion:
https://postgr.es/m/1407012053.15301.53.camel%40jeff-desktop
This patch introduces a way to ask a memory context how much memory it
currently has allocated. Each time …
On 18/07/2019 21:24, Jeff Davis wrote:
Previous discussion:
https://postgr.es/m/1407012053.15301.53.camel%40jeff-desktop
This patch introduces a way to ask a memory context how much memory it
currently has allocated. Each time a new block (not an individual
chunk, but a new malloc'd block) is allocated …
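A hedged sketch of the two accounting touch points that description
implies (invented helper names and abridged types; the real changes live
in aset.c and the other allocators):

    #include <stdlib.h>
    #include <stddef.h>

    typedef size_t Size;

    typedef struct AllocBlockData
    {
        char       *endptr;         /* end of this malloc'd block */
    }          *AllocBlock;

    typedef struct MemoryContextData
    {
        Size        mem_allocated;
    } MemoryContextData, *MemoryContext;

    /* A new block is malloc'd for the context: count its full size. */
    static AllocBlock
    alloc_new_block(MemoryContext context, Size blksize)
    {
        AllocBlock  block = (AllocBlock) malloc(blksize);

        if (block == NULL)
            return NULL;
        block->endptr = ((char *) block) + blksize;
        context->mem_allocated += blksize;
        return block;
    }

    /* A block goes back to the OS: uncount it.  palloc/pfree of chunks
     * inside an existing block never touch the counter. */
    static void
    free_block(MemoryContext context, AllocBlock block)
    {
        context->mem_allocated -= block->endptr - ((char *) block);
        free(block);
    }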
…and control memory
usage at execution time.
I ran the same tests as Robert did before[1] on my laptop[2]. The only
difference is that I also set max_parallel_workers[_per_gather]=0 to be
sure. I did 5 runs, alternating between memory-accounting and master,
and I got the following results for "elapsed" …