Hi Simon,
On Thu, 17 Sep 2020 at 06:54, Simon Riggs wrote:
> Should pg_rusage_init();
> be at the start of the REDO loop, since you only use it if we take that path?
Thanks for commenting.
I may be misunderstanding your words, but as far as I can see,
pg_rusage_init() is only called if we're
On 2020-09-16 14:01:21 +1200, Thomas Munro wrote:
> On Wed, Sep 16, 2020 at 1:30 PM David Rowley wrote:
> > Thanks a lot for the detailed benchmark results and profiles. That was
> > useful. I've pushed both patches now. I did a bit of a sweep of the
> > comments on the 0001 patch before pushing
On Wed, Sep 16, 2020 at 2:54 PM Simon Riggs wrote:
> I really like this patch, thanks for proposing it.
I'm pleased to be able to say that I agree completely with this
comment from Simon. :-)
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Thu, 10 Sep 2020 at 14:45, David Rowley wrote:
> I've also attached another tiny patch that I think is pretty useful
> separate from this. It basically changes:
>
> LOG: redo done at 0/D518FFD0
>
> into:
>
> LOG: redo done at 0/D518FFD0 system usage: CPU: user: 58.93 s,
> system: 0.74 s,
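The user/system CPU figures in a log line like this come from getrusage() snapshots taken before and after the work. A minimal sketch of producing such a line (hypothetical helper names; PostgreSQL's actual implementation lives in its pg_rusage facility, not shown here):

```c
#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>

/* Seconds elapsed between two timevals. */
static double tv_diff(struct timeval a, struct timeval b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_usec - a.tv_usec) / 1e6;
}

/* Print CPU usage in the style of the proposed log line.
 * Pass getrusage(RUSAGE_SELF, ...) snapshots taken before and
 * after the work being measured. */
static void show_usage(const struct rusage *start, const struct rusage *end)
{
    printf("system usage: CPU: user: %.2f s, system: %.2f s\n",
           tv_diff(start->ru_utime, end->ru_utime),
           tv_diff(start->ru_stime, end->ru_stime));
}
```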
On Tue, Sep 15, 2020 at 7:01 PM Thomas Munro wrote:
> On Wed, Sep 16, 2020 at 1:30 PM David Rowley wrote:
> > I also did some further performance tests of something other than
> > recovery. I can also report a good performance improvement in VACUUM.
> > Something around the ~25% reduction mark
>
On Wed, Sep 16, 2020 at 1:30 PM David Rowley wrote:
> Thanks a lot for the detailed benchmark results and profiles. That was
> useful. I've pushed both patches now. I did a bit of a sweep of the
> comments on the 0001 patch before pushing it.
>
> I also did some further performance tests of
On Wed, 16 Sep 2020 at 02:10, Jakub Wartak wrote:
> BTW: this message "redo done at 0/9749FF70 system usage: CPU: user: 13.46 s,
> system: 0.78 s, elapsed: 14.25 s" is priceless addition :)
Thanks a lot for the detailed benchmark results and profiles. That was
useful. I've pushed both patches
David Rowley wrote:
> I've attached patches in git format-patch format. I'm proposing to commit
> these in about 48 hours time unless there's some sort of objection before
> then.
Hi David, no objections at all, I've just got reaffirming results here, as per
[1] (SLRU thread but combined
On Fri, 11 Sep 2020 at 17:48, Thomas Munro wrote:
>
> On Fri, Sep 11, 2020 at 3:53 AM David Rowley wrote:
> > That gets my benchmark down to 60.8 seconds, so 2.2 seconds better than v4b.
>
> . o O ( I wonder if there are opportunities to squeeze some more out
> of this with __builtin_prefetch...
On Fri, Sep 11, 2020 at 3:53 AM David Rowley wrote:
> That gets my benchmark down to 60.8 seconds, so 2.2 seconds better than v4b.
. o O ( I wonder if there are opportunities to squeeze some more out
of this with __builtin_prefetch... )
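The __builtin_prefetch musing above could look something like this in a copy loop: issue a prefetch a few elements ahead of the element being copied. A sketch only, with a guessed prefetch distance; the types and function name are made up for illustration:

```c
#include <string.h>

/* How far ahead to prefetch; the right distance is hardware-dependent
 * and would need benchmarking (this value is a guess). */
#define PREFETCH_DIST 8

typedef struct { char payload[64]; } Record;

/* Copy an array of records, prefetching future source elements so the
 * memcpy() finds them already in cache. */
static void copy_with_prefetch(Record *dst, const Record *src, int n)
{
    for (int i = 0; i < n; i++)
    {
        if (i + PREFETCH_DIST < n)
            __builtin_prefetch(&src[i + PREFETCH_DIST], 0, 0);
        memcpy(&dst[i], &src[i], sizeof(Record));
    }
}
```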
> I've attached v6b and an updated chart showing the
On Fri, Sep 11, 2020 at 1:45 AM David Rowley wrote:
> On Thu, 10 Sep 2020 at 10:40, Thomas Munro wrote:
> > I wonder if we could also identify a range at the high end that is
> > already correctly sorted and maximally compacted so it doesn't even
> > need to be copied out.
>
> I've experimented
On Fri, 11 Sep 2020 at 01:45, David Rowley wrote:
> I've attached v4b (b is for backwards since the traditional backwards
> tuple order is maintained). v4b seems to be able to run my benchmark
> in 63 seconds. I did 10 runs today of yesterday's v3 patch and got an
> average of 72.8 seconds, so
On Thu, 10 Sep 2020 at 10:40, Thomas Munro wrote:
>
> I wonder if we could also identify a range at the high end that is
> already correctly sorted and maximally compacted so it doesn't even
> need to be copied out.
I've experimented quite a bit with this patch today. I think I've
tested every
On Thu, Sep 10, 2020 at 2:34 AM David Rowley wrote:
> I think you were adequately caffeinated. You're right that this is
> fairly simple to do, but it looks even simpler than looping twice
> over the array. I think it's just a matter of looping over the
> itemidbase backwards and putting the
On Wed, 9 Sep 2020 at 05:38, Thomas Munro wrote:
>
> On Wed, Sep 9, 2020 at 3:47 AM David Rowley wrote:
> > On Tue, 8 Sep 2020 at 12:08, Thomas Munro wrote:
> > > One thought is that if we're going to copy everything out and back in
> > > again, we might want to consider doing it in a
> > >
On Wed, Sep 9, 2020 at 3:47 AM David Rowley wrote:
> On Tue, 8 Sep 2020 at 12:08, Thomas Munro wrote:
> > One thought is that if we're going to copy everything out and back in
> > again, we might want to consider doing it in a
> > memory-prefetcher-friendly order. Would it be a good idea to
> >
On Mon, 7 Sep 2020 at 19:47, David Rowley wrote:
> I wondered if we could get around that just by having another buffer
> somewhere and memcpy the tuples into that first then copy the tuples
> out that buffer back into the page. No need to worry about the order
> we do that in as there's no
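The scratch-buffer idea described above can be sketched as follows on a toy "page": copy each tuple into a separate buffer, then write them back packed against the end of the page. Because the data goes via the scratch buffer, source and destination never overlap, so the order tuples are moved in doesn't matter. All names here are hypothetical, and this ItemIdData is a simplification (PostgreSQL's real one is bit-packed, and the real code is in bufpage.c):

```c
#include <stdlib.h>
#include <string.h>

typedef struct { int offset; int len; } ItemIdData;

/* Compact a page by copying tuples out to a scratch buffer and back,
 * packing them against the end of the page.  Returns the new start of
 * tuple space (a "pd_upper" analogue); items[].offset is updated to
 * each tuple's new location. */
static int compactify_via_buffer(char *page, int page_size,
                                 ItemIdData *items, int nitems)
{
    char *scratch = malloc(page_size);
    int   upper = page_size;

    for (int i = 0; i < nitems; i++)
    {
        upper -= items[i].len;
        memcpy(scratch + upper, page + items[i].offset, items[i].len);
        items[i].offset = upper;    /* record the tuple's new home */
    }
    /* One contiguous copy back into place. */
    memcpy(page + upper, scratch + upper, page_size - upper);
    free(scratch);
    return upper;
}
```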
On Tue, 8 Sep 2020 at 12:08, Thomas Munro wrote:
>
> One thought is that if we're going to copy everything out and back in
> again, we might want to consider doing it in a
> memory-prefetcher-friendly order. Would it be a good idea to
> rearrange the tuples to match line pointer order, so that
On Mon, Sep 7, 2020 at 7:48 PM David Rowley wrote:
> I was wondering today if we could just get rid of the sort in
> compactify_tuples() completely. It seems to me that the existing sort
> is there just so that the memmove() is done in order of tuple at the
> end of the page first. We seem to be
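The "move the tuple at the end of the page first" observation can be sketched like this: when items[] is already ordered by offset, descending (reportedly the common case on real pages), each tuple only ever moves toward the end of the page, and processing the highest-offset tuple first means no destination overwrites data that hasn't been moved yet, so no scratch buffer and no sort are needed. A simplified illustration, not PostgreSQL's actual code:

```c
#include <string.h>

typedef struct { int offset; int len; } ItemIdData;

/* In-place compaction, assuming items[] is sorted by offset descending.
 * Each tuple's destination is at or above its source, and processing
 * the highest tuple first means later (lower) tuples are still intact
 * when we get to them; memmove() handles any self-overlap.  Returns
 * the new start of tuple space. */
static int compactify_in_place(char *page, int page_size,
                               ItemIdData *items, int nitems)
{
    int upper = page_size;

    for (int i = 0; i < nitems; i++)    /* highest offset first */
    {
        upper -= items[i].len;
        memmove(page + upper, page + items[i].offset, items[i].len);
        items[i].offset = upper;
    }
    return upper;
}
```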
On Sun, 6 Sep 2020 at 23:37, David Rowley wrote:
> The test replayed ~2.2 GB of WAL. master took 148.581 seconds and
> master+0001+0002 took 115.588 seconds. That's about 28% faster. Pretty
> nice!
I was wondering today if we could just get rid of the sort in
compactify_tuples() completely. It
On Thu, 20 Aug 2020 at 11:28, Thomas Munro wrote:
> I fixed up the copyright messages, and removed some stray bits of
> build scripting relating to the Perl-generated file. Added to
> commitfest.
I'm starting to look at this. So far I've only just done a quick
performance test on it. With the
On Wed, Aug 19, 2020 at 11:41 PM Thomas Munro wrote:
> On Tue, Aug 18, 2020 at 6:53 AM Peter Geoghegan wrote:
> > I definitely think that we should have something like this, though.
> > It's a relatively easy win. There are plenty of workloads that spend
> > lots of time on pruning.
>
> Alright
On Tue, Aug 18, 2020 at 6:53 AM Peter Geoghegan wrote:
> I definitely think that we should have something like this, though.
> It's a relatively easy win. There are plenty of workloads that spend
> lots of time on pruning.
Alright then, here's an attempt to flesh the idea out a bit more, and
On Mon, Aug 17, 2020 at 4:01 AM Thomas Munro wrote:
> While writing this email, I checked the archives and discovered that a
> couple of other people have complained about this hot spot before and
> proposed faster sorts already[2][3], and then there was a wide ranging
> discussion of various
Hi,
With [1] applied so that you can get crash recovery to be CPU bound
with a pgbench workload, we spend an awful lot of time in qsort(),
called from compactify_tuples(). I tried replacing that with a
specialised sort, and I got my test crash recovery time from ~55.5s
down to ~49.5s quite
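One source of the win from a specialised sort is that generic qsort() calls its comparator through a function pointer for every comparison, which the compiler cannot inline. A toy illustration of the idea (insertion sort shown only for brevity; the names are hypothetical and this is not the actual patch):

```c
typedef struct { int offset; int len; } ItemIdData;

/* Sort line-pointer items by offset, descending.  Writing the sort
 * directly (rather than via qsort()'s function-pointer comparator)
 * lets the compiler inline the comparison.  Insertion sort is fine
 * for small arrays; a specialised quicksort would cover larger ones. */
static void sort_itemids_desc(ItemIdData *a, int n)
{
    for (int i = 1; i < n; i++)
    {
        ItemIdData key = a[i];
        int j = i - 1;

        while (j >= 0 && a[j].offset < key.offset)
        {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}
```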