Hi,
I appreciate the feedback and suggestions.
On Tue, Jul 31, 2018 at 8:01 AM, Robert Haas wrote:
>> How would this work if a relfilenode number that belonged to an old
>> relation got recycled for a new relation?
>> ..
>> I think something like this could be made to work -- both on the
>> ma
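Robert's recycling question above is the crux of any deferred-invalidation scheme: if buffers are tagged only by relfilenode and the number is reused, a stale pending drop could wipe out the new relation's buffers. One way to disambiguate is a generation counter. The sketch below uses invented names and a plain Python dict, not PostgreSQL's actual buffer structures, purely to illustrate the idea:

```python
# Hypothetical sketch (invented names; not PostgreSQL's real structures):
# tag every buffer with (relfilenode, generation, blocknum) so a deferred
# invalidation recorded for a dropped relation can never match buffers of a
# newer relation that recycled the same relfilenode number.

class BufferPool:
    def __init__(self):
        self.buffers = {}      # (relfilenode, generation, blocknum) -> page
        self.current_gen = {}  # relfilenode -> generation now in use

    def create_relation(self, relfilenode):
        # Recycling a relfilenode bumps the generation, so tags recorded
        # for the old relation can never match the new one's buffers.
        gen = self.current_gen.get(relfilenode, 0) + 1
        self.current_gen[relfilenode] = gen
        return (relfilenode, gen)

    def put(self, rel, blocknum, page):
        self.buffers[(rel[0], rel[1], blocknum)] = page

    def apply_deferred_drop(self, rel):
        # Invalidate only buffers carrying the dropped generation.
        self.buffers = {tag: page for tag, page in self.buffers.items()
                        if (tag[0], tag[1]) != rel}

pool = BufferPool()
old = pool.create_relation(16384)
pool.put(old, 0, "old page")
new = pool.create_relation(16384)   # relfilenode 16384 recycled
pool.put(new, 0, "new page")
pool.apply_deferred_drop(old)       # stale drop finally processed
assert (16384, 1, 0) not in pool.buffers   # old relation's buffer gone
assert (16384, 2, 0) in pool.buffers       # recycled relation untouched
```

The generation would have to live somewhere durable and shared for this to work in practice, which is part of what makes the real fix hard.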
From: 'Andres Freund' [mailto:and...@anarazel.de]
> I'm continuing to work on it, but unfortunately there's a couple
> projects that have higher priority atm :(. I'm doubtful I can have a
> patchset in a committable shape for v12, but I'm pretty sure I'll have
> it in a shape good enough to make p
From: Robert Haas [mailto:robertmh...@gmail.com]
> It's not clear to me whether it would be worth the overhead of doing
> something like this.
Quite frankly, not really to me either.
> Making relation drops faster at the cost of
> making buffer cleaning slower could be a loser.
The purpose is not
On Tue, Jul 31, 2018 at 8:01 AM, Robert Haas wrote:
> On Mon, Jul 30, 2018 at 1:22 AM, Jamison, Kirk
> wrote:
>> 1. Because the multiple scans of the whole shared buffer per concurrent
>> truncate/drop table were the cause of the time-consuming behavior, DURING the
>> failover process while WAL
Hi,
On 2018-07-30 05:22:48 +, Jamison, Kirk wrote:
> BTW, are there any updates whether the community will push through
> anytime soon regarding the buffer mapping implementation you
> mentioned?
I'm continuing to work on it, but unfortunately there's a couple
projects that have higher priori
Hi,
On 2018-07-30 16:01:53 -0400, Robert Haas wrote:
> (1) Limit the number of deferred drops to a reasonably small number
> (one cache line? 1kB?).
Yea, you'd have to, because we'd frequently need to check it, and it'd
need to be in shared memory. But that'd still leave us to regress to
O(n^2)
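Andres's objection can be made concrete: the pending-drops list must sit in shared memory and be consulted on every buffer lookup, so it has to stay tiny, and once it overflows you must pay the full buffer-pool scan immediately. A toy model (all names invented) of that bounded list and its overflow fallback:

```python
# Toy model (invented names): a small bounded list of pending relation
# drops. Every buffer lookup must consult the list; on overflow we fall
# back to an immediate full scan of the pool, which is exactly the
# O(NBuffers) cost the deferral was meant to avoid.

MAX_PENDING = 8  # e.g. sized to fit one cache line

class Pool:
    def __init__(self, nbuffers):
        self.buffers = [None] * nbuffers  # each slot: relation id or None
        self.pending = set()              # dropped but not yet scanned out
        self.full_scans = 0

    def drop_relation(self, rel):
        if len(self.pending) < MAX_PENDING:
            self.pending.add(rel)         # defer: O(1)
        else:
            self._full_scan(rel)          # overflow: pay the scan now

    def _full_scan(self, rel):
        self.full_scans += 1
        doomed = self.pending | {rel}
        self.buffers = [b if b not in doomed else None
                        for b in self.buffers]
        self.pending.clear()

    def read(self, slot):
        b = self.buffers[slot]
        # Every lookup checks the pending list -- the extra per-access
        # cost being weighed against faster drops.
        return None if b in self.pending else b
```

With many drops in a row, every ninth drop here triggers a full scan, which is how the scheme regresses toward the O(n^2) behavior mentioned above.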
On Mon, Jul 30, 2018 at 1:22 AM, Jamison, Kirk wrote:
> 1. Because the multiple scans of the whole shared buffer per concurrent
> truncate/drop table were the cause of the time-consuming behavior, DURING the
> failover process while WAL is being applied, we temporarily delay the scanning
> and inv
Hi Andres,
> I think this is a case where the potential workarounds are complex enough to
> use significant resources to get right, and are likely to make properly
> fixing the issue harder. I'm willing to comment on proposals that claim not
> to be problematic in those regards, but I have *SER
On 2018-07-19 00:53:14 +, Jamison, Kirk wrote:
> Hi,
> Thank you for your replies.
>
> On Tue, July 10, 2018 4:15 PM, Andres Freund wrote:
> >I think you'd run into a lot of very hairy details with this approach.
> >Consider what happens if client processes need fresh buffers and need to
> >
Hi,
Thank you for your replies.
On Tue, July 10, 2018 4:15 PM, Andres Freund wrote:
>I think you'd run into a lot of very hairy details with this approach.
>Consider what happens if client processes need fresh buffers and need to write
>out a victim buffer. You'll need to know that the relevant
On Tue, Jul 10, 2018 at 10:05 AM Jamison, Kirk
wrote:
> Since in the current implementation, the replay of each TRUNCATE/DROP
> TABLE scans the whole shared buffer.
>
> One approach (though the idea is not really developed yet) is to improve the
> recovery by delaying the shared buffer scan and inval
Hi,
On 2018-07-10 07:05:12 +, Jamison, Kirk wrote:
> Hello hackers,
>
> Recently, the problem of slow performance when dropping/truncating multiple
> tables in a single transaction with large shared_buffers (as shown below) was
> solved by commit b416691.
> BEGIN;
>
Hello hackers,
Recently, the problem of slow performance when dropping/truncating multiple tables
in a single transaction with large shared_buffers (as shown below) was solved
by commit b416691.
BEGIN;
truncate tbl001;
...
truncate tbl050;
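For a sense of scale, the cost the thread is discussing can be modeled crudely: if each TRUNCATE's invalidation walks every buffer header, a transaction touching N tables costs N * NBuffers probes, and the same scans are repeated during WAL replay on a standby. All sizes below are assumptions chosen for illustration, not measurements:

```python
# Back-of-envelope illustration (sizes are assumptions, not measurements):
# each TRUNCATE/DROP invalidation walks every buffer header, so a
# transaction touching N tables costs N * NBuffers probes.

BLOCK_SIZE = 8192                        # PostgreSQL's default block size
shared_buffers = 100 * 1024**3           # assume shared_buffers = 100GB
nbuffers = shared_buffers // BLOCK_SIZE  # buffer headers probed per scan
tables = 50                              # tbl001 .. tbl050 from the example

total_probes = tables * nbuffers
print(nbuffers)      # 13107200 buffer headers per scan
print(total_probes)  # 655360000 probes for the whole transaction
```

The per-table term is what commit b416691 and the proposals above attack from different angles: batching the invalidations versus making a single lookup cheap enough to avoid the scan entirely.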