On Tue, Nov 11, 2014 at 3:00 PM, Andres Freund
wrote:
> On 2014-11-11 09:29:22 +0000, Thom Brown wrote:
> > On 26 September 2014 12:40, Amit Kapila wrote:
> >
> > > On Tue, Sep 23, 2014 at 10:31 AM, Robert Haas
> > > wrote:
> > > >
> > > > But this gets at another point: the way we're benchmarking this right now, we're really conflating the effects of three different things:
On 2014-11-11 09:29:22 +0000, Thom Brown wrote:
> On 26 September 2014 12:40, Amit Kapila wrote:
>
> > On Tue, Sep 23, 2014 at 10:31 AM, Robert Haas
> > wrote:
> > >
> > > But this gets at another point: the way we're benchmarking this right
> > > now, we're really conflating the effects of three different things:
On 26 September 2014 12:40, Amit Kapila wrote:
> On Tue, Sep 23, 2014 at 10:31 AM, Robert Haas
> wrote:
> >
> > But this gets at another point: the way we're benchmarking this right
> > now, we're really conflating the effects of three different things:
> >
> > 1. Changing the locking regimen around the freelist and clocksweep.
On Tue, Oct 14, 2014 at 3:24 PM, Amit Kapila
wrote:
> On Thu, Oct 9, 2014 at 6:17 PM, Amit Kapila
wrote:
> > On Fri, Sep 26, 2014 at 7:04 PM, Robert Haas
wrote:
> > >
> > > On another point, I think it would be a good idea to rebase the
> > > bgreclaimer patch over what I committed, so that we have a clean patch against master to test with.
On Tue, Oct 14, 2014 at 3:32 PM, Andres Freund
wrote:
> On 2014-10-14 15:24:57 +0530, Amit Kapila wrote:
> > After that I observed that contention for LW_SHARED has reduced
> > for this load, but it didn't help much in terms of performance, so I again
> > rechecked the profile and this time most of the contention is moved to spinlock used in dynahash for bu
On 2014-10-14 15:24:57 +0530, Amit Kapila wrote:
> After that I observed that contention for LW_SHARED has reduced
> for this load, but it didn't help much in terms of performance, so I again
> rechecked the profile and this time most of the contention is moved
> to spinlock used in dynahash for bu
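The spinlock in question guards dynahash's shared freelist of hash entries. A minimal sketch of the structure, with fields abridged from 9.4-era dynahash's HASHHDR; this is illustrative, not the exact source:

    /* Abridged sketch of dynahash's shared-table header: one spinlock
     * serializes every entry allocation and free for the whole table,
     * no matter how many partition LWLocks sit above it. */
    typedef struct HASHELEMENT HASHELEMENT;

    typedef struct HASHHDR
    {
        slock_t      mutex;      /* spinlock guarding the fields below */
        long         nentries;   /* number of entries in use */
        HASHELEMENT *freeList;   /* linked list of free entries */
        /* ... bucket directory, sizing parameters, etc. ... */
    } HASHHDR;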
On Thu, Oct 9, 2014 at 6:17 PM, Amit Kapila wrote:
>
> On Fri, Sep 26, 2014 at 7:04 PM, Robert Haas
wrote:
> >
> > On another point, I think it would be a good idea to rebase the
> > bgreclaimer patch over what I committed, so that we have a
> > clean patch against master to test with.
>
> Please
On 2014-10-10 12:28:13 +0530, Amit Kapila wrote:
> On Fri, Oct 10, 2014 at 1:08 AM, Andres Freund
> wrote:
> > On 2014-10-09 16:01:55 +0200, Andres Freund wrote:
> > >
> > > I don't think OLTP really is the best test case for this. Especially not
> > > pgbench with relatively small rows *and* a uniform distribution of access.
On Fri, Oct 10, 2014 at 1:08 AM, Andres Freund
wrote:
> On 2014-10-09 16:01:55 +0200, Andres Freund wrote:
> >
> > I don't think OLTP really is the best test case for this. Especially not
> > pgbench with relatively small rows *and* a uniform distribution of
> > access.
> >
> > Try parallel COPY
On 2014-10-09 16:01:55 +0200, Andres Freund wrote:
> On 2014-10-09 18:17:09 +0530, Amit Kapila wrote:
> > On Fri, Sep 26, 2014 at 7:04 PM, Robert Haas wrote:
> > >
> > > On another point, I think it would be a good idea to rebase the
> > > bgreclaimer patch over what I committed, so that we have a clean patch against master to test with.
On Thu, Oct 9, 2014 at 7:31 PM, Andres Freund
wrote:
>
> On 2014-10-09 18:17:09 +0530, Amit Kapila wrote:
> > On Fri, Sep 26, 2014 at 7:04 PM, Robert Haas
wrote:
> > >
> > > On another point, I think it would be a good idea to rebase the
> > > bgreclaimer patch over what I committed, so that we have a clean patch against master to test with.
On 2014-10-09 18:17:09 +0530, Amit Kapila wrote:
> On Fri, Sep 26, 2014 at 7:04 PM, Robert Haas wrote:
> >
> > On another point, I think it would be a good idea to rebase the
> > bgreclaimer patch over what I committed, so that we have a
> > clean patch against master to test with.
>
> Please fin
On Thu, Oct 2, 2014 at 1:07 PM, Andres Freund wrote:
> Do a make check-world and it'll hopefully fail ;). Check
> pg_buffercache_pages.c.
Yep. Committed, with an update to the comments in lwlock.c to allude
to the pg_buffercache issue.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2014-10-02 20:04:58 +0300, Heikki Linnakangas wrote:
> On 10/02/2014 05:40 PM, Robert Haas wrote:
> >On Thu, Oct 2, 2014 at 10:36 AM, Andres Freund
> >wrote:
> >>>OK.
> >>
> >>Given that the results look good, do you plan to push this?
> >
> >By "this", you mean the increase in the number of b
On 10/02/2014 05:40 PM, Robert Haas wrote:
On Thu, Oct 2, 2014 at 10:36 AM, Andres Freund wrote:
OK.
Given that the results look good, do you plan to push this?
By "this", you mean the increase in the number of buffer mapping
partitions to 128, and a corresponding increase in MAX_SIMUL_LWLOCKS?
On 2014-10-02 10:56:05 -0400, Robert Haas wrote:
> On Thu, Oct 2, 2014 at 10:44 AM, Andres Freund wrote:
> > On 2014-10-02 10:40:30 -0400, Robert Haas wrote:
> >> On Thu, Oct 2, 2014 at 10:36 AM, Andres Freund
> >> wrote:
> >> >> OK.
> >> >
> >> > Given that the results look good, do you plan to
On Thu, Oct 2, 2014 at 10:44 AM, Andres Freund wrote:
> On 2014-10-02 10:40:30 -0400, Robert Haas wrote:
>> On Thu, Oct 2, 2014 at 10:36 AM, Andres Freund
>> wrote:
>> >> OK.
>> >
>> > Given that the results look good, do you plan to push this?
>>
>> By "this", you mean the increase in the numbe
On 2014-10-02 10:40:30 -0400, Robert Haas wrote:
> On Thu, Oct 2, 2014 at 10:36 AM, Andres Freund wrote:
> >> OK.
> >
> > Given that the results look good, do you plan to push this?
>
> By "this", you mean the increase in the number of buffer mapping
> partitions to 128, and a corresponding increase in MAX_SIMUL_LWLOCKS?
On Thu, Oct 2, 2014 at 10:36 AM, Andres Freund wrote:
>> OK.
>
> Given that the results look good, do you plan to push this?
By "this", you mean the increase in the number of buffer mapping
partitions to 128, and a corresponding increase in MAX_SIMUL_LWLOCKS?
If so, and if you don't have any res
On 2014-09-25 10:42:29 -0400, Robert Haas wrote:
> On Thu, Sep 25, 2014 at 10:24 AM, Andres Freund
> wrote:
> > On 2014-09-25 10:22:47 -0400, Robert Haas wrote:
> >> On Thu, Sep 25, 2014 at 10:14 AM, Andres Freund
> >> wrote:
> >> > That leads me to wonder: Have you measured different, lower, number of buffer mapping locks?
On 2014-10-01 20:54:39 +0200, Andres Freund wrote:
> Here we go.
>
> Postgres was configured with.
> -c shared_buffers=8GB \
> -c log_line_prefix="[%m %p] " \
> -c log_min_messages=debug1 \
> -p 5440 \
> -c checkpoint_segments=600
> -c max_connections=200
Robert reminded me that I missed to
On 2014-09-25 16:50:44 +0200, Andres Freund wrote:
> On 2014-09-25 10:44:40 -0400, Robert Haas wrote:
> > On Thu, Sep 25, 2014 at 10:42 AM, Robert Haas wrote:
> > > On Thu, Sep 25, 2014 at 10:24 AM, Andres Freund
> > > wrote:
> > >> On 2014-09-25 10:22:47 -0400, Robert Haas wrote:
> > >>> On Thu
Part of this patch was already committed, and the overall patch has had
its fair share of review for this commitfest, so I'm marking this as
"Returned with feedback". The benchmark results for the bgreclaimer
showed a fairly small improvement, so it doesn't seem like anyone's
going to commit th
On 2014-09-26 09:59:41 -0400, Robert Haas wrote:
> On Fri, Sep 26, 2014 at 8:26 AM, Andres Freund wrote:
> > Neither, really. The hash calculation is visible in the profile, but not
> > that pronounced yet. The primary thing noticeable in profiles (besides
> > cache efficiency) is the comparison of the full tag after locating a possible match in a bucket.
On 2014-09-26 16:47:55 +0300, Heikki Linnakangas wrote:
> On 09/26/2014 03:26 PM, Andres Freund wrote:
> >On 2014-09-26 15:04:54 +0300, Heikki Linnakangas wrote:
> >>On 09/25/2014 05:40 PM, Andres Freund wrote:
> >>>There's two reasons for that: a) dynahash just isn't very good and it
> >>>does a lot of things that will never be necessary for these hashes.
On 2014-09-26 17:01:52 +0300, Ants Aasma wrote:
> On Fri, Sep 26, 2014 at 3:26 PM, Andres Freund wrote:
> > Neither, really. The hash calculation is visible in the profile, but not
> > that pronounced yet. The primary thing noticeable in profiles (besides
> > cache efficiency) is the comparison of the full tag after locating a possible match in a bucket.
On Fri, Sep 26, 2014 at 7:04 PM, Robert Haas wrote:
> On Fri, Sep 26, 2014 at 7:40 AM, Amit Kapila
> wrote:
>
>> First of all thanks for committing part-1 of these changes and it
>> seems you are planning to commit part-3 based on results of tests
>> which Andres is planning to do and for remaining part (part-2), today I have tried some tests.
On Fri, Sep 26, 2014 at 3:26 PM, Andres Freund wrote:
> Neither, really. The hash calculation is visible in the profile, but not
> that pronounced yet. The primary thing noticeable in profiles (besides
> cache efficiency) is the comparison of the full tag after locating a
> possible match in a bucket.
On Fri, Sep 26, 2014 at 8:26 AM, Andres Freund wrote:
> Neither, really. The hash calculation is visible in the profile, but not
> that pronounced yet. The primary thing noticeable in profiles (besides
> cache efficiency) is the comparison of the full tag after locating a
> possible match in a bucket.
On 09/26/2014 03:26 PM, Andres Freund wrote:
On 2014-09-26 15:04:54 +0300, Heikki Linnakangas wrote:
On 09/25/2014 05:40 PM, Andres Freund wrote:
There's two reasons for that: a) dynahash just isn't very good and it
does a lot of things that will never be necessary for these hashes. b)
the key into the hash table is *far* too wide.
On Fri, Sep 26, 2014 at 7:40 AM, Amit Kapila
wrote:
> First of all thanks for committing part-1 of these changes and it
> seems you are planning to commit part-3 based on results of tests
> which Andres is planning to do and for remaining part (part-2), today
> I have tried some tests, the results o
On 2014-09-26 15:04:54 +0300, Heikki Linnakangas wrote:
> On 09/25/2014 05:40 PM, Andres Freund wrote:
> >There's two reasons for that: a) dynahash just isn't very good and it
> >does a lot of things that will never be necessary for these hashes. b)
> >the key into the hash table is *far* too wide.
On 09/25/2014 05:40 PM, Andres Freund wrote:
There's two reasons for that: a) dynahash just isn't very good and it
does a lot of things that will never be necessary for these hashes. b)
the key into the hash table is *far* too wide. A significant portion of
the time is spent comparing buffer/lock
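To make the "far too wide" point concrete, here is a self-contained sketch of the key being hashed and compared, with types mirroring 9.4-era buf_internals.h (abridged): a roughly 20-byte tag, all of which must be compared on every possible bucket match.

    /* Minimal sketch of the buffer-mapping hash key. */
    typedef unsigned int Oid;
    typedef unsigned int BlockNumber;
    typedef int ForkNumber;            /* an enum in the real headers */

    typedef struct RelFileNode
    {
        Oid spcNode;                   /* tablespace */
        Oid dbNode;                    /* database */
        Oid relNode;                   /* relation */
    } RelFileNode;

    typedef struct BufferTag
    {
        RelFileNode rnode;             /* physical relation identifier */
        ForkNumber  forkNum;           /* main/fsm/visibility-map fork */
        BlockNumber blockNum;          /* block number within the fork */
    } BufferTag;                       /* 5 x 4 bytes = 20-byte key */

    /* The full-tag equality check performed after a hash hit. */
    #define RelFileNodeEquals(a, b) \
        ((a).relNode == (b).relNode && \
         (a).dbNode == (b).dbNode && \
         (a).spcNode == (b).spcNode)

    #define BUFFERTAGS_EQUAL(a, b) \
        (RelFileNodeEquals((a).rnode, (b).rnode) && \
         (a).blockNum == (b).blockNum && \
         (a).forkNum == (b).forkNum)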
On Tue, Sep 23, 2014 at 10:31 AM, Robert Haas
wrote:
>
> But this gets at another point: the way we're benchmarking this right
> now, we're really conflating the effects of three different things:
>
> 1. Changing the locking regimen around the freelist and clocksweep.
> 2. Adding a bgreclaimer process.
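For item 1, the reduce-replacement-locking patch discussed here replaces the single BufFreelistLock with a spinlock held only while the clock hand or freelist head is actually manipulated. A minimal sketch, assuming PostgreSQL's spinlock primitives (storage/spin.h); names loosely follow freelist.c and are not the committed code:

    typedef struct
    {
        slock_t buffer_strategy_lock;  /* protects the two fields below */
        int     nextVictimBuffer;      /* clock-sweep hand */
        int     firstFreeBuffer;       /* freelist head, -1 when empty */
    } BufferStrategyControl;

    /* Advance the clock hand by one under the spinlock and return the
     * buffer to inspect; usage-count checks happen outside the lock. */
    static int
    ClockSweepTick(BufferStrategyControl *sc, int nbuffers)
    {
        int victim;

        SpinLockAcquire(&sc->buffer_strategy_lock);
        victim = sc->nextVictimBuffer;
        if (++sc->nextVictimBuffer >= nbuffers)
            sc->nextVictimBuffer = 0;
        SpinLockRelease(&sc->buffer_strategy_lock);

        return victim;
    }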
On 2014-09-25 09:34:57 -0500, Merlin Moncure wrote:
> On Thu, Sep 25, 2014 at 9:14 AM, Andres Freund wrote:
> >> Why stop at 128 mapping locks? Theoretical downsides to having more
> >> mapping locks have been mentioned a few times but has this ever been
> >> measured? I'm starting to wonder if
On Thu, Sep 25, 2014 at 10:24 AM, Andres Freund wrote:
> On 2014-09-25 10:22:47 -0400, Robert Haas wrote:
>> On Thu, Sep 25, 2014 at 10:14 AM, Andres Freund
>> wrote:
>> > That leads me to wonder: Have you measured different, lower, number of
>> > buffer mapping locks? 128 locks is, if we'd, as we should, align them properly, 8KB of memory.
On Thu, Sep 25, 2014 at 9:14 AM, Andres Freund wrote:
>> Why stop at 128 mapping locks? Theoretical downsides to having more
>> mapping locks have been mentioned a few times but has this ever been
>> measured? I'm starting to wonder if the # mapping locks should be
>> dependent on some other va
On 2014-09-25 10:09:30 -0400, Robert Haas wrote:
> I think the long-term solution here is that we need a lock-free hash
> table implementation for our buffer mapping tables, because I'm pretty
> sure that just cranking the number of locks up and up is going to
> start to have unpleasant side effects.
On 2014-09-25 10:22:47 -0400, Robert Haas wrote:
> On Thu, Sep 25, 2014 at 10:14 AM, Andres Freund
> wrote:
> > That leads me to wonder: Have you measured different, lower, number of
> > buffer mapping locks? 128 locks is, if we'd, as we should, align them
> > properly, 8KB of memory. Common L1 cache sizes are around 32k...
On Thu, Sep 25, 2014 at 10:14 AM, Andres Freund wrote:
> That leads me to wonder: Have you measured different, lower, number of
> buffer mapping locks? 128 locks is, if we'd, as we should, align them
> properly, 8KB of memory. Common L1 cache sizes are around 32k...
Amit has some results upthread s
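The 8KB figure assumes each lock is padded to its own cache line, as PostgreSQL does with LWLockPadded. A sketch of the arithmetic, with a stub LWLock and illustrative constants:

    #define CACHE_LINE_SIZE       64
    #define NUM_BUFFER_PARTITIONS 128

    typedef struct LWLock { int state; /* abridged stub */ } LWLock;

    typedef union LWLockPadded
    {
        LWLock lock;
        char   pad[CACHE_LINE_SIZE];   /* one lock per cache line */
    } LWLockPadded;

    /* 128 locks x 64 bytes = 8KB -- a quarter of a 32KB L1 data cache. */
    static LWLockPadded BufMappingPartitionLocks[NUM_BUFFER_PARTITIONS];

    /* A buffer tag's hashcode selects its partition lock. */
    static inline LWLock *
    BufMappingPartitionLock(unsigned int hashcode)
    {
        return &BufMappingPartitionLocks[hashcode % NUM_BUFFER_PARTITIONS].lock;
    }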
On 2014-09-25 09:02:25 -0500, Merlin Moncure wrote:
> On Thu, Sep 25, 2014 at 8:51 AM, Robert Haas wrote:
> > 1. To see the effect of reduce-replacement-locking.patch, compare the
> > first TPS number in each line to the third, or the second to the
> > fourth. At scale factor 1000, the patch wins in all of the cases with 32 or more clients.
On 2014-09-25 09:51:17 -0400, Robert Haas wrote:
> On Tue, Sep 23, 2014 at 5:50 PM, Robert Haas wrote:
> > The patch I attached the first time was just the last commit in the
> > git repository where I wrote the patch, rather than the changes that I
> > made on top of that commit. So, yes, the results from the previous message are with the patch attached.
On Thu, Sep 25, 2014 at 10:02 AM, Merlin Moncure wrote:
> On Thu, Sep 25, 2014 at 8:51 AM, Robert Haas wrote:
>> 1. To see the effect of reduce-replacement-locking.patch, compare the
>> first TPS number in each line to the third, or the second to the
>> fourth. At scale factor 1000, the patch wins in all of the cases with 32 or more clients.
On Thu, Sep 25, 2014 at 8:51 AM, Robert Haas wrote:
> 1. To see the effect of reduce-replacement-locking.patch, compare the
> first TPS number in each line to the third, or the second to the
> fourth. At scale factor 1000, the patch wins in all of the cases with
> 32 or more clients and exactly h
On Tue, Sep 23, 2014 at 5:50 PM, Robert Haas wrote:
> The patch I attached the first time was just the last commit in the
> git repository where I wrote the patch, rather than the changes that I
> made on top of that commit. So, yes, the results from the previous
> message are with the patch attached.
On 9/23/14, 7:13 PM, Robert Haas wrote:
I think we expose far too little information in our system views. Just
to take one example, we expose no useful information about lwlock
acquire or release, but a lot of real-world performance problems are
caused by lwlock contention.
I sent over a proposal
On Tue, Sep 23, 2014 at 7:42 PM, Andres Freund wrote:
>> It will actually be far worse than that, because we'll acquire and
>> release the spinlock for every buffer over which we advance the clock
>> sweep, instead of just once for the whole thing.
>
> I said double, because we already acquire the
On 2014-09-23 19:21:10 -0400, Robert Haas wrote:
> On Tue, Sep 23, 2014 at 6:54 PM, Andres Freund wrote:
> > I think it might be possible to construct some cases where the spinlock
> > performs worse than the lwlock. But I think those will be clearly in the
> > minority. And at least some of those
On Tue, Sep 23, 2014 at 6:54 PM, Andres Freund wrote:
> Am I understanding you correctly that you also measured context switches
> for spinlocks? If so, I don't think that's a valid comparison. LWLocks
> explicitly yield the CPU as soon as there's any contention while
> spinlocks will, well, spin.
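A sketch of why the two counts aren't comparable, using C11 atomics as a stand-in for PostgreSQL's s_lock.c:

    #include <stdatomic.h>

    typedef struct { atomic_flag flag; } SpinLock;

    /* A contended spinlock waiter stays on its CPU and burns cycles, so
     * its contention shows up as CPU time, not context switches. */
    static void
    spin_acquire(SpinLock *lock)
    {
        while (atomic_flag_test_and_set(&lock->flag))
            ;   /* spin; real implementations add pause/backoff */
    }

    static void
    spin_release(SpinLock *lock)
    {
        atomic_flag_clear(&lock->flag);
    }

    /* A contended LWLock waiter, by contrast, enqueues itself and sleeps
     * until the holder wakes it, so each contended acquisition can cost
     * a context switch -- exactly the yield-vs-spin asymmetry above. */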
On Tue, Sep 23, 2014 at 6:02 PM, Gregory Smith wrote:
> On 9/23/14, 10:31 AM, Robert Haas wrote:
>> I suggest we count these things:
>>
>> 1. The number of buffers the reclaimer has put back on the free list.
>> 2. The number of times a backend has run the clocksweep.
>> 3. The number of buffers past which the reclaimer has advanced the clock sweep.
On 2014-09-23 16:29:16 -0400, Robert Haas wrote:
> On Tue, Sep 23, 2014 at 10:55 AM, Robert Haas wrote:
> > But this gets at another point: the way we're benchmarking this right
> > now, we're really conflating the effects of three different things:
> >
> > 1. Changing the locking regimen around the freelist and clocksweep.
On 9/23/14, 10:31 AM, Robert Haas wrote:
I suggest we count these things:
1. The number of buffers the reclaimer has put back on the free list.
2. The number of times a backend has run the clocksweep.
3. The number of buffers past which the reclaimer has advanced the
clock sweep (i.e. the numbe
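A sketch of how the three proposed counters might sit in shared memory; the names and placement are hypothetical, and updates are assumed to happen under the existing freelist lock so plain integers suffice:

    #include <stdint.h>

    typedef struct ReclaimerStats
    {
        uint64_t buffers_freelisted;     /* (1) buffers the reclaimer put
                                          *     back on the free list */
        uint64_t backend_clocksweeps;    /* (2) clock sweeps run by
                                          *     backends themselves */
        uint64_t reclaimer_clock_ticks;  /* (3) buffers the reclaimer's
                                          *     clock hand advanced past */
    } ReclaimerStats;

These would then be exposed through a system view or pg_stat-style function, per the visibility complaint quoted elsewhere in the thread.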
On Tue, Sep 23, 2014 at 5:43 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Tue, Sep 23, 2014 at 4:29 PM, Robert Haas wrote:
>>> I did some more experimentation on this. Attached is a patch that
>>> JUST does #1, and, ...
>
>> ...and that was the wrong patch. Thanks to Heikki for pointing that out.
Robert Haas writes:
> On Tue, Sep 23, 2014 at 4:29 PM, Robert Haas wrote:
>> I did some more experimentation on this. Attached is a patch that
>> JUST does #1, and, ...
> ...and that was the wrong patch. Thanks to Heikki for pointing that out.
> Second try.
But the results you gave in the previous message
On Tue, Sep 23, 2014 at 4:29 PM, Robert Haas wrote:
> I did some more experimentation on this. Attached is a patch that
> JUST does #1, and, ...
...and that was the wrong patch. Thanks to Heikki for pointing that out.
Second try.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Tue, Sep 23, 2014 at 10:55 AM, Robert Haas wrote:
> But this gets at another point: the way we're benchmarking this right
> now, we're really conflating the effects of three different things:
>
> 1. Changing the locking regimen around the freelist and clocksweep.
> 2. Adding a bgreclaimer process.
On Tue, Sep 23, 2014 at 10:31 AM, Robert Haas wrote:
> [ review ]
Oh, by the way, I noticed that this patch breaks pg_buffercache. If
we're going to have 128 lock partitions, we need to bump
MAX_SIMUL_LWLOCKS.
But this gets at another point: the way we're benchmarking this right
now, we're really conflating the effects of three different things:
Hi,
On 2014-09-23 10:31:24 -0400, Robert Haas wrote:
> I suggest we count these things:
>
> 1. The number of buffers the reclaimer has put back on the free list.
> 2. The number of times a backend has run the clocksweep.
> 3. The number of buffers past which the reclaimer has advanced the clock sweep.
>
On Fri, Sep 19, 2014 at 7:21 AM, Amit Kapila
wrote:
> Specific numbers of both the configurations for which I have
> posted data in previous mail are as follows:
>
> Scale Factor - 800
> Shared_Buffers - 12286MB (Total db size is 12288MB)
> Client and Thread Count = 64
> buffers_touched_freelist
On Mon, Sep 22, 2014 at 10:43 AM, Gregory Smith
wrote:
> On 9/16/14, 8:18 AM, Amit Kapila wrote:
>
>> I think the main reason for slight difference is that
>> when the size of shared buffers is almost same as data size, the number
>> of buffers it needs from clock sweep are very less, as an example in first case (when size of shared buffers is 12286MB).
On 9/16/14, 8:18 AM, Amit Kapila wrote:
I think the main reason for slight difference is that
when the size of shared buffers is almost same as data size, the number
of buffers it needs from clock sweep are very less, as an example in first
case (when size of shared buffers is 12286MB), it actual
On Tue, Sep 16, 2014 at 10:21 PM, Robert Haas wrote:
> On Tue, Sep 16, 2014 at 8:18 AM, Amit Kapila
> wrote:
>
>> In most cases performance with patch is slightly less as compared
>> to HEAD and the difference is generally less than 1% and in a case
>> or 2 close to 2%. I think the main reason for slight difference is that when the size of shared buffers is almost same as data size, the number of buffers it needs from clock sweep are very less.
On Tue, Sep 16, 2014 at 8:18 AM, Amit Kapila
wrote:
> In most cases performance with patch is slightly less as compared
> to HEAD and the difference is generally less than 1% and in a case
> or 2 close to 2%. I think the main reason for slight difference is that
> when the size of shared buffers is almost same as data size, the number of buffers it needs from clock sweep are very less.
On Sun, Sep 14, 2014 at 12:23 PM, Amit Kapila
wrote:
> On Fri, Sep 12, 2014 at 11:55 AM, Amit Kapila
> wrote:
> > On Thu, Sep 11, 2014 at 4:31 PM, Andres Freund
> wrote:
> > > On 2014-09-10 12:17:34 +0530, Amit Kapila wrote:
>
> I will post the data with the latest patch separately (where I wil
On 14/09/14 19:00, Amit Kapila wrote:
On Fri, Sep 12, 2014 at 11:09 PM, Gregory Smith wrote:
> This looks like it's squashed one of the very fundamental buffer
> scaling issues though; well done Amit.
Thanks.
> I'll go back to my notes and try to recreate
On Fri, Sep 12, 2014 at 11:09 PM, Gregory Smith
wrote:
> This looks like it's squashed one of the very fundamental buffer
> scaling issues though; well done Amit.
Thanks.
> I'll go back to my notes and try to recreate the pathological cases
> that plagued both the 8.3 BGW rewrite and the abort
On Fri, Sep 12, 2014 at 11:55 AM, Amit Kapila
wrote:
> On Thu, Sep 11, 2014 at 4:31 PM, Andres Freund
wrote:
> > On 2014-09-10 12:17:34 +0530, Amit Kapila wrote:
> > > +++ b/src/backend/postmaster/bgreclaimer.c
> >
> > A fair number of comments in that file refer to bgwriter...
>
> Will fix.
Fix
On 9/11/14, 7:01 AM, Andres Freund wrote:
I'm not convinced that these changes can be made without also changing
the bgwriter logic. Have you measured whether there are differences in
how effective the bgwriter is? Not that it's very effective right now :)
The current background writer tuning
On 2014-09-12 12:38:48 +0300, Ants Aasma wrote:
> On Thu, Sep 11, 2014 at 4:22 PM, Andres Freund wrote:
> >> > Hm. Perhaps we should do a bufHdr->refcount != zero check without
> >> > locking here? The atomic op will transfer the cacheline exclusively to
> >> > the reclaimer's CPU. Even though it
On Thu, Sep 11, 2014 at 4:22 PM, Andres Freund wrote:
>> > Hm. Perhaps we should do a bufHdr->refcount != zero check without
>> > locking here? The atomic op will transfer the cacheline exclusively to
>> > the reclaimer's CPU. Even though it very shortly afterwards will be
>> > touched afterwards
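A sketch of the idea being discussed: read refcount without the buffer-header spinlock so pinned buffers are skipped without pulling the cache line over in exclusive mode. Field names follow the 9.4-era BufferDesc; the freelist helper is hypothetical:

    static void
    maybe_reclaim(volatile BufferDesc *bufHdr)
    {
        /* Unlocked, possibly stale read: cheap rejection of pinned
         * buffers. A wrong answer is harmless because the locked
         * re-check below stays authoritative. */
        if (bufHdr->refcount != 0)
            return;

        LockBufHdr(bufHdr);
        if (bufHdr->refcount == 0 && bufHdr->usage_count == 0)
            push_to_freelist(bufHdr);   /* hypothetical helper */
        UnlockBufHdr(bufHdr);
    }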
On Thu, Sep 11, 2014 at 4:31 PM, Andres Freund
wrote:
> On 2014-09-10 12:17:34 +0530, Amit Kapila wrote:
> > +++ b/src/backend/postmaster/bgreclaimer.c
>
> A fair number of comments in that file refer to bgwriter...
Will fix.
> > @@ -0,0 +1,302 @@
> >
> > +/*-------------------------------------------------------------------------
On Thu, Sep 11, 2014 at 10:03 AM, Andres Freund wrote:
> On 2014-09-11 09:48:10 -0400, Robert Haas wrote:
>> On Thu, Sep 11, 2014 at 9:22 AM, Andres Freund
>> wrote:
>> > I wonder if we should recheck the number of freelist items before
> >> > sleeping. As the latch currently is reset before sleeping (IIRC) we might miss being woken up soon.
On 2014-09-11 09:48:10 -0400, Robert Haas wrote:
> On Thu, Sep 11, 2014 at 9:22 AM, Andres Freund wrote:
> > I wonder if we should recheck the number of freelist items before
> > sleeping. As the latch currently is reset before sleeping (IIRC) we
> > might miss being woken up soon. It very well mi
On Thu, Sep 11, 2014 at 9:22 AM, Andres Freund wrote:
>> It's exactly the same as what bgwriter.c does.
>
> So what? There's no code in common, so I see no reason to have one
> signal handler using underscores and the next one camelcase names.
/me shrugs.
It's not always possible to have things
On Thu, Sep 11, 2014 at 6:59 PM, Andres Freund
wrote:
>
> > > We really need a more centralized way to handle error cleanup in
> > > auxiliary processes. The current state of affairs is really pretty
> > > helter-skelter. But for this patch, I think we should aim to mimic
> > > the existing style, as ugly as it is.
> > We really need a more centralized way to handle error cleanup in
> > auxiliary processes. The current state of affairs is really pretty
> > helter-skelter. But for this patch, I think we should aim to mimic
> > the existing style, as ugly as it is. I'm not sure whether Amit's got
> > the log
On 2014-09-11 09:02:34 -0400, Robert Haas wrote:
> Thanks for reviewing, Andres.
>
> On Thu, Sep 11, 2014 at 7:01 AM, Andres Freund wrote:
> >> +static void bgreclaim_quickdie(SIGNAL_ARGS);
> >> +static void BgreclaimSigHupHandler(SIGNAL_ARGS);
> >> +static void ReqShutdownHandler(SIGNAL_ARGS);
>
On Thu, Sep 11, 2014 at 6:32 PM, Robert Haas wrote:
>
> Thanks for reviewing, Andres.
>
> On Thu, Sep 11, 2014 at 7:01 AM, Andres Freund
wrote:
> >> +static void bgreclaim_quickdie(SIGNAL_ARGS);
> >> +static void BgreclaimSigHupHandler(SIGNAL_ARGS);
> >> +static void ReqShutdownHandler(SIGNAL_ARGS);
Thanks for reviewing, Andres.
On Thu, Sep 11, 2014 at 7:01 AM, Andres Freund wrote:
>> +static void bgreclaim_quickdie(SIGNAL_ARGS);
>> +static void BgreclaimSigHupHandler(SIGNAL_ARGS);
>> +static void ReqShutdownHandler(SIGNAL_ARGS);
>> +static void bgreclaim_sigusr1_handler(SIGNAL_ARGS);
>
> Th
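For reference, the conventional shape these handlers follow in other auxiliary processes (a sketch modeled on 9.4-era bgwriter.c, not Amit's actual patch): set a flag, poke the process latch, preserve errno.

    static volatile sig_atomic_t got_SIGHUP = false;

    static void
    BgreclaimSigHupHandler(SIGNAL_ARGS)
    {
        int save_errno = errno;

        got_SIGHUP = true;                 /* main loop notices the flag */
        if (MyProc)
            SetLatch(&MyProc->procLatch);  /* wake the main loop */

        errno = save_errno;                /* handlers must not clobber errno */
    }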
Hi,
On 2014-09-10 12:17:34 +0530, Amit Kapila wrote:
> include $(top_srcdir)/src/backend/common.mk
> diff --git a/src/backend/postmaster/bgreclaimer.c
> b/src/backend/postmaster/bgreclaimer.c
> new file mode 100644
> index 0000000..3df2337
> --- /dev/null
> +++ b/src/backend/postmaster/bgreclaimer.c
On 10/09/14 18:54, Amit Kapila wrote:
On Wed, Sep 10, 2014 at 5:46 AM, Mark Kirkwood
mailto:mark.kirkw...@catalyst.net.nz>>
wrote:
>
> In terms of the effect of the patch - looks pretty similar to the
scale 2000 results for read-write, but read-only is a different and more
interesting story - u
On Wed, Sep 10, 2014 at 5:46 AM, Mark Kirkwood wrote:
>
> On 05/09/14 23:50, Amit Kapila wrote:
>>
>> On Fri, Sep 5, 2014 at 8:42 AM, Mark Kirkwood wrote:
>> > FWIW below are some test results on the 60 core beast with this patch
>> applied to 9.4. I'll need to do more ru
On Tue, Sep 9, 2014 at 12:16 AM, Robert Haas wrote:
> On Fri, Sep 5, 2014 at 9:19 AM, Amit Kapila
wrote:
> > On Fri, Sep 5, 2014 at 5:17 PM, Amit Kapila
wrote:
> >> Apart from above, I think for this patch, cat version bump is required
> >> as I have modified system catalog. However I have not done the same in patch as otherwise it will be bit difficult.
On Tue, Sep 9, 2014 at 3:46 AM, Robert Haas wrote:
> On Fri, Sep 5, 2014 at 9:19 AM, Amit Kapila wrote:
>> One regression failed on linux due to spacing issue which is
>> fixed in attached patch.
I just read the latest patch by curiosity, wouldn't it make more sense
to split the patch into two di
On 05/09/14 23:50, Amit Kapila wrote:
On Fri, Sep 5, 2014 at 8:42 AM, Mark Kirkwood wrote:
>
> On 04/09/14 14:42, Amit Kapila wrote:
>>
>> On Thu, Sep 4, 2014 at 8:00 AM, Mark Kirkwood wrote:
>>>
>>>
>>>
>>>
On Tue, Sep 9, 2014 at 3:11 AM, Thom Brown wrote:
> On 5 September 2014 14:19, Amit Kapila wrote:
>> On Fri, Sep 5, 2014 at 5:17 PM, Amit Kapila
wrote:
>> >
>> > Apart from above, I think for this patch, cat version bump is required
>> > as I have modified system catalog. However I have not done the same in patch as otherwise it will be bit difficult.
On Mon, Sep 8, 2014 at 10:12 PM, Merlin Moncure wrote:
>
> On Fri, Sep 5, 2014 at 6:47 AM, Amit Kapila
wrote:
> > Client Count/Patch_Ver (tps)      8      16      32      64     128
> > HEAD                          58614  107370  140717  104357   65010
> > Patch                         60092  113564  165014  213848  216065
> >
> > This data is median of 3 runs, detailed report is attached with mail.
On 5 September 2014 14:19, Amit Kapila wrote:
> On Fri, Sep 5, 2014 at 5:17 PM, Amit Kapila
> wrote:
> >
> > Apart from above, I think for this patch, cat version bump is required
> > as I have modified system catalog. However I have not done the
> > same in patch as otherwise it will be bit difficult.
On Fri, Sep 5, 2014 at 9:19 AM, Amit Kapila wrote:
> On Fri, Sep 5, 2014 at 5:17 PM, Amit Kapila wrote:
>> Apart from above, I think for this patch, cat version bump is required
>> as I have modified system catalog. However I have not done the
>> same in patch as otherwise it will be bit difficult.
On Fri, Sep 5, 2014 at 6:47 AM, Amit Kapila wrote:
> Client Count/Patch_Ver (tps)      8      16      32      64     128
> HEAD                          58614  107370  140717  104357   65010
> Patch                         60092  113564  165014  213848  216065
>
> This data is median of 3 runs, detailed report is attached with mail.
> I have not repeated the test for all configurations.
On Fri, Sep 5, 2014 at 8:42 AM, Mark Kirkwood
wrote:
>
> On 04/09/14 14:42, Amit Kapila wrote:
>>
>> On Thu, Sep 4, 2014 at 8:00 AM, Mark Kirkwood wrote:
>>>
>>>
>>>
>>> Hi Amit,
>>>
>>> Results look pretty good. Does it help in the read-write case too?
>>
>>
>>
On Wed, Sep 3, 2014 at 1:45 AM, Robert Haas wrote:
>
> On Thu, Aug 28, 2014 at 7:11 AM, Amit Kapila
wrote:
> > I have updated the patch to address the feedback. Main changes are:
> >
> > 1. For populating freelist, have a separate process (bgreclaimer)
> > instead of doing it by bgwriter.
> > 2.
On 04/09/14 14:42, Amit Kapila wrote:
On Thu, Sep 4, 2014 at 8:00 AM, Mark Kirkwood
wrote:
Hi Amit,
Results look pretty good. Does it help in the read-write case too?
Last time I ran the tpc-b test of pgbench (results of which are
posted earlier in this thread), there doesn't seem to be any major gain for that.
Robert Haas wrote:
> On Wed, Sep 3, 2014 at 7:27 AM, Amit Kapila wrote:
> >> +Background Reclaimer's Processing
> >> +---------------------------------
> >>
> >> I suggest titling this section "Background Reclaim".
> >
> > I don't mind changing it, but currently used title is based on similar
> >
Robert Haas wrote:
> On Thu, Sep 4, 2014 at 7:25 AM, Amit Kapila wrote:
>> It's not difficult to handle such cases, but it can have downside also
>> for the cases where demand from backends is not high.
>> Consider in above case if instead of 500 more allocations, it just
>> does 5 more allocations
On Thu, Sep 4, 2014 at 7:25 AM, Amit Kapila wrote:
> It's not difficult to handle such cases, but it can have downside also
> for the cases where demand from backends is not high.
> Consider in above case if instead of 500 more allocations, it just
> does 5 more allocations, then bgreclaimer will a
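A sketch of the trade-off under discussion: with low/high watermarks, a burst of 5 allocations that stays above the low watermark triggers no work at all, while sustained demand refills up to the high watermark. Values and helpers are illustrative, not the patch's actual logic:

    #define FREELIST_LOW_WATERMARK   100
    #define FREELIST_HIGH_WATERMARK  1000

    static void
    ReclaimIfNeeded(void)
    {
        int nfree = freelist_length();          /* hypothetical */

        if (nfree >= FREELIST_LOW_WATERMARK)
            return;                 /* small dips cost nothing */

        /* Demand is real: refill up to the high watermark so backends
         * don't immediately drain the list again. */
        while (nfree < FREELIST_HIGH_WATERMARK)
        {
            if (!reclaim_one_buffer())          /* hypothetical sweep step */
                break;
            nfree++;
        }
    }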
On Wed, Sep 3, 2014 at 9:45 AM, Amit Kapila wrote:
>
> > Performance Data:
> > ---
> >
> > Configuration and Db Details
> > IBM POWER-7 16 cores, 64 hardware threads
> > RAM = 64GB
> > Database Locale =C
> > checkpoint_segments=256
> > checkpoint_timeout=15min
> >
On Wed, Sep 3, 2014 at 8:03 PM, Robert Haas wrote:
> On Wed, Sep 3, 2014 at 7:27 AM, Amit Kapila
wrote:
>
> >> +while (tmp_num_to_free > 0)
> >>
> >> I am not sure it's a good idea for this value to be fixed at loop
> >> start and then just decremented.
> >
> > It is based on the idea what bg
On Thu, Sep 4, 2014 at 8:00 AM, Mark Kirkwood
wrote:
>
>
> Hi Amit,
>
> Results look pretty good. Does it help in the read-write case too?
Last time I ran the tpc-b test of pgbench (results of which are
posted earlier in this thread), there doesn't seem to be any major
gain for that, however for
On 03/09/14 16:22, Amit Kapila wrote:
On Wed, Sep 3, 2014 at 9:45 AM, Amit Kapila wrote:
On Thu, Aug 28, 2014 at 4:41 PM, Amit Kapila
wrote:
I have yet to collect data under varying loads, however I have
collected performance data for 8GB shared buffers which shows
reasonably good performance