On Mon, Apr 25, 2016 at 6:04 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> On Sun, Apr 17, 2016 at 7:32 PM, Amit Kapila
> wrote:
>
>> On Thu, Apr 14, 2016 at 8:05 AM, Andres Freund
>> wrote:
>> >
>> > On 2016-04-14 07:59:07 +0530, Amit Kapila wrote:
>> > > What do you want to see by
On Sun, Apr 17, 2016 at 7:32 PM, Amit Kapila
wrote:
> On Thu, Apr 14, 2016 at 8:05 AM, Andres Freund wrote:
> >
> > On 2016-04-14 07:59:07 +0530, Amit Kapila wrote:
> > > What do you want to see by prewarming?
> >
> > Prewarming appears to greatly reduce the per-run variance on that
> > machine, ma
On Fri, Apr 15, 2016 at 1:59 AM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> On Thu, Apr 14, 2016 at 5:35 AM, Andres Freund wrote:
>
>> On 2016-04-14 07:59:07 +0530, Amit Kapila wrote:
>> > What do you want to see by prewarming?
>>
>> Prewarming appears to greatly reduce the per-run var
On Thu, Apr 14, 2016 at 8:05 AM, Andres Freund wrote:
>
> On 2016-04-14 07:59:07 +0530, Amit Kapila wrote:
> > What do you want to see by prewarming?
>
> Prewarming appears to greatly reduce the per-run variance on that
> machine, making it a lot easier to get meaningful results.
>
I think you are r
On Thu, Apr 14, 2016 at 5:35 AM, Andres Freund wrote:
> On 2016-04-14 07:59:07 +0530, Amit Kapila wrote:
> > What do you want to see by prewarming?
>
> Prewarming appears to greatly reduce the per-run variance on that
> machine, making it a lot easier to get meaningful results. Thus it'd
> make it
On 2016-04-14 07:59:07 +0530, Amit Kapila wrote:
> What do you want to see by prewarming?
Prewarming appears to greatly reduce the per-run variance on that
machine, making it a lot easier to get meaningful results. Thus it'd
make it easier to compare pre/post padding numbers.
> Will it have safe ef
On Tue, Apr 12, 2016 at 9:32 PM, Andres Freund wrote:
>
> On 2016-04-12 19:42:11 +0530, Amit Kapila wrote:
> > Andres suggested to me on IM to take performance data on an x86 machine
> > by padding PGXACT, and the data for the same is below:
> >
> > median of 3, 5-min runs
>
> Thanks for running these.
>
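For readers skimming the thread: padding PGXACT means rounding each array
entry up to a full cache line so that one backend's updates cannot invalidate
its neighbours' lines. A minimal sketch of the idea, with an illustrative
field layout and a hypothetical PGXACTPadded wrapper (not the actual patch):

#include <stdbool.h>
#include <stdint.h>

#define PG_CACHE_LINE_SIZE 64   /* typical x86 line size; an assumption here */

/* Stand-in for PGXACT; names follow PostgreSQL 9.6, layout is illustrative. */
typedef struct PGXACT
{
    uint32_t xid;
    uint32_t xmin;
    uint8_t  vacuumFlags;
    bool     overflowed;
    bool     delayChkpt;
    uint8_t  nxids;
} PGXACT;

/* Hypothetical padded wrapper: one entry per cache line, no false sharing. */
typedef union PGXACTPadded
{
    PGXACT pgxact;
    char   pad[PG_CACHE_LINE_SIZE];
} PGXACTPadded;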
On Tue, Apr 12, 2016 at 5:12 PM, Amit Kapila
wrote:
> On Tue, Apr 12, 2016 at 3:48 PM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
>> On Tue, Apr 12, 2016 at 12:40 AM, Andres Freund
>> wrote:
>>
>>> I did get access to the machine (thanks!). My testing shows that
>>> performance i
On 2016-04-12 19:42:11 +0530, Amit Kapila wrote:
> Yes, it generally seems like a good idea, but I am not sure it is a complete
> fix for the variation in performance we are seeing when we change shared
> memory structures.
I didn't suspect it would be; the question was more whether it'd be
beneficial performance-wise.
On Tue, Apr 12, 2016 at 3:48 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> On Tue, Apr 12, 2016 at 12:40 AM, Andres Freund
> wrote:
>
>> I did get access to the machine (thanks!). My testing shows that
>> performance is sensitive to various parameters influencing memory
>> allocati
On Tue, Apr 12, 2016 at 12:40 AM, Andres Freund wrote:
> I did get access to the machine (thanks!). My testing shows that
> performance is sensitive to various parameters influencing memory
> allocation. E.g. twiddling with max_connections changes
> performance. With max_connections=400 and the p
On Mon, Apr 11, 2016 at 7:33 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> On Sun, Apr 10, 2016 at 2:24 PM, Amit Kapila
> wrote:
>>
>>> I also tried to run perf top during pgbench and got some interesting
>>> results.
>>>
>>> Without 5364b357:
>>>    5,69% postgres
On 2016-04-11 14:40:29 -0700, Andres Freund wrote:
> On 2016-04-11 12:17:20 -0700, Andres Freund wrote:
> I did get access to the machine (thanks!). My testing shows that
> performance is sensitive to various parameters influencing memory
> allocation. E.g. twiddling with max_connections changes
>
On 2016-04-11 14:40:29 -0700, Andres Freund wrote:
> On 2016-04-11 12:17:20 -0700, Andres Freund wrote:
> > On 2016-04-11 22:08:15 +0300, Alexander Korotkov wrote:
> > > On Mon, Apr 11, 2016 at 5:04 PM, Alexander Korotkov <
> > > a.korot...@postgrespro.ru> wrote:
> > >
> > > > On Mon, Apr 11, 2016
On 2016-04-11 12:17:20 -0700, Andres Freund wrote:
> On 2016-04-11 22:08:15 +0300, Alexander Korotkov wrote:
> > On Mon, Apr 11, 2016 at 5:04 PM, Alexander Korotkov <
> > a.korot...@postgrespro.ru> wrote:
> >
> > > On Mon, Apr 11, 2016 at 8:10 AM, Andres Freund wrote:
> > >
> > >> Could you retry
On 2016-04-11 22:08:15 +0300, Alexander Korotkov wrote:
> On Mon, Apr 11, 2016 at 5:04 PM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
> > On Mon, Apr 11, 2016 at 8:10 AM, Andres Freund wrote:
> >
> >> Could you retry after applying the attached series of patches?
> >>
> >
> > Yes,
On Mon, Apr 11, 2016 at 5:04 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> On Mon, Apr 11, 2016 at 8:10 AM, Andres Freund wrote:
>
>> Could you retry after applying the attached series of patches?
>>
>
> Yes, I will try with these patches and snapshot too old reverted.
>
I've run
On Mon, Apr 11, 2016 at 8:10 AM, Andres Freund wrote:
> On 2016-04-10 09:03:37 +0300, Alexander Korotkov wrote:
> > On Sun, Apr 10, 2016 at 8:36 AM, Alexander Korotkov <
> > a.korot...@postgrespro.ru> wrote:
> >
> > > On Sat, Apr 9, 2016 at 10:49 PM, Andres Freund
> wrote:
> > >
> > >>
> > >>
>
On Sun, Apr 10, 2016 at 2:24 PM, Amit Kapila
wrote:
> On Sun, Apr 10, 2016 at 11:33 AM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
>> On Sun, Apr 10, 2016 at 8:36 AM, Alexander Korotkov <
>> a.korot...@postgrespro.ru> wrote:
>>
>>> On Sat, Apr 9, 2016 at 10:49 PM, Andres Freund
>
On 2016-04-10 09:03:37 +0300, Alexander Korotkov wrote:
> On Sun, Apr 10, 2016 at 8:36 AM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
> > On Sat, Apr 9, 2016 at 10:49 PM, Andres Freund wrote:
> >
> >>
> >>
> >> On April 9, 2016 12:43:03 PM PDT, Andres Freund
> >> wrote:
> >> >On
On Sun, Apr 10, 2016 at 6:15 PM, Amit Kapila
wrote:
> On Sun, Apr 10, 2016 at 11:10 AM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
>> On Sun, Apr 10, 2016 at 7:26 AM, Amit Kapila
>> wrote:
>>
>>> On Sun, Apr 10, 2016 at 1:13 AM, Andres Freund
>>> wrote:
>>>
On 2016-04-09 22
On Sun, Apr 10, 2016 at 11:10 AM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> On Sun, Apr 10, 2016 at 7:26 AM, Amit Kapila
> wrote:
>
>> On Sun, Apr 10, 2016 at 1:13 AM, Andres Freund
>> wrote:
>>
>>> On 2016-04-09 22:38:31 +0300, Alexander Korotkov wrote:
>>> > There are results wi
On Sun, Apr 10, 2016 at 11:33 AM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> On Sun, Apr 10, 2016 at 8:36 AM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
>> On Sat, Apr 9, 2016 at 10:49 PM, Andres Freund
>> wrote:
>>
>>>
>>>
>>> On April 9, 2016 12:43:03 PM PDT, Andre
On Sun, Apr 10, 2016 at 8:36 AM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> On Sat, Apr 9, 2016 at 10:49 PM, Andres Freund wrote:
>
>>
>>
>> On April 9, 2016 12:43:03 PM PDT, Andres Freund
>> wrote:
>> >On 2016-04-09 22:38:31 +0300, Alexander Korotkov wrote:
>> >> There are results
On Sun, Apr 10, 2016 at 7:26 AM, Amit Kapila
wrote:
> On Sun, Apr 10, 2016 at 1:13 AM, Andres Freund wrote:
>
>> On 2016-04-09 22:38:31 +0300, Alexander Korotkov wrote:
>> > There are results with 5364b357 reverted.
>>
>>
> What exactly is this test?
> I assume it is a read-only -M prepa
On Sat, Apr 9, 2016 at 10:49 PM, Andres Freund wrote:
>
>
> On April 9, 2016 12:43:03 PM PDT, Andres Freund
> wrote:
> >On 2016-04-09 22:38:31 +0300, Alexander Korotkov wrote:
> >> There are results with 5364b357 reverted.
> >
> >Crazy that this has such a negative impact. Amit, can you reproduc
On Sun, Apr 10, 2016 at 1:13 AM, Andres Freund wrote:
> On 2016-04-09 22:38:31 +0300, Alexander Korotkov wrote:
> > There are results with 5364b357 reverted.
>
>
What exactly is this test?
I assume it is a read-only -M prepared pgbench run where the data fits
in shared buffers. However, if yo
On April 9, 2016 12:43:03 PM PDT, Andres Freund wrote:
>On 2016-04-09 22:38:31 +0300, Alexander Korotkov wrote:
>> There are results with 5364b357 reverted.
>
>Crazy that this has such a negative impact. Amit, can you reproduce
>that? Alexander, I guess for r/w workload 5364b357 is a benefit on
On 2016-04-09 22:38:31 +0300, Alexander Korotkov wrote:
> There are results with 5364b357 reverted.
Crazy that this has such a negative impact. Amit, can you reproduce
that? Alexander, I guess for r/w workload 5364b357 is a benefit on that
machine as well?
> It's much closer to what we had befor
On Sat, Apr 9, 2016 at 11:24 AM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> On Fri, Apr 8, 2016 at 10:19 PM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
>> On Fri, Apr 8, 2016 at 7:39 PM, Andres Freund wrote:
>>
>>> As you can see in
>>>
>>
>>> http://archives.postgre
On Fri, Apr 8, 2016 at 10:19 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> On Fri, Apr 8, 2016 at 7:39 PM, Andres Freund wrote:
>
>> As you can see in
>>
>
>> http://archives.postgresql.org/message-id/CA%2BTgmoaeRbN%3DZ4oWENLvgGLeHEvGZ_S_Z3KGrdScyKiSvNt3oA%40mail.gmail.com
>> I'm p
On Fri, Apr 8, 2016 at 7:39 PM, Andres Freund wrote:
> On 2016-04-07 16:50:44 +0300, Alexander Korotkov wrote:
> > On Thu, Apr 7, 2016 at 4:41 PM, Andres Freund
> wrote:
> >
> > > On 2016-03-31 20:21:02 +0300, Alexander Korotkov wrote:
> > > > ! BEGIN_BUFSTATE_CAS_LOOP(bufHdr);
> > > >
> > >
On 2016-04-07 16:50:44 +0300, Alexander Korotkov wrote:
> On Thu, Apr 7, 2016 at 4:41 PM, Andres Freund wrote:
>
> > On 2016-03-31 20:21:02 +0300, Alexander Korotkov wrote:
> > > ! BEGIN_BUFSTATE_CAS_LOOP(bufHdr);
> > >
> > > ! Assert(BUF_STATE_GET_REFCOUNT(state) > 0);
> > > ! wasDirt
On Thu, Apr 7, 2016 at 4:41 PM, Andres Freund wrote:
> On 2016-03-31 20:21:02 +0300, Alexander Korotkov wrote:
> > ! BEGIN_BUFSTATE_CAS_LOOP(bufHdr);
> >
> > ! Assert(BUF_STATE_GET_REFCOUNT(state) > 0);
> > ! wasDirty = (state & BM_DIRTY) ? true : false;
> > ! state |= BM_DIRTY |
On 2016-03-31 20:21:02 +0300, Alexander Korotkov wrote:
> ! BEGIN_BUFSTATE_CAS_LOOP(bufHdr);
>
> ! Assert(BUF_STATE_GET_REFCOUNT(state) > 0);
> ! wasDirty = (state & BM_DIRTY) ? true : false;
> ! state |= BM_DIRTY | BM_JUST_DIRTIED;
> ! if (state == oldstate)
> !
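Since the preview cuts the patch fragment off, here is a self-contained
sketch of how such a macro pair can work, using C11 atomics as a stand-in
for PostgreSQL's pg_atomic_* API. The flag values, the END_BUFSTATE_CAS_LOOP
expansion, and the early-break tail are assumptions for illustration, not
the patch's actual code:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define BM_LOCKED        (1U << 31)   /* bit positions are illustrative */
#define BM_DIRTY         (1U << 30)
#define BM_JUST_DIRTIED  (1U << 29)

typedef struct BufferDesc
{
    _Atomic uint32_t state;           /* stand-in for pg_atomic_uint32 */
} BufferDesc;

/*
 * Read the current state, let the caller compute a new value in "state"
 * (declared in the enclosing scope, as the quoted comment requires), then
 * try to install it with compare-and-swap, retrying on concurrent change.
 */
#define BEGIN_BUFSTATE_CAS_LOOP(bufHdr)                          \
    do {                                                         \
        uint32_t oldstate = atomic_load(&(bufHdr)->state);       \
        for (;;)                                                 \
        {                                                        \
            state = oldstate;

#define END_BUFSTATE_CAS_LOOP(bufHdr)                            \
            if (atomic_compare_exchange_weak(&(bufHdr)->state,   \
                                             &oldstate, state))  \
                break;                                           \
        }                                                        \
    } while (0)

/* Usage in the style of the quoted MarkBufferDirty fragment: */
static bool
mark_dirty(BufferDesc *bufHdr)
{
    uint32_t state;
    bool     wasDirty;

    BEGIN_BUFSTATE_CAS_LOOP(bufHdr);
    wasDirty = (state & BM_DIRTY) ? true : false;
    state |= BM_DIRTY | BM_JUST_DIRTIED;
    if (state == oldstate)
        break;                 /* no bits changed; skip the CAS */
    END_BUFSTATE_CAS_LOOP(bufHdr);

    return wasDirty;
}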
Hi,
On 2016-04-06 21:58:50 -0400, Robert Haas wrote:
> I spent a lot of time testing things on power2 today
Thanks for that!
> It's fairly mysterious to me why there is so much jitter in the
> results on this machine. By doing prewarming in a consistent fashion,
> we make sure that every disk ru
On Wed, Apr 6, 2016 at 10:04 AM, Dilip Kumar wrote:
> On Wed, Apr 6, 2016 at 3:22 PM, Andres Freund wrote:
>> Which scale did you initialize with? I'm trying to reproduce the
>> workload on hydra as precisely as possible...
>
> I tested with scale factor 300 and 8GB of shared buffers.
>
> My test script
On Wed, Apr 6, 2016 at 3:22 PM, Andres Freund wrote:
> Which scale did you initialize with? I'm trying to reproduce the
> workload on hydra as precisely as possible...
>
I tested with scale factor 300 and 8GB of shared buffers.
My test script is attached to the mail (perf_pgbench_ro.sh).
I have don
On 2016-04-06 11:52:28 +0200, Andres Freund wrote:
> Hi,
>
> On 2016-04-03 16:47:49 +0530, Dilip Kumar wrote:
>
> > Summary of the run:
> > -
> > 1. Throughout one run, if we observe TPS every 30 seconds it's stable
> > within that run.
> > 2. With Head, 64-client runs vary bet
Hi,
On 2016-04-03 16:47:49 +0530, Dilip Kumar wrote:
> Summary of the run:
> -
> 1. Throughout one run, if we observe TPS every 30 seconds it's stable
> within that run.
> 2. With Head, 64-client runs vary between ~250,000 and ~45. You can see
> the results below.
>
> run1: 434
On 2016-04-05 12:56:46 +0530, Dilip Kumar wrote:
> On Mon, Apr 4, 2016 at 2:28 PM, Andres Freund wrote:
>
> > Hm, interesting. I suspect that's because of the missing backoff in my
> > experimental patch. If you apply the attached patch on top of that
> > (requires infrastructure from pinunpin), h
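The "backoff" in question: a CAS or spinlock retry loop should progressively
back off under contention instead of hammering the shared cache line. A
generic sketch of the idea, with made-up thresholds (PostgreSQL's own
machinery, referenced above as part of pinunpin, also sleeps with increasing
delays):

#include <sched.h>

/* Illustrative spin-wait helper: spin politely for a while, then yield. */
typedef struct SpinDelay
{
    int spins;
} SpinDelay;

static inline void
spin_delay(SpinDelay *d)
{
    if (++d->spins < 1000)              /* threshold chosen arbitrarily */
    {
#if defined(__x86_64__) || defined(__i386__)
        __asm__ __volatile__(" rep; nop \n");   /* PAUSE: cheap busy-wait */
#endif
    }
    else
    {
        sched_yield();                  /* back off; let the holder progress */
        d->spins = 0;
    }
}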
On Tue, Apr 5, 2016 at 5:45 PM, Andres Freund wrote:
> On 2016-04-05 17:36:49 +0300, Alexander Korotkov wrote:
> > Could the reason be that we're increasing contention on the LWLock state
> > atomic variable by placing the queue spinlock there?
>
> Don't think so, it's the same cache-line either way.
>
On Tue, Apr 5, 2016 at 1:04 PM, Andres Freund wrote:
> On 2016-04-05 12:14:35 -0400, Robert Haas wrote:
>> On Tue, Apr 5, 2016 at 11:30 AM, Andres Freund wrote:
>> > On 2016-04-05 20:56:31 +0530, Amit Kapila wrote:
>> >> This fluctuation started appearing after commit 6150a1b0 which we have
>> >>
On 2016-04-05 12:14:35 -0400, Robert Haas wrote:
> On Tue, Apr 5, 2016 at 11:30 AM, Andres Freund wrote:
> > On 2016-04-05 20:56:31 +0530, Amit Kapila wrote:
> >> This fluctuation started appearing after commit 6150a1b0 which we have
> >> discussed in another thread [1] and a colleague of mine is
On Tue, Apr 5, 2016 at 11:30 AM, Andres Freund wrote:
> On 2016-04-05 20:56:31 +0530, Amit Kapila wrote:
>> This fluctuation started appearing after commit 6150a1b0 which we have
>> discussed in another thread [1], and a colleague of mine is working on a
>> patch to try to revert it on cur
On Tue, Apr 5, 2016 at 9:00 PM, Andres Freund wrote:
>
> On 2016-04-05 20:56:31 +0530, Amit Kapila wrote:
> > This fluctuation started appearing after commit 6150a1b0 which we have
> > discussed in another thread [1], and a colleague of mine is working on a
> > patch to try to revert it on
On 2016-04-05 20:56:31 +0530, Amit Kapila wrote:
> This fluctuation started appearing after commit 6150a1b0 which we have
> discussed in another thread [1], and a colleague of mine is working on a
> patch to try to revert it on current HEAD and then see the results.
I don't see what that b
On Tue, Apr 5, 2016 at 8:15 PM, Andres Freund wrote:
>
> On 2016-04-05 17:36:49 +0300, Alexander Korotkov wrote:
> > Could the reason be that we're increasing contention on the LWLock state
> > atomic variable by placing the queue spinlock there?
>
> Don't think so, it's the same cache-line either way.
On 2016-04-05 17:36:49 +0300, Alexander Korotkov wrote:
> Could the reason be that we're increasing contention on the LWLock state
> atomic variable by placing the queue spinlock there?
Don't think so, it's the same cache-line either way.
> But I wonder why this could happen during "pgbench -S", becaus
On Tue, Apr 5, 2016 at 10:26 AM, Dilip Kumar wrote:
>
> On Mon, Apr 4, 2016 at 2:28 PM, Andres Freund wrote:
>
>> Hm, interesting. I suspect that's because of the missing backoff in my
>> experimental patch. If you apply the attached patch on top of that
>> (requires infrastructure from pinunpin)
On Mon, Apr 4, 2016 at 2:28 PM, Andres Freund wrote:
> Hm, interesting. I suspect that's because of the missing backoff in my
> experimental patch. If you apply the attached patch on top of that
> (requires infrastructure from pinunpin), how does performance develop?
>
I have applied this patch a
Hi,
On 2016-04-03 16:47:49 +0530, Dilip Kumar wrote:
> 6. With Head + pinunpin-cas-8 +
> 0001-WIP-Avoid-the-use-of-a-separate-spinlock-to-protect, performance is
> almost the same as with
> Head+pinunpin-cas-8; only sometimes performance at 128 clients is low
> (~250,000 instead of ~650,000)
Hm, interesti
On Sun, Apr 3, 2016 at 2:28 PM, Amit Kapila wrote:
>
> What is the conclusion of this test? As far as I see, with the patch
> (0001-WIP-Avoid-the-use-of-a-separate-spinlock-to-protect), the performance
> degradation is not fixed, but with the pin-unpin patch, the performance seems
> to be better in
On Sun, Apr 3, 2016 at 9:55 AM, Dilip Kumar wrote:
>
> On Fri, Apr 1, 2016 at 2:09 PM, Andres Freund wrote:
>
>> One interesting thing to do would be to use -P1 during the test and see
>> how much the performance varies over time.
>>
>
> I have run with the -P option; I ran for 1200 seconds and set -
On Fri, Apr 1, 2016 at 2:09 PM, Andres Freund wrote:
> One interesting thing to do would be to use -P1 during the test and see
> how much the performance varies over time.
>
I have run with the -P option; I ran for 1200 seconds and set -P to 30 seconds,
and what I observed is that when it's low it's low
On 2016-04-01 10:35:18 +0200, Andres Freund wrote:
> On 2016-04-01 13:50:10 +0530, Dilip Kumar wrote:
> > I think it needs more runs. After seeing these results I did not
> > run head+pinunpin,
> >
> > Head 64 Client 128 Client
> > -
>
On 2016-04-01 13:50:10 +0530, Dilip Kumar wrote:
> I think it needs more runs. After seeing these results I did not
> run head+pinunpin,
>
> Head 64 Client 128 Client
> -
> Run1 434860 356945
> Run2 275815 *275815*
> Run3 437872 366560
On Thu, Mar 31, 2016 at 5:52 PM, Andres Freund wrote:
> Here's a WIP patch to evaluate. Dilip/Ashutosh, could you perhaps run
> some benchmarks, to see whether this addresses the performance issues?
>
> I guess it'd both be interesting to compare master with master + patch,
> and this thread's la
On Thu, Mar 31, 2016 at 8:21 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
>
> I think these changes are worth benchmarking again. I'm going to run it
> on the 4x18 Intel.
>
The results are as follows.
clients master v3 v5 v9
1 11671 12507 12679 12408
2 246
On Thu, Mar 31, 2016 at 7:14 PM, Andres Freund wrote:
> > +/*
> > + * The following two macros are aimed to simplify buffer state
> > + * modification in CAS loop. It's assumed that variable "uint32 state"
> > + * is defined outside of this loop. It should be used as following:
> > + *
> >
Hi,
> +/*
> + * The following two macros are aimed to simplify buffer state modification
> + * in CAS loop. It's assumed that variable "uint32 state" is defined outside
> + * of this loop. It should be used as following:
> + *
> + * BEGIN_BUFSTATE_CAS_LOOP(bufHdr);
> + * modifications of state v
Hi!
On Thu, Mar 31, 2016 at 4:59 PM, Amit Kapila
wrote:
> On Tue, Mar 29, 2016 at 10:52 PM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
>> Hi, Andres!
>>
> Please find the next revision of the patch attached.
>>
>>
> Couple of minor comments:
>
> + * The following two macroses
>
>
Andres Freund writes:
> Oh. I confused my approaches. I was thinking about going for 2):
>> 2) Replace the lwlock spinlock by a bit in LWLock->state. That'd avoid
>> embedding the spinlock, and actually might allow to avoid one atomic
>> op in a number of cases.
> precisely because of that conce
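Approach 2), sketched concretely: dedicate one bit of the LWLock state word
to act as the wait-list lock, so taking it is an atomic fetch-or on a word
that must be touched anyway, rather than a separate embedded spinlock. The
bit position and helper names below are assumptions, with C11 atomics
standing in for pg_atomic_*:

#include <stdatomic.h>
#include <stdint.h>

#define LW_FLAG_LOCKED (1U << 28)   /* hypothetical wait-list lock bit */

typedef struct LWLock
{
    _Atomic uint32_t state;          /* lock state plus wait-list lock bit */
} LWLock;

/* Acquire the wait-list lock by setting the bit; spin while it's held. */
static void
lock_waitlist(LWLock *lock)
{
    for (;;)
    {
        uint32_t old = atomic_fetch_or(&lock->state, LW_FLAG_LOCKED);
        if (!(old & LW_FLAG_LOCKED))
            return;              /* the bit was clear: we now own it */
        while (atomic_load(&lock->state) & LW_FLAG_LOCKED)
            ;                    /* wait for the holder to release */
    }
}

static void
unlock_waitlist(LWLock *lock)
{
    atomic_fetch_and(&lock->state, ~LW_FLAG_LOCKED);
}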
On Tue, Mar 29, 2016 at 10:52 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> Hi, Andres!
>
> Please find the next revision of the patch attached.
>
>
Couple of minor comments:
+ * The following two macroses
is "macroses" the right word to use here?
+ * of this loop. It should be us
On 2016-03-31 12:58:55 +0200, Andres Freund wrote:
> On 2016-03-31 06:54:02 -0400, Robert Haas wrote:
> > On Wed, Mar 30, 2016 at 3:16 AM, Andres Freund wrote:
> > > Yea, as Tom pointed out that's not going to work. I'll try to write a
> > > patch for approach 1).
> >
> > Does this mean that any
On 2016-03-31 06:54:02 -0400, Robert Haas wrote:
> On Wed, Mar 30, 2016 at 3:16 AM, Andres Freund wrote:
> > Yea, as Tom pointed out that's not going to work. I'll try to write a
> > patch for approach 1).
>
> Does this mean that any platform that wants to perform well will now
> need a sub-4-by
On Wed, Mar 30, 2016 at 3:16 AM, Andres Freund wrote:
> On 2016-03-30 07:13:16 +0530, Dilip Kumar wrote:
>> On Tue, Mar 29, 2016 at 10:43 PM, Andres Freund wrote:
>>
>> > My gut feeling is that we should do both 1) and 2).
>> >
>> > Dilip, could you test performance of reducing ppc's spinlock to
On Wed, Mar 30, 2016 at 10:16 AM, Andres Freund wrote:
> On 2016-03-30 07:13:16 +0530, Dilip Kumar wrote:
> > On Tue, Mar 29, 2016 at 10:43 PM, Andres Freund
> wrote:
> >
> > > My gut feeling is that we should do both 1) and 2).
> > >
> > > Dilip, could you test performance of reducing ppc's spi
On 2016-03-30 07:13:16 +0530, Dilip Kumar wrote:
> On Tue, Mar 29, 2016 at 10:43 PM, Andres Freund wrote:
>
> > My gut feeling is that we should do both 1) and 2).
> >
> > Dilip, could you test performance of reducing ppc's spinlock to 1 byte?
> > Cross-compiling suggests that doing so "just works
On Tue, Mar 29, 2016 at 10:43 PM, Andres Freund wrote:
> My gut feeling is that we should do both 1) and 2).
>
> Dilip, could you test performance of reducing ppc's spinlock to 1 byte?
> Cross-compiling suggests that doing so "just works". I.e. replace the
> #if defined(__ppc__) typedef from an i
On 2016-03-29 14:09:42 -0400, Tom Lane wrote:
> Andres Freund writes:
> > There's actually lbarx/stbcx - but it's not present in all ISAs. So I
> > guess it's clear where to go.
>
> Hm. We could certainly add a configure test to see if the local assembler
> knows these instructions --- but it's
Andres Freund writes:
> On 2016-03-29 13:24:40 -0400, Tom Lane wrote:
>> AFAICS, lwarx/stwcx are specifically *word* wide.
> There's actually lbarx/stbcx - but it's not present in all ISAs. So I
> guess it's clear where to go.
Hm. We could certainly add a configure test to see if the local asse
On 2016-03-29 20:22:00 +0300, Alexander Korotkov wrote:
> > > + while (true)
> > > {
> > > - if (buf->usage_count == 0)
> > > - buf->usage_count = 1;
> > > + /* spin-wait till lock is free */
> > > +
On 2016-03-29 13:24:40 -0400, Tom Lane wrote:
> Andres Freund writes:
> > Dilip, could you test performance of reducing ppc's spinlock to 1 byte?
> > Cross-compiling suggests that doing so "just works". I.e. replace the
> > #if defined(__ppc__) typedef from an int to a char.
>
> AFAICS, lwarx/stw
Andres Freund writes:
> Dilip, could you test performance of reducing ppc's spinlock to 1 byte?
> Cross-compiling suggests that doing so "just works". I.e. replace the
> #if defined(__ppc__) typedef from an int to a char.
AFAICS, lwarx/stwcx are specifically *word* wide.
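The experiment being requested, in code form (the surrounding s_lock.h
context is assumed). Per Tom's point, the generic lwarx/stwcx. TAS sequence
is word-wide, so a byte-sized slock_t only "just works" where lbarx/stbcx.
are available:

/* Hypothetical s_lock.h change for the experiment: */
#if defined(__ppc__) || defined(__powerpc__)
typedef char slock_t;       /* was: typedef unsigned int slock_t; */
#endif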
On 2016-03-29 13:09:05 -0400, Robert Haas wrote:
> On Mon, Mar 28, 2016 at 9:09 AM, Andres Freund wrote:
> > On 2016-03-28 11:48:46 +0530, Dilip Kumar wrote:
> >> On Sun, Mar 27, 2016 at 5:48 PM, Andres Freund wrote:
> >> > What's sizeof(BufferDesc) after applying these patches? It should better
On Mon, Mar 28, 2016 at 9:09 AM, Andres Freund wrote:
> On 2016-03-28 11:48:46 +0530, Dilip Kumar wrote:
>> On Sun, Mar 27, 2016 at 5:48 PM, Andres Freund wrote:
>> > What's sizeof(BufferDesc) after applying these patches? It should better
>> > be <= 64...
>> >
>>
>> It is 72.
>
> Ah yes, miscalc
Andres Freund wrote:
> On 2016-03-28 15:46:43 +0300, Alexander Korotkov wrote:
> > @@ -932,8 +936,13 @@ ReadBuffer_common(SMgrRelation smgr, cha
> >
> > if (isLocalBuf)
> > {
> > - /* Only need to adjust flags */
> > - bufHdr->flags |= BM_VALID;
> > + /*
> >
On 2016-03-28 15:46:43 +0300, Alexander Korotkov wrote:
> diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
> index 6dd7c6e..fe6fb9c
> --- a/src/backend/storage/buffer/bufmgr.c
> +++ b/src/backend/storage/buffer/bufmgr.c
> @@ -52,7 +52,6 @@
> #include "utils/resowner_private.h"
> #
On 2016-03-28 11:48:46 +0530, Dilip Kumar wrote:
> On Sun, Mar 27, 2016 at 5:48 PM, Andres Freund wrote:
>
> >
> > What's sizeof(BufferDesc) after applying these patches? It should better
> > be <= 64...
> >
>
> It is 72.
Ah yes, I miscalculated the required alignment. Hm. So we've got to get this
sm
On Sun, Mar 27, 2016 at 4:31 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> On Sun, Mar 27, 2016 at 3:10 PM, Andres Freund wrote:
>
>> On 2016-03-27 12:38:25 +0300, Alexander Korotkov wrote:
>> > On Sat, Mar 26, 2016 at 1:26 AM, Alexander Korotkov <
>> > a.korot...@postgrespro.ru> w
On Sun, Mar 27, 2016 at 5:48 PM, Andres Freund wrote:
>
> What's sizeof(BufferDesc) after applying these patches? It should better
> be <= 64...
>
It is 72.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
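The reason for the "<= 64" target: a descriptor should occupy a single
64-byte cache line, so pinning one buffer never dirties a neighbouring
descriptor's line. A compile-time guard in that spirit, with a rough
stand-in layout (the real BufferDesc differs):

#include <stdint.h>

/* Rough stand-in for BufferDesc; real fields and sizes differ. */
typedef struct BufferDesc
{
    uint32_t tag[5];             /* buffer tag: relfilenode/fork/block */
    uint32_t state;              /* flags + refcount + usage count */
    int32_t  wait_backend_pid;
    int32_t  buf_id;
    int32_t  freeNext;
    void    *content_lock;       /* 8 bytes on 64-bit platforms */
} BufferDesc;

/* Pad to a full cache line so adjacent descriptors never share one. */
typedef union BufferDescPadded
{
    BufferDesc bufferdesc;
    char       pad[64];
} BufferDescPadded;

_Static_assert(sizeof(BufferDesc) <= 64, "BufferDesc must fit one cache line");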
On Sun, Mar 27, 2016 at 3:10 PM, Andres Freund wrote:
> On 2016-03-27 12:38:25 +0300, Alexander Korotkov wrote:
> > On Sat, Mar 26, 2016 at 1:26 AM, Alexander Korotkov <
> > a.korot...@postgrespro.ru> wrote:
> >
> > > Thank you very much for testing!
> > > I also got access to a 4 x 18 Intel server
On 2016-03-27 17:45:52 +0530, Dilip Kumar wrote:
> On Sun, Mar 27, 2016 at 5:37 PM, Andres Freund wrote:
>
> > On what hardware did you run these tests?
>
>
IBM POWER8 machine.
>
> Architecture: ppc64le
> Byte Order: Little Endian
> CPU(s): 192
> Threa
On Sun, Mar 27, 2016 at 5:37 PM, Andres Freund wrote:
> On what hardware did you run these tests?
IBM POWER8 machine.
Architecture: ppc64le
Byte Order: Little Endian
CPU(s): 192
Thread(s) per core: 8
Core(s) per socket: 1
Socket(s):
On 2016-03-27 12:38:25 +0300, Alexander Korotkov wrote:
> On Sat, Mar 26, 2016 at 1:26 AM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
> > Thank you very much for testing!
> > I also got access to a 4 x 18 Intel server with 144 threads. I'm going to
> > post results of tests on this s
On 2016-03-25 23:02:11 +0530, Dilip Kumar wrote:
> On Fri, Mar 25, 2016 at 8:09 PM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
> > Could anybody run benchmarks? Feature freeze is soon, but it would be
> > *very nice* to fit it into the 9.6 release cycle, because it greatly improves
>
On Sat, Mar 26, 2016 at 1:26 AM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> Thank you very much for testing!
> I also got access to a 4 x 18 Intel server with 144 threads. I'm going to
> post results of tests on this server next Monday.
>
I've run pgbench tests on this machine: pgb
Hi, Dilip!
On Fri, Mar 25, 2016 at 8:32 PM, Dilip Kumar wrote:
> On Fri, Mar 25, 2016 at 8:09 PM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
>> Could anybody run benchmarks? Feature freeze is soon, but it would be
>> *very nice* to fit it into the 9.6 release cycle, because it great
On Fri, Mar 25, 2016 at 8:09 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> Could anybody run benchmarks? Feature freeze is soon, but it would be
> *very nice* to fit it into the 9.6 release cycle, because it greatly improves
> scalability on large machines. Without this patch PostgreS
On Tue, Mar 22, 2016 at 1:08 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> On Tue, Mar 22, 2016 at 7:57 AM, Dilip Kumar
> wrote:
>
>>
>> On Tue, Mar 22, 2016 at 12:31 PM, Dilip Kumar
>> wrote:
>>
>>> ! pg_atomic_write_u32(&bufHdr->state, state);
>>> } while (!StartBufferIO(bufHd
On Tue, Mar 22, 2016 at 7:57 AM, Dilip Kumar wrote:
>
> On Tue, Mar 22, 2016 at 12:31 PM, Dilip Kumar
> wrote:
>
>> ! pg_atomic_write_u32(&bufHdr->state, state);
>> } while (!StartBufferIO(bufHdr, true));
>>
>> Better write a comment about our clearing BM_LOCKED from the state
>> directly a
On Tue, Mar 22, 2016 at 12:31 PM, Dilip Kumar wrote:
> ! pg_atomic_write_u32(&bufHdr->state, state);
> } while (!StartBufferIO(bufHdr, true));
>
> Better write a comment about our clearing BM_LOCKED from the state
> directly so that we need not call UnlockBufHdr explicitly;
> otherwise it's confu
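The subtlety Dilip wants documented, sketched under assumptions (names from
the quoted patch, C11 atomics in place of pg_atomic_write_u32, and a
hypothetical helper): the loop's final store publishes a state value with
BM_LOCKED already clear, so the store itself releases the buffer-header lock
and no UnlockBufHdr() call follows.

#include <stdatomic.h>
#include <stdint.h>

#define BM_LOCKED (1U << 31)            /* illustrative bit position */

typedef struct BufferDesc
{
    _Atomic uint32_t state;             /* stand-in for pg_atomic_uint32 */
} BufferDesc;

/*
 * Publish a new buffer state and release the header lock in one store:
 * the written value has BM_LOCKED cleared, so callers must not invoke
 * UnlockBufHdr() afterwards -- exactly what the requested comment should say.
 */
static void
write_state_and_unlock(BufferDesc *bufHdr, uint32_t state)
{
    atomic_store(&bufHdr->state, state & ~BM_LOCKED);
}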
On Sun, Mar 20, 2016 at 4:10 AM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> Actually, we behave like the old code and make such modifications without
> increasing the number of atomic operations. We can just calculate the new
> value of state (including unsetting the BM_LOCKED flag) and write it to th
On Sat, Mar 19, 2016 at 3:22 PM, Dilip Kumar wrote:
>
> On Mon, Mar 14, 2016 at 3:09 AM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
>> I've drawn graphs for these measurements. The variation doesn't look
>> random here. TPS is going higher from measurement to measurement. I bet
On Mon, Mar 14, 2016 at 3:09 AM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> I've drawn graphs for these measurements. The variation doesn't look
> random here. TPS is going higher from measurement to measurement. I bet
> you did the measurements sequentially.
> I think we should do mor
On Fri, Mar 11, 2016 at 7:08 AM, Dilip Kumar wrote:
>
> On Thu, Mar 10, 2016 at 8:26 PM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
>> I don't think we can rely on the median that much if we have only 3 runs.
>> For 3 runs we can only apply the Kornfeld method, which claims that the confidence
On Thu, Mar 10, 2016 at 8:26 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> I don't think we can rely on the median that much if we have only 3 runs.
> For 3 runs we can only apply the Kornfeld method, which claims that the
> confidence interval should be between the lower and upper values.
> Since c
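For context, the method referenced appears to be the standard sign-test
(min-max) confidence interval for the median; under that assumption, the
arithmetic is:

\[
  P\bigl(\mathrm{median} \notin [x_{\min}, x_{\max}]\bigr)
    = \Bigl(\tfrac{1}{2}\Bigr)^{n} + \Bigl(\tfrac{1}{2}\Bigr)^{n}
    = 2^{-(n-1)},
\]

so the min-max interval of $n$ runs has confidence $1 - 2^{-(n-1)}$; for
$n = 3$ that is only 75%, which supports not leaning too hard on the median
of three runs.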
On Mon, Mar 7, 2016 at 6:19 PM, Robert Haas wrote:
> On Sat, Mar 5, 2016 at 7:22 AM, Dilip Kumar wrote:
> > On Wed, Mar 2, 2016 at 11:05 AM, Dilip Kumar
> wrote:
> >> And this latest result (no regression) is on X86 but on my local
> machine.
> >>
> >> I did not exactly saw what this new versio
On Sat, Mar 5, 2016 at 7:22 AM, Dilip Kumar wrote:
> On Wed, Mar 2, 2016 at 11:05 AM, Dilip Kumar wrote:
>> And this latest result (no regression) is on X86 but on my local machine.
>>
>> I did not see exactly what this new version of the patch is doing differently,
>> so I will test this version in ot