On 2014-10-30 19:05:06 +0530, Amit Kapila wrote:
> On Thu, Oct 30, 2014 at 6:58 PM, Andres Freund
> wrote:
> > On 2014-10-30 18:54:57 +0530, Amit Kapila wrote:
> > > On Thu, Oct 30, 2014 at 5:52 PM, Andres Freund
> > > wrote:
> > > > Hm. What commit did you apply the series on top of? I managed to
>
On Thu, Oct 30, 2014 at 6:58 PM, Andres Freund
wrote:
> On 2014-10-30 18:54:57 +0530, Amit Kapila wrote:
> > On Thu, Oct 30, 2014 at 5:52 PM, Andres Freund
> > wrote:
> > > Hm. What commit did you apply the series on top of? I managed to
> > > reproduce a hang, but it was just something that heikki h
On 2014-10-30 18:54:57 +0530, Amit Kapila wrote:
> On Thu, Oct 30, 2014 at 5:52 PM, Andres Freund
> wrote:
> > On 2014-10-21 12:40:56 +0530, Amit Kapila wrote:
> > > I ran it for half an hour, but it didn't come out even after
> > > ~2 hours. It doesn't get reproduced every time; currently
On Thu, Oct 30, 2014 at 5:52 PM, Andres Freund
wrote:
>
> On 2014-10-21 12:40:56 +0530, Amit Kapila wrote:
> > While doing performance tests, I noticed a hang at higher client
> > counts with the patch. I checked the call stacks of a few of the
> > processes, and they are as below:
> >
> > #0 0x008
On 2014-10-21 12:40:56 +0530, Amit Kapila wrote:
> While doing performance tests, I noticed a hang at higher client
> counts with the patch. I checked the call stacks of a few of the
> processes, and they are as below:
>
> #0 0x008010933e54 in .semop () from /lib64/libc.so.6
> #1 0x10286e4
On Fri, Oct 17, 2014 at 11:41 PM, Andres Freund
wrote:
> On 2014-10-17 17:14:16 +0530, Amit Kapila wrote:
> > On Tue, Oct 14, 2014 at 11:34 AM, Amit Kapila
> > wrote:
> > HEAD – commit 494affb + wait free lw_shared_v2
> >
> > Shared_buffers=8GB; Scale Factor = 3000
> >
> > Client Count/No. Of
On 2014-10-17 17:14:16 +0530, Amit Kapila wrote:
> On Tue, Oct 14, 2014 at 11:34 AM, Amit Kapila
> wrote:
> >
> >
> > I am not sure why we are seeing a difference even though we are
> > running on the same machine with the same configuration.
>
> I have tried many times, but I could not get the numbers you have
> pos
On Tue, Oct 14, 2014 at 11:34 AM, Amit Kapila
wrote:
>
>
> I am not sure why we are seeing a difference even though we are
> running on the same machine with the same configuration.
I have tried many times, but I could not get the numbers you have
posted above with HEAD; however, I am now trying with the latest version
[1]
On Wed, Oct 15, 2014 at 12:06 AM, Merlin Moncure wrote:
>
> A while back, I submitted a minor tweak to the clock sweep so that,
> instead of spinlocking every single buffer header as it swept it just
> did a single TAS as a kind of a trylock and punted to the next buffer
> if the test failed on th
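Merlin's tweak can be sketched in miniature. This is a hypothetical, simplified model (the `BufferDesc` layout, `clock_sweep_trylock` name, and C11 atomics are illustrative stand-ins; the actual PostgreSQL code uses its own spinlock primitives and a much richer buffer header): the sweep does a single test-and-set as a trylock and punts to the next buffer when it fails, instead of spinning.

```c
#include <stdatomic.h>

#define NBUFFERS 8

/* Hypothetical, simplified buffer header: one TAS-style lock word. */
typedef struct BufferDesc
{
    _Atomic int locked;        /* nonzero = header "spinlock" held */
    int         usage_count;
} BufferDesc;

static BufferDesc buffers[NBUFFERS];
static int next_victim = 0;

/*
 * Advance the clock hand looking for a victim buffer.  Instead of
 * spinning on each header's lock, attempt a single test-and-set; if
 * another backend holds the header, punt to the next buffer.
 */
static int
clock_sweep_trylock(void)
{
    for (;;)
    {
        BufferDesc *buf = &buffers[next_victim];

        next_victim = (next_victim + 1) % NBUFFERS;

        /* trylock: one TAS, no spinning */
        if (atomic_exchange(&buf->locked, 1))
            continue;           /* contended - skip this buffer */

        if (buf->usage_count == 0)
        {
            /* found a victim; a real caller would pin before unlocking */
            atomic_store(&buf->locked, 0);
            return (int) (buf - buffers);
        }

        buf->usage_count--;     /* still recently used - decay and move on */
        atomic_store(&buf->locked, 0);
    }
}
```

The design choice being debated above is visible here: a contended header costs one atomic operation and is simply skipped, rather than serializing the sweep behind whoever holds it.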
On Tue, Oct 14, 2014 at 8:58 AM, Andres Freund wrote:
> On 2014-10-14 08:40:49 -0500, Merlin Moncure wrote:
>> On Fri, Oct 10, 2014 at 11:00 AM, Andres Freund
>> wrote:
>> > Which is nearly trivial now that atomics are in. Check out the attached
>> > WIP patch which eliminates the spinlock from
On 2014-10-14 08:40:49 -0500, Merlin Moncure wrote:
> On Fri, Oct 10, 2014 at 11:00 AM, Andres Freund
> wrote:
> > On 2014-10-10 16:41:39 +0200, Andres Freund wrote:
> >> FWIW, the profile always looks like
> >> - 48.61% postgres postgres [.] s_lock
> >>- s_lock
> >>
On Fri, Oct 10, 2014 at 11:00 AM, Andres Freund wrote:
> On 2014-10-10 16:41:39 +0200, Andres Freund wrote:
>> FWIW, the profile always looks like
>> - 48.61% postgres postgres [.] s_lock
>>- s_lock
>> + 96.67% StrategyGetBuffer
>> + 1.19% UnpinBuffer
>> +
On Sat, Oct 11, 2014 at 7:02 PM, Amit Kapila
wrote:
> On Sat, Oct 11, 2014 at 6:40 PM, Andres Freund
wrote:
> > On 2014-10-11 07:26:57 +0530, Amit Kapila wrote:
> > > On Sat, Oct 11, 2014 at 7:00 AM, Andres Freund
> > > > And since
> > > > your general performance numbers are a fair bit lower th
On 2014-10-11 15:10:45 +0200, Andres Freund wrote:
> Hi,
>
> On 2014-10-11 07:26:57 +0530, Amit Kapila wrote:
> > On Sat, Oct 11, 2014 at 7:00 AM, Andres Freund
> > > And since
> > > your general performance numbers are a fair bit lower than what I see
> > > with, hopefully, the same code on the
On Sat, Oct 11, 2014 at 6:40 PM, Andres Freund
wrote:
> On 2014-10-11 07:26:57 +0530, Amit Kapila wrote:
> > On Sat, Oct 11, 2014 at 7:00 AM, Andres Freund
> > > And since
> > > your general performance numbers are a fair bit lower than what I see
> > > with, hopefully, the same code on the same
Hi,
On 2014-10-11 07:26:57 +0530, Amit Kapila wrote:
> On Sat, Oct 11, 2014 at 7:00 AM, Andres Freund
> > And since
> > your general performance numbers are a fair bit lower than what I see
> > with, hopefully, the same code on the same machine...
>
> You have reported numbers at 1000 scale fact
On Sat, Oct 11, 2014 at 7:00 AM, Andres Freund
wrote:
> On 2014-10-11 06:49:54 +0530, Amit Kapila wrote:
> > On Sat, Oct 11, 2014 at 6:29 AM, Andres Freund
> > wrote:
> > >
> > > On 2014-10-11 06:18:11 +0530, Amit Kapila wrote:
> > > I've run some short tests on hydra:
> > >
>
> > Could you pleas
On 2014-10-11 06:49:54 +0530, Amit Kapila wrote:
> On Sat, Oct 11, 2014 at 6:29 AM, Andres Freund
> wrote:
> >
> > On 2014-10-11 06:18:11 +0530, Amit Kapila wrote:
> > I've run some short tests on hydra:
> >
> > scale 1000:
> >
> > base:
> > 4GB:
> > tps = 296273.004800 (including connections esta
On Sat, Oct 11, 2014 at 6:29 AM, Andres Freund
wrote:
>
> On 2014-10-11 06:18:11 +0530, Amit Kapila wrote:
> I've run some short tests on hydra:
>
> scale 1000:
>
> base:
> 4GB:
> tps = 296273.004800 (including connections establishing)
> tps = 296373.978100 (excluding connections establishing)
>
On 2014-10-11 06:18:11 +0530, Amit Kapila wrote:
> On Fri, Oct 10, 2014 at 8:11 PM, Andres Freund
> wrote:
> > On 2014-10-10 17:18:46 +0530, Amit Kapila wrote:
> > > On Fri, Oct 10, 2014 at 1:27 PM, Andres Freund
> > > wrote:
> > > > > Observations
> > > > > --
> > > > > a. Th
On Fri, Oct 10, 2014 at 8:11 PM, Andres Freund
wrote:
> On 2014-10-10 17:18:46 +0530, Amit Kapila wrote:
> > On Fri, Oct 10, 2014 at 1:27 PM, Andres Freund
> > wrote:
> > > > Observations
> > > > --
> > > > a. The patch performs really well (an increase of up to ~40%) in case all the
On 2014-10-10 16:41:39 +0200, Andres Freund wrote:
> FWIW, the profile always looks like
> - 48.61% postgres postgres [.] s_lock
>- s_lock
> + 96.67% StrategyGetBuffer
> + 1.19% UnpinBuffer
> + 0.90% PinBuffer
> + 0.70% hash_search_with_hash_value
> +
On 2014-10-10 17:18:46 +0530, Amit Kapila wrote:
> On Fri, Oct 10, 2014 at 1:27 PM, Andres Freund
> wrote:
> > > Observations
> > > --
> > > a. The patch performs really well (an increase of up to ~40%) in case all the
> > > data fits in shared buffers (scale factor -100).
> > > b. Inc
On Fri, Oct 10, 2014 at 1:27 PM, Andres Freund
wrote:
> On 2014-10-10 10:13:03 +0530, Amit Kapila wrote:
> > I have done a few performance tests for the above patches; the
> > results are as below:
>
> Cool, thanks.
>
> > Performance Data
> > --
> > IBM POWER-7 16 cores
Hi Robert,
On 2014-10-08 16:01:53 -0400, Robert Haas wrote:
> [ comment fixes ]
Thanks, I've incorporated these + a bit more.
Could you otherwise make sense of the explanation and the algorithm?
> +/* yipeyyahee */
>
> Although this will be clear to individuals with a good command
Hi,
On 2014-10-10 10:13:03 +0530, Amit Kapila wrote:
> I have done a few performance tests for the above patches; the
> results are as below:
Cool, thanks.
> Performance Data
> --
> IBM POWER-7 16 cores, 64 hardware threads
> RAM = 64GB
> max_connections = 210
> Databa
On Wed, Oct 8, 2014 at 7:05 PM, Andres Freund
wrote:
>
> Hi,
>
> Attached you can find the next version of my LW_SHARED patchset. Now
> that atomics are committed, it seems like a good idea to also add their
> raison d'être.
>
> Since the last public version I have:
> * Addressed lots of Amit's co
On 10/9/14, 4:57 PM, Andres Freund wrote:
If you modify either, you better grep for them... I don't think that's
going to happen anyway. Requiring it during startup would mean exposing
SHARED_LOCK_MASK outside of lwlock.c which'd be ugly. We could possibly
stick a StaticAssert() someplace in lwlo
On 2014-10-09 16:52:46 -0500, Jim Nasby wrote:
> On 10/8/14, 8:35 AM, Andres Freund wrote:
> >+#define EXCLUSIVE_LOCK (((uint32) 1) << (31 - 1))
> >+
> >+/* Must be greater than MAX_BACKENDS - which is 2^23-1, so we're fine. */
> >+#define SHARED_LOCK_MASK (~EXCLUSIVE_LOCK)
>
> There should at lea
On 10/8/14, 8:35 AM, Andres Freund wrote:
+#define EXCLUSIVE_LOCK (((uint32) 1) << (31 - 1))
+
+/* Must be greater than MAX_BACKENDS - which is 2^23-1, so we're fine. */
+#define SHARED_LOCK_MASK (~EXCLUSIVE_LOCK)
There should at least be a comment where we define MAX_BACKENDS about the
relati
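The invariant Jim is asking to have documented can also be checked mechanically. A sketch with stand-in constants (the two defines mirror the quoted patch, but `MAX_BACKENDS` here is a local illustration, not the real definition from PostgreSQL's headers):

```c
#include <assert.h>
#include <stdint.h>

/* The exclusive lock is the single high value bit of the state word. */
#define EXCLUSIVE_LOCK      (((uint32_t) 1) << (31 - 1))    /* 1 << 30 */

/* Everything outside that bit counts shared lockers. */
#define SHARED_LOCK_MASK    (~EXCLUSIVE_LOCK)

/*
 * Illustrative stand-in for PostgreSQL's MAX_BACKENDS (2^23 - 1).  The
 * scheme works only because the largest possible number of concurrent
 * shared lockers can never carry into the exclusive bit.
 */
#define MAX_BACKENDS        ((1 << 23) - 1)

/* Compile-time guarantee of the relationship the comment promises. */
static_assert(MAX_BACKENDS < EXCLUSIVE_LOCK,
              "shared-locker count must never overflow into the exclusive bit");
```

A `StaticAssert()` of this shape is what Andres suggests sticking somewhere in lwlock.c, which would document the relationship without exposing `SHARED_LOCK_MASK` outside that file.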
On Wed, Oct 8, 2014 at 9:35 AM, Andres Freund wrote:
> 2) Implement the wait free LW_SHARED algorithm.
+ * too high for workloads/locks that were locked in shared mode very
s/locked/taken/?
+ * frequently. Often we were spinning in the (obviously exlusive) spinlock,
exclusive.
+ * acquiration
On 2014-10-08 15:23:22 -0400, Robert Haas wrote:
> On Wed, Oct 8, 2014 at 9:35 AM, Andres Freund wrote:
> > 1) Convert PGPROC->lwWaitLink into a dlist. The old code was frail and
> >verbose. This also does:
> > * changes the logic in LWLockRelease() to release all shared lockers
> >
On Wed, Oct 8, 2014 at 9:35 AM, Andres Freund wrote:
> 1) Convert PGPROC->lwWaitLink into a dlist. The old code was frail and
>verbose. This also does:
> * changes the logic in LWLockRelease() to release all shared lockers
> when waking up any. This can yield some significant perform
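The "release all shared lockers when waking up any" policy can be modeled with a toy singly linked queue (hypothetical types; the patch itself links PGPROC entries through a dlist):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical wait-queue entry; PostgreSQL uses PGPROC + dlist here. */
typedef struct Waiter
{
    struct Waiter *next;
    bool           exclusive;   /* waiting for LW_EXCLUSIVE? */
    bool           woken;
} Waiter;

/*
 * Release-time wakeup policy in miniature: pop every shared waiter at
 * the head of the queue in one batch, but stop at the first exclusive
 * waiter, which is woken alone.  Returns the number of waiters woken.
 */
static int
wake_waiters(Waiter **head)
{
    int woken = 0;

    while (*head != NULL)
    {
        Waiter *w = *head;

        if (w->exclusive && woken > 0)
            break;              /* shared batch ends at an exclusive waiter */

        *head = w->next;
        w->woken = true;
        woken++;

        if (w->exclusive)
            break;              /* an exclusive waiter is woken alone */
    }
    return woken;
}
```

Waking the whole run of shared waiters at once is where the claimed performance benefit comes from: they can all hold the lock concurrently, so there is no reason to wake them one release at a time.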
Hi,
Attached you can find the next version of my LW_SHARED patchset. Now
that atomics are committed, it seems like a good idea to also add their
raison d'être.
Since the last public version I have:
* Addressed lots of Amit's comments. Thanks!
* Performed a fair amount of testing.
* Rebased the cod
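The shared-acquisition fast path that makes LW_SHARED "wait free" can be modeled with C11 atomics. This is a simplified sketch, not the patch itself (the real code goes through PostgreSQL's atomics layer and falls back to the wait queue instead of just reporting failure): a shared acquirer optimistically bumps the shared count with one fetch-add and backs out if the exclusive bit turns out to be set.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define EXCLUSIVE_LOCK   (((uint32_t) 1) << 30)

typedef struct LWLockModel
{
    _Atomic uint32_t state;     /* EXCLUSIVE_LOCK bit + shared count */
} LWLockModel;

/*
 * Shared fast path: one atomic fetch-add instead of a spinlock.  If an
 * exclusive holder was present, undo the increment and report failure
 * (the patch then queues the backend on the wait list).
 */
static bool
shared_lock_conditional(LWLockModel *lock)
{
    uint32_t old = atomic_fetch_add(&lock->state, 1);

    if (old & EXCLUSIVE_LOCK)
    {
        /* lost the race to an exclusive holder - back out */
        atomic_fetch_sub(&lock->state, 1);
        return false;
    }
    return true;
}

static void
shared_unlock(LWLockModel *lock)
{
    atomic_fetch_sub(&lock->state, 1);
}

/* Exclusive acquisition succeeds only when nobody holds the lock. */
static bool
exclusive_lock_conditional(LWLockModel *lock)
{
    uint32_t expected = 0;

    return atomic_compare_exchange_strong(&lock->state, &expected,
                                          EXCLUSIVE_LOCK);
}
```

The point of the scheme, and the reason for the profile discussion earlier in the thread, is that uncontended shared acquisition never spins: it is a single atomic add, so the `s_lock` time previously spent protecting the shared-locker count disappears.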