On Jul18, 2011, at 04:36 , Robert Haas wrote:
On Fri, Jul 8, 2011 at 6:02 AM, Florian Pflug f...@phlo.org wrote:
I don't want to fiddle with your git repo, but if you attach a patch
that applies to the master branch I'll give it a spin if I have time.
Patch attached.
Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition
spin lock instead of locked xadd to increment the shared counters.
On Fri, Jul 8, 2011 at 6:02 AM, Florian Pflug f...@phlo.org wrote:
I don't want to fiddle with your git repo, but if you attach a patch
that applies to the master branch I'll give it a spin if I have time.
Patch attached.
Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition
spin lock instead of locked xadd to increment the shared counters.
On Jul 12, 2011, at 8:10 PM, Florian Pflug f...@phlo.org wrote:
On Jul13, 2011, at 00:10 , Robert Haas wrote:
On Jul 12, 2011, at 8:03 AM, Florian Pflug f...@phlo.org wrote:
The algorithm is quite straightforward, if one assumes a lock-free
implementation of a queue (More on that below)
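As a concrete illustration of the CAS-retry pattern such a lock-free structure is built on, here is a minimal sketch using the GCC builtins discussed later in the thread. It is a Treiber-style LIFO stack rather than a true FIFO queue, and none of the names come from Florian's patch; it only shows the shape of the technique.

#include <stddef.h>

typedef struct Node
{
    struct Node *next;
    void        *payload;
} Node;

typedef struct
{
    Node *volatile head;
} LockFreeStack;

static void
lfs_push(LockFreeStack *s, Node *n)
{
    Node       *old;

    do
    {
        old = s->head;
        n->next = old;
    } while (!__sync_bool_compare_and_swap(&s->head, old, n));
}

static Node *
lfs_pop(LockFreeStack *s)
{
    Node       *old;

    do
    {
        old = s->head;
        if (old == NULL)
            return NULL;
    } while (!__sync_bool_compare_and_swap(&s->head, old, old->next));

    /* Caveat: the classic ABA problem applies if popped nodes can be
     * recycled and pushed again; a production wait queue has to account
     * for that or use a scheme where it cannot happen. */
    return old;
}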
On Jul13, 2011, at 22:04 , Robert Haas wrote:
On Jul 12, 2011, at 8:10 PM, Florian Pflug f...@phlo.org wrote:
I wonder if clearing the waiters-present bit only upon clearing the
queue completely is necessary for correctness. Wouldn't it be OK
to clear the bit after waking up at least one
On Jul7, 2011, at 03:35 , Robert Haas wrote:
Some poking around suggests that the problem isn't that
spinlocks are routinely contended - it seems that we nearly always get
the spinlock right off the bat. I'm wondering if the problem may be
not so much that we have continuous spinlock
On Jul 12, 2011, at 8:03 AM, Florian Pflug f...@phlo.org wrote:
The algorithm is quite straightforward, if one assumes a lock-free
implementation of a queue (More on that below)
This is similar to the CAS-based LWLocks I played around with, though I didn't
use a lock-free queue. I think I
On Jul13, 2011, at 00:10 , Robert Haas wrote:
On Jul 12, 2011, at 8:03 AM, Florian Pflug f...@phlo.org wrote:
The algorithm is quite straightforward, if one assumes a lock-free
implementation of a queue (More on that below)
This is similar to the CAS-based LWLocks I played around with, though I didn't
use a lock-free queue.
On Jul7, 2011, at 18:09 , Robert Haas wrote:
On Thu, Jul 7, 2011 at 5:54 AM, Florian Pflug f...@phlo.org wrote:
In effect, the resulting thing is an LWLock with a partitioned shared
counter. The partition one backend operates on for shared locks is
determined by its backend id.
I've added the implementation to the lock benchmarking tool
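A minimal sketch of what a partitioned shared counter along these lines could look like (struct layout, constants, and names are illustrative guesses, not the actual patch): each backend bumps the counter in the partition selected by its backend id, so concurrent shared acquisitions touch different cache lines, and only an exclusive acquisition has to visit every partition.

#include <stdint.h>

#define NUM_COUNTER_PARTITIONS  16      /* illustrative */
#define CACHE_LINE_SIZE         64      /* assumed */

typedef struct
{
    volatile uint32_t count;            /* shared holders in this partition */
    char        pad[CACHE_LINE_SIZE - sizeof(uint32_t)];   /* avoid false sharing */
} CounterPartition;

typedef struct
{
    /* exclusive-lock state and wait queue omitted */
    CounterPartition shared[NUM_COUNTER_PARTITIONS];
} PartitionedLWLock;

static void
shared_lock_fastpath(PartitionedLWLock *lock, int my_backend_id)
{
    CounterPartition *p = &lock->shared[my_backend_id % NUM_COUNTER_PARTITIONS];

    /* With GCC >= 4.1 this compiles to a single locked xadd on x86/x86-64. */
    __sync_fetch_and_add(&p->count, 1);

    /* A real implementation must now re-check that no exclusive locker
     * appeared concurrently, and back the increment out (and queue) if so. */
}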
Florian Pflug f...@phlo.org writes:
Patch attached.
Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition
spin lock instead of locked xadd to increment the shared counters.
That's already sufficient reason to reject the patch. Not everyone
uses gcc, let alone very recent
On Jul8, 2011, at 16:21 , Tom Lane wrote:
Florian Pflug f...@phlo.org writes:
Patch attached.
Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition
spin lock instead of locked xadd to increment the shared counters.
That's already sufficient reason to reject the patch.
On 07/08/2011 04:21 PM, Tom Lane wrote:
Florian Pflug f...@phlo.org writes:
Patch attached.
Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition
spin lock instead of locked xadd to increment the shared counters.
That's already sufficient reason to reject the patch.
On Jul8, 2011, at 22:27 , Stefan Kaltenbrunner wrote:
On 07/08/2011 04:21 PM, Tom Lane wrote:
Florian Pflug f...@phlo.org writes:
Patch attached.
Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition
spin lock instead of locked xadd to increment the shared counters.
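The choice being debated boils down to a compile-time switch along these lines. This is a hedged sketch written against PostgreSQL's spinlock API (slock_t and SpinLockAcquire/SpinLockRelease from storage/spin.h), not code from the patch:

#include "postgres.h"
#include "storage/spin.h"

/* __sync_fetch_and_add() is available since GCC 4.1; on older or non-GCC
 * compilers the per-partition counter falls back to ordinary spinlock
 * protection. */
#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
#define HAVE_GCC_SYNC_FETCH_AND_ADD 1
#endif

static void
partition_count_inc(volatile uint32 *count, slock_t *mutex)
{
#ifdef HAVE_GCC_SYNC_FETCH_AND_ADD
    __sync_fetch_and_add(count, 1);     /* locked xadd on x86 */
#else
    SpinLockAcquire(mutex);             /* portable per-partition fallback */
    (*count)++;
    SpinLockRelease(mutex);
#endif
}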
On Jul7, 2011, at 03:35 , Robert Haas wrote:
On Thu, Jun 23, 2011 at 11:42 AM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug f...@phlo.org wrote:
On Jun12, 2011, at 23:39 , Robert Haas wrote:
So, the majority (60%) of the excess spinning appears to be due to
SInvalReadLock.
On Thu, Jul 7, 2011 at 5:54 AM, Florian Pflug f...@phlo.org wrote:
In effect, the resulting thing is an LWLock with a partitioned shared
counter. The partition one backend operates on for shared locks is
determined by its backend id.
I've added the implementation to the lock benchmarking tool
On Thu, Jun 23, 2011 at 11:42 AM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug f...@phlo.org wrote:
On Jun12, 2011, at 23:39 , Robert Haas wrote:
So, the majority (60%) of the excess spinning appears to be due to
SInvalReadLock. A good chunk are due to ProcArrayLock (25%).
On Jun23, 2011, at 23:40 , Robert Haas wrote:
I tried rewriting the LWLocks using CAS. It actually seems to make
things slightly worse on the tests I've done so far, perhaps because I
didn't make it respect spins_per_delay. Perhaps fetch-and-add would
be better, but I'm not holding my breath.
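To make the "rewriting the LWLocks using CAS" idea concrete, here is a hedged sketch of the basic state-word manipulation; the layout and names are invented for illustration and are not taken from Robert's experimental patch. The acquire path would wrap these attempts in the usual spins_per_delay-style retry-and-backoff loop before putting the backend to sleep, which is the part the experiment above skipped.

#include <stdint.h>

/* One 32-bit word replaces the spinlock-protected LWLock fields: the high
 * bit marks an exclusive holder, the remaining bits count shared holders. */
#define LW_EXCLUSIVE_BIT 0x80000000u

static int
cas_try_exclusive(volatile uint32_t *state)
{
    /* Succeeds only when nobody holds the lock in any mode. */
    return __sync_bool_compare_and_swap(state, 0, LW_EXCLUSIVE_BIT);
}

static int
cas_try_shared(volatile uint32_t *state)
{
    uint32_t    old = *state;

    if (old & LW_EXCLUSIVE_BIT)
        return 0;                       /* exclusive holder present */

    /* Bump the shared-holder count; a failed CAS means the word changed
     * under us, and the caller retries or eventually queues and sleeps. */
    return __sync_bool_compare_and_swap(state, old, old + 1);
}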
On Tue, Jun 28, 2011 at 2:33 PM, Florian Pflug f...@phlo.org wrote:
[ testing of various spinlock implementations ]
I set T=30 and N=1 2 4 8 16 32 and tried this out on a 32-core
loaner from Nate Boley:
100 counter increments per cycle
worker 1 2 4 8
On Tue, Jun 28, 2011 at 3:18 PM, Robert Haas robertmh...@gmail.com wrote:
user-32:
none(1.0),atomicinc(14.4),pg_lwlock_cas(22.1),cmpxchng(41.2),pg_lwlock(588.2),spin(1264.7)
I may not be following all this correctly, but doesn't this suggest a
huge potential upside for the cas based patch you
On Tue, Jun 28, 2011 at 5:33 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, Jun 28, 2011 at 3:18 PM, Robert Haas robertmh...@gmail.com wrote:
user-32:
none(1.0),atomicinc(14.4),pg_lwlock_cas(22.1),cmpxchng(41.2),pg_lwlock(588.2),spin(1264.7)
I may not be following all this correctly,
On Jun28, 2011, at 23:48 , Robert Haas wrote:
On Tue, Jun 28, 2011 at 5:33 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, Jun 28, 2011 at 3:18 PM, Robert Haas robertmh...@gmail.com wrote:
user-32:
none(1.0),atomicinc(14.4),pg_lwlock_cas(22.1),cmpxchng(41.2),pg_lwlock(588.2),spin(1264.7)
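One plausible reading of the gap between atomicinc (14.4) and cmpxchng (41.2) above, offered as a gloss rather than a conclusion stated in the thread: assuming the two micro-benchmarks do roughly the following, a locked xadd always completes in a single attempt, while a compare-and-swap increment has to loop and can fail repeatedly when 32 cores hammer the same word.

#include <stdint.h>

static void
inc_xadd(volatile uint32_t *v)
{
    __sync_fetch_and_add(v, 1);         /* single locked xadd, never retries */
}

static void
inc_cas(volatile uint32_t *v)
{
    uint32_t    old;

    do
    {
        old = *v;                       /* may have to re-read and retry */
    } while (!__sync_bool_compare_and_swap(v, old, old + 1));
}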
On Tue, Jun 28, 2011 at 5:55 PM, Florian Pflug f...@phlo.org wrote:
On Jun28, 2011, at 23:48 , Robert Haas wrote:
On Tue, Jun 28, 2011 at 5:33 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, Jun 28, 2011 at 3:18 PM, Robert Haas robertmh...@gmail.com wrote:
user-32:
On Jun28, 2011, at 22:18 , Robert Haas wrote:
On Tue, Jun 28, 2011 at 2:33 PM, Florian Pflug f...@phlo.org wrote:
[ testing of various spinlock implementations ]
I set T=30 and N=1 2 4 8 16 32 and tried this out on a 32-core
loaner from Nate Boley:
Cool, thanks!
100 counter increments per cycle
On Tue, Jun 28, 2011 at 8:11 PM, Florian Pflug f...@phlo.org wrote:
I wrote a little script to reorganize this data in a
possibly-easier-to-understand format - ordering each column from
lowest to highest, and showing each algorithm as a multiple of the
cheapest value for that column:
On Sat, Jun 25, 2011 at 8:26 PM, Greg Stark st...@mit.edu wrote:
On Thu, Jun 23, 2011 at 4:42 PM, Robert Haas robertmh...@gmail.com wrote:
ProcArrayLock looks like a tougher nut to crack - there's simply no
way, with the system we have right now, that you can take a snapshot
without locking the list of running processes.
On Thu, Jun 23, 2011 at 4:42 PM, Robert Haas robertmh...@gmail.com wrote:
ProcArrayLock looks like a tougher nut to crack - there's simply no
way, with the system we have right now, that you can take a snapshot
without locking the list of running processes. I'm not sure what to
do about that,
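For readers not familiar with this part of the code: every snapshot has to scan the shared list of running backends while holding ProcArrayLock, so it serializes against any exclusive holder (notably transaction end). The following is a simplified paraphrase of the GetSnapshotData() loop from this era of the tree, not the real function body; it would have to live in procarray.c, where ProcArrayStruct and the procArray pointer are defined.

#include "postgres.h"
#include "access/transam.h"
#include "storage/lwlock.h"
#include "storage/proc.h"
#include "utils/snapshot.h"

static void
snapshot_sketch(Snapshot snapshot, ProcArrayStruct *arrayP)
{
    int         index;
    int         count = 0;

    LWLockAcquire(ProcArrayLock, LW_SHARED);

    for (index = 0; index < arrayP->numProcs; index++)
    {
        volatile PGPROC *proc = arrayP->procs[index];
        TransactionId xid = proc->xid;          /* read while the lock is held */

        if (TransactionIdIsNormal(xid))
            snapshot->xip[count++] = xid;       /* still in progress => not visible */
    }

    snapshot->xcnt = count;

    LWLockRelease(ProcArrayLock);
}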
On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug f...@phlo.org wrote:
On Jun12, 2011, at 23:39 , Robert Haas wrote:
So, the majority (60%) of the excess spinning appears to be due to
SInvalReadLock. A good chunk are due to ProcArrayLock (25%).
Hm, sizeof(LWLock) is 24 on X86-64, making
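Florian's point is that at 24 bytes an LWLock does not fill a 64-byte cache line, so neighbouring locks in the main LWLock array can share one. A hedged sketch of the obvious countermeasure, in the spirit of the LWLockPadded union already used in lwlock.c but with the pad widened to a full (assumed 64-byte) cache line; the thread notes further down that some CPUs use smaller lines, so the constant is a guess, not a recommendation.

/* LWLock here is the 24-byte struct defined in lwlock.c. */
#define LWLOCK_PADDED_SIZE 64           /* assumed cache line size */

typedef union LWLockPadded
{
    LWLock      lock;
    char        pad[LWLOCK_PADDED_SIZE];
} LWLockPadded;

/* Allocating the shared lock array as LWLockPadded[] guarantees that no
 * two LWLocks ever land on the same cache line. */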
On 23.06.2011 18:42, Robert Haas wrote:
ProcArrayLock looks like a tougher nut to crack - there's simply no
way, with the system we have right now, that you can take a snapshot
without locking the list of running processes. I'm not sure what to
do about that, but we're probably going to have to
On Jun23, 2011, at 17:42 , Robert Haas wrote:
On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug f...@phlo.org wrote:
On Jun12, 2011, at 23:39 , Robert Haas wrote:
So, the majority (60%) of the excess spinning appears to be due to
SInvalReadLock. A good chunk are due to ProcArrayLock (25%).
On Thu, Jun 23, 2011 at 12:19 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
On 23.06.2011 18:42, Robert Haas wrote:
ProcArrayLock looks like a tougher nut to crack - there's simply no
way, with the system we have right now, that you can take a snapshot
without locking the list of running processes.
On Thu, Jun 23, 2011 at 1:34 PM, Florian Pflug f...@phlo.org wrote:
On Jun23, 2011, at 17:42 , Robert Haas wrote:
On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug f...@phlo.org wrote:
On Jun12, 2011, at 23:39 , Robert Haas wrote:
So, the majority (60%) of the excess spinning appears to be due to
SInvalReadLock.
On Thu, Jun 23, 2011 at 2:34 PM, Florian Pflug f...@phlo.org wrote:
It seems hard to believe that there isn't some effect of two locks
sharing a cache line. There are architectures (even some of the
Intel architectures, I believe) where cache lines are 32 bytes, though -
so measuring this
On Jun23, 2011, at 22:15 , Robert Haas wrote:
On Thu, Jun 23, 2011 at 2:34 PM, Florian Pflug f...@phlo.org wrote:
It seems hard to believe that there isn't some effect of two locks
sharing a cache line. There are architectures (even some of the
Intel architectures, I believe) where cache lines
On Thu, Jun 23, 2011 at 5:35 PM, Florian Pflug f...@phlo.org wrote:
Well, I'm sure there is some effect, but my experiments seem to
indicate that it's not a very important one. Again, please feel free
to provide contrary evidence. I think the basic issue is that - in
the best possible case -