On 3/13/09 10:16 AM, "Tom Lane" <t...@sss.pgh.pa.us> wrote:

> Robert Haas <robertmh...@gmail.com> writes:
>> I think that changing the locking behavior is attacking the problem at
>> the wrong level anyway.
>
> Right.  By the time a patch here could have any effect, you've already
> lost the game --- having to deschedule and reschedule a process is a
> large cost compared to the typical lock hold time for most LWLocks.  So
> it would be better to look at how to avoid blocking in the first place.
>
>                         regards, tom lane

In an earlier post in this thread I mentioned the three main ways to attack
scalability problems with respect to locking:
avoiding locks altogether (atomics, copy-on-write, etc.), finer-grained locks
(data structure partitioning, etc.), and optimizing the lock implementation
itself.
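
To make the first option concrete, here is a minimal sketch of lock avoidance,
assuming GCC's __sync builtins and a plain shared counter (hypothetical code,
not anything from the PostgreSQL tree):

#include <pthread.h>

static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
static volatile long counter = 0;

/* Locked version: every increment can block behind the mutex. */
static void bump_locked(void)
{
    pthread_mutex_lock(&counter_lock);
    counter++;
    pthread_mutex_unlock(&counter_lock);
}

/* Lock-avoiding version: one atomic instruction, nothing to block on. */
static void bump_atomic(void)
{
    __sync_fetch_and_add(&counter, 1);
}

The locked version can deschedule a process on every increment; the atomic
version cannot block at all, which is the whole point.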

I don't know which of the above offers the greatest opportunity in Postgres.
My base assumption was that lock avoidance had already been worked on
significantly, and that since lock algorithm optimization is ridiculously
hardware-dependent, there was probably low-hanging fruit left there.
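
To illustrate the hardware dependence, here is a hypothetical
test-and-test-and-set spinlock with exponential backoff (again GCC builtins,
not PostgreSQL code).  The right backoff constants and the spin-wait hint
differ from CPU to CPU, which is exactly why this kind of tuning rarely
transfers between platforms:

#include <sched.h>

static volatile int lock_word = 0;

static void spin_acquire(void)
{
    int delay = 1;

    while (__sync_lock_test_and_set(&lock_word, 1))
    {
        /* Spin read-only until the lock looks free, to avoid hammering
         * the bus with atomic operations. */
        while (lock_word)
        {
            int i;

            for (i = 0; i < delay; i++)
#if defined(__x86_64__) || defined(__i386__)
                __asm__ __volatile__("pause");   /* x86 spin-wait hint */
#else
                ;                                /* no hint on this CPU */
#endif
            if (delay < 1024)
                delay <<= 1;                     /* exponential backoff */
            else
                sched_yield();                   /* give up the CPU */
        }
    }
}

static void spin_release(void)
{
    __sync_lock_release(&lock_word);
}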

Messing with unfair locks does not have to be the solution to the problem, but
it can be a means to an end:
it takes less time and fewer lines of code to change the lock and measure what
benefit less lock contention would bring than it does to restructure the code
to avoid the locks.
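
To be clear about what "unfair" means here, a rough sketch, assuming GCC
builtins rather than the actual LWLock implementation: a fair ticket lock
grants the lock strictly FIFO, while an unfair test-and-set lock gives it to
whichever waiter wins the atomic race, usually one that is already running on
a CPU, which is why unfairness can help under contention:

/* Fair: each acquirer takes a ticket and waits for its turn. */
typedef struct
{
    volatile unsigned next_ticket;
    volatile unsigned now_serving;
} ticket_lock;

static void ticket_acquire(ticket_lock *l)
{
    unsigned my_ticket = __sync_fetch_and_add(&l->next_ticket, 1);

    while (l->now_serving != my_ticket)
        ;                       /* spin until it is our turn */
}

static void ticket_release(ticket_lock *l)
{
    /* Only the lock holder writes now_serving, so a plain increment
     * is enough for a sketch. */
    l->now_serving++;
}

/* Unfair: whoever wins the atomic race gets the lock, queue or not. */
static volatile int tas_lock = 0;

static void tas_acquire(void)
{
    while (__sync_lock_test_and_set(&tas_lock, 1))
        ;                       /* spin; no ordering among waiters */
}

static void tas_release(void)
{
    __sync_lock_release(&tas_lock);
}

Swapping one for the other is exactly the kind of quick experiment I am
describing.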

So what we have here is a tool - not necessarily something you want to use in
production, but a handy tool.  If you switch to unfair locks and things speed
up, you're lock-bound, and avoiding those locks will make things faster.  The
DTrace data is also a great tool; it shows the same thing, but without telling
you how large or small the gain would be, or what the next bottleneck will be.
