On Wed, 18 Mar 2009, Jignesh K. Shah wrote:
I thought about that. Except that without putting a restriction on it, a huge queue will cause a lot of time to be spent manipulating the lock list every time. Another option would be to maintain two lists, shared and exclusive, and round-robin between them each time the list is accessed, so the manipulation cost stays low. But the best thing is to allow the flexibility to change the algorithm, since some workloads may work fine with one and others will NOT. That flexibility lets those already reaching the limits tinker.

Yeah, having two separate queues is the obvious way of doing this. It would make most operations really trivial. Just wake everything in the shared queue at once, and you can throw it away wholesale and allocate a new queue. It avoids a whole lot of queue manipulation.
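
For what it's worth, here is a minimal userland sketch of that idea using POSIX threads. To be clear, this is not the actual lock manager code; the type and function names are made up for illustration. Parking shared and exclusive waiters on separate condition variables lets a release drain the whole shared side with a single pthread_cond_broadcast(), while exclusive waiters are still woken one at a time:

/*
 * Toy sketch of the two-queue idea; not the real lock manager, and
 * all names here are made up.  Shared waiters and exclusive waiters
 * sit on separate condition variables, so a release can wake the
 * entire shared side with one broadcast instead of walking a single
 * mixed wait queue.
 */
#include <pthread.h>
#include <stdio.h>

typedef struct
{
    pthread_mutex_t mutex;
    pthread_cond_t  shared_ok;      /* the "shared queue" */
    pthread_cond_t  exclusive_ok;   /* the "exclusive queue" */
    int             shared_holders;
    int             exclusive_held;
    int             exclusive_waiters;
} RWLock;

static RWLock lock = {
    PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_COND_INITIALIZER,
    PTHREAD_COND_INITIALIZER,
    0, 0, 0
};

static void
acquire_shared(RWLock *lk)
{
    pthread_mutex_lock(&lk->mutex);
    /* queue behind any exclusive holder or waiter, so writers don't starve */
    while (lk->exclusive_held || lk->exclusive_waiters > 0)
        pthread_cond_wait(&lk->shared_ok, &lk->mutex);
    lk->shared_holders++;
    pthread_mutex_unlock(&lk->mutex);
}

static void
acquire_exclusive(RWLock *lk)
{
    pthread_mutex_lock(&lk->mutex);
    lk->exclusive_waiters++;
    while (lk->exclusive_held || lk->shared_holders > 0)
        pthread_cond_wait(&lk->exclusive_ok, &lk->mutex);
    lk->exclusive_waiters--;
    lk->exclusive_held = 1;
    pthread_mutex_unlock(&lk->mutex);
}

static void
release(RWLock *lk)
{
    pthread_mutex_lock(&lk->mutex);
    if (lk->exclusive_held)
        lk->exclusive_held = 0;
    else
        lk->shared_holders--;

    if (lk->shared_holders == 0)
    {
        if (lk->exclusive_waiters > 0)
            pthread_cond_signal(&lk->exclusive_ok);    /* wake one writer */
        else
            pthread_cond_broadcast(&lk->shared_ok);    /* wake ALL readers at once */
    }
    pthread_mutex_unlock(&lk->mutex);
}

static void *
reader(void *arg)
{
    acquire_shared(&lock);
    printf("reader %ld in\n", (long) arg);
    release(&lock);
    return NULL;
}

static void *
writer(void *arg)
{
    (void) arg;
    acquire_exclusive(&lock);
    printf("writer in\n");
    release(&lock);     /* if readers are queued, this wakes them all in one go */
    return NULL;
}

int
main(void)
{
    pthread_t t[5];

    pthread_create(&t[0], NULL, writer, NULL);
    for (long i = 1; i < 5; i++)
        pthread_create(&t[i], NULL, reader, (void *) i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    return 0;
}

The same shape should carry over to a shared-memory wait queue: the shared list can be thrown away wholesale after one collective wakeup, and only the exclusive side needs per-entry handling. (Compiles with gcc -pthread.)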

Matthew

--
Software suppliers are trying to make their software packages more
'user-friendly'.... Their best approach, so far, has been to take all
the old brochures, and stamp the words, 'user-friendly' on the cover.
-- Bill Gates

