> From: Tom Lane [mailto:[EMAIL PROTECTED]] wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> >> From: Tom Lane [mailto:[EMAIL PROTECTED]] wrote:
> >> I've looked at this before and I think it's a nonstarter;
> >> increasing the size of a spinlock to 128 bytes is just not
> >> reasonable.
>
> > Well, the performance is unreasonably poor, so it's time to do
> > something, which, if it is unreasonable for the general case,
> > would need to be port-specific.
>

> But first let's see some evidence that this actually helps?

Yes, that's what we're planning. Currently we're just brainstorming
ideas for prototyping, rather than suggesting definitive things to go
into the codebase.

There are also some other suggestions coming from our Unisys friends:
additional Intel tweaks to avoid processor stalls and the like. Again,
I'll show the performance figures first.

> Well, it might be worth allocating a full 128 bytes just for the fixed
> LWLocks (BufMgrLock and friends) and skimping on the per-buffer locks,
> which should be seeing far less contention than the fixed
> locks anyway.

Yes, that seems a likely way forward. Right now we're going to pad the
whole structure, to save prototyping time and because we can afford
the memory on a test box. Working on this now.

My concern about the per-buffer locks is with the CLog and Subtrans
buffer pools. Because we have only 8 buffers for each of those, it
looks like all of their locks will fall within a single cache line.
That means fine-grained locking is ineffective for those caches. With
1000s of shared_buffers there's less problem with cache line
contention. Possibly Ken's suggestion of pseudo-randomising the
allocation of locks in LWLockAcquire would reduce the effect on those
smaller buffer pools.

Best Regards, Simon Riggs

