Manfred Spraul <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> It could be that I'm all wet and there is no relationship between the
>> cache line thrashing and the seemingly excessive BufMgrLock contention.
>
> Is it important? The fix is identical in both cases: per-bucket locks
> for the hash table and a buffer aging strategy that doesn't need on
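The per-bucket idea above can be sketched roughly as follows. This is illustrative C only, not PostgreSQL's actual buffer-table code; all names (`ht_insert`, `ht_lookup`, `NBUCKETS`) are made up. The point is that two backends touching different buckets never contend, unlike a single table-wide BufMgrLock.

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 256

typedef struct Entry {
    char key[64];
    int  value;
    struct Entry *next;
} Entry;

typedef struct {
    pthread_mutex_t lock;   /* per-bucket, not table-wide */
    Entry *head;
} Bucket;

static Bucket table[NBUCKETS];

static void ht_init(void)
{
    for (int i = 0; i < NBUCKETS; i++)
        pthread_mutex_init(&table[i].lock, NULL);
}

static unsigned hash(const char *key)
{
    unsigned h = 5381;
    while (*key)
        h = h * 33 + (unsigned char) *key++;
    return h % NBUCKETS;
}

static void ht_insert(const char *key, int value)
{
    Bucket *b = &table[hash(key)];
    Entry *e = malloc(sizeof(Entry));

    strncpy(e->key, key, sizeof(e->key) - 1);
    e->key[sizeof(e->key) - 1] = '\0';
    e->value = value;

    pthread_mutex_lock(&b->lock);   /* contention limited to one bucket */
    e->next = b->head;
    b->head = e;
    pthread_mutex_unlock(&b->lock);
}

static int ht_lookup(const char *key, int *value)
{
    Bucket *b = &table[hash(key)];
    int found = 0;

    pthread_mutex_lock(&b->lock);
    for (Entry *e = b->head; e; e = e->next)
        if (strcmp(e->key, key) == 0) {
            *value = e->value;
            found = 1;
            break;
        }
    pthread_mutex_unlock(&b->lock);
    return found;
}
```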
Manfred Spraul <[EMAIL PROTECTED]> writes:
> But: According to the descriptions the problem is a context switch
> storm. I don't see that cache line bouncing can cause a context switch
> storm. What causes the context switch storm?
As best I can tell, the CS storm arises because the backends get
Manfred,
> How complicated are Tom's test scripts? His immediate reply was that I
> should retest with Fedora, to rule out any gentoo bugs.
We've done some testing on other Linuxes. Linking in pthreads reduced CSes
by < 15%, which had no appreciable impact on real performance.
Gavin/Neil's ful
Mark Wong wrote:
I've heard that simply linking to the pthreads libraries, regardless of
whether you're using them or not, creates a significant overhead. Has
anyone tried it for kicks?
That depends on the OS and the functions that are used. The typical
worst case is buffered IO of single characters
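The single-character buffered-IO case can be made concrete: once a program is linked with pthreads, stdio's `getc()` takes the stream lock on every call, which the POSIX `_unlocked` variants avoid. This is a sketch of the two APIs, not a benchmark; the function names are my own.

```c
#include <stdio.h>

/* Count newlines one character at a time, letting stdio lock the
 * stream on every getc() call (the slow path once pthreads is linked). */
static long count_lines_locked(FILE *fp)
{
    long n = 0;
    int c;
    while ((c = getc(fp)) != EOF)
        if (c == '\n')
            n++;
    return n;
}

/* Same loop, but take the stream lock once for the whole pass and use
 * the lock-free per-character variant inside. */
static long count_lines_unlocked(FILE *fp)
{
    long n = 0;
    int c;
    flockfile(fp);
    while ((c = getc_unlocked(fp)) != EOF)
        if (c == '\n')
            n++;
    funlockfile(fp);
    return n;
}
```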
Mark Wong wrote:
Pretty simple. One to load the database, and one to query it. I'll
attach them.
I've tested it on my dual-cpu computer:
- it works, both cpus run within the postmaster. It seems something in
your gentoo setup is broken.
- the number of context switches is down slightly, but not s
Tom Lane wrote:
> Manfred Spraul <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> The bigger problem here is that the SMP locking bottlenecks we are
>>> currently seeing are *hardware* issues (AFAICT anyway). The only way
>>> that futexes can offer a performance win is if they have a smarter way
>>> of execu
Tom Lane wrote:
> Manfred Spraul <[EMAIL PROTECTED]> writes:
>> Has anyone tried to replace the whole lwlock implementation with
>> pthread_rwlock? At least for Linux with recent glibcs, pthread_rwlock is
>> implemented with futexes, i.e. we would get a fast lock handling without
>> os specific hacks.
Mark Wong wrote:
Here are some other details, per Manfred's request:
Linux 2.6.8.1 (on a gentoo distro)
How complicated are Tom's test scripts? His immediate reply was that I
should retest with Fedora, to rule out any gentoo bugs.
I have a dual-cpu system with RH FC, I could use it for testing
Josh Berkus wrote:
| Gaetano,
|
|
|>I proposed weeks ago to see how the CSStorm is affected by stick each
|>backend in one processor ( where the process was born ) using the
|>cpu-affinity capability ( kernel 2.6 ), is this proposal completely out of
|>
Gaetano Mendola <[EMAIL PROTECTED]> writes:
> I proposed weeks ago to see how the CSStorm is affected by stick each
> backend in one processor ( where the process was born ) using the
> cpu-affinity capability ( kernel 2.6 ), is this proposal completely
> out of mind ?

That was investigated long a
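For reference, the experiment Gaetano describes can be sketched with the Linux 2.6 affinity API. `pin_to_cpu` is a hypothetical helper, Linux-specific, with error handling trimmed; a backend would call it once at startup with the CPU it should stay on.

```c
#define _GNU_SOURCE   /* must precede the includes for cpu_set_t etc. */
#include <sched.h>

/* Restrict the calling process to a single CPU so the scheduler never
 * migrates it. pid 0 means "this process". Returns 0 on success. */
static int pin_to_cpu(int cpu)
{
    cpu_set_t mask;

    CPU_ZERO(&mask);
    CPU_SET(cpu, &mask);
    return sched_setaffinity(0, sizeof(mask), &mask);
}
```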
Josh Berkus wrote:
> Tom,
>
>
>>The bigger problem here is that the SMP locking bottlenecks we are
>>currently seeing are *hardware* issues (AFAICT anyway). The only way
>>that futexes can offer a performance win is if they have a smarter way
>>of executing the basic atomic-test-and-set sequence t
On Thu, Oct 21, 2004 at 07:45:53AM +0200, Manfred Spraul wrote:
> Mark Wong wrote:
>
> >Here are some other details, per Manfred's request:
> >
> >Linux 2.6.8.1 (on a gentoo distro)
> >
> >
> How complicated are Tom's test scripts? His immediate reply was that I
> should retest with Fedora, to
Forgive my naivete, but do futexes implement some priority algorithm for
deciding which process gets control? One of the problems, as I understand
it, is that Linux does (did?) not implement a priority algorithm, so it is
possible for the context which just gave up control to be the next
context woken up,
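For anyone unfamiliar with the primitive being discussed, a minimal sketch of the raw futex syscalls (Linux-only; the `futex_wait`/`futex_wake` wrappers are my own names). FUTEX_WAIT blocks only while the word still holds the expected value, and FUTEX_WAKE wakes up to n waiters; which woken waiter actually runs next is up to the kernel scheduler, which is exactly the fairness question raised above.

```c
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <errno.h>

/* Block until *addr changes, but only if it currently equals
 * `expected`; otherwise fail immediately with EAGAIN. */
static long futex_wait(int *addr, int expected)
{
    return syscall(SYS_futex, addr, FUTEX_WAIT, expected, NULL, NULL, 0);
}

/* Wake up to `nwake` processes waiting on addr; returns how many
 * were actually woken. */
static long futex_wake(int *addr, int nwake)
{
    return syscall(SYS_futex, addr, FUTEX_WAKE, nwake, NULL, NULL, 0);
}
```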
On Wed, Oct 20, 2004 at 07:39:13PM +0200, Manfred Spraul wrote:
>
> But: According to the descriptions the problem is a context switch
> storm. I don't see that cache line bouncing can cause a context switch
> storm. What causes the context switch storm? If it's the pg_usleep in
> s_lock, then
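The pg_usleep-in-s_lock pattern being blamed here can be sketched as a spin-then-sleep loop. This is a rough approximation using GCC atomic builtins, not PostgreSQL's real TAS macros or constants; the CS-storm mechanism is that every backend which sleeps, wakes, fails the TAS, and sleeps again burns a pair of context switches.

```c
#include <unistd.h>

#define SPINS_BEFORE_SLEEP 100   /* illustrative; not PostgreSQL's value */

typedef volatile int slock_t;

/* Atomic test-and-set: returns the previous value, so 0 means we
 * acquired the lock. */
static int tas(slock_t *lock)
{
    return __sync_lock_test_and_set(lock, 1);
}

static void s_lock_sketch(slock_t *lock)
{
    int spins = 0;

    while (tas(lock) != 0) {
        if (++spins >= SPINS_BEFORE_SLEEP) {
            usleep(1000);   /* the pg_usleep the post refers to */
            spins = 0;      /* waking only to fail again = CS churn */
        }
    }
}

static void s_unlock_sketch(slock_t *lock)
{
    __sync_lock_release(lock);
}
```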
Manfred Spraul <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> The bigger problem here is that the SMP locking bottlenecks we are
>> currently seeing are *hardware* issues (AFAICT anyway). The only way
>> that futexes can offer a performance win is if they have a smarter way
>> of executing the b
On Sun, Oct 17, 2004 at 09:39:33AM +0200, Manfred Spraul wrote:
> Neil wrote:
>
> >. In any case, the "futex patch"
> >uses the Linux 2.6 futex API to implement PostgreSQL spinlocks.
> >
> Has anyone tried to replace the whole lwlock implementation with
> pthread_rwlock? At least for Linux with
Josh Berkus <[EMAIL PROTECTED]> writes:
>> The bigger problem here is that the SMP locking bottlenecks we are
>> currently seeing are *hardware* issues (AFAICT anyway).
> Well, initial results from Gavin/Neil's patch seem to indicate that, while
> futexes do not cure the CSStorm bug, they do less
Tom,
> The bigger problem here is that the SMP locking bottlenecks we are
> currently seeing are *hardware* issues (AFAICT anyway). The only way
> that futexes can offer a performance win is if they have a smarter way
> of executing the basic atomic-test-and-set sequence than we do;
> and if so,
Manfred Spraul <[EMAIL PROTECTED]> writes:
> Has anyone tried to replace the whole lwlock implementation with
> pthread_rwlock? At least for Linux with recent glibcs, pthread_rwlock is
> implemented with futexes, i.e. we would get a fast lock handling without
> os specific hacks.
"At least for
Neil wrote:
. In any case, the "futex patch"
uses the Linux 2.6 futex API to implement PostgreSQL spinlocks.
Has anyone tried to replace the whole lwlock implementation with
pthread_rwlock? At least for Linux with recent glibcs, pthread_rwlock is
implemented with futexes, i.e. we would get a fa
On Thu, 2004-10-14 at 04:57, Mark Wong wrote:
> I have some DBT-3 (decision support) results using Gavin's original
> futex patch fix.
I sent an initial description of the futex patch to the mailing lists
last week, but it never appeared (from talking to Marc I believe it
exceeded the size limit
Hi guys,
I have some DBT-3 (decision support) results using Gavin's original
futex patch fix. It's on our 8-way Pentium III Xeon systems
in our STP environment. Definitely see some overall throughput
improvement on the tests, about a 15% increase, but no change with
respect to the number of context switches.