Hi,
> In the slow path of a spinlock_acquire they busy-wait for a few
> cycles, and then call schedule with a zero timeout, assuming that
> it'll basically do the same as a sched_yield() but more portably.
The obvious problem with this is that we bounce in and out of schedule()
a few times before [ ... ]
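For concreteness, a minimal sketch of the acquire path being described; the names (spinlock_acquire, test_and_set, SPINS_BEFORE_YIELD) are illustrative, not the actual PostgreSQL code:

#include <sys/time.h>
#include <sys/select.h>

#define SPINS_BEFORE_YIELD 100

extern int test_and_set(volatile int *lock);    /* arch-specific TAS, assumed */

void spinlock_acquire(volatile int *lock)
{
        int spins = 0;

        while (test_and_set(lock)) {
                if (++spins >= SPINS_BEFORE_YIELD) {
                        /* zero timeout: enters the kernel and runs
                         * schedule(), but never actually sleeps */
                        struct timeval tv = { 0, 0 };
                        select(0, NULL, NULL, NULL, &tv);
                        spins = 0;
                }
        }
}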
Hi,
> Thanks for looking into postgresql/pgbench-related locking. Yes,
> apparently postgresql uses a synchronization scheme that uses select()
> to effect delays for backing off while attempting to acquire a lock.
> However, it seems to me that runqueue lock contention was not entirely due [ ... ]
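What a select()-based backoff presumably looks like, as a sketch (the name backoff_delay and the exact delay schedule are my assumptions):

#include <sys/time.h>
#include <sys/select.h>

/* Sleep between lock attempts.  Unlike a zero timeout, a nonzero
 * delay really blocks, so the lock holder gets a chance to run. */
static void backoff_delay(int attempt)
{
        struct timeval tv;
        long usec = 1000L << (attempt < 10 ? attempt : 10);     /* cap growth */

        tv.tv_sec = usec / 1000000L;
        tv.tv_usec = usec % 1000000L;
        select(0, NULL, NULL, NULL, &tv);
}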
[EMAIL PROTECTED] said:
> On a uniprocessor system, a simple fallback is to just use a semaphore
> instead of a spinlock, since you can guarantee that there's no point
> in scheduling the current task until the holder of the "lock" releases
> it.
Yeah, that works. But I'm not all that interested [ ... ]
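For reference, a minimal sketch of that semaphore fallback, assuming a SysV semaphore initialized to 1 (the helper names are mine):

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* P: decrement, blocking in the kernel if the count is already 0.
 * On UP the waiter sleeps instead of burning its timeslice. */
void sem_lock(int semid)
{
        struct sembuf op = { 0, -1, 0 };
        semop(semid, &op, 1);
}

/* V: increment and wake one blocked waiter. */
void sem_unlock(int semid)
{
        struct sembuf op = { 0, 1, 0 };
        semop(semid, &op, 1);
}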
On Tue, Mar 06, 2001 at 10:12:17PM -0500, Jeff Dike wrote:
> [EMAIL PROTECTED] said:
> > If you're a UP system, it never makes sense to spin in userland, since
> > you'll just burn up a timeslice and prevent the lock holder from
> > running. I haven't looked, but assume that their code only uses
> > spinlocks on SMP. [ ... ]
[EMAIL PROTECTED] said:
> Here it is:
> http://oss.sgi.com/projects/postwait/
> Check out the download section for a 2.4.0 patch.
After having thought about this a bit more, I don't see why pw_post and
pw_wait can't be implemented in userspace as:
int pw_post(uid_t uid)
{
        /* plausible completion; the archive truncates at "return(" */
        return (kill((pid_t) uid, SIGUSR1));
}
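The matching pw_wait() is lost to the same truncation; a hypothetical companion under the same signal-based scheme (SIGUSR1 is my choice of signal, and it must be blocked beforehand for sigtimedwait() to be reliable):

#include <signal.h>
#include <time.h>

int pw_wait(const struct timespec *ts)
{
        sigset_t set;

        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        /* returns the signal number on a post, -1/EAGAIN on timeout */
        return (sigtimedwait(&set, NULL, ts));
}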
Jeff Dike wrote:
[ ... ]
>
> > Another synchronization method popular with database peeps is "post/
> > wait" for which SGI have a patch available for Linux. I understand
> > that this is relatively "light weight" and might be a better choice
> > for PG.
>
> URL?
[EMAIL PROTECTED] said:
> If you're a UP system, it never makes sense to spin in userland, since
> you'll just burn up a timeslice and prevent the lock holder from
> running. I haven't looked, but assume that their code only uses
> spinlocks on SMP. If you're an SMP system, then you shouldn't be using [ ... ]
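The build-time split the poster is assuming might look like this (purely illustrative; __SMP__ was the 2.4-era compile-time flag, and the helpers refer to the sketches earlier in the thread):

#ifdef __SMP__
#define ACQUIRE(l)      spinlock_acquire(l)     /* spinning can pay off on SMP */
#else
#define ACQUIRE(l)      sem_lock(l)             /* on UP, always block */
#endif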
On Tue, Mar 06, 2001 at 11:39:17PM +0000, Matthew Kirkwood wrote:
> On Tue, 6 Mar 2001, Jonathan Lahr wrote:
>
> [ sorry to reply over another reply, but I don't have
> the original of this ]
>
> > > Tridge and I tried out the postgresql benchmark you used here and this
> > > contention is due to a bug in postgres. [ ... ]
On Tue, 6 Mar 2001, Jonathan Lahr wrote:
[ sorry to reply over another reply, but I don't have
the original of this ]
> > Tridge and I tried out the postgresql benchmark you used here and this
> > contention is due to a bug in postgres. From a quick strace, we found
> > the threads do a load of select(0, NULL, NULL, NULL, {0,0}). [ ... ]
> Tridge and I tried out the postgresql benchmark you used here and this
> contention is due to a bug in postgres. From a quick strace, we found
> the threads do a load of select(0, NULL, NULL, NULL, {0,0}). Basically all
> threads are pounding on schedule().
...
> Our guess is that the app has s [ ... ]
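In C, the call strace kept showing is simply this (sketch; the wrapper name is mine):

#include <sys/time.h>
#include <sys/select.h>

/* A delay of zero: select() enters the kernel, runs schedule(), and
 * returns without ever sleeping, so every waiter keeps pounding the
 * runqueue instead of backing off. */
static void bogus_delay(void)
{
        struct timeval tv = { 0, 0 };   /* the {0,0} from the strace */
        select(0, NULL, NULL, NULL, &tv);
}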
Manfred Spraul [[EMAIL PROTECTED]] wrote:
>
> > lock contention work would be appreciated. I'm aware of timer scalability
> > work ongoing at people.redhat.com/mingo/scalable-timers, but is anyone
> > working on reducing sem_ids contention?
>
> Is that really a problem?
> The contention is high, [ ... ]
Hi,
> To discover possible locking limitations to scalability, I have collected
> locking statistics on 2-way, 4-way, and 8-way systems performing as networked
> database servers. I patched the [48]-way kernels with Kravetz's multiqueue
> patch in the hope that mitigating runqueue_lock contention [ ... ]
Jonathan Lahr wrote:
>
> To discover possible locking limitations to scalability, I have collected
> locking statistics on 2-way, 4-way, and 8-way systems performing as networked
> database servers. I patched the [48]-way kernels with Kravetz's multiqueue
> patch in the hope that mitigating runqueue_lock contention [ ... ]