Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Kris Kennaway
OK, David's patch fixes the umtx thundering herd (and seems to give a 4-6% boost). I also fixed a thundering herd in FILEDESC_UNLOCK (which was also waking up 2-7 CPUs at once about 30% of the time) by doing s/wakeup/wakeup_one/. This did not seem to have a measurable performance impact on this test, though.

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Matthew D. Fuller
On Sun, May 07, 2006 at 07:47:27PM +0100 I heard the voice of Robert Watson, and lo! it spake thus: > > In past discussion, I think a reasonable conclusion has been some > amount of both. I've still got sitting around in my bookmarks f

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Kris Kennaway
On Mon, May 08, 2006 at 07:22:06AM +0800, David Xu wrote: > On Monday 08 May 2006 07:04, Kris Kennaway wrote: > > > i.e. apparently not a large difference, but still a large proportion > > of cases where multiple CPUs are woken at once on the same chain. > > > > Kris > This is because there is no s

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread David Xu
On Monday 08 May 2006 07:04, Kris Kennaway wrote: > i.e. apparently not a large difference, but still a large proportion > of cases where multiple CPUs are woken at once on the same chain. > > Kris This is because there is no sleepable mutex available, so I had to use msleep and wakeup; this is suboptimal

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Kris Kennaway
On Sun, May 07, 2006 at 05:41:53PM -0400, Kris Kennaway wrote:
> static int
> kern_sigtimedwait(struct thread *td, sigset_t waitset, ksiginfo_t *ksi,
>     struct timespec *timeout)
> {
> ...
>     td->td_sigmask = savedmask;
>     SIGSETNAND(td->td_sigmask, waitset);
>     signoti

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Kris Kennaway
On Sun, May 07, 2006 at 05:04:26PM -0400, Kris Kennaway wrote:
> > 477 23472709 2810986 8 5671248 1900047  kern/kern_synch.c:220 (process lock)
> >
> > The top 10 heavily contended mutexes are very different (but note the
> > number of mutex acquisitions, column 3, is

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Kris Kennaway
On Sat, May 06, 2006 at 06:19:08PM -0400, Kris Kennaway wrote:
> x norwatson-8
> + rwatson-8
> [ministat dotplot comparing the norwatson-8 (x) and rwatson-8 (+) data sets; truncated in this preview]

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Kris Kennaway
On Sun, May 07, 2006 at 11:27:22PM +0300, Sven Petai wrote: > On Sunday 07 May 2006 22:16, you wrote: > > On Sun, May 07, 2006 at 10:00:41PM +0300, Sven Petai wrote: > > > The results in my mail were mean values over 2 runs, > > > only once did I see really huge (more than 10%) differences between

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Sven Petai
On Sunday 07 May 2006 22:16, you wrote: > On Sun, May 07, 2006 at 10:00:41PM +0300, Sven Petai wrote: > > The results in my mail were mean values over 2 runs, > > only once did I see really huge (more than 10%) differences between > > several subsequent runs with same settings, this case was clearl

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Robert Watson
On Sun, 7 May 2006, Kris Kennaway wrote:
> Typically, I do 12 runs of supersmack in each configuration, and discard
> the first 2 runs in which the cache and scheduler (etc) are still
> settling, as I'm interested in the steady state.
Yeah, forgot to mention that too. Also keeping in mind that my

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Kris Kennaway
On Sun, May 07, 2006 at 08:57:21PM +0100, Robert Watson wrote:
> > On Sun, May 07, 2006 at 10:00:41PM +0300, Sven Petai wrote:
> >
> > > The results in my mail were mean values over 2 runs,
> > > only once did I see really huge (more than 10%) differences between several
> > > subsequent runs with

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Robert Watson
On Sun, May 07, 2006 at 10:00:41PM +0300, Sven Petai wrote:
> The results in my mail were mean values over 2 runs, only once did I see
> really huge (more than 10%) differences between several subsequent runs
> with same settings, this case was clearly mentioned in the results.
FYI, 2 is not real

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Kris Kennaway
On Sun, May 07, 2006 at 10:00:41PM +0300, Sven Petai wrote: > The results in my mail were mean values over 2 runs, > only once did I see really huge (more than 10%) differences between several > subsequent runs with same settings, this case was clearly mentioned in the > results. FYI, 2 is not

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Jason Evans
Kris Kennaway wrote: Also, I see a slow but statistically significant deterioration in performance over time. Maybe mysql's memory is getting fragmented or something. I have tools to qualitatively assess fragmentation. I can help with this, should you be interested in looking into it. Jas

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Sven Petai
> Anyhow, what I'm getting at is this: make sure when you measure MySQL > performance, you do a series of runs, discard the first few, and then take > an average of the remainder, and watch out for outliers. The results in my mail were mean values over 2 runs, only once did I see really huge (more

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Robert Watson
On Sun, 7 May 2006, Mike Jakubik wrote: The difference in performance is just ridiculous. Is mysql written to be slow on freebsd or is there a problem with freebsd? In past discussion, I think a reasonable conclusion has been some amount of both. We've identified a few particular areas wher

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Mike Jakubik
Sven Petai wrote:
> scheduler: ULE
> thr_lib  socket  nice  queries  threads  update/select
> thr      unix    0     1        100      477913724
> thr      unix    0     10       10       647325172
> thr      unix    -10   1        100      496920662
> thr      unix    -10   10       10       6418

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Kris Kennaway
On Sun, May 07, 2006 at 07:16:34PM +0100, Robert Watson wrote: > > On Sun, 7 May 2006, Sven Petai wrote: > > >I performed tests on a 4 * dualcore 2Ghz opteron system (so 8 cores in > >total). > > > >In general with 10 parallel smacker threads the performance seems to go up > >with your patch by

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Robert Watson
On Sun, 7 May 2006, Sven Petai wrote: I performed tests on a 4 * dualcore 2Ghz opteron system (so 8 cores in total). In general with 10 parallel smacker threads the performance seems to go up with your patch by ~44%, and with 100 parallel threads it goes down ~25%. This is an interesting eff

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread Sven Petai
hi

I performed tests on a 4 * dualcore 2Ghz opteron system (so 8 cores in total). In general with 10 parallel smacker threads the performance seems to go up with your patch by ~44%, and with 100 parallel threads it goes down ~25%. Here's a graph of select smack performance with your patch: http://

Re: Fine-grained locking for POSIX local sockets (UNIX domain sockets)

2006-05-07 Thread David Xu
On Saturday 06 May 2006 22:16, Robert Watson wrote: > Dear all, > > Attached, please find a patch implementing more fine-grained locking for > the POSIX local socket subsystem (UNIX domain socket subsystem). In the > current implementation, we use a single global subsystem lock to protect > all lo