On Fri, Jun 11, 2010 at 02:26:50PM -0700, J. J. Farrell wrote:
> Can anyone point me at any measurements and/or analysis of the cost
> of moving back and forth between user and kernel space - when doing
> an ioctl call into a driver, for example. Interested in current
> OpenSolaris on x86-64 in particular.
>
> I'm looking at the additional costs of sharing work between modules
> in user and kernel space, trying to get some quantitative feel for
> how much it costs to hop back and forth. It's obvious we want to
> minimise the number of transitions, but I'd like some understanding
> of the numbers to see how much effort it's worth putting into
> minimising them.
>
> I'm also interested in other context switch costs - thread sleep and
> wakeup principally. A much more complex area to analyse, but any
> pointers to useful write-ups or measurements would be welcome.
no offence intended, but unless you have some performance data indicating that your application is spending too much time context switching, it seems to me like you're over-optimizing.

that said, i don't know of any generic syscall-overhead write-ups lying around, but you could always measure this yourself using something like libmicro:

    http://hub.opensolaris.org/bin/view/Project+libmicro/

just try benchmarking a super simple system call like getpid().

you should be aware that different x86 machines use different system call mechanisms. if you look in /usr/lib/libc/ you'll see three different versions of libc, all of which use different syscall mechanisms. the default mechanism for system calls is chosen at boot time by lofs-mounting one of those copies of libc onto /lib/libc.so.1. so depending on which version of libc you're using, you'll get different performance numbers. (and not all syscall mechanisms are supported by all x86 processors.)

ed

_______________________________________________
driver-discuss mailing list
driver-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/driver-discuss