On Friday, February 18, 2011 04:15:10 pm you wrote:
> Benchmark utilities to measure the overhead of syscalls. It's cheating
> to do for getpid, but for other things like gettimeofday, it's
> *extremely* nice. Linux's gettimeofday(2) beats the socks off of the
> rest of the time implementations. About the only faster thing is to
> get CPU speed and use rdtsc. Certainly no other OS allows you to get
> the timestamp faster with a syscall.

Would you mind explaining what technique Linux uses to speed up 
gettimeofday()? I'd guess it's not per-process caching... and if it's not, 
then it involves two context switches; not the fastest thing in my book.
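
For reference, a rough sketch of how one might measure the difference 
between the libc path and a forced kernel entry (assuming Linux with 
glibc; the iteration count and the SYS_gettimeofday comparison are my 
own choices for illustration, not anyone's actual benchmark utility):

/* Sketch: time the libc gettimeofday() path against a forced
 * syscall. syscall(SYS_gettimeofday, ...) always enters the kernel,
 * so any gap between the two numbers is whatever the libc path
 * manages to avoid. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/time.h>
#include <sys/syscall.h>
#include <unistd.h>

#define ITERS 1000000L

int main(void)
{
	struct timeval tv, start, end;
	long i;

	gettimeofday(&start, NULL);
	for (i = 0; i < ITERS; i++)
		gettimeofday(&tv, NULL);	/* libc path */
	gettimeofday(&end, NULL);
	printf("libc gettimeofday: %ld us\n",
	       (end.tv_sec - start.tv_sec) * 1000000L +
	       (end.tv_usec - start.tv_usec));

	gettimeofday(&start, NULL);
	for (i = 0; i < ITERS; i++)
		syscall(SYS_gettimeofday, &tv, NULL);	/* forced kernel entry */
	gettimeofday(&end, NULL);
	printf("raw syscall:       %ld us\n",
	       (end.tv_sec - start.tv_sec) * 1000000L +
	       (end.tv_usec - start.tv_usec));
	return 0;
}

If the first number comes out substantially lower, whatever Linux is 
doing clearly avoids the kernel entry on that path.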



As for performance in general, some speculative fiction: 
in general, drivers are kept in the kernel for two reasons -- to protect 
resources from processes going rogue, and to provide a common, infrequently 
changing API to diverse hardware. The latter reason is largely 
security-insensitive and serves to aid cross-platform development.

In principle, the read-only parts of some drivers could be embedded in 
processes (things like the system timer, rather than the hard drive). Is 
there any OS out there that actually lets processes embed the read-only 
parts of drivers to avoid the context switches of going through the kernel?
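
To make the idea concrete, a toy sketch (not any existing OS facility): 
a "driver" process publishes a counter into a POSIX shared-memory object, 
and a client maps it read-only, so every read after the setup is a plain 
memory load with no context switch. The name "/ticker" and the single-word 
layout are made up for the sketch, and a producer process that keeps 
updating the page is assumed:

/* Client side only; assumes a producer created "/ticker" and keeps
 * writing an unsigned long tick count at offset 0. On older glibc,
 * link with -lrt for shm_open. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* The setup needs syscalls; the reads afterwards do not. */
	int fd = shm_open("/ticker", O_RDONLY, 0);
	if (fd < 0) { perror("shm_open"); return 1; }

	volatile unsigned long *tick =
		mmap(NULL, sizeof *tick, PROT_READ, MAP_SHARED, fd, 0);
	if (tick == MAP_FAILED) { perror("mmap"); return 1; }
	close(fd);

	for (int i = 0; i < 5; i++) {
		printf("tick = %lu\n", *tick);	/* plain load, no kernel entry */
		sleep(1);
	}
	return 0;
}

A real version would need a consistency protocol (say, a sequence counter 
checked before and after each read) once the published state is more than 
a single word.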

The closest thing I can think of is Google's Native Client, which lets 
untrusted code execute (within a trusted `host process') with constrained, 
read/execute-only access to trusted code, so it can perform hand-picked 
syscalls and communicate with the host process.

Perhaps a *constrained* read-write driver for hard drive (and filesystem) 
access could also be held rx-only in the virtual memory of untrusted code...

-- 
dexen deVries

[[[↓][→]]]

> how does a C compiler get to be that big? what is all that code doing?

iterators, string objects, and a full set of C macros that ensure
boundary conditions and improve interfaces.

ron minnich, in response to Charles Forsyth

http://9fans.net/archive/2011/02/90
