> There is something to be said for microkernels
> they allow for a lot less headaches with kernel
> development, and it could drastically improve plan 9's
> portability, a key feature.

term% pwd
/usr/forsyth/src/xen3/xen-unstable/xen/arch/x86
term% wc -l mm.c shadow32.c shadow_guest32.c 
   3519 mm.c
   3397 shadow32.c
     18 shadow_guest32.c
   6934 total
# and that could be the tip of an iceberg, since that's x86-only

# now let's look at a couple of non-micro/hyper kernels
term% cd /sys/src/9/pc
term% wc -l mmu.c
   1043 mmu.c

term% cd /usr/inferno/os/pc
term% wc -l mmu.c
    321 mmu.c

now if you're using the first implementation above, you still also need
something like the second or third as well (but a little different).

that's a hypervisor (but one that is claimed in a paper to be `microkernels
done right').
its code is much bigger because it actually does much more than 9's or
Inferno's.
i'm fairly sure that, last time i looked (which to be fair was years ago),
mach had quite a bit of complex mmu code too.

which is likely to give you more headaches, and how strong?
perhaps we should have an ibuprofen rating for kernels?

portability? i have done kernels (including small micro-ish ones) myself,
and i have worked with other systems extensively over the years.
in my experience, some of the interfaces the micro/hypers present
are HARDER to drive than the underlying hardware, possibly more
frustrating, not as well documented, and they keep changing.  and of course
there's more code in the end.  sometimes much more.
it wouldn't be so bad if people hadn't forgotten an important lesson
from Dijkstra's THE: the idea is for each layer to provide increasingly
higher levels of abstraction, the better to reason about.  of course,
in several cases, the newer systems are the way they are to make porting
Linux easier (well, that's my impression), presumably on the grounds that
its interfaces are all over the place, x86-oriented, and hard to change.

then there are the interfaces for device drivers...

not that i'm bitter.

the way to get good portability is to have clear, well-designed interfaces
that abstract away from hardware peculiarities, and map those to the
hardware (rather than, say, reflecting in the interfaces the union of every
peculiarity of all hardware known to you at the time).

some of the hard bits about kernel development are:
- getting accurate documentation for the processor, devices, existing
  bootstrap, etc.
- getting anything loaded into the wretched machine at all
- deciding how your kernel should look, what it should do, how it should change
- working out a good infrastructure for networks, devices coming and going,
  power, etc.
- finding time and/or money to do any of it

now, while it can be really tedious when you get yourself into the state
where the hardware resets without notice (and worse, takes quite some time
to get to that state, or requires ... something ... but what is it???),
it often isn't something that would be fixed by a micro-kernel, but rather
by better hardware, documentation, more careful coding, fewer interruptions,
more time to think, more energy, and of course more intelligence.
lacking any or all of these, it's still usually easier to debug a component
of a smaller system with a straightforward model overall.  that might be
true of some micro-kernels, but not all, and it isn't limited to them;
a more `conventional' kernel can be quite acceptably modular.
