> 
As to "who said what": it doesn't matter.

> 
> > > Overmodularisation is, IMO, pointless and doesn't gain you anything.
> > > The only thing that's modular in my kernel is the ppp compression
> > > and that's only because I can't compile it in. Everything from IDE,
> > > SCSI, my filesystems and even sound is compiled in and works great
> > > and with no mess.
> > 
> > True, monolithic kernels are simpler, but there is speed to be gained by
> > modularization.  If a module is not needed, it is unloaded, which means that
> > the kernel is smaller.  Smaller means faster.  You might not notice any
> > speed difference on anything faster than, say, 200 MHz, but it's there.  If
> > we were all still running 25 MHz 386s, nobody would still use a monolith.
> 
> Erm... where does the speed advantage come from? How does making the
> kernel smaller speed it up?

The speed advantage comes from memory usage, which is the "_biggest and best
advantage_" of kernel modules: the smaller the kernel, the more memory below
640K gets freed at boot time.
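
A minimal sketch of what that looks like in practice (the module name
"sound" is just an example here, assuming the driver was built as a module):

  lsmod           # list the modules currently loaded and their sizes
  rmmod sound     # unload a module that isn't needed, freeing its memory
  modprobe sound  # load it again later, only when it is actually needed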

However, no one has mentioned another very important point about making a
large kernel: if one builds a large kernel as a bzImage, the following
problem comes up when lilo is rerun (remember the thread a while ago):

lilo "kernel Too Large".
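
For context, the error appears at the step where the boot map is rebuilt
after installing the new kernel (the paths below are just the usual 2.2-era
defaults, not anything specific to that thread):

  cp arch/i386/boot/bzImage /boot/vmlinuz
  /sbin/lilo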

On another note, using Red Hat 6.0 as my example, the supplied 2.2.5-15
kernel is very small in itself, and it ships with just about every kernel
module one could want, except for radio hams, but that's another story.

> 
> -- 
> CaT ([EMAIL PROTECTED])                       URL: http://www.zip.com.au/dev/null
> 


-- 
Regards, Richard.
[EMAIL PROTECTED]
