Re: Modularized vs Monolithic kernel
The story goes that Andrew Tanenbaum (Comp Sci professor and creator of MINIX, which few can dispute was an inspiration for Linux) criticized Linux as out of date, being monolithic. See http://www.oreilly.com/catalog/opensources/book/appa.html Enjoyable reading!

Most older operating systems are monolithic, that is, the whole operating system is a single a.out file that runs in 'kernel mode.' This binary contains the process management, memory management, file system and the rest. Examples of such systems are UNIX, MS-DOS, VMS, MVS, OS/360, MULTICS, and many more.

I can't comment on his other qualifications, but I find 'monolithic' to be a very poor characterisation of MVS at all levels. If you take Supervisor State to be equivalent to kernel mode then he's way off base - very little of MVS runs that way. Key 0 may be a better approximation, but even that isn't so much. People have been mixing'n'matching bits of MVS releases, VSAM, DFSMS, JES for years. True, you don't have quite the same flexibility that you inevitably get with an open source system, but nevertheless ...

-- Phil Payne http://www.isham-research.com +44 7785 302 803 +49 173 6242039
Re: Modularized vs Monolithic kernel
Phil Payne wrote: The story goes that Andrew Tanenbaum (Comp Sci professor and creator of MINIX, which few can dispute was an inspiration for Linux) criticized Linux as out of date, being monolithic. See http://www.oreilly.com/catalog/opensources/book/appa.html Enjoyable reading! Most older operating systems are monolithic, that is, the whole operating system is a single a.out file that runs in 'kernel mode.' This binary contains the process management, memory management, file system and the rest. Examples of such systems are UNIX, MS-DOS, VMS, MVS, OS/360, MULTICS, and many more. I can't comment on his other qualifications, but I find 'monolithic' to be a very poor characterisation of MVS at all levels. If you take Supervisor State to be equivalent to kernel mode then he's way off base - very little of MVS runs that way. Key 0 may be a better approximation, but even that isn't so much. People have been mixing'n'matching bits of MVS releases, VSAM, DFSMS, JES for years. True, you don't have quite the same flexibility that you inevitably get with an open source system, but nevertheless ...

My understanding of microkernel architectures may be a little wimpy; my experience was through fighting through QNX. The following is a barely-informed opinion and is in no way to be considered authoritative:

In a microkernel architecture, the only *true* kernel item is the scheduler. Everything else is a module that runs as if it's a user process; your device drivers are, in effect, userland processes (albeit potentially multithreaded). Each component communicates with other components via messages rather than the syscall (SVC etc.) mechanism. One advantage (AFAIK) is that it's easier to identify and isolate the layers of the servicing system. The downside (I learned this almost 10 years ago) is that if it's proprietary *and* a microkernel, and a component doesn't work right, there's no way around it.
Customers tend to build systems that aren't easily predicted by the vendor, so the interactions may get pathological.

So part of it is a matter of how fine-grained the scheduler's work is and how far apart each component of the system is. It'd be like having VM run one VM instance providing disk services (or an instance per controller, or per drive), another set of instances to handle the tape system, more instances to deal with communications (one instance per layer, say), and they'd ALL communicate via either VCTC or IUCV. The only job for the hypervisor is to manage scheduling and memory management. No hypervisor services, just message passing. So the scheduler has to be pretty fscking quick considering how many context switches are needed to get any kind of work done.

So MVS *is* monolithic; the fragments of OS services are not fine-grained enough despite the modularity of the system. Linux is also, as is *BSD. GNU's HURD is based upon the Mach microkernel. AIX is modular but monolithic.

-- John R. Campbell Speaker to Machines [EMAIL PROTECTED]
 - As a SysAdmin, yes, I CAN read your e-mail, but I DON'T get that bored!
   It is impossible for ANY man to learn about impotence the hard way. - me
   ZIF is not a desirable trait when selecting a spouse. - me
Modularized vs Monolithic kernel
I was reading an article (http://www.openna.com/documentations/articles/kernel/) that discussed the differences between modularized and monolithic Linux kernels, which got me wondering what the pros and cons are when it comes to an S/390 or zSeries box. Anyone have any thoughts? Thanks Dave

David Froberg Phone: 202-312-9807 Email: [EMAIL PROTECTED]
Re: Modularized vs Monolithic kernel
On Wed, 2002-12-11 at 15:49, Froberg, David C wrote: the pros and cons [of modules vs. statically linked kernel code] when it comes to an S/390 or zSeries box.

Most S/390 shops are serious about uptime, and insmod is a heckuva lot less disruptive than rebuilding the kernel. I believe there are license issues as well - that you cannot link non-GPL code into the kernel. Some of the S/390 drivers are OCO (object code only).

-- David Andrews A. Duda and Sons, Inc. [EMAIL PROTECTED]
Re: Modularized vs Monolithic kernel
On Wed, 11 Dec 2002, Froberg, David C wrote: I was reading an article (http://www.openna.com/documentations/articles/kernel/) that discussed the differences between modularized and monolithic Linux kernels which got me wondering what were the pros and cons when it comes to a S/390 or zSeries box. Anyone have any thoughts?

In theory, if you're building a kernel for lots of disparate hardware, use modules and load what you need. This is what Red Hat does. If you're building a kernel for a specific machine (or for lots of identical ones), then you don't need modules. That's what I used to do.

The second approach has the disadvantage that when you add new (different) hardware you need to build a new kernel. Ditto when there's an upgrade because of a fixed security problem you care about. I also wonder about vendor-supplied initialisation scripts; in some cases they expect you to be using the vendor-supplied kernel.

These days, when I build a kernel I make it like the vendor kernel in all relevant areas: I use modules where my vendor uses modules, and I include support for all the stuff _I_ might use.

-- Cheers John. Join the Linux Support by Small Businesses list at http://mail.computerdatasafe.com.au/mailman/listinfo/lssb
Re: Modularized vs Monolithic kernel
On Wed, 2002-12-11 at 21:59, Rick Troth wrote: Given the loadable module support in Linux, one could almost call it modular. (I can hear Alan Cox now!) Perhaps it will evolve into more of what the microkernel purists would demand. I hope so! Even now, it is a far cry from the truly monolithic thing it once was.

Modular - good engineering
Microkernel - strange religion

Not that there aren't some *very* good uses for a microkernel done right. QNX is a fine example, as is AmigaOS. Microkernel cores are also a very good way to do OS partitioning on top of a mathematically verifiable security layer.

Mach is not a microkernel either - it's *huge*. Something like L4 is.
Re: Modularized vs Monolithic kernel
Rick Troth wrote: The story goes that Andrew Tanenbaum (Comp Sci professor and creator of MINIX, which few can dispute was an inspiration for Linux) criticized Linux as out of date, being monolithic.

The subject line of the Usenet message on comp.os.minix in which he responded to the appearance of Linux read "LINUX is obsolete". Obviously a balanced and moderate observation, which has meanwhile been confirmed by history. ;-)

The Linux crowd, of course, was so delighted to have a kernel that WORKED and that was UNCONSTRAINED (MINIX is not GPL)

Actually, GPL wasn't the issue. The issue was that MINIX had a license that, although fairly open and permissive for its time, did not allow redistribution. So management of the various third-party changes that Andy wouldn't integrate into the main product - because they didn't help the primary function he developed MINIX for (teaching) - became a royal pain, with all sorts of patch sets that one needed to apply to the base source that one bought from Prentice Hall. Some years ago, Andy finally managed to get P-H to re-license the whole thing under a plain, simple BSD-style license. Had he done that ten years earlier, things might have gone differently.

that they did not let this deter them. (HURD was unheard of and Mach remains mockingly daunting.)

Actually, HURD was not unheard of; it had just been in a mythical state for some years, and Linus made explicit reference to its development status in the discussion (I think he even mentioned that the Mach microkernel alone, not counting the HURD or BSD Unix servers, was already way larger than the entire (large, monolithic) Linux kernel was at the time...). The discussion between Andy and Linus is famous and has been retained in the archives. Andy felt very strongly about the micro-kernel approach, and Linus felt very strongly that it might be a theoretically nicer design, but with existing technology was not practically feasible (yet).
-- Willem Konynenberg [EMAIL PROTECTED] Konynenberg Software Engineering
Re: Modularized vs Monolithic kernel
Mach is not a microkernel either - it's *huge*. Something like L4 is.

Depends on what you consider to be Mach. The core system services that make up the Mach microkernel ARE tiny - less than 10 Kloc on the VAX. They're just not very useful in that form: a barebones Mach microkernel can't even drive a terminal.

The Mach that most people deal with (i.e. either the NeXT version or the version that DARPA paid for to get an AT&T-free Unix implementation) is the microkernel plus a humongous 4.3BSD personality module. *THAT* is huge. There are several other personalities - there was an AIX-like one, Convex did one, NeXTstep did some distributed memory extensions, etc. - even a VMS-like personality. Compared to the VMS personality module, the 4.3BSD personality is microscopic...8-)

-- db
Re: Modularized vs Monolithic kernel
Rick Troth wrote: The story goes that Andrew Tanenbaum (Comp Sci professor and creator of MINIX, which few can dispute was an inspiration for Linux) criticized Linux as out of date, being monolithic. The subject line of the Usenet message on comp.os.minix in which he responded to the appearance of Linux read "LINUX is obsolete". Obviously a balanced and moderate observation, which has meanwhile been confirmed by history. ;-)

Then again, when you look at Amoeba (Tanenbaum's next bit of cool gadgetry), he may have had a point. If you've never looked at Amoeba, check it out. Yet more proof that Andy Tanenbaum is One Seriously Smart Dude. Totally distributed environment: distributed memory, single system image, distributed I/O - his test environment was 300 nodes in 3 different *countries*, all presenting a single system image to the programmer. You literally *didn't* know there were multiple systems involved. IMHO (and probably rank heresy here), Amoeba is way cooler than Linux. But Amoeba is still an academic toy SO FAR, and Linux isn't. C'est la vie.

Andy felt very strongly about the micro-kernel approach, and Linus felt very strongly that that might be a theoretically nicer design, but with existing technology not practically feasible (yet).

One of the major reasons for the development of Amoeba.
Re: Modularized vs Monolithic kernel
At 15:59 12/11/2002 -0600, Rick Troth wrote: The story goes that Andrew Tanenbaum (Comp Sci professor and creator of MINIX, which few can dispute was an inspiration for Linux) criticized Linux as out of date, being monolithic.

The O'Reilly Open Sources book published most of the exchange in an appendix. It's online at http://www.oreilly.com/catalog/opensources/book/appa.html for those who haven't seen it before (like, perhaps, ten years ago :-) )

Ross Patterson