> There is an existing standard definition for re-entrant code, and
> multiple tools and frameworks support it.
>
> https://en.wikipedia.org/wiki/Reentrancy_(computing)

My terminology is compliant with that definition.  A couple of quotes from what 
you provided:

"A computer program or subroutine is called reentrant if multiple invocations 
can safely run concurrently on multiple processors...", and "Often, subroutines 
accessible via the operating system kernel are not reentrant..."
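
To make the definition concrete, here is a minimal C sketch (the names are
invented for illustration).  The first routine keeps its result in a static
buffer, so two invocations running at the same time would clobber each other;
the second keeps everything in the caller's buffer and on the stack, so any
number of invocations can safely run concurrently:

    #include <stdio.h>

    /* Not reentrant: every caller shares the same static buffer. */
    static char shared_buf[64];

    char *format_name_unsafe(const char *first, const char *last)
    {
        snprintf(shared_buf, sizeof shared_buf, "%s %s", first, last);
        return shared_buf;    /* concurrent callers overwrite each other */
    }

    /* Reentrant: all state lives in the caller's buffer and the stack. */
    char *format_name_safe(char *out, size_t outsz,
                           const char *first, const char *last)
    {
        snprintf(out, outsz, "%s %s", first, last);
        return out;
    }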

> If you want something else, fine, but you don't get to unilaterally
> redefine existing words. Linux is already a reentrant kernel: the
> kernel's code is reentrant and the kernel can preempt itself.

I haven't redefined anything.  Individual subroutines may be re-entrant, but 
the kernel as a whole is not.  E.g., you can't have the _same_ copy of the 
kernel running on multiple CPUs concurrently (which is what the definition 
describes).  Preempting a single running instance of something is not the same 
thing as running multiple copies concurrently.  The kernel can divide what it's 
trying to accomplish among the resources of multiple CPUs, but it doesn't work 
the other way -- the CPUs don't (currently) divide the resources of the OS 
kernel among each other.

As long as the code (and even the data) is static and not self-modifying, you 
can have the exact _same_ copy of the code running on different cores/CPUs 
simultaneously without ever needing to worry about synchronization, preemption, 
or re-entrancy.  You do need to worry about multiple CPUs trying to read the 
same portion of RAM (where the code is stored) at the same time, but that is a 
memory access issue, not a re-entrancy issue.
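
As a sketch of what I mean (just an illustration, not taken from any real
kernel): if a routine only reads shared data and keeps all of its working
state on its own stack, every core can execute that same routine at the same
instant with no locks, no preemption concerns, and no re-entrancy concerns:

    /* Shared data is read-only; all working state is per-invocation. */
    static const int table[4] = { 1, 2, 4, 8 };

    int scale(int x, unsigned idx)
    {
        int local = x;                  /* lives on each CPU's own stack */
        return local * table[idx & 3];  /* shared memory is only ever read */
    }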

Here's another way of thinking about it.  One of the primary things the OS does 
is resource management (memory, I/O, CPU, time, etc.).  What if instead of one 
centralized, monolithic resource manager (the OS kernel) the resource 
management was distributed among the resources themselves?  E.g., what if the 
memory, at least in a sense, "managed itself" and each running OS could request 
memory resources from the (external) "memory manager" when it needed them?
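
Just to sketch the idea (everything here is hypothetical, invented names and
all): instead of each kernel owning a global allocator, each running OS would
call out to the external memory manager for regions and hand them back when
it's done with them:

    #include <stddef.h>

    /* Hypothetical interface exported by an external "memory manager". */
    typedef int os_id_t;     /* identifies which running OS is asking */

    struct mem_manager {
        void *(*request_region)(os_id_t who, size_t bytes);
        void  (*release_region)(os_id_t who, void *region);
    };

    /* A kernel's allocator becomes little more than a pass-through. */
    void *kernel_alloc(struct mem_manager *mm, os_id_t self, size_t bytes)
    {
        return mm->request_region(self, bytes);
    }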

Here's another thing that has happened over the years: the concentration has 
been on making individual CPUs faster.  With the 486s (and maybe the 386s?) you 
could have multiple CPUs in the same computer, and now we're up to multiple 
cores on the same chip die, but the focus has always been on making an 
individual CPU faster.  The problem is, the CPUs _themselves_ really haven't 
gotten a whole lot faster than they were in the 386 days.  Designers have 
"played games" with things like pipelining and caching to make the CPUs _seem_ 
faster than they really are.  But if you disable the caches of a modern CPU, 
its performance is really very lackluster and you would not be happy with it 
at all.
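
You can get a feel for how much of today's speed comes from the cache rather
than the raw core with a simple experiment (a rough sketch; the exact numbers
depend on the machine and compiler).  Both runs below read exactly the same
64 MB of data, but the strided version defeats the cache's spatial locality
and typically comes out several times slower, even though the core does the
same arithmetic:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N ((size_t)1 << 24)         /* 16M ints, bigger than the caches */

    static long sum_stride(const int *a, size_t n, size_t stride)
    {
        long s = 0;
        for (size_t start = 0; start < stride; start++)
            for (size_t i = start; i < n; i += stride)
                s += a[i];              /* same elements, different order */
        return s;
    }

    int main(void)
    {
        int *a = malloc(N * sizeof *a);
        if (!a) return 1;
        for (size_t i = 0; i < N; i++) a[i] = 1;

        size_t strides[] = { 1, 16 };   /* sequential vs. one cache line apart */
        for (int k = 0; k < 2; k++) {
            clock_t t0 = clock();
            long s = sum_stride(a, N, strides[k]);
            clock_t t1 = clock();
            printf("stride %2zu: sum=%ld  %.3f s\n", strides[k], s,
                   (double)(t1 - t0) / CLOCKS_PER_SEC);
        }
        free(a);
        return 0;
    }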

We've just about reached the physical limits of what we can do with raw CPU 
clock speed.  That's why we've started going with multiple cores/CPUs, 
rewriting code to take advantage of them, and hoping for some "step-function" 
leap in technology like quantum computing.  What if we took a step back and 
started putting hundreds of older-technology (386-class) CPUs on the same chip 
die, running at around 5 GHz (I think something like that might be possible)?  
How would the computing world be different if we had started doing that a long 
time ago -- distributing the load (including resource management) instead of 
consolidating it?
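
Here's a rough software analogy of what "distributing the load" across many
simple cores looks like: split the work into independent slices, let each core
chew on its own slice, and combine the results at the end.  This is just a toy
sketch using POSIX threads; the names and the worker count are invented for
illustration:

    #include <pthread.h>
    #include <stdio.h>

    #define NWORKERS 8                  /* stand-in for "hundreds" of cores */
    #define N        (1 << 20)

    static double data[N];

    struct slice { size_t lo, hi; double partial; };

    static void *worker(void *arg)
    {
        struct slice *s = arg;
        double acc = 0.0;
        for (size_t i = s->lo; i < s->hi; i++)
            acc += data[i];
        s->partial = acc;               /* each worker touches only its slice */
        return NULL;
    }

    int main(void)
    {
        for (size_t i = 0; i < N; i++) data[i] = 1.0;

        pthread_t tid[NWORKERS];
        struct slice sl[NWORKERS];
        size_t chunk = N / NWORKERS;

        for (int w = 0; w < NWORKERS; w++) {
            sl[w].lo = (size_t)w * chunk;
            sl[w].hi = (w == NWORKERS - 1) ? (size_t)N : (size_t)(w + 1) * chunk;
            pthread_create(&tid[w], NULL, worker, &sl[w]);
        }

        double total = 0.0;
        for (int w = 0; w < NWORKERS; w++) {
            pthread_join(tid[w], NULL);
            total += sl[w].partial;
        }
        printf("total = %.0f\n", total);
        return 0;
    }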

> Windows is too, AIUI, and Solaris, AIX, HP-UX, macOS, QNX, and most
> mainstream OS kernels.

No, you're conflating preemption with re-entrancy, and individual subroutines 
with the kernel as a whole.

> Your description is not clear to me, but it sounds similar to
> several existing technologies:

I know it's not -- it's probably not clear to most people, since it's a 
completely different way of thinking than what they're used to.  And again, I'm 
not saying that it's a "good" idea or that it should even be implemented -- but 
it can at least give a different perspective on how things _could_ be done.

> One kernel loading another:
> https://sourceforge.net/projects/monte/
>
> Linux kernel as a user space process under another kernel:
> https://en.wikipedia.org/wiki/Cooperative_Linux
>
> Linux kernel as a user space process under the Linux kernel:
> https://en.wikipedia.org/wiki/User-mode_Linux

Again, while interesting, none of those are what I'm talking about.  The second 
one (running Windows and Linux on the same hardware) is the most intriguing, 
but the two kernels are not running concurrently.  They are task-switching with 
a cooperative model instead of a preemptive model.  I think you would probably 
classify that as a step backwards rather than forwards.
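
In code, the cooperative model boils down to something like this toy sketch
(this is not coLinux's actual mechanism, just an illustration): each task
keeps the CPU until it voluntarily returns, and nothing can take the CPU away
from it, whereas a preemptive scheduler would interrupt it on a timer tick
whether it wants to stop or not:

    #include <stdio.h>

    typedef void (*task_fn)(void);

    static void task_a(void) { puts("A ran, and now yields"); }
    static void task_b(void) { puts("B ran, and now yields"); }

    /* Toy cooperative round-robin: a task that never returns starves
     * everything else, because nothing can preempt it. */
    int main(void)
    {
        task_fn tasks[] = { task_a, task_b };
        for (int round = 0; round < 3; round++)
            for (int i = 0; i < 2; i++)
                tasks[i]();   /* runs only because the previous task returned */
        return 0;
    }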

That model sort of illustrates the concept, though.  Both Windows and Linux 
want to manage the _entire_ machine's resources while they are running, and one 
OS must give up that control to let the other OS take over.  I'm proposing that 
the control be outside either OS -- in a sense, give it back to the "BIOS".  
The "BIOS" in this case can be considered a common Hardware Abstraction Layer 
(HAL) that all OSes can share, but it could be "subdivided" (e.g., a "memory 
BIOS" and an "I/O BIOS" and ...).  Some might consider that a step backwards, 
but I'm not so sure.  The direction we've been heading may actually be 
backwards, or at least a dead end.
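
In very rough terms (all of these names are invented; this is only a sketch of
the shape of the thing), the shared, subdivided "BIOS" would look like a set
of service tables that every running OS calls into instead of owning the
hardware itself:

    #include <stddef.h>
    #include <stdint.h>

    typedef int os_id_t;                 /* which OS instance is asking */

    struct hal_memory_service {          /* the "memory BIOS" */
        void *(*alloc_region)(os_id_t who, size_t bytes);
        void  (*free_region)(os_id_t who, void *region);
    };

    struct hal_io_service {              /* the "I/O BIOS" */
        int (*claim_device)(os_id_t who, uint16_t io_port_base);
        int (*release_device)(os_id_t who, uint16_t io_port_base);
    };

    /* The common HAL that every OS shares: resource control lives here,
     * outside any one kernel. */
    struct hal_services {
        struct hal_memory_service *memory;
        struct hal_io_service     *io;
    };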

