>From: David Given [mailto:[EMAIL PROTECTED]]
>[...]
>>Realistically we need to do this or something similar (modules?).
>
>The problem with modules is that they'd have to have different code and
>data segments. This means that you can't refer to the main kernel code
>and data with 16-bit pointers. You could compile in medium, which gives
>16-bit data pointers and 32-bit code pointers, or huge, which gives
>32-bit data and code pointers.
>
>This means you have to change huge amounts of stuff throughout the
>kernel, which makes everything bigger, reduces efficiency and decreases
>stability (a stray pointer has more chances of pointing at something
>outside the process' address space). Also we don't have a medium or
>huge compiler.
>
>The other solution is to use a message-passing architecture to pass
>data back and forth from the main kernel to a driver (which may or may
>not live in userland). This can be slow (this is what kills Minix). We
>could do a fast version which doesn't involve the scheduler, but
>there'd still be overhead.
>
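To make that overhead concrete, here's a toy sketch (plain C, a hypothetical message format, nothing to do with any actual driver interface): even without the scheduler, every request pays a copy into the message and a copy back out.

```c
#include <string.h>
#include <stddef.h>

/* Hypothetical request message: fixed header plus an inline payload. */
struct drv_msg {
    int    op;           /* request code, e.g. a write */
    size_t len;          /* bytes valid in payload[] */
    char   payload[64];
};

/* "Driver" side: copies the payload out of the message. */
static void driver_handle(const struct drv_msg *m, char *out)
{
    memcpy(out, m->payload, m->len);
}

/* "Kernel" side: builds a message and hands it over.  Two memcpy()s
   per request, before any context switch is even considered. */
static void kernel_call(const char *data, size_t len, char *out)
{
    struct drv_msg m = { .op = 1, .len = len };
    memcpy(m.payload, data, len);
    driver_handle(&m, out);
}
```

A "fast version" can drop the scheduler hop, but the copies stay unless the two sides share a buffer.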
>We've yet to resolve this. It seems easier to keep going the way we are
>until we really run out of room in the kernel.
>
So the real problem is that we're working on a segmented architecture with a
compiler that doesn't really understand segments, and we're hiding our heads
in the sand hoping the problem goes away.

This of course invites the response that if I think the compiler should
handle other memory models, I should change it to do so.

ummm

Certainly IPC would be easier with far pointers (we could do shared memory
for a start).
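The attraction is that with shared memory the "message" collapses to a pointer handoff. A host-side sketch (plain C, toy names; on the 8086 the buffer would be a segment both sides hold a far pointer into):

```c
#include <string.h>

/* Stands in for a segment both tasks can address. */
static char shared_buf[64];

/* Producer fills the buffer -- the only copy is writing the data. */
static void producer_write(const char *data, size_t len)
{
    memcpy(shared_buf, data, len);
}

/* Consumer reads the very same bytes by reference: zero-copy. */
static const char *consumer_view(void)
{
    return shared_buf;
}
```

Compare that with a message-passing call, which copies the payload in and back out on every request.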

Thoughts, anyone, or are we going round in circles?

Cheers

Paul
