Ronald G Minnich <[EMAIL PROTECTED]> writes:

> On 5 Jul 2002, Eric W. Biederman wrote:
> 
> > Hmm.  In the kernel you already have to call ioremap, which
> > is essentially a vmalloc but using memory from pci-io space.  And
> > last I looked this implemented the "red zones" using virtual addresses
> > instead of physical addresses.
> 
> OK if that will work on "everything" then I have no problem. You've looked
> at this a lot more than I have :-)
> 
> It still bugs me a bit to have the physical addresses of the cards
> contiguous -- more of a "I've been burned by this" feeling but maybe it
> will never again be an issue. Although some arches do just direct-map PCI
> space and there you only get red zones if you have them in the physical
> address mapping.

You get gaps, but you don't exactly get useful red zones unless the
hardware also raises exceptions when you write to unmapped memory (a
tricky thing to do).  With a virtual page in between, the architecture
gives you an exception on accesses to unmapped addresses.
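
Roughly, the kernel-side picture I mean looks like this.  A minimal
sketch in (roughly current) Linux driver style rather than etherboot
code; my_probe and MY_BAR_LEN are made-up names for illustration:

    /*
     * ioremap() hands back a virtual address in the vmalloc area, and
     * the allocator leaves an unmapped guard page after each mapping.
     * A driver that runs off the end of its own registers takes a page
     * fault instead of silently scribbling on the next device.
     */
    #include <linux/pci.h>
    #include <linux/io.h>

    #define MY_BAR_LEN 0x1000      /* hypothetical size of BAR 0 */

    static void __iomem *regs;

    /* trimmed-down, hypothetical probe routine */
    static int my_probe(struct pci_dev *dev)
    {
            regs = ioremap(pci_resource_start(dev, 0), MY_BAR_LEN);
            if (!regs)
                    return -ENOMEM;

            writel(0x1, regs + 0x10);       /* fine: inside the mapping */
    #if 0
            writel(0x1, regs + MY_BAR_LEN); /* off the end: hits the unmapped
                                               guard page and faults loudly,
                                               instead of hitting whatever
                                               device happens to be mapped
                                               next in physical space */
    #endif
            return 0;
    }

With a direct physical mapping that second write would just land on the
neighboring device's registers and nobody would notice until something
broke.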

I agree that it is a concern, and it is probably worth giving
etherboot a page table just to prevent this kind of driver bug.

I haven't been burnt by that kind of issue yet, so I don't have the
same feel for it.  I really don't see how any red-zone magic can
prevent one driver from stomping on another.  Red zones just shift the
odds: they reduce the chance that one device driver will stomp on
another driver's device, and increase the chance that such a bug will
be found.  They don't prevent it.

I admit, though, that ioremap is architecture specific, so on some
architectures it may actually be a direct mapping, even on Linux.  On
x86 we are seriously tight on address space, and I want a viable 64-bit
alternative.  I hate having more physical addresses than virtual ones.

Coming up we have Hammer, which has yet to prove it can ramp up the
clock rate enough to compete with the P4, but once it has done that
we will have a commodity 64-bit processor that eases a lot of these
concerns.

We also have Itanium2, trying to play in the high-end processor
market.  And the high-end processor market gets smaller every year
from erosion by the commodity processor market.  I haven't seen any
signs that Itanium2 is enough better a processor to justify a
continued high-end processor market.  So far it looks like it can
probably keep up with a P4, but that just means it doesn't totally
suck like the current Itanium.

Anyway, when Itanium2 and Hammer start competing I can probably relax
and say: use a 64-bit processor if you want large amounts of RAM and
PCI devices.  But until then I have to try to fit as much as possible
into the poor little x86 address space.

For the embedded folks none of this should be an issue, because it
is unlikely they will have > 2GB of RAM.  I've already had to deal
with as much as 6GB.  But you start seeing issues at 4GB, like MCR
has.

Eric