On Wed, Aug 22, 2007 at 07:57:56PM -0700, Linus Torvalds wrote:
>
>
> On Thu, 23 Aug 2007, Nick Piggin wrote:
> >
> > > Irix actually had an io_unlock() routine that did this
> > > implicitly, but iirc that was shot down for Linux...
> >
> > Why was it shot down? Seems like a pretty good idea to me ;)
>
> It's horrible. We'd need it for *every* single spinlock type. We have lots
> of them.
>
> So the choice is between:
>
>  - sane:
>
>        mmiowb()
>
>    followed by any of the existing "spin_unlock()" variants (plain,
>    _irq(), _bh(), _irqrestore())
>
>  - insane: multiply our current set of unlock primitives by two, by making
>    "io" versions for them all:
>
>        spin_unlock_io[_irq|_irqrestore|_bh]()
>
> but there's actually an EVEN WORSE problem with the stupid Irix approach,
> namely that it requires that the unlocker be aware of the exact details of
> what happens inside the lock. If the locking is done at an outer layer,
> that's not at all obvious!
OK, but we'd have some kind of functions that are called not to
serialise the CPUs, but to serialise the IO. It would be up to the
calling code to already provide CPU synchronisation.

  serialize_io(); / unserialize_io(); / a nicer name

If we could pass in some kind of relevant resource (eg. the IO memory
or device or something), then we might even be able to put debug
checks there to ensure two CPUs are never inside the same critical IO
section at once.

> In other words, Irix (once again) made a horrible and idiotic choice.

We could make a better one. I don't think mmiowb is really insane, but
I'd worry about it being confused with a regular type of barrier, and
that CPU synchronisation needs to be provided for it to work or make
sense.

> Big surprise. Irix was probably the flakiest and worst of all the
> commercial proprietary unixes. No taste.

Is it? I've never used it ;)

_______________________________________________
Linuxppc-dev mailing list
Linuxppc-dev@ozlabs.org
https://ozlabs.org/mailman/listinfo/linuxppc-dev
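As a rough illustration of the serialize_io()/unserialize_io() idea proposed above: the caller supplies the CPU synchronisation, the IO calls only order the IO and, in debug builds, check that two CPUs never overlap inside the same IO critical section. Everything here except the two function names is invented for the sketch (the io_region struct, the counter), and C11 atomics stand in for whatever the kernel would actually use:

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical resource handle for one device's IO section; real code
 * might hang this off the device structure. */
struct io_region {
	atomic_int inside;	/* debug: CPUs currently in the IO section */
};

static void serialize_io(struct io_region *r)
{
	/* Debug check only: the caller's own locking must already
	 * guarantee we are alone here; trip an assertion if two CPUs
	 * manage to overlap. */
	assert(atomic_fetch_add(&r->inside, 1) == 0);
}

static void unserialize_io(struct io_region *r)
{
	/* A real implementation would issue the IO write barrier here
	 * (what mmiowb() does today) before another CPU may enter. */
	assert(atomic_fetch_sub(&r->inside, 1) == 1);
}
```

A caller would then bracket its MMIO writes with serialize_io(&dev->region) / unserialize_io(&dev->region) inside whatever spinlock it already holds; forgetting the outer lock shows up as an assertion failure instead of silent IO reordering.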