* Matt Dillon <[EMAIL PROTECTED]> [010417 17:47] wrote:
...
> 
>     Interrupts by definition know precisely what they are going to do, so by
>     definition they know precisely which mutexes (if any) they may need
>     to get.  This means that, in fact, it is possible to implement a check
>     to determine if any of the mutexes an interrupt might want to get are
>     already being held by the SAME cpu or not, and if they are to do the
>     equivalent of what our delayed-interrupt stuff does in -stable's
>     spl/splx code, but instead do the check when a mutex is released.
> 
>     The result is:  No need for an idle process to support interrupt
>     contexts, no need to implement interrupts as threads, and no need
>     to implement fancy scheduling tricks or Giant handling.
> 
...
>     And there you have it.  The mutex/array test takes very little time,
>     being a read-only test that requires no bus locking, and the collision
>     case is also cheap because the current cpu already owns the mutex, allowing
>     us to set the interrupt-pending bit in that mutex without any bus
>     locking.  The check during the release of the mutex is two instructions,
>     no bus locking required.  The whole thing can be implemented without any
>     additional bus locking and virtually no contention.
> 
>     The case could be further optimized by requiring that interrupts only
>     use a single mutex, period.  This would allow the mainline interrupt
>     routine to obtain the mutex on entry to the interrupt and allow the 
>     reissuing code to reissue the interrupt without freeing the mutex that
>     caused the reissue, so the mutex is held throughout and then freed by
>     the interrupt itself.
> 
>     Holy shit.  I think that's it!  I don't think it can get much better than
>     that.  It solves all of BDE's issues, solves the interrupt-as-thread
>     issue (by not using threads for interrupts at all), and removes a huge
>     amount of unnecessary complexity from the system.  We could even get rid
>     of the idle processes if we wanted to.

We can switch to this mechanism at a later date.
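
As I read it (taking the simplified single-mutex-per-interrupt case
from the end of your mail), the fast paths would look roughly like
the sketch below.  All the names here (intr_enter, mtx_release,
mtx_run_deferred, the ih_* fields) are invented for illustration;
this is my reading of the scheme, not the real mtx_* code:

#include <stdint.h>

/*
 * One registered mutex per interrupt, per the simplified case.
 * mtx_lock holds an owner token (assumed to be a nonzero per-cpu
 * value); 0 means the mutex is free.
 */
struct mtx {
        volatile uintptr_t mtx_lock;            /* owner, 0 if free */
        volatile uint32_t  mtx_ipending;        /* deferred intrs */
};

struct ihandler {
        struct mtx *ih_mtx;                     /* the one mutex */
        void      (*ih_func)(void *);
        void       *ih_arg;
};

/* Assumed primitives standing in for the real mutex code. */
void    mtx_acquire(struct mtx *, uintptr_t owner);
void    mtx_free(struct mtx *);
void    mtx_run_deferred(struct mtx *);         /* run handler, free */

/*
 * Interrupt entry: the owner test is a plain read, no locked bus
 * cycle.  If this cpu already owns the mutex, the owner cannot
 * change underneath us, so setting the pending bit is lock-free too.
 */
void
intr_enter(struct ihandler *ih, uintptr_t curcpu)
{
        struct mtx *m = ih->ih_mtx;

        if (m->mtx_lock == curcpu) {
                m->mtx_ipending |= 1;           /* reissued on release */
                return;
        }
        mtx_acquire(m, curcpu);
        ih->ih_func(ih->ih_arg);
        mtx_free(m);
}

/*
 * Release: the "two instruction" check is the test and branch on
 * mtx_ipending; only a hit takes the slow path, which runs the
 * deferred handler while still holding the mutex.
 */
void
mtx_release(struct mtx *m)
{
        if (m->mtx_ipending != 0)
                mtx_run_deferred(m);
        else
                mtx_free(m);
}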

There are issues here, though:

  Mutex creation can be expensive: it seems that each interrupt
  needs to register which mutexes it's interested in, so whenever a
  mutex is created the list of interrupts must be scanned and each
  one updated.

  Interrupts do not know "exactly" which mutexes they will need;
  they only know the set of mutexes they _may_ need.  This scheme
  causes several problems (see the sketch below):
    1) interrupts are again fan-in, meaning that if you block an
    interrupt class on one cpu you block it on all cpus;
    2) once we have things like per-socket mutexes, we will be
    blocking interrupts that may not need anything the handler
    touches, yet we must block the interrupt anyway because it
    _may_ want the same mutex that we hold.
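
  To make the second problem concrete, here's the shape the
  registration would have to take (all names invented, just a
  sketch): each handler carries the set of mutexes it _may_ take,
  so creating a mutex of a given class, e.g. one per socket, means
  walking every registered handler and growing its set:

#include <sys/queue.h>

struct mtx;

struct ihandler {
        struct mtx **ih_mtxv;           /* mutexes we may take */
        int          ih_mtxc;
        int          ih_class;          /* e.g. a network class */
        SLIST_ENTRY(ihandler) ih_link;
};

static SLIST_HEAD(, ihandler) ihandlers =
    SLIST_HEAD_INITIALIZER(ihandlers);

void    ihandler_add_mtx(struct ihandler *, struct mtx *);

/* O(number of handlers) on every mutex creation of that class. */
void
mtx_create_registered(struct mtx *m, int class)
{
        struct ihandler *ih;

        SLIST_FOREACH(ih, &ihandlers, ih_link)
                if (ih->ih_class == class)
                        ihandler_add_mtx(ih, m);
}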

  Windriver has a full-time developer working on the existing
  implementation; as far as I know we can only count on you for
  weekends and spare time.

  I'm starting to feel that I'm wasting time trying to get you to
  see the bigger picture: neither system means diddly unless we get
  to work on locking the rest of the kernel.

With that said, I'd really like to see the better of the two schemes
implemented when the dust settles.  The problem is that right now
neither scheme buys us much other than overhead until significant
parts of the kernel are converted over to a mutexed system.

Your proposal is valuable and might be something that we switch
to; however, for the time being it's far more important to work on
locking down subsystems than on the locking subsystem itself.

In fact, if you proposed a new macro wrapper for mtx_* that would
make it easier to implement _your_ version of the locking subsystem
at a later date, I would back it just to get you interested in
participating in locking down the other subsystems.
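
Something like this is the kind of wrapper I mean.  The SUBSYS_*
names are invented, not a proposal for real API names, and I'm
assuming the current three-argument mtx_init():

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

#define SUBSYS_LOCK_DECL(name)          struct mtx name
#define SUBSYS_LOCK_INIT(mp, desc)      mtx_init((mp), (desc), MTX_DEF)
#define SUBSYS_LOCK(mp)                 mtx_lock(mp)
#define SUBSYS_UNLOCK(mp)               mtx_unlock(mp)
#define SUBSYS_LOCK_DESTROY(mp)         mtx_destroy(mp)

Subsystem code would only ever use SUBSYS_LOCK()/SUBSYS_UNLOCK(), so
the day your scheme goes in we change these five lines instead of
touching every subsystem.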

-Alfred
