--- Dave Grothe <[EMAIL PROTECTED]> wrote:
> >So these options are global for all streams drivers/modules/multiplexors?
>
> The option applies on a per-driver basis. If your driver is loadable and
> has no Config file you can call a registration routine in LiS from your
> driver's module init function to declare your locking style.
>
> int lis_register_module_qlock_option(modID_t id, int qlock_option);
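That registration call from a driver's module init function might look like the sketch below. The option constants, the module ID, and the stub body are assumptions for illustration only; the real declarations and option names come from the LiS headers.

```c
#include <assert.h>

/* Sketch only: modID_t and the registration routine are mocked here so
 * the calling pattern can be shown self-contained.  The real versions
 * live in the LiS headers. */
typedef int modID_t;

/* Hypothetical names for the four qlock styles described later. */
enum { QLOCK_NONE = 0, QLOCK_PER_SIDE = 1, QLOCK_QPAIR = 2, QLOCK_GLOBAL = 3 };

static int registered_option = -1;  /* records what was registered (mock) */

/* Mock of the LiS routine quoted above; assume 0 means success. */
int lis_register_module_qlock_option(modID_t id, int qlock_option)
{
    (void)id;
    registered_option = qlock_option;
    return 0;
}

#define MY_MOD_ID 42  /* hypothetical module ID for illustration */

/* The driver declares its locking style once, from module init, before
 * any of its put/srv routines can be entered. */
int my_driver_init(void)
{
    return lis_register_module_qlock_option(MY_MOD_ID, QLOCK_QPAIR);
}
```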
> Ok, that's good to hear... I have to admit, no disrespect to your skills, but
> this is going to be a rough row to hoe. In Solaris there is a whole notion
> of "perimeter queues" which are used in putnext() if the upstream (or
> downstream) module is locked on the opposite side and you cannot run the
> module's put routine. For example, my driver wants to send a message upstream
> and calls putnext(), but the upstream module is running its write side
> thread. In this case the read side put routine cannot run if you are set up
> for qpair synchronization. What do you do? In Solaris, they put the message
> on a perimeter queue and return from putnext(). Does LiS block until the
> read side put routine can run, or does it do something similar?
It blocks (but see further comments below). Oddly enough, I am just finishing up implementing a message list for deferred puts, kept as head and tail pointers inside the queue structure. I didn't want to use it for "perimeter" purposes -- the intended use was to freeze put/srv activity on the stream during open/close/I_LINK/I_PUSH type operations.
That mechanism could be the basis for something similar to what Solaris now does but I would have to noodle on it a bit to see if there is an easy way to do it.
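One plausible shape for that deferred-put list, with the head and tail pointers living in the queue structure, is sketched below. All struct and function names here are invented for illustration; they are not the actual LiS identifiers.

```c
#include <stddef.h>

/* Minimal stand-in for a STREAMS message block. */
typedef struct msgb {
    struct msgb *b_next;
    int          data;       /* payload placeholder */
} mblk_t;

/* Queue structure carrying head/tail pointers for deferred puts,
 * as described above.  Field names are hypothetical. */
typedef struct queue {
    mblk_t *q_defer_head;
    mblk_t *q_defer_tail;
} queue_t;

/* Append a message to the deferred list instead of running the put
 * routine immediately -- what a putnext() could do when the target
 * queue pair is locked or the stream is frozen for open/close. */
void defer_put(queue_t *q, mblk_t *mp)
{
    mp->b_next = NULL;
    if (q->q_defer_tail)
        q->q_defer_tail->b_next = mp;
    else
        q->q_defer_head = mp;
    q->q_defer_tail = mp;
}

/* Drain one deferred message in FIFO order once the queue unlocks. */
mblk_t *defer_get(queue_t *q)
{
    mblk_t *mp = q->q_defer_head;
    if (mp) {
        q->q_defer_head = mp->b_next;
        if (!q->q_defer_head)
            q->q_defer_tail = NULL;
        mp->b_next = NULL;
    }
    return mp;
}
```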
> I say a rough row to hoe because I know that it took Solaris a few tries to
> work all of this out correctly, and they had (several) people working on it
> full time....
Right. I am just me and have many other things on my plate as well.
> If it's too long-winded, I guess I can read the code, but if you are willing
> and have the time it would be nice to see a medium-level description of how
> you are making these changes.
There's not much to it.
Locking style 0 does not do any locking at all when calling put/srv.
Locking style 1 sets up a semaphore in each half of the queue (read/write) and uses the two semaphores independently. This is how LiS worked before I implemented the qlock option.
Locking style 2 sets up a semaphore in one side of the queue and uses the same one for put/srv to either side. This is the example that you gave above.
Locking style 3 uses a global semaphore for both queue halves. All drivers using locking style 3 share the same global semaphore.
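One way to picture the four styles is to ask which semaphore (if any) a put/srv call into a given queue half would take. The sketch below encodes that choice; the struct layout and names are guesses for illustration, not the actual LiS implementation.

```c
#include <stddef.h>

typedef struct { int placeholder; } sem_mock_t;  /* stand-in for a semaphore */

/* Hypothetical per-half queue layout.  For style 2 the single shared
 * semaphore is assumed to live in one half, with both halves pointing
 * at it via q_pair_sem. */
typedef struct queue {
    int         q_qlock_style;  /* 0..3, as declared at registration */
    sem_mock_t  q_sem;          /* this half's own semaphore (style 1) */
    sem_mock_t *q_pair_sem;     /* the queue pair's shared semaphore (style 2) */
} queue_t;

static sem_mock_t lis_global_sem;   /* shared by every style-3 driver */

/* Which semaphore should a put/srv call into q acquire?
 * NULL means style 0: no locking at all. */
sem_mock_t *qlock_for(queue_t *q)
{
    switch (q->q_qlock_style) {
    case 1:  return &q->q_sem;        /* each half locks independently   */
    case 2:  return q->q_pair_sem;    /* one semaphore for both halves   */
    case 3:  return &lis_global_sem;  /* one global semaphore for all    */
    default: return NULL;             /* style 0: no lock taken          */
    }
}
```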
As drivers call putnext() through a chain, LiS continues to acquire the locks (styles 1, 2, 3) and the lock ownerships accumulate as you go. However, a single thread can acquire one of these locks in nested fashion so that a chain of putnext() calls with qreply() calls back down the stack will not block. Other threads that attempt to do puts into this chain of locked queues will have to wait. Likewise another thread attempting to call a service procedure in the locked chain will have to wait until the particular queue is unlocked.
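The "single thread can acquire one of these locks in nested fashion" behaviour is essentially a recursive lock: track the owning thread and a depth count, so a qreply() re-entering queues that the same thread locked on the way up does not deadlock, while other threads wait. A userspace sketch with pthreads (not LiS code) of that idea:

```c
#include <pthread.h>

/* Recursive-lock sketch: the owner may re-acquire (nested putnext/qreply
 * within one thread); other threads block until depth returns to zero. */
typedef struct {
    pthread_mutex_t mtx;
    pthread_cond_t  cv;
    pthread_t       owner;
    int             depth;   /* 0 means unowned */
} nest_lock_t;

void nest_lock_init(nest_lock_t *l)
{
    pthread_mutex_init(&l->mtx, NULL);
    pthread_cond_init(&l->cv, NULL);
    l->depth = 0;
}

void nest_lock_acquire(nest_lock_t *l)
{
    pthread_mutex_lock(&l->mtx);
    if (l->depth > 0 && pthread_equal(l->owner, pthread_self())) {
        l->depth++;                     /* nested acquire by the owner */
    } else {
        while (l->depth > 0)            /* other threads wait here */
            pthread_cond_wait(&l->cv, &l->mtx);
        l->owner = pthread_self();
        l->depth = 1;
    }
    pthread_mutex_unlock(&l->mtx);
}

void nest_lock_release(nest_lock_t *l)
{
    pthread_mutex_lock(&l->mtx);
    if (--l->depth == 0)
        pthread_cond_broadcast(&l->cv); /* fully released: wake waiters */
    pthread_mutex_unlock(&l->mtx);
}
```

The key point is that release only wakes waiters when the outermost acquire is undone, matching the description that lock ownerships accumulate down the putnext() chain and unwind as the calls return.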
Note that the locking does not apply to putq() calls. They can be done from anywhere.
-- Dave