Gordon Ross wrote:
> Hi Rao, (and others)
>
> We're porting some code for the CIFS client project that originally
> used a socket "up-call" mechanism to wake up a service thread.
> In BSD/Darwin that service thread would sleep until woken by
> either socket input or occasional "events" that basically "kick"
> the service thread to ask it to process its list of "events".
>
> I couldn't find a way to do exactly that in Solaris, so for now
> I've made this service thread block for network input, with periodic
> timeouts so it can go check for events to work on. (Events are
> handled with some delay now, but events are rare, basically for
> connection setup and teardown, etc.)
>
> To improve on the above, I'd like to find a way for other threads to
> cause the service thread to wake up early from its blocking read
> on a network socket. (It's blocked in t_kspoll.) Ideas?
>
> We know the thread pointer, and I tried tsignal(threadp, sig),
> but that failed because the thread was created in-kernel using
> zthread_create, and threads so created ignore signals in
> cv_wait or cv_timedwait calls (bummer).

Are you sure about this? Looking at the code I don't see why the
thread would not be woken up on a signal, unless signals were
blocked. What are you passing as waitflg to t_kspoll? Can you see
what the t_state and t_flag are for the thread?
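For reference, I'd expect the receive loop to look roughly like the
sketch below. All the smb_* names are invented for illustration, and
I'm going from memory of the <sys/t_kuser.h> interface, so double-check
the t_kspoll() arguments and the READWAIT flag before trusting any of
this:

#include <sys/types.h>
#include <sys/t_kuser.h>

extern void smb_handle_input(TIUSER *);	/* invented name */
extern void smb_check_events(void);	/* invented name */

static void
smb_service_loop(TIUSER *tiptr)
{
	int events, error;

	for (;;) {
		/*
		 * Block for up to 1000 ms waiting for input.
		 * READWAIT (if I remember it right) asks t_kspoll
		 * to sleep until data arrives or the timeout expires.
		 */
		events = 0;
		error = t_kspoll(tiptr, 1000, READWAIT, &events);
		if (error != 0)
			break;
		if (events != 0)
			smb_handle_input(tiptr);
		/* Timeout or input: either way, poll the event list. */
		smb_check_events();
	}
}

If waitflg were zero there I'd expect no sleep at all, which would
explain very different behavior from what you describe.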
Rao.

> One thing I noticed is that the /dev/poll and poll(2) support code
> (and VOP_POLL) appears to allow an in-kernel caller to provide
> their own pollhead_t, so maybe if we used VOP_POLL we could
> force that to return by calling pollwakeup() on that pollhead. I'm
> not familiar with the poll.c code, so I'd appreciate hearing from
> anyone who knows whether this approach might work.
>
> Or I guess we could wait for you folks to provide an up-call...
> but I'd prefer to have a solution that would work now, and in
> both current Nevada and S10 kernels.
>
> Practical suggestions appreciated.
>
> Thanks,
> Gordon Ross
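On the pollwakeup() idea above: the "kick" half of that scheme would
presumably look something like the sketch below. This is untested;
smb_poll_head is an invented name, and it assumes the service thread's
VOP_POLL call really does leave that pollhead registered against the
socket vnode, which is exactly the part that needs verifying:

#include <sys/types.h>
#include <sys/poll.h>

/*
 * Invented name: a pollhead_t kept in our connection state.  The
 * (unverified) assumption is that the service thread's VOP_POLL
 * call has associated this pollhead with the socket.
 */
static pollhead_t smb_poll_head;

/*
 * Called by any other thread to kick the service thread out of its
 * blocking poll by posting a synthetic "readable" event.
 */
static void
smb_kick_service_thread(void)
{
	pollwakeup(&smb_poll_head, POLLIN | POLLRDNORM);
}

The service thread would then wake up, find no actual data on the
socket, and fall through to its event-list processing.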
