On Thu, 2013-09-26 at 16:19 +0000, Mark Vitale wrote:
> On Sep 24, 2013, at 8:54 AM, Jeffrey Altman <[email protected]> wrote:
>
> > On 9/23/2013 3:52 PM, Mark Vitale wrote:
> >> 2) Volume releases suffer from poor performance and occasionally fail with
> >>    timeouts.
> >>    - root cause was heavier-than-normal vlserver load (perhaps caused by
> >>      disk performance slowdowns); this starved LWP IOMGR, which in turn
> >>      prevented LWP rx_Listener from being dispatched (priority inversion),
> >>      leading to a grossly delayed rxevent queue.
> >
> > Did you intend to write "rx_Listener" in the above sentence or did you
> > mean the rx_Event thread?
>
> In the LWP case I did mean to type rx_Listener, which is the "name" of the
> LWP.  In LWP, there are no separate threads for listener and event handler
> logic; they run alternately in the same (listener) thread.  The actual code
> it runs is rx_ListenerProc() -> rxi_ListenerProc(), which loops, alternately
> calling rxevent_RaiseEvents() and IOMGR_Select().  In this case the
> high-priority listener thread is blocked in IOMGR_Select(), which is waiting
> for the low-priority IOMGR thread to be dispatched, which in turn is blocked
> by medium-priority worker threads.  In this situation, rxevent_RaiseEvents()
> will never be invoked until ALL the worker threads are no longer runnable.
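
For readers less familiar with the LWP variant of rx, the loop shape Mark
describes looks roughly like the sketch below.  This is a simplified,
hypothetical illustration, not the actual rx_lwp.c source; only the names
rxevent_RaiseEvents() and IOMGR_Select() come from the message above, and the
stub functions stand in for them so the example compiles on its own.

    #include <stdio.h>

    /* Stand-in for rxevent_RaiseEvents(): run any rx events that are due and
     * report (in ms) how long until the next scheduled event. */
    static int raise_due_events(void)
    {
        printf("raising due rx events\n");
        return 1000;
    }

    /* Stand-in for IOMGR_Select(): yield to the IOMGR LWP and wait for a
     * packet or a timeout.  In the priority inversion described above, the
     * low-priority IOMGR LWP is never dispatched while medium-priority
     * worker LWPs remain runnable, so this call never returns. */
    static void wait_for_packets(int timeout_ms)
    {
        printf("waiting up to %d ms for packets\n", timeout_ms);
    }

    /* The single listener LWP: event handling and packet waiting alternate
     * in the same thread, so a select that never returns also stalls the
     * rxevent queue. */
    int main(void)
    {
        int i;
        for (i = 0; i < 3; i++) {       /* for (;;) in the real listener;
                                         * bounded here so the demo exits */
            int next_ms = raise_due_events();
            wait_for_packets(next_ms);
        }
        return 0;
    }
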
True, but that should be fairly common -- any thread doing I/O on a call is
going to end up calling IOMGR_Select() eventually.  Threads that do lots of
computation or other things without doing I/O need to call IOMGR_Poll()
periodically, so that threads waiting on the IOMGR get to make progress.
This is a normal property of the design of LWP.

It sounds like what's going on here is that you have threads making what are
expected to be "fast" calls, such as disk I/O, that are actually taking longer
than expected.  If your vlserver is behaving this way, I'd look for another
hardware problem, not an issue with the event scheduler.

-- Jeff
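
As a hypothetical illustration of the convention Jeff refers to, a CPU-bound
LWP is expected to yield to the IOMGR from time to time so that threads
blocked in IOMGR_Select() can make progress.  IOMGR_Poll() is the real LWP
call named above; the surrounding work loop, the WORK_ITEMS count, and the
do_one_unit_of_work() helper are invented for the example, and the prototype
shown is an assumption -- the real declaration lives in the LWP headers.

    #define WORK_ITEMS 1000000

    extern int IOMGR_Poll(void);        /* prototype assumed; see the LWP
                                         * headers in the real tree */

    static void do_one_unit_of_work(int i)
    {
        (void)i;                        /* hypothetical CPU-bound step */
    }

    static void cpu_bound_worker(void)
    {
        int i;
        for (i = 0; i < WORK_ITEMS; i++) {
            do_one_unit_of_work(i);

            /* Yield to the IOMGR periodically.  Without this, the IOMGR
             * (and the listener blocked in IOMGR_Select()) is starved for
             * the entire computation. */
            if ((i % 1024) == 0)
                IOMGR_Poll();
        }
    }
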
