On 11/10/2011 3:39 AM, Adrian Chadd wrote:
> There's no locking around the OACTIVE flag set/clear, right?
> Is it possible that multiple TX threads are fiddling with OACTIVE and
> then it's not being properly cleared and tx kicked?
>
>
> Adrian
Sorry! I forgot to clean up the last message ... here is the correct one:

If we check for OACTIVE periodically (for instance, in local_timer), then under a
transient resource shortage the driver will eventually end up with OACTIVE cleared.
Under frequent resource shortages, the driver may remain OACTIVE longer than it is
~OACTIVE, or the flag may toggle constantly, but there is not much the driver can do
about that; simple locking around the OACTIVE set/clear does not change the
situation. The problem _is_ low resources, and the only fix is to increase them.
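
To make the recovery path concrete, here is a minimal sketch of what such a periodic
check might look like in a callout-driven timer. foo_softc, foo_ifp, foo_tx_free,
FOO_TX_LOWAT, FOO_LOCK_ASSERT and foo_start_locked are placeholder names used only
for illustration, not the API of any particular driver:

    /*
     * Hypothetical timer callback (the "local_timer" above): if a transient
     * resource shortage left OACTIVE set, clear it and kick the transmitter
     * once enough TX resources have been reclaimed.
     */
    static void
    foo_local_timer(void *arg)
    {
            struct foo_softc *sc = arg;
            struct ifnet *ifp = sc->foo_ifp;

            FOO_LOCK_ASSERT(sc);

            if ((ifp->if_drv_flags & IFF_DRV_OACTIVE) != 0 &&
                sc->foo_tx_free > FOO_TX_LOWAT) {
                    ifp->if_drv_flags &= ~IFF_DRV_OACTIVE;
                    foo_start_locked(ifp);
            }

            /* Re-arm: worst-case recovery latency is a full timer period. */
            callout_reset(&sc->foo_callout, hz, foo_local_timer, sc);
    }

The re-arm interval at the end is exactly the weak point: with only the timer doing
the recovery, a stalled queue can sit idle for up to a full tick.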

There are two problems we should focus on here:

1- The driver _must_ be able to recover from OACTIVE after transient resource
shortages.
2- It is desirable to do this as fast as possible.

Doing the recovery in local_timer satisfies the first requirement, but it is very far
from satisfying the second.

One possible solution for 2 would be to defer setting OACTIVE until N consecutive
transmissions fail (for example, N == 75% of (if_snd.ifq_maxlen - if_snd.ifq_len)).
The overhead is a little wasted CPU time, consumed in the longer stretch of failed
transmit attempts before OACTIVE is finally set. We still need local_timer to
recover from these states.
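
As a rough sketch of how that deferral could look in the start routine (again with
placeholder names: foo_encap(), sc->foo_tx_fail and sc->foo_tx_fail_max are
assumptions, and the 75% threshold is simply assumed to be precomputed into
foo_tx_fail_max):

    /*
     * Hypothetical if_start routine that sets OACTIVE only after several
     * consecutive transmit failures instead of on the first one.
     */
    static void
    foo_start_locked(struct ifnet *ifp)
    {
            struct foo_softc *sc = ifp->if_softc;
            struct mbuf *m;

            FOO_LOCK_ASSERT(sc);

            if ((ifp->if_drv_flags & (IFF_DRV_RUNNING | IFF_DRV_OACTIVE)) !=
                IFF_DRV_RUNNING)
                    return;

            while (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) {
                    IFQ_DRV_DEQUEUE(&ifp->if_snd, m);
                    if (m == NULL)
                            break;
                    if (foo_encap(sc, &m) != 0) {
                            /* Out of descriptors/mbufs: requeue and back off. */
                            if (m != NULL)
                                    IFQ_DRV_PREPEND(&ifp->if_snd, m);
                            /*
                             * Declare the queue stalled only after
                             * sc->foo_tx_fail_max consecutive failures, e.g.
                             * 75% of (ifq_maxlen - ifq_len) as suggested above.
                             */
                            if (++sc->foo_tx_fail >= sc->foo_tx_fail_max)
                                    ifp->if_drv_flags |= IFF_DRV_OACTIVE;
                            break;
                    }
                    sc->foo_tx_fail = 0;    /* success resets the failure count */
                    /* ... hand the mbuf to the hardware, ETHER_BPF_MTAP(), ... */
            }
    }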
