Philippe De Muyter wrote:
On Tue, Jul 10, 2007 at 12:38:45PM -0400, Jeff Garzik wrote:
Philippe De Muyter wrote:
This patch
- avoids 7990 blocking when no tx buffer is available,
[...]
diff -r 6c0a10cc415a drivers/net/7990.c
--- a/drivers/net/7990.c        Thu Jul  5 16:10:16 2007 -0700
+++ b/drivers/net/7990.c        Fri Jul  6 11:27:20 2007 +0200
[...]
@@ -541,9 +546,6 @@ int lance_start_xmit (struct sk_buff *sk
        static int outs;
        unsigned long flags;

-        if (!TX_BUFFS_AVAIL)
-                return -1;
-
        netif_stop_queue (dev);

        skblen = skb->len;

NAK

It "avoids" by removing an overrun check in hard_start_xmit that should not be removed.

Yup, sorry.

The fact remains that this change prevents/fixes the lance driver
blocking on my board, which the tx_timeout mechanism fails to do, and
that on my board the driver hangs when we return -1 on !TX_BUFFS_AVAIL.

Note that ->hard_start_xmit() should be returning a NETDEV_TX_xxx value; returning -1 may be confusing the net stack. You have to let the stack know what happened to the skb it passed in: it is normally the responsibility of the ->hard_start_xmit() hook to free or queue the skb as conditions warrant.
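The contract being described can be sketched in miniature. This is a toy model, not the real 7990 code: `toy_lance`, `toy_start_xmit`, and `TX_RING_SIZE` are invented stand-ins, and only the NETDEV_TX_OK / NETDEV_TX_BUSY names mirror the kernel's actual return codes. The key point is that NETDEV_TX_BUSY means "skb not consumed, stack will requeue it", while NETDEV_TX_OK means the driver took ownership:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the ->hard_start_xmit() contract.  The NETDEV_TX_OK /
 * NETDEV_TX_BUSY names mirror the kernel's return codes; the ring and
 * skb structures are simplified stand-ins for illustration only. */
enum netdev_tx { NETDEV_TX_OK = 0, NETDEV_TX_BUSY = 1 };

struct sk_buff { int len; };

#define TX_RING_SIZE 4

struct toy_lance {
	struct sk_buff *ring[TX_RING_SIZE];
	int tx_used;		/* descriptors currently in flight */
	int queue_stopped;	/* models netif_stop_queue() */
};

/* If the ring is full: stop the queue and return NETDEV_TX_BUSY without
 * touching the skb -- the stack still owns it and will requeue it.
 * Otherwise: take ownership of the skb and return NETDEV_TX_OK. */
static enum netdev_tx toy_start_xmit(struct toy_lance *lp, struct sk_buff *skb)
{
	if (lp->tx_used == TX_RING_SIZE) {
		lp->queue_stopped = 1;	/* netif_stop_queue(dev) */
		return NETDEV_TX_BUSY;	/* skb NOT consumed */
	}
	lp->ring[lp->tx_used++] = skb;	/* hand to hardware; consumed */
	return NETDEV_TX_OK;
}
```

Returning a bare -1 matches neither case, which is why the stack can end up confused about who owns the skb.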

Yeah, you will need to investigate further what's going on here.


PS : did you apply the rest of the patch ?

No, I don't apply partial patches. You are welcome to resubmit a patch containing the non-controversial changes. In fact, submitting multiple patches for separate logical changes is normal and encouraged in Linux development. Splitting the cleanups and the TX code path change into two patches is certainly the way to go: if a problem turns up later, users can run 'git bisect' to quickly locate the specific patch that caused it. Properly split patches make good-or-bad changes much faster to identify.
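The bisect workflow described above can be demonstrated on a throwaway repository (everything here is invented for illustration: the repo, the file name `driver.c`, and the commit messages). With one logical change per commit, `git bisect run` pinpoints the exact commit that introduced a regression:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"

# Four small commits, one logical change each; the third introduces
# a "bug" marker standing in for a real regression.
for i in 1 2 3 4; do
    if [ "$i" -ge 3 ]; then echo "bug" >> driver.c; fi
    echo "change $i" >> driver.c
    git add driver.c
    git commit -qm "patch $i"
done

# HEAD is known bad, the first commit known good; let git search.
git bisect start HEAD HEAD~3 >/dev/null 2>&1

# 'git bisect run' marks each checkout good/bad by running a test
# command (exit 0 = good, nonzero = bad) until it finds the culprit.
result=$(git bisect run sh -c '! grep -q bug driver.c' 2>&1)
echo "$result" | grep "first bad commit"
git bisect reset >/dev/null 2>&1
```

Had the cleanups and the TX path change been folded into one patch, the bisect would only narrow the problem to that combined commit, not to the specific change.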

        Jeff

