From: Malcolm Crossley
Trying to batch Tx response events results in poor performance because
this delays freeing the transmitted skbs.
Instead use the standard RING_FINAL_CHECK_FOR_RESPONSES() macro to be
notified once the next Tx response is placed on the ring.
Signed-off-by: Malcolm Crossley
Signed-off-by: David Vrabel
---
 drivers/net/xen-netfront.c | 15 +++------------
1 file changed, 3 insertions(+), 12 deletions(-)
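
For reference, RING_FINAL_CHECK_FOR_RESPONSES() comes from the shared
ring infrastructure in include/xen/interface/io/ring.h. Lightly
paraphrased, it looks roughly like this:

#define RING_HAS_UNCONSUMED_RESPONSES(_r) \
	((_r)->sring->rsp_prod - (_r)->rsp_cons)

#define RING_FINAL_CHECK_FOR_RESPONSES(_r, _work_to_do) do {	\
	(_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);	\
	if (_work_to_do)					\
		break;						\
	/* Re-arm: request an event for the very next response. */ \
	(_r)->sring->rsp_event = (_r)->rsp_cons + 1;		\
	mb(); /* see responses /before/ re-checking */		\
	(_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);	\
} while (0)

The final re-check after re-arming rsp_event closes the race with a
response that lands between consuming the ring and setting the event,
which is why the do/while loop in the hunk below stays correct without
the hand-rolled race check it replaces.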
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index d6abf19..9cb45be 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -364,6 +364,7 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
 	RING_IDX cons, prod;
 	unsigned short id;
 	struct sk_buff *skb;
+	int more_to_do;
 
 	BUG_ON(!netif_carrier_ok(queue->info->netdev));
 
@@ -398,18 +399,8 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
 
 		queue->tx.rsp_cons = prod;
 
-		/*
-		 * Set a new event, then check for race with update of tx_cons.
-		 * Note that it is essential to schedule a callback, no matter
-		 * how few buffers are pending. Even if there is space in the
-		 * transmit ring, higher layers may be blocked because too much
-		 * data is outstanding: in such cases notification from Xen is
-		 * likely to be the only kick that we'll get.
-		 */
-		queue->tx.sring->rsp_event =
-			prod + ((queue->tx.sring->req_prod - prod) >> 1) + 1;
-		mb();		/* update shared area */
-	} while ((cons == prod) && (prod != queue->tx.sring->rsp_prod));
+		RING_FINAL_CHECK_FOR_RESPONSES(&queue->tx, more_to_do);
+	} while (more_to_do);
 
 	xennet_maybe_wake_tx(queue);
 }
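
To make the performance argument concrete, here is a stand-alone
sketch (made-up index values, not from a real trace) comparing the
event threshold chosen by the removed heuristic above with the one
set by RING_FINAL_CHECK_FOR_RESPONSES():

#include <stdio.h>

int main(void)
{
	unsigned int rsp_prod = 100;	/* responses consumed so far */
	unsigned int req_prod = 140;	/* requests posted by the frontend */

	/* Old heuristic: wake up after ~half the backlog completes. */
	unsigned int old_event = rsp_prod + ((req_prod - rsp_prod) >> 1) + 1;

	/* New behaviour: event requested for the very next response. */
	unsigned int new_event = rsp_prod + 1;

	printf("old rsp_event = %u (%u responses away)\n",
	       old_event, old_event - rsp_prod);	/* 121, 21 away */
	printf("new rsp_event = %u (%u response away)\n",
	       new_event, new_event - rsp_prod);	/* 101, 1 away */
	return 0;
}

With the old threshold the frontend hears nothing until 21 further
responses arrive, so up to 20 transmitted skbs sit unfreed on the
ring; with the final check the very next response raises an event and
the skbs are freed promptly.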
--
2.1.4