Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-22 Thread Herbert Xu
On Mon, Jun 22, 2009 at 11:16:03AM +0530, Krishna Kumar2 wrote:

 I was curious about the "queueing it in the driver" part: why is this bad?
 Do you anticipate any performance problems, or does it break QoS, or
 something else I have missed?

Queueing it in the driver is bad because it is no different than
queueing it at the upper layer, which is what will happen when
you return TX_BUSY.

Because we've ripped out the qdisc requeueing logic (which is
horribly complex and only existed because of TX_BUSY), it means
that higher priority packets cannot preempt a packet that is queued
in this way.

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-22 Thread Krishna Kumar2
Hi Herbert,

 Herbert Xu herb...@gondor.apana.org.au wrote on 06/19/2009 10:06:13 AM:

  We either remove the API, or fix it.  I think fixing it is better, because
  my driver will be simpler and it's obvious no one wants to rewrite 50
  drivers and break several of them.

 My preference is obviously in the long term removal of TX_BUSY.
 Due to resource constraints that cannot be done immediately.  So
 at least we should try to stop its proliferation.

 BTW removing TX_BUSY does not mean that your driver has to stay
 complicated.  As I have said repeatedly your driver should be
 checking the stop-queue condition after transmission, not before.

 In fact queueing it in the driver is just as bad as return TX_BUSY!

I was curious about the "queueing it in the driver" part: why is this bad? Do
you anticipate any performance problems, or does it break QoS, or something
else I have missed?

thanks,

- KK



Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-22 Thread Krishna Kumar2
Thanks Herbert. I thought less processing would be required for skbs queued
at the driver (avoiding qdisc_restart and the repeated calls to dequeue_skb,
where the skb cached in 'gso_skb' is checked for whether it can be sent and
put back until the queue is re-enabled), and hence that some gains were
possible. So far, my testing of queueing in the driver shows good results for
some test cases and bad results for others; hence my question on the topic,
as I am not able to figure out why some cases test worse.
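For reference, the dequeue path described above is dequeue_skb() in
net/sched/sch_generic.c.  In kernels of that era it looked roughly like the
sketch below (paraphrased, so treat it as illustrative rather than exact): a
requeued or GSO skb is parked in q->gso_skb and re-examined on every qdisc
run until the device queue wakes up, which is the repeated work in question:

static inline struct sk_buff *dequeue_skb(struct Qdisc *q)
{
	struct sk_buff *skb = q->gso_skb;

	if (unlikely(skb)) {
		struct netdev_queue *txq;

		/* Check, without taking the tx lock, why the skb was
		 * parked: if the queue is still stopped, leave it there. */
		txq = netdev_get_tx_queue(qdisc_dev(q),
					  skb_get_queue_mapping(skb));
		if (!netif_tx_queue_stopped(txq) &&
		    !netif_tx_queue_frozen(txq))
			q->gso_skb = NULL;
		else
			skb = NULL;
	} else {
		skb = q->dequeue(q);
	}

	return skb;
}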

- KK



Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-19 Thread Rusty Russell
On Fri, 19 Jun 2009 02:06:13 pm Herbert Xu wrote:
 On Fri, Jun 19, 2009 at 01:07:19PM +0930, Rusty Russell wrote:
  You didn't comment on my patch which tried to fix NETDEV_TX_BUSY tho?
 However, that is still wrong for many packet schedulers.  For
 example, if the requeued packet is of a lower priority, and a
 higher priority packet comes along, we want the higher priority
 packet to preempt the requeued packet.  Right now it just doesn't
 happen.

 This is not as trivial as it seems because on a busy host this can
 happen many times a second.  With TX_BUSY the QoS guarantees are
 simply not workable.

Your use of the word "guarantee" here indicates an idealized concept of QoS
which cannot exist on any NIC which has a queue.  We should try to approach
the ideal, but understand we cannot reach it.

AFAICT, having a non-resortable head entry in the queue is exactly like having
a one-packet-longer queue on the NIC.  A little further from the ideal, but
actually *less* damaging to the QoS ideal unless it happens on every second
packet.

On the other hand, we're underutilizing the queue to avoid it.  I find that a 
little embarrassing.

  We provided an API, people used it.  Constantly trying to disclaim our
  responsibility for the resulting mess makes me fucking ANGRY.

 Where have I disclaimed responsibility? If we were doing that
 then we wouldn't be having this discussion.

 Anyway, I don't think we should reshape our APIs based on how
 broken the existing users are.

Perhaps I was reading too much into it, but the implication was that we should
blame the driver authors for writing their drivers in what I consider to be
the most straightforward and efficient way.

I feel we're being horribly deceptive by giving them a nice API, and upon
review, telling them "don't use that".  And it's been ongoing for far too
long.

 In fact queueing it in the driver is just as bad as return TX_BUSY!

Agreed (modulo the tcpdump issue).  And worse, because it's ugly and complex!

Thanks,
Rusty.


Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-19 Thread Herbert Xu
On Fri, Jun 19, 2009 at 11:20:44PM +0930, Rusty Russell wrote:

 Your use of the word "guarantee" here indicates an idealized concept of QoS
 which cannot exist on any NIC which has a queue.  We should try to approach
 the ideal, but understand we cannot reach it.

I'm just pointing out that it's better not to have to do this.

Since TX_BUSY and requeueing the packet are unnecessary in the
first place (we can avoid them by managing the stop-queue action
properly), there is no reason to accept their downsides.

 On the other hand, we're underutilizing the queue to avoid it.  I find that a 
 little embarrassing.

Here's why I think this is not an issue.  If your NIC is high
bandwidth then your ring is going to have to be huge so the
amount that is underutilised (a 64K packet) is tiny.  If your
NIC is low bandwidth then this is where you often need QoS and
in that case you do *NOT* want to fully utilise the HW queue.

 I feel we're being horribly deceptive by giving them a nice API, and upon 
 review, telling them don't use that.  And it's been ongoing for far too 
 long.

If you look at our API documentation it actually says:

Return codes:
o NETDEV_TX_OK everything ok.
o NETDEV_TX_BUSY Cannot transmit packet, try later
  Usually a bug, means queue start/stop flow control is broken in
  the driver. Note: the driver must NOT put the skb in its DMA ring.
o NETDEV_TX_LOCKED Locking failed, please retry quickly.
  Only valid when NETIF_F_LLTX is set.

So I don't feel too bad :)

  In fact queueing it in the driver is just as bad as return TX_BUSY!
 
 Agreed (modulo the tcpdump issue).  And worse, because it's ugly and complex!

The right solution is to stop the queue properly.

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-18 Thread Rusty Russell
On Sun, 14 Jun 2009 04:15:28 pm Herbert Xu wrote:
 On Sat, Jun 13, 2009 at 10:00:37PM +0930, Rusty Russell wrote:
  But re your comment that the 67 drivers using TX_BUSY are doing it
  because of driver bugs, that's hard to believe.  It either hardly ever
  happens (in which case just drop the packet), or it happens (in which
  case we should handle it correctly).

 Most of them just do this:

 start_xmit:

 if (unlikely(queue is full)) {
   /* This should never happen. */
   return TX_BUSY;
 }

OK, so I did a rough audit, trying to separate the "never happens" ones (N:
could be kfree_skb(skb); return NETDEV_TX_OK;) from the "will actually
happen" ones (Y).

One question: can netif_queue_stopped() ever be true inside start_xmit?  Some 
drivers test that, like sun3lance.c.

Some have multiple returns, and some are unclear, but my best guess on a 
quickish reading is below.

Summary: we still have about 54 in-tree drivers which actually use 
NETDEV_TX_BUSY for normal paths.  Can I fix it now?

Thanks,
Rusty.

sungem.c: Y, N
fs_enet: N
mace.c: N
sh_eth.c: Y
de620.c: N
acenic.c: N (but goes through stupid hoops to avoid it)
e1000_main.c: N, Y
korina.c: N (Buggy: frees skb and returns TX_BUSY.)
sky2.c: N
cassini.c: N
ixgbe: N
b44.c: N, Y, Y (last two buggy: OOM, does not stop q)
macb.c: N
niu.c: N
smctr.c: N
olympic.c: Y
tms380tr.c: N
3c359.c: Y
lanstreamer.c: Y
lib8390.c: N
xirc2ps_cs.c: Y
smc91c92_cs.c: N
fmvj18x_cs.c: N (and buggy: can't happen AFAICT, and return 0 above?)
axnet_cs.c: N
smsc9420.c: N (and buggy: doesn't stop q)
mkiss.c: N (In xmit, can netif_running be false? Can netif_queue_stopped?)
skge.c: N
qlge: N, N, Y, N (buggy, OOM, does not stop q) 
chelsio: N
s2io.c: Y, Y?
macmace.c: N
3c505.c: Y
defxx.c: Y
myri10ge: N
sbni.c: Y
wanxl.c: N
cycx_x25.c: N, N, Y?
dlci.c: Y
qla3xxx.c: N, N (buggy, OOM, does not stop q), Y, N, 
tlan.c: Y
skfp.c: N
cs89x0.c: N
smc9194.c: N
fec_mpc52xx.c: N
mv643xx_eth.c: N (buggy, OOM, does not stop q)
ll_temac_main.c: Y, Y
netxen: Y
tsi108_eth.c: N, N
ni65.c: N
sunhme.c: N
atl1c.c: Y
ps3_gelic_net.c: Y
igbvf: N
cxgb3: N
ks8695net.c: N, N (buggy, neither stops q, latter OOM)
ether3.c: N
at91_ether.c: N
bnx2x_main.c: N, N
dm9000.c: N
jme.c: N
3c537.c: Y (plus, leak on skb_padto fail)
arcnet.c: N?
3c59x.c: N
au1000_eth.c: Y
ixgb: N
de600.c: N, N, N
myri_sbus.c: Y
bnx2.c: N
atl1e: Y
sonic.c: who cares, that won't even compile... (missing semicolon)
sun3_82586.c: N
3c515.c: N
ibm_newemac.c: Y
donauboe.c: Y?, Y?, Y?, Y (but never stops q)
sir_dev.c: Y
au1k_ir.c: Y, Y
cpmac.c: N (no stop q, and leak on skb_padto fail), Y
davinci_emac.c: N (no stop q), Y
de2104x.c: N
uli526x.c: N
dmfe.c: N
xircom_cb.c: N
iwmc3200wifi: Y
orinoco: N, N, N, N (no stop q)
atmel.c: Y
p54common.c: N, Y?
arlan-main.c: Y?
libipw_tx.c: Y (no stop q), N (alloc failure)
hostap_80211_tx.c: Y
strip.c: N
wavelan.c: N, N, N
at76c50x-usb.c: N
libertas/tx.c: Y
ray_cs.c: N, N
airo.c: Y, Y, Y
plip.c: N, N, N (starts q, BUSY on too big pkt?)
ns83820.c: N, N
ehea: Y, Y (no stop q)
rionet.c: N
enic: N
sis900.c: N
starfire.c: Y
r6040.c: N
sun3lance.c: N, N
sfc: Y, N, Y
mac89x0.c: N
sb1250-mac.c: Y
pasemi_mac.c: Y
8139cp.c: N
e1000e: N
r8169.c: N?
sis190.c: N
e100.c: N
tg3.c: N, Y?, N
fec.c: N (no stop q), N
hamachi.c: N
forcedeth.c: Y, Y
vxge: Y?, Y?
ks8842.c: Y
spider_net.c: Y
igb: N
ewrk3.c: N
gianfar.c: Y
sunvnet.c: N
mlx4: Y
atlx: Y, Y


Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-18 Thread Herbert Xu
On Thu, Jun 18, 2009 at 04:47:50PM +0930, Rusty Russell wrote:
 
 One question: can netif_queue_stopped() ever be true inside start_xmit?  Some 
 drivers test that, like sun3lance.c.

The driver should never test that because even if it is true
due to driver shutdown the xmit function should ignore it as
the upper layer will wait for it anyway.

 Summary: we still have about 54 in-tree drivers which actually use 
 NETDEV_TX_BUSY for normal paths.  Can I fix it now?

You can fix it but I don't quite understand your results below :)

 sungem.c: Y, N

This driver does the bug check in addition to a race check that
should simply drop the packet instead of queueing.  In fact chances
are the race check is unnecessary anyway.

 fs_enet: N

This is either just a bug check or the driver is broken in that
it should stop the queue when the said condition can be true.

 mace.c: N

Just a bug check.

 sh_eth.c: Y

This driver should check the queue after transmitting, just like
virtio-net :)

So from a totally non-representative sample of 4, my conclusion
is that none of them need TX_BUSY.  Do you have an example that
really needs it?

Anyway, I don't think we should reshape our APIs based on how
broken the existing users are.

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-18 Thread Rusty Russell
On Thu, 18 Jun 2009 05:04:22 pm Herbert Xu wrote:
 On Thu, Jun 18, 2009 at 04:47:50PM +0930, Rusty Russell wrote:
  Summary: we still have about 54 in-tree drivers which actually use
  NETDEV_TX_BUSY for normal paths.  Can I fix it now?

 You can fix it but I don't quite understand your results below :)

You didn't comment on my patch which tried to fix NETDEV_TX_BUSY tho?

  sungem.c: Y, N

 This driver does the bug check in addition to a race check that
 should simply drop the packet instead of queueing.  In fact chances
 are the race check is unnecessary anyway.

OK, "N" means it can simply be replaced with "kfree_skb(skb); return
NETDEV_TX_OK;".  "Y" means the driver will break if we do that and needs
rewriting.  I didn't grade how hard or easy the rewrite would be, but later
on I got more picky (I would have said this is N, N: the race can be
replaced with a drop).

  fs_enet: N

 This is either just a bug check or the driver is broken in that
 it should stop the queue when the said condition can be true.

  mace.c: N

 Just a bug check.

Err, that's why they're N (i.e. they do not need TX_BUSY).

  sh_eth.c: Y

 This driver should check the queue after transmitting, just like
 virtio-net :)

 So from a totally non-representative sample of 4, my conclusion
 is that none of them need TX_BUSY.  Do you have an example that
 really needs it?

First you asserted "Most of them just do this: ... /* Never happens */".  Now
I've found ~50 drivers which don't do that, and it's become "Do any of them
really need it?".

So, now I'll look at that.  Some are just buggy (I'll send patches for those).
For most, I just have no idea what they're doing; they're pretty ugly.  These
ones are interesting:

e1000/e1000_main.c: fifo bug workaround?
ehea/ehea_main.c: ?
starfire.c: "we may not have enough slots even when it seems we do"?
tg3.c: tg3_gso_bug

ISTR at least one driver claimed practice showed it was better to return
TX_BUSY, and one insisted it wasn't going to waste MAX_FRAGS slots on the
stop-early scheme.

 Anyway, I don't think we should reshape our APIs based on how
 broken the existing users are.

We provided an API, people used it.  Constantly trying to disclaim our 
responsibility for the resulting mess makes me fucking ANGRY.

We either remove the API, or fix it.  I think fixing it is better, because my
driver will be simpler and it's obvious no one wants to rewrite 50 drivers and
break several of them.

I don't know how many times I can say the same thing...
Rusty.


Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-18 Thread Herbert Xu
On Fri, Jun 19, 2009 at 01:07:19PM +0930, Rusty Russell wrote:

 You didn't comment on my patch which tried to fix NETDEV_TX_BUSY tho?

I think fixing it would only encourage more drivers to use and
abuse TX_BUSY.  The fundamental problem with TX_BUSY is that
you're doing the check before transmitting a packet instead of
after transmitting it.

Let me explain why this is wrong, beyond the fact that tcpdump
may see the packet twice (which you've tried to fix).  The problem
is that requeueing is fundamentally hard.  We used to have this
horrible logic in the schedulers to handle it.  Thankfully that
seems to have been replaced with a single device-level packet
holder shared with GSO.

However, that is still wrong for many packet schedulers.  For
example, if the requeued packet is of a lower priority, and a
higher priority packet comes along, we want the higher priority
packet to preempt the requeued packet.  Right now it just doesn't
happen.

This is not as trivial as it seems because on a busy host this can
happen many times a second.  With TX_BUSY the QoS guarantees are
simply not workable.

BTW you pointed out that GSO also uses TX_BUSY, but that is
different because the packet schedulers treat a GSO packet as
a single entity so there is no issue of preemption.  Also tcpdump
will never see it twice by design.

 e1000/e1000_main.c: fifo bug workaround?

The workaround should work just as well as a stop-queue check
after packet transmission.

 ehea/ehea_main.c: ?

Ahh! The bastard LLTX drivers are still around.  LLTX was the
worst abuse associated with TX_BUSY.  Thankfully not many of them
are left.

The fix is to not use LLTX and use the xmit_lock like normal
drivers.

 starfire.c: "we may not have enough slots even when it seems we do"?

Just replace skb_num_frags with SKB_MAX_FRAGS and move the check
after the transmit.

 tg3.c: tg3_gso_bug

A better solution would in fact be to disable hardware TSO when
we encounter such a packet (and drop the first one).

Because once you get one you're likely to get a lot more.  The
difference between hardware TSO and GSO on a card like tg3 is
negligible anyway.
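
A sketch of what that suggestion could look like in a driver's start_xmit
(hypothetical code, not tg3's actual fix; sketch_hits_tso_bug() stands in
for whatever detects the problematic packet):

	if (sketch_hits_tso_bug(skb)) {
		/* Turn off hardware TSO so later packets arrive already
		 * segmented by GSO, and drop this first one. */
		dev->features &= ~NETIF_F_TSO;
		dev_kfree_skb_any(skb);
		return NETDEV_TX_OK;
	}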

Alternatively just disable TSO completely on those chips.

Ccing the tg3 maintainer.
 
 We provided an API, people used it.  Constantly trying to disclaim our 
 responsibility for the resulting mess makes me fucking ANGRY.

Where have I disclaimed responsibility? If we were doing that
then we wouldn't be having this discussion.

 We either remove the API, or fix it.  I think fixing it is better, because my
 driver will be simpler and it's obvious no one wants to rewrite 50 drivers and
 break several of them.

My preference is obviously in the long term removal of TX_BUSY.
Due to resource constraints that cannot be done immediately.  So
at least we should try to stop its proliferation.

BTW removing TX_BUSY does not mean that your driver has to stay
complicated.  As I have said repeatedly your driver should be
checking the stop-queue condition after transmission, not before.
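
A minimal sketch of that pattern (the sketch_ names are hypothetical
stand-ins, not virtio_net's actual helpers): start_xmit never checks for
space up front, because the queue was stopped after the previous packet if a
worst-case packet could no longer fit:

static int sketch_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct sketch_priv *priv = netdev_priv(dev);

	/* The ring has room by construction: we stopped the queue in
	 * time on the previous transmission. */
	sketch_hw_queue_skb(priv, skb);

	/* Check *after* transmitting: stop while there is still room
	 * for a maximally-fragmented skb.  The TX-completion interrupt
	 * restarts the queue once space is reclaimed. */
	if (sketch_free_slots(priv) < MAX_SKB_FRAGS + 2)
		netif_stop_queue(dev);

	return NETDEV_TX_OK;
}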

In fact queueing it in the driver is just as bad as return TX_BUSY!

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-14 Thread Herbert Xu
On Sat, Jun 13, 2009 at 10:00:37PM +0930, Rusty Russell wrote:
 
 But re your comment that the 67 drivers using TX_BUSY are doing it because of 
 driver bugs, that's hard to believe.  It either hardly ever happens (in which 
 case just drop the packet), or it happens (in which case we should handle it 
 correctly).

Most of them just do this:

start_xmit:

	if (unlikely(queue is full)) {
		/* This should never happen. */
		return TX_BUSY;
	}

	transmit

	if (queue is full)
		stop queue

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-13 Thread Rusty Russell
On Mon, 8 Jun 2009 02:52:47 pm Herbert Xu wrote:
 No you've misunderstood my complaint.  I'm not trying to get you
 to replace NETDEV_TX_BUSY by the equally abhorrent queue in the
 driver, I'm saying that you should stop the queue before you get
 a packet that overflows by looking at the amount of free queue
 space after transmitting each packet.

 For most drivers this is easy to do.  What's so different about
 virtio-net that makes this impossible?

If we assume the worst case, i.e. that the next packet will use max frags, we
get close (make add_buf take an unsigned int *descs_left arg).  Obviously,
this is suboptimal use of the ring.  We can still get kmalloc failures with
indirect descriptors, but dropping a packet then isn't a huge deal IMHO.
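
Concretely, the proposal amounts to something like this sketch (the
descs_left out-parameter on add_buf is the hypothetical extension mentioned
above, not the actual virtio interface):

	unsigned int descs_left;
	int err;

	err = vi->svq->vq_ops->add_buf(vi->svq, sg, out, in, skb, &descs_left);
	BUG_ON(err < 0);	/* can't happen: worst-case room was reserved */
	vi->svq->vq_ops->kick(vi->svq);

	/* The next packet needs at most MAX_SKB_FRAGS + 2 descriptors
	 * (fragments, plus linear data, plus the virtio_net header). */
	if (descs_left < MAX_SKB_FRAGS + 2)
		netif_stop_queue(dev);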

But re your comment that the 67 drivers using TX_BUSY are doing it because of 
driver bugs, that's hard to believe.  It either hardly ever happens (in which 
case just drop the packet), or it happens (in which case we should handle it 
correctly).

TX_BUSY makes me queasy:  you haven't convinced me it shouldn't be killed or 
fixed.

Did you look at my attempted fix?
Rusty.


Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-07 Thread Herbert Xu
On Wed, Jun 03, 2009 at 12:47:04PM +0930, Rusty Russell wrote:
 
 We could figure out if we can take the worst-case packet, and underutilize
 our queue.  And fix the other *67* drivers.

Most of those are for debugging purposes, i.e., they'll never
happen unless the driver is buggy.

 Of course that doesn't even work, because we return NETDEV_TX_BUSY from dev.c!

If and when your driver becomes part of the core and it has to
feed into other drivers, then you can use this argument :)

 "Hi, core netdevs here.  Don't use NETDEV_TX_BUSY.  Yeah, we can't figure out
 how to avoid it either.  But y'know, just hack something together."

No you've misunderstood my complaint.  I'm not trying to get you
to replace NETDEV_TX_BUSY by the equally abhorrent queue in the
driver, I'm saying that you should stop the queue before you get
a packet that overflows by looking at the amount of free queue
space after transmitting each packet.

For most drivers this is easy to do.  What's so different about
virtio-net that makes this impossible?

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-02 Thread Mark McLoughlin
On Fri, 2009-05-29 at 23:46 +0930, Rusty Russell wrote:

 This effectively reverts 99ffc696d10b28580fe93441d627cf290ac4484c
 ("virtio: wean net driver off NETDEV_TX_BUSY").
 
 The complexity of queuing an skb (setting a tasklet to re-xmit) is
 questionable,

It certainly adds some subtle complexities to start_xmit() 

  especially once we get rid of the other reason for the
 tasklet in the next patch.
 
 If the skb won't fit in the tx queue, just return NETDEV_TX_BUSY.  It
 might be frowned upon, but it's common and not going away any time
 soon.
 
 Signed-off-by: Rusty Russell ru...@rustcorp.com.au
 Cc: Herbert Xu herb...@gondor.apana.org.au
 ---
  drivers/net/virtio_net.c |   49 ++-
  1 file changed, 11 insertions(+), 38 deletions(-)
 
 diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
 --- a/drivers/net/virtio_net.c
 +++ b/drivers/net/virtio_net.c
  
 @@ -526,27 +517,14 @@ again:
 	/* Free up any pending old buffers before queueing new ones. */
 	free_old_xmit_skbs(vi);
 
 -	/* If we has a buffer left over from last time, send it now. */
 -	if (unlikely(vi->last_xmit_skb) &&
 -	    xmit_skb(vi, vi->last_xmit_skb) != 0)
 -		goto stop_queue;
 +	/* Put new one in send queue and do transmit */
 +	__skb_queue_head(&vi->send, skb);
 +	if (likely(xmit_skb(vi, skb) == 0)) {
 +		vi->svq->vq_ops->kick(vi->svq);
 +		return NETDEV_TX_OK;
 +	}

Hmm, is it okay to leave the skb on the send queue if we return
NETDEV_TX_BUSY?

Cheers,
Mark.



Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-02 Thread Herbert Xu
On Fri, May 29, 2009 at 11:46:04PM +0930, Rusty Russell wrote:
 
 This effectively reverts 99ffc696d10b28580fe93441d627cf290ac4484c
 ("virtio: wean net driver off NETDEV_TX_BUSY").
 
 The complexity of queuing an skb (setting a tasklet to re-xmit) is
 questionable, especially once we get rid of the other reason for the
 tasklet in the next patch.
 
 If the skb won't fit in the tx queue, just return NETDEV_TX_BUSY.  It
 might be frowned upon, but it's common and not going away any time
 soon.

This looks like a step backwards to me.  If we have to do this
to fix a bug, sure.  But just doing it for the sake of it smells
wrong.

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-02 Thread Rusty Russell
On Tue, 2 Jun 2009 06:35:52 pm Herbert Xu wrote:
 On Fri, May 29, 2009 at 11:46:04PM +0930, Rusty Russell wrote:
  This effectively reverts 99ffc696d10b28580fe93441d627cf290ac4484c
  ("virtio: wean net driver off NETDEV_TX_BUSY").
 
  The complexity of queuing an skb (setting a tasklet to re-xmit) is
  questionable, especially once we get rid of the other reason for the
  tasklet in the next patch.
 
  If the skb won't fit in the tx queue, just return NETDEV_TX_BUSY.  It
  might be frowned upon, but it's common and not going away any time
  soon.

 This looks like a step backwards to me.  If we have to do this
 to fix a bug, sure.  But just doing it for the sake of it smells
 wrong.

I disagree.  We've introduced a third queue, inside the driver, one element 
long.  That feels terribly, terribly hacky and wrong.

What do we do if it overflows?  Discard the packet, even if we have room in the 
tx queue.   And when do we flush this queue?  Well, that's a bit messy.  
Obviously we need to enable the tx interrupt when the tx queue is full, but we 
can't just netif_wake_queue, we have to also flush the queue.  We can't do that 
in an irq handler, since we need to block the normal tx path (or introduce 
another lock and disable interrupts in our xmit routine).  So we add a tasklet 
to do this re-transmission.
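
Roughly, the machinery looks like this (a sketch with hypothetical names,
not the actual reverted code; last_xmit_skb mirrors the field in the patch
under discussion):

static void sketch_xmit_tasklet(unsigned long data)
{
	struct net_device *dev = (struct net_device *)data;
	struct sketch_priv *priv = netdev_priv(dev);

	/* Block the normal tx path while retransmitting the leftover
	 * skb, then re-open the queue. */
	netif_tx_lock(dev);
	if (priv->last_xmit_skb &&
	    sketch_xmit(priv, priv->last_xmit_skb) == 0) {
		priv->last_xmit_skb = NULL;
		netif_wake_queue(dev);
	}
	netif_tx_unlock(dev);
}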

Or, we could just "return NETDEV_TX_BUSY;".  I like that :)

Hope that clarifies,
Rusty.


Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-02 Thread Rusty Russell
On Tue, 2 Jun 2009 05:37:30 pm Mark McLoughlin wrote:
 On Fri, 2009-05-29 at 23:46 +0930, Rusty Russell wrote:
  This effectively reverts 99ffc696d10b28580fe93441d627cf290ac4484c
  ("virtio: wean net driver off NETDEV_TX_BUSY").
 
  The complexity of queuing an skb (setting a tasklet to re-xmit) is
  questionable,

 It certainly adds some subtle complexities to start_xmit()

   especially once we get rid of the other reason for the
  tasklet in the next patch.
 
  If the skb won't fit in the tx queue, just return NETDEV_TX_BUSY.  It
  might be frowned upon, but it's common and not going away any time
  soon.
 
  Signed-off-by: Rusty Russell ru...@rustcorp.com.au
  Cc: Herbert Xu herb...@gondor.apana.org.au
  ---
   drivers/net/virtio_net.c |   49 ++-
   1 file changed, 11 insertions(+), 38 deletions(-)
 
  diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
  --- a/drivers/net/virtio_net.c
  +++ b/drivers/net/virtio_net.c
 
  @@ -526,27 +517,14 @@ again:
  	/* Free up any pending old buffers before queueing new ones. */
  	free_old_xmit_skbs(vi);
 
  -	/* If we has a buffer left over from last time, send it now. */
  -	if (unlikely(vi->last_xmit_skb) &&
  -	    xmit_skb(vi, vi->last_xmit_skb) != 0)
  -		goto stop_queue;
  +	/* Put new one in send queue and do transmit */
  +	__skb_queue_head(&vi->send, skb);
  +	if (likely(xmit_skb(vi, skb) == 0)) {
  +		vi->svq->vq_ops->kick(vi->svq);
  +		return NETDEV_TX_OK;
  +	}

 Hmm, is it okay to leave the skb on the send queue if we return
 NETDEV_TX_BUSY?

Certainly not.  That's a bug.  Incremental fix is:

diff -u b/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
--- b/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -524,8 +524,9 @@
 		return NETDEV_TX_OK;
 	}
 
-	/* Ring too full for this packet. */
+	/* Ring too full for this packet, remove it from queue again. */
 	pr_debug("%s: virtio not prepared to send\n", dev->name);
+	__skb_unlink(skb, &vi->send);
 	netif_stop_queue(dev);
 
 	/* Activate callback for using skbs: if this returns false it

Thanks!
Rusty.


Re: [PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.

2009-06-02 Thread Herbert Xu
On Tue, Jun 02, 2009 at 11:25:57PM +0930, Rusty Russell wrote:

 Or, we could just return NETDEV_TX_BUSY;.  I like that :)

No, you should fix it so that you check the queue status after
transmitting a packet, so we never get into this state in the
first place.  NETDEV_TX_BUSY is just passing the problem to
someone else, which is not nice at all.

For example, anyone running tcpdump will now see the packet
twice.

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt