OK, here's a new attempt to use the new capacity API. I also added more
comments to clarify the logic. Hope this is more readable. Let me know,
please.
This is on top of the patches applied by Rusty.
Note: there are now actually 2 calls to free_old_xmit_skbs on the
data path, so instead of passing a flag
Add API to check ring capacity. Because of the option
to use indirect buffers, this returns the worst
case, not the normal case capacity.
Signed-off-by: Michael S. Tsirkin m...@redhat.com
---
 drivers/virtio/virtio_ring.c |    8 ++++++++
 include/linux/virtio.h       |    5 +++++
 2 files changed, 13 insertions(+)
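
For readers following along, here is a minimal sketch of what such a
worst-case capacity check could look like. The function name
virtqueue_get_capacity, the to_vvq() helper and the num_free field are
assumptions modelled on the in-tree virtio_ring code of the time, not
necessarily the exact patch:

/*
 * Sketch only, not the actual patch.  Worst-case capacity is simply the
 * number of free ring descriptors: without indirect buffers every sg entry
 * of an added buffer consumes one descriptor, while with indirect buffers
 * a whole chain fits behind a single descriptor, so the real capacity may
 * be higher than reported here.
 */
int virtqueue_get_capacity(struct virtqueue *_vq)
{
	struct vring_virtqueue *vq = to_vvq(_vq);

	return vq->num_free;
}
EXPORT_SYMBOL_GPL(virtqueue_get_capacity);

A driver would compare this against the worst-case descriptor count of its
next buffer (for virtio_net, roughly 2 + MAX_SKB_FRAGS) before deciding
whether to stop or restart the queue.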
In the (rare) case where new descriptors are used
while virtio_net enables vq callback for the TX vq,
virtio_net uses the number of sg entries in the skb it frees to
calculate how many descriptors in the ring have just been made
available. But this value is an overestimate: with indirect buffers
each skb occupies only a single ring descriptor, so far fewer entries
may actually have been freed.
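
To make the over-estimate concrete, here is a hedged sketch (the helper
name skb_ring_slots is invented purely for illustration) of the two
accountings:

#include <linux/skbuff.h>

/*
 * Sketch for illustration only.  How many ring descriptors does one TX skb
 * occupy?  The old estimate counts the virtio header plus every sg entry;
 * with indirect descriptors the whole chain hangs off a single ring slot,
 * so freeing the skb actually returns one descriptor, not 2 + nr_frags.
 */
static unsigned int skb_ring_slots(const struct sk_buff *skb, bool indirect)
{
	if (indirect)
		return 1;	/* one slot points at the indirect table */

	return 2 + skb_shinfo(skb)->nr_frags;	/* header + linear data + frags */
}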
Current code might introduce a lot of latency variation
if there are many pending bufs at the time we
attempt to transmit a new one. This is bad for
real-time applications and can't be good for TCP either.
Free up just enough to both clean up all buffers
eventually and to be able to xmit the next packet.
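
A hedged sketch of that policy (the helper names, vi->svq and the use of
virtqueue_get_capacity from the earlier patch are assumptions modelled on
the virtio_net code of the time, not the exact patch):

/*
 * Sketch only.  Reclaim completed TX skbs, but stop as soon as the ring
 * has room for one worst-case packet, so a long backlog is not all freed
 * inside a single xmit call, bounding the latency added to any one
 * transmit.
 */
static void free_old_xmit_skbs(struct virtnet_info *vi)
{
	struct sk_buff *skb;
	unsigned int len;
	/* worst case: virtio header + linear part + all page fragments */
	int needed = 2 + MAX_SKB_FRAGS;

	while (virtqueue_get_capacity(vi->svq) < needed &&
	       (skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
		vi->dev->stats.tx_bytes += skb->len;
		vi->dev->stats.tx_packets++;
		dev_kfree_skb_any(skb);
	}
}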
Add an option to modify the notification
hand-off in virtio to be basically like Xen:
each side publishes an index, and the other side only triggers
an event when it crosses that index value
(Xen event indexes start at 1, ours start at 0 for
backward-compatibility, but that's minor).
Since we've run
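
The heart of that hand-off is a single wrap-around-safe comparison; the
helper below mirrors the vring_need_event() test, shown here as an
illustration of the mechanism (all arithmetic is mod 2^16):

/*
 * The other side published event_idx; we have just moved our index from
 * old_idx to new_idx.  Notify only if the published index was crossed in
 * that step.  Unsigned 16-bit arithmetic keeps the test correct across
 * index wrap-around.
 */
static inline int vring_need_event(__u16 event_idx, __u16 new_idx, __u16 old_idx)
{
	return (__u16)(new_idx - event_idx - 1) < (__u16)(new_idx - old_idx);
}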
On Wed, 1 Jun 2011 13:25:48 +0300, Michael S. Tsirkin m...@redhat.com wrote:
Add an option to modify the notification
hand-off in virtio to be basically like Xen:
each side publishes an index, and the other side only triggers
an event when it crosses that index value
(Xen event indexes start at 1,
On Wed, 1 Jun 2011 03:24:29 -0400, Mark Wu d...@redhat.com wrote:
Current index allocation in virtio-blk is based on a monotonically
increasing variable, index. It could cause some confusion about disk
names in the case of hot-plugging disks. And it's impossible to find the
lowest available index
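
One way to get lowest-available allocation is the kernel's IDA allocator;
the sketch below uses ida_simple_get()/ida_simple_remove() purely as an
illustration of the approach (the names vd_index_ida and the wrapper
functions are assumptions, not necessarily the actual patch):

#include <linux/idr.h>

static DEFINE_IDA(vd_index_ida);

/*
 * Sketch: allocate the lowest free index so that a re-plugged disk reuses
 * the slot (and hence the vdX name) that was freed, instead of a counter
 * marching forward forever.
 */
static int virtblk_index_get(void)
{
	return ida_simple_get(&vd_index_ida, 0, 0, GFP_KERNEL);
}

static void virtblk_index_put(int index)
{
	ida_simple_remove(&vd_index_ida, index);
}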
On Wed, 1 Jun 2011 12:50:03 +0300, Michael S. Tsirkin m...@redhat.com wrote:
Current code might introduce a lot of latency variation
if there are many pending bufs at the time we
attempt to transmit a new one. This is bad for
real-time applications and can't be good for TCP either.
Free up
On Wed, 1 Jun 2011 12:49:46 +0300, Michael S. Tsirkin m...@redhat.com wrote:
Add API to check ring capacity. Because of the option
to use indirect buffers, this returns the worst
case, not the normal case capacity.
Can we drop the silly add_buf() returns capacity hack then?
Thanks,
Rusty.
On Wed, 1 Jun 2011 12:49:54 +0300, Michael S. Tsirkin m...@redhat.com wrote:
In the (rare) case where new descriptors are used
while virtio_net enables vq callback for the TX vq,
virtio_net uses the number of sg entries in the skb it frees to
calculate how many descriptors in the ring have