skb_page_frag_refill currently permits only order-0 page allocs
unless __GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not __GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0).
Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page frag
allocators") changed the mergeable receive buffer size from PAGE_SIZE to
MTU-size, introducing a single-stream regression for benchmarks with large
average packet size. There is no single optimal buffer size for all
workloads. [...]
Extend existing support for netdevice receive queue sysfs attributes to
permit a device-specific attribute group. Initial use case for this
support will be to allow the virtio-net device to export per-receive
queue mergeable receive buffer size.
Signed-off-by: Michael Dalton mwdal...@google.com
Add initial support for per-rx queue sysfs attributes to virtio-net. If
mergeable packet buffers are enabled, adds a read-only mergeable packet
buffer size sysfs attribute for each RX queue.
Signed-off-by: Michael Dalton mwdal...@google.com
---
drivers/net/virtio_net.c | 66
Sorry, just realized - I think disabling NAPI is necessary but not
sufficient. There is also the issue that refill_work() could be
scheduled. If refill_work() executes, it will re-enable NAPI. We'd need
to cancel the vi->refill delayed work to prevent this AFAICT, and also
ensure that no other [...]
On Jan 16, 2014 at 10:57 AM, Ben Hutchings bhutchi...@solarflare.com wrote:
Why write a loop when you can do:
i = queue - dev->_rx;
Good point, the loop approach was done in get_netdev_queue_index --
I agree your fix is faster and simpler. I'll fix in next patchset.
Thanks!
Best,
Mike
The virtio-net driver currently uses netdev_alloc_frag() for GFP_ATOMIC
mergeable rx buffer allocations. This commit migrates virtio-net to use
per-receive queue page frags for GFP_ATOMIC allocation. This change unifies
mergeable rx buffer memory allocation, which now will use skb_page_frag_refill() [...]
To ensure ewma_read() without a lock returns a valid but possibly
out of date average, modify ewma_add() by using ACCESS_ONCE to prevent
intermediate wrong values from being written to avg->internal.
Suggested-by: Eric Dumazet eric.duma...@gmail.com
Signed-off-by: Michael Dalton
On Thu, 2014-01-16 at 11:51 -0800, Michael Dalton wrote:
On Thu, Jan 16, 2014 Ben Hutchings bhutchi...@solarflare.com wrote:
It's simpler but we don't know if it's faster (and I don't believe that
matters for the current usage).
If one of these functions starts to be used in the data [...]
From: David Miller da...@davemloft.net
Date: Thu, 16 Jan 2014 15:28:00 -0800 (PST)
All 6 patches applied.
Actually, I reverted, please resubmit this series with the following
build warning corrected:
net/core/net-sysfs.c: In function ‘rx_queue_add_kobject’:
net/core/net-sysfs.c:767:21:
On Thu, Jan 16, 2014 at 3:30 PM, David Miller da...@davemloft.net wrote:
Actually, I reverted, please resubmit this series with the following
build warning corrected:
Thanks David, I will send out another patchset shortly with the warning
resolved and a header e-mail (and one other sysfs group [...]
From: Jason Wang jasow...@redhat.com
Date: Thu, 16 Jan 2014 14:45:24 +0800
It looks like there's no need for those two fields:
- Unless there's a failure for the first refill try, rq->max should be always
equal to the vring size.
- rq->num is only used to determine the condition that we [...]
The virtio-net device currently uses aligned MTU-sized mergeable receive
packet buffers. Network throughput for workloads with large average
packet size can be improved by posting larger receive packet buffers.
However, due to SKB truesize effects, posting large (e.g, PAGE_SIZE)
buffers reduces [...]
To ensure ewma_read() without a lock returns a valid but possibly
out of date average, modify ewma_add() by using ACCESS_ONCE to prevent
intermediate wrong values from being written to avg-internal.
Suggested-by: Eric Dumazet eric.duma...@gmail.com
Acked-by: Michael S. Tsirkin m...@redhat.com
Add initial support for per-rx queue sysfs attributes to virtio-net. If
mergeable packet buffers are enabled, adds a read-only mergeable packet
buffer size sysfs attribute for each RX queue.
Suggested-by: Michael S. Tsirkin m...@redhat.com
Acked-by: Michael S. Tsirkin m...@redhat.com
This series does not apply cleanly to net-next.