Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
net/tap-linux.c |4 ++--
net/tap-win32.c |2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/net/tap-linux.c b/net/tap-linux.c
index 059f5f3..0a6acc7 100644
--- a/net/tap-linux.c
+++ b/net/tap-linux.c
To support multiqueue, this patch introduces a helper qemu_get_queue()
which is used to get the NetClientState of a device. The following patches will
refactor this helper to support multiqueue.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
hw/cadence_gem.c|9
To support multiqueue, this patch introduces a helper qemu_get_nic() to get the
NICState from a NetClientState. The following patches will refactor this helper
to support multiqueue.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
hw/cadence_gem.c|8
hw
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
hw/e1000.c |2 +-
hw/eepro100.c|2 +-
hw/ne2000.c |2 +-
hw/pcnet-pci.c |2 +-
hw/rtl8139.c |2 +-
hw/usb/dev-network.c |2 +-
hw/virtio-net.c |2 +-
hw/xen_nic.c
In multiqueue, all NetClientStates that belong to the same netdev or NIC have
the same id. So this patch introduces a helper qemu_find_net_clients_except()
which finds all NetClientStates with the same id. This will be used by
multiqueue networking.
Signed-off-by: Jason Wang
Signed-off-by
This patch separates the setup of a NetClientState from its allocation. This
allows allocating an array of NetClientStates and doing the initialization one
by one, which is what multiqueue needs.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
net/net.c | 29
To allow allocating an array of NetClientStates and freeing it at once, this
patch introduces a destructor for NetClientState. It can do type-specific
cleanup and will be used by multiqueue to free the whole array at once.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
include/net/net.h
allowed.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
hw/dp8393x.c|2 +-
hw/mcf_fec.c|2 +-
hw/qdev-properties-system.c | 46 +++---
hw/qdev-properties.h|6 +-
include/net/net.h | 18 +--
net/net.c
IFF_DETACH_QUEUE, the queue is disabled in the Linux kernel. When doing this
ioctl with IFF_ATTACH_QUEUE, the queue is enabled in the Linux kernel.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
net/tap-linux.h |4
1 files changed, 4 insertions(+), 0 deletions(-)
diff --git
This patch factors out the common initialization of tap into a new helper
net_init_tap_one(). This will be used by multiqueue tap patches.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
net/tap.c | 130 ++---
1 files
is only supported
on Linux; return an error on other platforms.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
net/tap-aix.c | 10 ++
net/tap-bsd.c | 10 ++
net/tap-haiku.c | 10 ++
net/tap-linux.c | 51
only done when
the tap was enabled.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
include/net/tap.h |2 ++
net/tap-win32.c | 10 ++
net/tap.c | 43 ---
3 files changed, 52 insertions(+), 3 deletions(-)
diff --git
its name after
creating the first queue.
Only Linux has this support, since it's the only platform that supports
multiqueue tap.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
include/net/tap.h |1 +
net/tap-aix.c |5 +
net/tap-bsd.c |5 +
ne
e multiqueue nic support, N peers of NetClientState
are built up.
A new parameter, mq_required, is introduced in tap_open() to create multiqueue
tap fds.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
include/net/tap.h |1 -
net/tap-aix.c |3 +-
net/tap-bsd.c
Add a queue_index to VirtQueue and a helper to fetch it; this can be used by
multiqueue-capable devices.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
hw/virtio.c |8
hw/virtio.h |1 +
2 files changed, 9 insertions(+), 0 deletions(-)
diff --git a/hw
To support multiqueue virtio-net, the first step is to separate the virtqueue
related fields from VirtIONet to a new structure VirtIONetQueue. The following
patches will add an array of VirtIONetQueue to VirtIONet based on this patch.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
This patch implements both userspace and vhost support for multiple queue
virtio-net (VIRTIO_NET_F_MQ). This is done by introducing an array of
VirtIONetQueue to VirtIONet.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
hw/virtio-net.c | 301
This patch adds migration support for multiqueue virtio-net. Instead of bumping
the version, we conditionally send the multiqueue info only when the device
supports more than one queue, to maintain backward compatibility.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
hw
Disable multiqueue support for pre-1.4 machine types.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
hw/pc_piix.c |4
1 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/hw/pc_piix.c b/hw/pc_piix.c
index ba09714..0af436c 100644
--- a/hw/pc_piix.c
+++ b/hw/pc_piix.c
Some devices (such as virtio-net) need the ability to destroy or reorder their
virtqueues; this patch adds a helper to do this.
Signed-off-by: Jason Wang
Signed-off-by: Michael S. Tsirkin
---
hw/virtio.c |9 +
hw/virtio.h |2 ++
2 files changed, 11 insertions(+), 0 deletions
On 02/01/2013 03:39 PM, Jason Wang wrote:
> Hello all:
>
> This series is an update of the last version of multiqueue virtio-net support.
Hi Anthony:
This series does not apply cleanly on master; could you please pick
these for 1.4?
Thanks
On 02/13/2013 05:21 AM, Alexander Graf wrote:
> On 01.02.2013, at 08:39, Jason Wang wrote:
>
>> This patch adds basic multiqueue support for qemu. The idea is simple, an
>> array
>> of NetClientStates were introduced in NICState, parse_netdev() were extended
>
On 02/11/2013 06:28 PM, Markus Armbruster wrote:
> Commit 264986e2 extended NetdevTapOptions without updating the
> documentation. Hasn't been addressed since. Must fix for 1.4, in my
> opinion.
Will send a patch to fix this.
Thanks
>
> This is the offending patch:
simpler.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c | 60
drivers/vhost/vhost.c |3 ++
2 files changed, 13 insertions(+), 50 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 959b1cd..d1a03dd 100644
--- a
Hello all:
I met an issue when testing multiqueue virtio-net: when testing guest
small-packet stream sending performance with netperf, I found a
regression with multiqueue. When I run 2 sessions of TCP_STREAM with
1024-byte messages from guest to local host, I get the following result:
1q result: 3457.
On 03/08/2013 11:05 PM, Eric Dumazet wrote:
> On Fri, 2013-03-08 at 14:24 +0800, Jason Wang wrote:
>> Hello all:
>>
>> I meet an issue when testing multiqueue virtio-net. When I testing guest
>> small packets stream sending performance with netperf. I find an
>>
On 03/09/2013 01:26 AM, Rick Jones wrote:
>
>>
>> Well, the point is : if your app does write(1024) bytes, thats probably
>> because it wants small packets from the very beginning. (See the TCP
>> PUSH flag ?)
>
> I think that raises the question of whether or not Jason was setting
> the test-speci
On 03/11/2013 12:50 AM, Michael S. Tsirkin wrote:
> On Thu, Mar 07, 2013 at 12:31:56PM +0800, Jason Wang wrote:
>> After commit 2b8b328b61c799957a456a5a8dab8cc7dea68575 (vhost_net: handle
>> polling
>> errors when setting backend), we in fact track the polling state thr
On 03/11/2013 03:09 PM, Jason Wang wrote:
> On 03/11/2013 12:50 AM, Michael S. Tsirkin wrote:
>> On Thu, Mar 07, 2013 at 12:31:56PM +0800, Jason Wang wrote:
>>> After commit 2b8b328b61c799957a456a5a8dab8cc7dea68575 (vhost_net: handle
>>> polling
>>> errors whe
On 03/11/2013 04:29 PM, Michael S. Tsirkin wrote:
> On Mon, Mar 11, 2013 at 03:09:10PM +0800, Jason Wang wrote:
>> On 03/11/2013 12:50 AM, Michael S. Tsirkin wrote:
>>> On Thu, Mar 07, 2013 at 12:31:56PM +0800, Jason Wang wrote:
>>>> After commit 2b8b328b61c799957a456
18.36/3230.11/+3.6% |
zerocopy enabled:
sessions | transaction rate (before/after/+improvement) | normalized (before/after/+improvement)
1        | 7318.33/11929.76/+63.0%                      | 521.86/843.30/+61.6%
25       | 167264.88/242422.15/+44.9%                   | 2181.60/2788.16/+27.8%
50       | 272181.02/294347.04/+8.1%                    | 3071.56/3257.85/+6.1%
Signed-off-b
On 04/14/2013 11:16 PM, Sasha Levin wrote:
> On 04/14/2013 06:01 AM, Michael S. Tsirkin wrote:
>> On Sat, Apr 13, 2013 at 05:23:41PM -0400, Sasha Levin wrote:
>>> On 04/12/2013 07:36 AM, Rusty Russell wrote:
Sasha Levin writes:
> On 04/11/2013 12:36 PM, Will Deacon wrote:
>> Hello fol
On 05/07/2013 08:44 PM, Michael S. Tsirkin wrote:
> On Tue, May 07, 2013 at 02:13:44PM +0930, Rusty Russell wrote:
>> "Michael S. Tsirkin" writes:
>>> On Mon, May 06, 2013 at 03:41:36PM +0930, Rusty Russell wrote:
Asias He writes:
> Asias He (3):
> vhost: Remove vhost_enable_zcopy
virtualizat...@lists.linux-foundation.org;
> kvm@vger.kernel.org; net...@vger.kernel.org; linux-ker...@vger.kernel.org;
> Jason Wang
> Subject: Re: [PATCH] virtio-net: Reporting traffic queue distribution
> statistics through ethtool
>
> On Sun, May 19, 2013 at 04:09:48PM +, Narasim
On 05/20/2013 11:06 AM, Qinchuanyu wrote:
> Right now the wake_up_process func is included in spin_lock/unlock, but it
> could be done outside the spin_lock.
> I have tested it with kernel 3.0.27 and guest suse11-sp2; it provides a 2%-3%
> net performance improvement.
>
> Signed-off-by: Chuanyu Qin
Mak
On 05/20/2013 12:22 PM, Qinchuanyu wrote:
> The patch below is base on
> https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/tree/drivers/vhost/vhost.c?id=refs/tags/next-20130517
>
> Signed-off-by: Chuanyu Qin
> --- a/drivers/vhost/vhost.c 2013-05-20 11:47:05.0 +0800
> ++
@vger.kernel.org; net...@vger.kernel.org; linux-ker...@vger.kernel.org;
> Jason Wang
> Subject: Re: [PATCH] virtio-net: Reporting traffic queue distribution
> statistics through ethtool
>
> On Sun, May 19, 2013 at 10:56:16PM +, Narasimhan, Sriram wrote:
>> Hi Michael,
On 05/22/2013 05:59 PM, Zang Hongyong wrote:
> On 2013/5/20 15:43, Michael S. Tsirkin wrote:
>> On Mon, May 20, 2013 at 02:11:19AM +, Qinchuanyu wrote:
>>> The vhost thread provides both tx and rx capability for virtio-net.
>>> In forwarding scenarios, tx and rx share the vhost thread, and
>>> thro
On 05/23/2013 04:50 PM, Michael S. Tsirkin wrote:
> Hey guys,
> I've updated the kvm networking todo wiki with current projects.
> Will try to keep it up to date more often.
> Original announcement below.
Thanks a lot. I've added the tasks I'm currently working on to the wiki.
btw. I notice the v
[vhost_net]
[] kthread+0xc6/0xd0
[] ? kthread_freezable_should_stop+0x70/0x70
[] ret_from_fork+0x7c/0xb0
[] ? kthread_freezable_should_stop+0x70/0x70
Signed-off-by: Jason Wang
---
drivers/vhost/net.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers
On 06/05/2013 09:44 PM, Sergei Shtylyov wrote:
> Hello.
>
> On 05-06-2013 11:40, Jason Wang wrote:
>
>> When we decide not use zero-copy, msg.control should be set to NULL
>> otherwise
>> macvtap/tap may set zerocopy callbacks which may decrease the kref of
>
[vhost_net]
[] kthread+0xc6/0xd0
[] ? kthread_freezable_should_stop+0x70/0x70
[] ret_from_fork+0x7c/0xb0
[] ? kthread_freezable_should_stop+0x70/0x70
Acked-by: Michael S. Tsirkin
Signed-off-by: Jason Wang
---
The patch is needed for -stable.
Changes from v1:
- code style issue fix
---
drivers/vhost
On 06/07/2013 03:31 PM, Qinchuanyu wrote:
> the wake_up_process func is included by spin_lock/unlock in vhost_work_queue,
> but it could be done outside the spin_lock.
> I have tested it with kernel 3.0.27 and guest suse11-sp2 using iperf; the
> numbers are below.
> original
&vq->mutex);
> vhost_zerocopy_signal_used(n, vq);
> mutex_unlock(&vq->mutex);
> @@ -1091,7 +1096,7 @@ err_used:
> vq->private_data = oldsock;
> vhost_net_enable_vq(n, vq);
> if (ubufs)
> - vhost_net_ubuf_put_
: Gleb Natapov
Cc: Paolo Bonzini
Cc: Vadim Rozenfeld
Cc: K. Y. Srinivasan
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Signed-off-by: Jason Wang
---
arch/x86/include/asm/kvm_para.h | 25 +
arch/x86/include/uapi/asm/kvm_para.h |1 +
: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: x...@kernel.org
Cc: Gleb Natapov
Cc: Paolo Bonzini
Cc: K. Y. Srinivasan
Signed-off-by: Jason Wang
---
arch/x86/kernel/cpu/hypervisor.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/x86/
Anvin"
Cc: "Paolo Bonzini"
Cc: Gleb Natapov
Cc: x...@kernel.org
Signed-off-by: Jason Wang
---
arch/x86/include/asm/processor.h | 20
1 files changed, 20 insertions(+), 0 deletions(-)
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/a
Switch to use hypervisor_cpuid_base() to detect KVM.
Cc: Gleb Natapov
Cc: Paolo Bonzini
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: x...@kernel.org
Cc: kvm@vger.kernel.org
Signed-off-by: Jason Wang
---
arch/x86/include/asm/kvm_para.h | 17 ++--
Signed-off-by: Jason Wang
---
arch/x86/include/asm/xen/hypervisor.h | 16 +---
1 files changed, 1 insertions(+), 15 deletions(-)
diff --git a/arch/x86/include/asm/xen/hypervisor.h
b/arch/x86/include/asm/xen/hypervisor.h
index 125f344..d866959 100644
--- a/arch/x86/include/asm/xen/hy
On 07/23/2013 09:48 PM, Gleb Natapov wrote:
> On Tue, Jul 23, 2013 at 05:41:02PM +0800, Jason Wang wrote:
>> > This patch introduce hypervisor_cpuid_base() which loop test the hypervisor
>> > existence function until the signature match and check the number of
>> >
On 07/23/2013 10:48 PM, H. Peter Anvin wrote:
> On 07/23/2013 06:55 AM, KY Srinivasan wrote:
>> This strategy of hypervisor detection based on some detection order IMHO is
>> not
>> a robust detection strategy. The current scheme works since the only
>> hypervisor emulated
>> (by other hypervisor
On 07/24/2013 12:03 AM, H. Peter Anvin wrote:
> On 07/23/2013 04:16 AM, Paolo Bonzini wrote:
>> That's nicer, though strcmp is what the replaced code used to do in
>> patches 2 and 3.
>>
>> Note that memcmp requires the caller to use "KVMKVMKVM\0\0" as the
>> signature (or alternatively hypervisor_
On 07/24/2013 12:48 PM, H. Peter Anvin wrote:
> On 07/23/2013 09:37 PM, Jason Wang wrote:
>> On 07/23/2013 10:48 PM, H. Peter Anvin wrote:
>>> On 07/23/2013 06:55 AM, KY Srinivasan wrote:
>>>> This strategy of hypervisor detection based on some detection order I
On 07/25/2013 03:59 PM, Paolo Bonzini wrote:
> Il 24/07/2013 23:37, H. Peter Anvin ha scritto:
>> What I'm suggesting is exactly that except that the native hypervisor is
>> later in CPUID space.
> Me too actually.
>
> I was just suggesting an implementation of the idea (that takes into
> account
Tosatti
Cc: Gleb Natapov
Cc: Paolo Bonzini
Cc: Frederic Weisbecker
Cc: linux-ker...@vger.kernel.org
Cc: de...@linuxdriverproject.org
Cc: kvm@vger.kernel.org
Cc: xen-de...@lists.xensource.com
Cc: virtualizat...@lists.linux-foundation.org
Signed-off-by: Jason Wang
---
arch/x86/include/asm
Cc: Paolo Bonzini
Cc: Gleb Natapov
Cc: x...@kernel.org
Signed-off-by: Jason Wang
---
Changes from V1:
- use memcpy() and uint32_t instead of strcmp()
---
arch/x86/include/asm/processor.h | 15 +++
1 files changed, 15 insertions(+), 0 deletions(-)
diff --git a/arch/x86/include/asm
: Jason Wang
---
arch/x86/include/asm/xen/hypervisor.h | 16 +---
1 files changed, 1 insertions(+), 15 deletions(-)
diff --git a/arch/x86/include/asm/xen/hypervisor.h
b/arch/x86/include/asm/xen/hypervisor.h
index 125f344..d866959 100644
--- a/arch/x86/include/asm/xen/hypervisor.h
Switch to use hypervisor_cpuid_base() to detect KVM.
Cc: Gleb Natapov
Cc: Paolo Bonzini
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: x...@kernel.org
Cc: kvm@vger.kernel.org
Signed-off-by: Jason Wang
---
Changes from V1:
- Introduce kvm_cpuid_base() which will be us
On 07/25/2013 04:54 PM, Jason Wang wrote:
> We try to handle the hypervisor compatibility mode by detecting hypervisors
> in a specific order. This is not robust, since hypervisors may implement
> each other's features.
>
> This patch tries to handle this situation by always ch
None of its callers use its return value, so let it return void.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c |5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 969a859..280ee66 100644
--- a/drivers/vhost/net.c
+++ b
Hi all:
This series tries to unify and simplify the vhost code, especially for
zerocopy. Please review.
Thanks
Jason Wang (6):
vhost_net: make vhost_zerocopy_signal_used() returns void
vhost_net: use vhost_add_used_and_signal_n() in
vhost_zerocopy_signal_used()
vhost: switch to use
oming from guest. The guest can easily exceed the limitation.
- We already check upend_idx != done_idx and switch to non-zerocopy then. So
even if all vq->heads were used, we can still do the packet transmission.
So remove this check completely.
Signed-off-by: Jason Wang
---
driver
Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if
upend_idx != done_idx we still set zcopy_used to true and roll back this choice
later. This can be avoided by determining zerocopy once, by checking all
conditions at one time beforehand.
Signed-off-by: Jason Wang
---
drivers
Switch to using vhost_add_used_and_signal_n() to avoid multiple calls to
vhost_add_used_and_signal(). With this patch we will call it at most 2 times
(considering done_idx wrap-around), compared to N times without the patch.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c | 13 -
1 files
We used to poll the vhost queue before marking DMA as done. This is racy: if
the vhost thread is woken up before DMA is marked done, the signal can be
missed. Fix this by always polling the vhost queue after marking DMA as done.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c |9
Let vhost_add_used() use vhost_add_used_n() to reduce code duplication.
Signed-off-by: Jason Wang
---
drivers/vhost/vhost.c | 43 ++-
1 files changed, 2 insertions(+), 41 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
On 08/16/2013 05:54 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 16, 2013 at 01:16:26PM +0800, Jason Wang wrote:
>> > Switch to use vhost_add_used_and_signal_n() to avoid multiple calls to
>> > vhost_add_used_and_signal(). With the patch we will call at most 2 times
>>
On 08/16/2013 05:56 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 16, 2013 at 01:16:27PM +0800, Jason Wang wrote:
>> > Let vhost_add_used() to use vhost_add_used_n() to reduce the code
>> > duplication.
>> >
>> > Signed-off-by: Jason Wang
> Does compiler
On 08/16/2013 06:00 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 16, 2013 at 01:16:29PM +0800, Jason Wang wrote:
>> We used to poll vhost queue before making DMA is done, this is racy if vhost
>> thread were waked up before marking DMA is done which can result the signal
>>
On 08/16/2013 06:02 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote:
>> We used to limit the max pending DMAs to prevent guest from pinning too many
>> pages. But this could be removed since:
>>
>> - We have the sk_wmem_alloc c
On 08/20/2013 10:33 AM, Jason Wang wrote:
> On 08/16/2013 05:54 PM, Michael S. Tsirkin wrote:
>> On Fri, Aug 16, 2013 at 01:16:26PM +0800, Jason Wang wrote:
>>>> Switch to use vhost_add_used_and_signal_n() to avoid multiple calls to
>>>> vhost_add_used_and_signal(
On 08/20/2013 10:48 AM, Jason Wang wrote:
> On 08/16/2013 06:02 PM, Michael S. Tsirkin wrote:
>> > On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote:
>>> >> We used to limit the max pending DMAs to prevent guest from pinning too
>>> >> many
>
On 08/25/2013 07:53 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 23, 2013 at 04:55:49PM +0800, Jason Wang wrote:
>> On 08/20/2013 10:48 AM, Jason Wang wrote:
>>> On 08/16/2013 06:02 PM, Michael S. Tsirkin wrote:
>>>>> On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jas
On 08/25/2013 07:53 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 23, 2013 at 04:55:49PM +0800, Jason Wang wrote:
>> On 08/20/2013 10:48 AM, Jason Wang wrote:
>>> On 08/16/2013 06:02 PM, Michael S. Tsirkin wrote:
>>>>> On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jas
into main loop. Tests show about 5%-10%
improvement in per-CPU throughput for guest tx, but a 5% drop in per-CPU
transaction rate for a single-session TCP_RR.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c | 15 ---
1 files changed, 4 insertions(+), 11 deletions(-)
diff --
Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if
upend_idx != done_idx we still set zcopy_used to true and roll back this choice
later. This can be avoided by determining zerocopy once, by checking all
conditions at one time beforehand.
Signed-off-by: Jason Wang
---
drivers
far fewer used-index
updates and memory barriers.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c | 13 -
1 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 280ee66..8a6dd0d 100644
--- a/drivers/vhost/net.c
Let vhost_add_used() use vhost_add_used_n() to reduce code duplication.
Signed-off-by: Jason Wang
---
drivers/vhost/vhost.c | 54 ++--
1 files changed, 12 insertions(+), 42 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost
We used to poll the vhost queue before marking DMA as done. This is racy: if
the vhost thread is woken up before DMA is marked done, the signal can be
missed. Fix this by always polling the vhost queue after marking DMA as done.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c |9
None of its callers use its return value, so let it return void.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c |5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 969a859..280ee66 100644
--- a/drivers/vhost/net.c
+++ b
!= done_idx
to (upend_idx + 1) % UIO_MAXIOV == done_idx.
- Switch to use put_user() in __vhost_add_used_n() if there's only one used
- Keep the max pending check based on Michael's suggestion.
Jason Wang (6):
vhost_net: make vhost_zerocopy_signal_used() returns void
vhos
On 08/31/2013 12:44 AM, Ben Hutchings wrote:
> On Fri, 2013-08-30 at 12:29 +0800, Jason Wang wrote:
>> We used to poll vhost queue before making DMA is done, this is racy if vhost
>> thread were waked up before marking DMA is done which can result the signal
>> to
>> be
On 08/31/2013 02:35 AM, Sergei Shtylyov wrote:
> Hello.
>
> On 08/30/2013 08:29 AM, Jason Wang wrote:
>
>> Currently, even if the packet length is smaller than
>> VHOST_GOODCOPY_LEN, if
>> upend_idx != done_idx we still set zcopy_used to true and rollback
>>
On 08/31/2013 12:45 PM, Qin Chuanyu wrote:
> On 2013/8/30 0:08, Anthony Liguori wrote:
>> Hi Qin,
>
>>> By change the memory copy and notify mechanism ,currently
>>> virtio-net with
>>> vhost_net could run on Xen with good performance。
>>
>> I think the key in doing this would be to implement a pro
On 09/02/2013 01:50 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 30, 2013 at 12:29:18PM +0800, Jason Wang wrote:
>> > We tend to batch the used adding and signaling in vhost_zerocopy_callback()
>> > which may result more than 100 used buffers to be updated in
>> > v
On 09/02/2013 01:51 PM, Michael S. Tsirkin wrote:
> tweak subj s/returns/return/
>
> On Fri, Aug 30, 2013 at 12:29:17PM +0800, Jason Wang wrote:
>> > None of its caller use its return value, so let it return void.
>> >
>> > Signed-off-by: Jason Wang
On 09/02/2013 01:56 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 30, 2013 at 12:29:22PM +0800, Jason Wang wrote:
>> As Michael point out, We used to limit the max pending DMAs to get better
>> cache
>> utilization. But it was not done correctly since it was one done when
On 09/02/2013 02:30 PM, Jason Wang wrote:
> On 09/02/2013 01:56 PM, Michael S. Tsirkin wrote:
>> > On Fri, Aug 30, 2013 at 12:29:22PM +0800, Jason Wang wrote:
>>> >> As Michael point out, We used to limit the max pending DMAs to get
>>> >> better cac
We used to poll the vhost queue before marking DMA as done. This is racy: if
the vhost thread is woken up before DMA is marked done, the signal can be
missed. Fix this by always polling the vhost queue after marking DMA as done.
Signed-off-by: Jason Wang
---
- The patch is needed for stable
Let vhost_add_used() use vhost_add_used_n() to reduce code duplication. To
avoid the overhead brought by __copy_to_user(), we use put_user() when only
one used element needs to be added.
Signed-off-by: Jason Wang
---
drivers/vhost/vhost.c | 54 ++--
1
Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if
upend_idx != done_idx we still set zcopy_used to true and roll back this choice
later. This can be avoided by determining zerocopy once, by checking all
conditions at one time beforehand.
Signed-off-by: Jason Wang
into main loop. Tests show about 5%-10%
improvement in per-CPU throughput for guest tx.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c | 18 +++---
1 files changed, 7 insertions(+), 11 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 8e9dc55..831eb4f 1
far fewer used-index
updates and memory barriers.
A 2% performance improvement was seen in the netperf TCP_RR test.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c | 13 -
1 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
None of its callers use its return value, so let it return void.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c |5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 969a859..280ee66 100644
--- a/drivers/vhost/net.c
+++ b
check based on Michael's suggestion.
Jason Wang (6):
vhost_net: make vhost_zerocopy_signal_used() return void
vhost_net: use vhost_add_used_and_signal_n() in
vhost_zerocopy_signal_used()
vhost: switch to use vhost_add_used_n()
vhost_net: determine whether or not to use zerocopy a
On 09/04/2013 07:59 PM, Michael S. Tsirkin wrote:
> On Mon, Sep 02, 2013 at 04:40:59PM +0800, Jason Wang wrote:
>> Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if
>> upend_idx != done_idx we still set zcopy_used to true and rollback this
>> choice
On 09/04/2013 07:59 PM, Daniel Borkmann wrote:
> On 09/04/2013 01:27 PM, Eric Dumazet wrote:
>> On Wed, 2013-09-04 at 03:30 -0700, Eric Dumazet wrote:
>>> On Wed, 2013-09-04 at 14:30 +0800, Jason Wang wrote:
>>>
>>>>> And tcpdump would
On 09/23/2013 03:16 PM, Michael S. Tsirkin wrote:
> On Thu, Sep 05, 2013 at 10:54:44AM +0800, Jason Wang wrote:
>> > On 09/04/2013 07:59 PM, Michael S. Tsirkin wrote:
>>> > > On Mon, Sep 02, 2013 at 04:40:59PM +0800, Jason Wang wrote:
>>>> > >> Curr
On 07/21/2014 09:23 PM, Razya Ladelsky wrote:
> Hello All,
>
> When vhost is waiting for buffers from the guest driver (e.g., more
> packets
> to send in vhost-net's transmit queue), it normally goes to sleep and
> waits
> for the guest to "kick" it. This kick involves a PIO in the guest, and
> t
On 07/23/2014 04:12 PM, Razya Ladelsky wrote:
> Jason Wang wrote on 23/07/2014 08:26:36 AM:
>
>> From: Jason Wang
>> To: Razya Ladelsky/Haifa/IBM@IBMIL, kvm@vger.kernel.org, "Michael S.
>> Tsirkin" ,
>> Cc: abel.gor...@gmail.com, Joel Nider/Haifa/IBM@IB